Sub-threshold operation has received considerable attention for performance-limited applications. However, energy optimization of sub-threshold circuits should be performed with the performance limitations of such circuits in mind. In this paper, a dual-size design is proposed for energy minimization of sub-threshold CMOS circuits. The optimal downsizing factor is determined and assigned to selected gates on the off-critical paths to minimize energy at the maximum allowable performance. This assignment is performed using the proposed slack-based genetic algorithm, a heuristic-mixed evolutionary algorithm. Some gates are heuristically assigned to the original or the downsized design based on their slack time as determined by static timing analysis. The remaining gates are subjected to the genetic algorithm, which performs an optimal downsizing assignment taking the previous assignments into account. The algorithm is applied for different downsizing factors to determine the optimal dual size for low-energy operation without performance degradation. Experimental results are obtained for ISCAS-85 benchmark circuits such as the 74283, 74L85, and ALU74181, and for a 16-bit ripple-carry adder. The proposed design shows an energy-per-cycle saving ranging from 29.6% to 56.59%, depending on the utilization of the slack time available on the off-critical paths. © School of Engineering, Taylor’s University.
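The slack-based partitioning described in the abstract above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the gate names, slack values, and threshold fractions are hypothetical, and only the heuristic pre-assignment step (original vs. downsized vs. GA-decided) is shown.

```python
# Sketch: partition gates by slack into fixed "original", fixed "downsized",
# and GA-search sets. All numbers here are hypothetical.

def partition_by_slack(slacks, t_extra, lo_frac=0.1, hi_frac=0.9):
    """slacks: gate -> slack time from static timing analysis.
    t_extra: extra delay a downsized gate would add to its path."""
    original, downsized, ga_pool = [], [], []
    for gate, slack in slacks.items():
        if slack <= lo_frac * t_extra:        # near-critical: keep original size
            original.append(gate)
        elif slack >= hi_frac * t_extra:      # ample slack: safe to downsize
            downsized.append(gate)
        else:                                 # ambiguous: let the GA decide
            ga_pool.append(gate)
    return original, downsized, ga_pool

slacks = {"g1": 0.0, "g2": 5.0, "g3": 1.2}
print(partition_by_slack(slacks, t_extra=2.0))  # (['g1'], ['g2'], ['g3'])
```

Only the `ga_pool` gates would then enter the genetic search, shrinking its chromosome length.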
In recent years, the migration of computational workloads to computational clouds has attracted intruders who target and exploit cloud networks both internally and externally. Investigating such hazardous network attacks in the cloud requires comprehensive network forensics methods (NFMs) to identify the source of the attack. However, cloud computing lacks NFMs that can identify network attacks affecting various cloud resources as they spread through cloud networks. This study is motivated by the need to assess the applicability of current network forensics methods (C-NFMs) to cloud networks. Their applicability is evaluated in terms of strengths, weaknesses, opportunities, and threats (SWOT) to provide an outlook for cloud networks.
Non-uniform channelization is a crucial task in cognitive radio receivers for obtaining separate channels from the digitized wideband input signal at different intervals of time. The two main requirements for the channelizer are reconfigurability and low complexity. In this paper, a reconfigurable architecture based on a combination of the Improved Coefficient Decimation Method (ICDM) and the Coefficient Interpolation Method (CIM) is proposed. The proposed Hybrid Coefficient Decimation-Interpolation Method (HCDIM) based filter bank (FB) is able to realize the same number of channels as the ICDM, but with the maximum decimation factor divided by the interpolation factor (L), which leads to less deterioration in the stop band at
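The two coefficient operations the HCDIM combines can be illustrated on a toy FIR prototype. This is a minimal sketch of the generic decimation/interpolation of filter coefficients, not the paper's filter bank design: the prototype coefficients and the factors are arbitrary examples.

```python
import numpy as np

def decimate_coeffs(h, d):
    """Coefficient decimation: retain every d-th coefficient, which
    widens/replicates the passband of the prototype response."""
    return h[::d]

def interpolate_coeffs(h, l):
    """Coefficient interpolation: insert l-1 zeros between coefficients,
    compressing the response by l and creating l spectral images that a
    masking stage must then select from."""
    out = np.zeros(len(h) * l)
    out[::l] = h
    return out

h = np.array([1.0, 2.0, 3.0, 2.0, 1.0])   # toy low-pass prototype
print(decimate_coeffs(h, 2))               # [1. 3. 1.]
print(interpolate_coeffs(h, 2))            # [1. 0. 2. 0. 3. 0. 2. 0. 1. 0.]
```

In the hybrid scheme, applying interpolation reduces the decimation factor actually needed, which is the source of the reduced stop-band deterioration the abstract mentions.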
This study examines the relationship between the number of tourists arriving in Tunisia and GDP over the period 1995-2017, using cointegration methodology, causality testing, and an error-correction model. The research found that the time series of the logarithm of the number of tourists arriving in Tunisia and the logarithm of output are non-stationary, but after taking first differences these series become stationary; thus the time series are integrated of the first order. Using the Johansen method, we found evidence of a cointegrating relationship between the logarithm of the number of tourists arriving in Tunisia and the logarithm of Tunisia's GDP, and there is a causal relationship in one direction
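The I(1) behaviour described above can be illustrated on simulated data: a random walk is non-stationary, but its first differences are stationary. This is a generic illustration with synthetic data, not the study's tourism/GDP series.

```python
import numpy as np

# A random walk is I(1): its variance grows over time. First differencing
# recovers the stationary (white-noise) increments. Synthetic data only.
rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=1000))   # simulated I(1) series
diff = np.diff(walk)                      # first differences: I(0)

print("walk variance (halves):", round(np.var(walk[:500]), 1),
      round(np.var(walk[500:]), 1))       # typically very different
print("diff variance (halves):", round(np.var(diff[:500]), 2),
      round(np.var(diff[500:]), 2))       # both close to 1
```

The same logic underlies applying unit-root tests to the levels and to the differences before running the Johansen procedure.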
This paper presents a comparison of denoising techniques using a statistical approach, principal component analysis with local pixel grouping (PCA-LPG), in which the procedure is iterated a second time to further improve denoising performance, alongside other enhancement filters. An adaptive Wiener low-pass filter is applied to a grayscale image degraded by constant-power additive noise, based on statistics estimated from a local neighborhood of each pixel. A median filter is also applied to the input noisy image, in which each output pixel contains the median value of the M-by-N neighborhood around the corresponding pixel in the input image; a Gaussian low-pass filter and an order-statistic filter are also used. Experimental results show that the LPG-PCA method
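The M-by-N median filter described above can be sketched directly. This is a minimal illustrative implementation (3x3 window, edge-replicated border), not the paper's code; the toy image is hypothetical.

```python
import numpy as np

def median_filter(img, m=3, n=3):
    """Each output pixel is the median of the m-by-n neighborhood around
    the corresponding input pixel; borders are edge-replicated."""
    pad = np.pad(img, ((m // 2,), (n // 2,)), mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(pad[i:i + m, j:j + n])
    return out

noisy = np.array([[10, 10, 10],
                  [10, 255, 10],          # single impulse-noise pixel
                  [10, 10, 10]])
print(median_filter(noisy))               # the 255 spike is removed
```

This removal of isolated impulses without blurring edges is why the median filter is a standard baseline in such comparisons.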
Psychological damage is one of the forms of damage that can be compensated under the tort of negligence in English law, which sets out an enumeration of civil wrongs on the basis of which liability can be established, each of which aims to protect a specific interest (for example, defamation protects reputation, while nuisance protects rights in land), and the same is true of the other torts. Compensation for psychological damage resulting from negligence has raised problems in cases where the psychological injury is "pure", that is, where it is not accompanied by a physical injury, which has required subjecting such claims to special requirements by the
This work presents the development and implementation of a programmable model for evaluating the pumping technique and spectroscopic properties of solid-state lasers, together with the design and construction of suitable software to simulate these techniques. A new approach for Diode-Pumped Solid State Laser (DPSSL) systems is studied in order to build the optimum path technology and to manufacture a new solid-state laser gain medium. From this model, the threshold input power, output power, optimum transmission, slope efficiency, and available power were predicted. Different configurations of diode-pumped solid-state lasers were considered for side pumping and end pumping, using different gain-medium shapes (rod, slab, disk); the three main parameters are (energy transfer efficiency
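The threshold/slope-efficiency prediction mentioned above follows the standard above-threshold laser output relation P_out = eta_slope * (P_pump - P_th). This is the textbook form, not a result from this work, and the numbers below are hypothetical.

```python
# Textbook DPSSL output relation above threshold; all values hypothetical.

def output_power(p_pump, p_th, eta_slope):
    """P_out = eta_slope * (P_pump - P_th), clamped to zero below threshold."""
    return max(0.0, eta_slope * (p_pump - p_th))

print(output_power(p_pump=10.0, p_th=2.0, eta_slope=0.35))  # 2.8 (W)
print(output_power(p_pump=1.0, p_th=2.0, eta_slope=0.35))   # 0.0: below threshold
```

Fitting measured P_out against P_pump with this line is how the slope efficiency and threshold power are typically extracted.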
Long-memory analysis is one of the most active areas in econometrics and time series analysis, where various methods have been introduced to identify and estimate the long-memory parameter in fractionally integrated time series. One of the most common models used to represent time series with long memory is the ARFIMA (Autoregressive Fractionally Integrated Moving Average) model, in which the differencing order is a fractional number called the fractional parameter. To analyze and fit an ARFIMA model, the fractional parameter must be estimated. There are many methods for fractional parameter estimation. In this research, the estimation methods were divided into indirect methods, where the Hurst parameter is estimated first
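One classic indirect route of the kind the abstract mentions is the rescaled-range (R/S) estimate of the Hurst exponent H, from which the fractional parameter follows as d = H - 0.5. The sketch below is a generic illustration, not this research's estimator; the window sizes and test series are arbitrary choices, and R/S is known to be biased on short samples.

```python
import numpy as np

def hurst_rs(x, windows=(10, 20, 50, 100)):
    """Rescaled-range estimate of the Hurst exponent: R/S over a window of
    length n scales roughly as n**H, so H is the slope of log(R/S) vs log(n)."""
    logs_n, logs_rs = [], []
    for n in windows:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            seg = x[start:start + n]
            dev = np.cumsum(seg - seg.mean())
            r = dev.max() - dev.min()           # range of cumulative deviations
            s = seg.std()
            if s > 0:
                rs_vals.append(r / s)
        logs_n.append(np.log(n))
        logs_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(logs_n, logs_rs, 1)[0]    # fitted slope = H

rng = np.random.default_rng(1)
h = hurst_rs(rng.normal(size=2000))             # white noise: H near 0.5
print("H =", round(h, 2), " d =", round(h - 0.5, 2))
```

For a short-memory series H is near 0.5 (d near 0), while persistent long-memory series give H above 0.5, i.e. a positive fractional parameter.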