To improve processor efficiency in recent multiprocessor systems, cache memories are used to access data instead of main memory, which reduces access latency. In shared-memory architectures where each processor has its own cache, difficulties appear when consistency must be maintained between the caches of the different processors, so a cache coherence protocol is essential in such systems. MSI, MESI, MOSI, and MOESI are well-known protocols for solving the cache coherence problem. In this research we propose merging two states of the MESI cache coherence protocol, Exclusive and Modified, into a single state that responds to both read and write requests and holds the block exclusively for those requests. In the proposed protocol, the write-back to main memory from another processor holding the block in the Modified state is also removed when that copy is invalidated by a write to the same address, because in all cases only the latest written value matters; where write-back is normally used to protect data from loss, the proposed IES protocol instead applies a preprocessing step that saves the data to main memory when the block is evicted from the cache. All of this increases processor efficiency by reducing accesses to main memory.
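The coherence states discussed above can be illustrated as a toy state machine. This is a minimal sketch of the classic MESI transitions only; the event names are assumptions of this sketch, and the merged IES state described in the abstract is not reproduced here since its exact specification is not given.

```python
# Toy MESI state machine: states are Modified (M), Exclusive (E),
# Shared (S), Invalid (I). Event names are illustrative assumptions.
MESI = {
    # (current state, event) -> next state
    ("I", "read_miss_exclusive"): "E",   # no other cache holds the block
    ("I", "read_miss_shared"):    "S",   # another cache holds a copy
    ("I", "local_write"):         "M",
    ("E", "local_write"):         "M",   # silent upgrade, no bus traffic
    ("E", "remote_read"):         "S",
    ("S", "local_write"):         "M",   # must invalidate other copies
    ("M", "remote_write"):        "I",   # classic MESI writes back first
}

def step(state, event):
    """Return the next state; unlisted (state, event) pairs keep the state."""
    return MESI.get((state, event), state)
```

The `("M", "remote_write")` transition is exactly the case the abstract targets: classic MESI performs a write-back before invalidation, which the proposed protocol removes.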
The problem of the high peak-to-average power ratio (PAPR) in OFDM signals is investigated, with a brief presentation of the various methods used to reduce PAPR and special attention to the clipping method. An alternative clipping approach is presented in which clipping is performed right after the IFFT stage, unlike conventional clipping performed at the power amplifier stage, which causes undesirable out-of-band spectral growth. Because the proposed method clips discrete samples rather than the continuous waveform, spectral distortion is avoided. Coding is required to correct the errors introduced by the clipping, and the overall system is tested for two types of modulation, with QPSK as a constant-amplitude modulation …
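The idea of clipping the IFFT output samples can be sketched as follows. This is an illustrative sketch only, not the authors' exact system: the clipping ratio, subcarrier count, and QPSK mapping are assumptions chosen for the example.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def clip_samples(x, ratio=1.5):
    """Clip each sample's magnitude to ratio * RMS, preserving its phase.

    This is sample-level amplitude clipping applied right after the IFFT,
    in the spirit of the abstract; ratio=1.5 is an assumed parameter.
    """
    rms = np.sqrt(np.mean(np.abs(x) ** 2))
    limit = ratio * rms
    mag = np.maximum(np.abs(x), 1e-12)       # avoid division by zero
    scale = np.minimum(1.0, limit / mag)     # shrink only peaks above limit
    return x * scale

# One OFDM symbol: random QPSK on 64 subcarriers, then the IFFT stage.
rng = np.random.default_rng(0)
bits = rng.integers(0, 4, 64)
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))  # unit-energy QPSK
x = np.fft.ifft(symbols)
y = clip_samples(x)
```

Clipping only the discrete samples leaves the subcarrier grid intact, so the distortion appears as in-band noise (to be handled by coding) rather than out-of-band spectral regrowth.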
The development of analytical techniques is required for the accurate and comprehensive detection and measurement of antibiotic contamination in the environment. Metronidazole is a common antibacterial, antiprotozoal antibiotic drug. Thiamine is a vital biological and medicinal ingredient involved in the metabolism of the proteins, fats, and carbohydrates that produce energy. The study aims to identify the drugs in a mixture without separation, providing more information to confirm whether a drug is present in a combination. Metronidazole and thiamine are two examples of pharmaceutical and environmental samples that can be identified using spectrophotometric techniques because of their low cost and simplicity of use. The operati…
Digital audio requires transmitting large amounts of audio information through the most common communication systems, which in turn poses challenges in both storage and archiving. In this paper, an efficient audio compression scheme is proposed. It depends on a combined transform coding scheme consisting of: i) a bi-orthogonal (tap 9/7) wavelet transform to decompose the audio signal into low and multiple high sub-bands; ii) a DCT applied to the produced sub-bands to de-correlate the signal; iii) progressive hierarchical quantization of the combined transform output, followed by traditional run-length encoding (RLE); and iv) finally LZW coding to generate the output bitstream.
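The lossless back-end of such a pipeline (stages iii–iv) can be sketched compactly. This sketch shows only textbook RLE and LZW over already-quantized data; the wavelet and DCT front-end, quantizer design, and all parameters of the authors' scheme are omitted.

```python
def rle_encode(seq):
    """Run-length encode a sequence into (value, run_length) pairs."""
    out, i = [], 0
    while i < len(seq):
        j = i
        while j < len(seq) and seq[j] == seq[i]:
            j += 1
        out.append((seq[i], j - i))
        i = j
    return out

def lzw_encode(data):
    """Classic LZW over a byte string, emitting integer dictionary codes."""
    table = {bytes([i]): i for i in range(256)}  # initial single-byte codes
    w, out = b"", []
    for c in bytes(data):
        wc = w + bytes([c])
        if wc in table:
            w = wc                      # extend the current match
        else:
            out.append(table[w])        # emit the longest known prefix
            table[wc] = len(table)      # learn the new phrase
            w = bytes([c])
    if w:
        out.append(table[w])
    return out
```

Quantized transform coefficients tend to contain long zero runs, which is why RLE before a dictionary coder such as LZW is a common pairing.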
A new algorithm is proposed to compress speech signals using the wavelet transform and linear predictive coding. Compression is based on selecting a small number of approximation coefficients produced by wavelet decomposition (Haar and db4) at a suitably chosen level while ignoring the detail coefficients; the approximation coefficients are then windowed by a rectangular window and fed to the linear predictor. The Levinson-Durbin algorithm is used to compute the LP coefficients, reflection coefficients, and prediction error. The compressed files contain the LP coefficients and the previous sample, and they are very small compared to the original signals. The compression ratio is calculated from the size of th…
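The Levinson-Durbin recursion named above is standard and can be sketched directly. This is the textbook algorithm operating on an autocorrelation sequence, not the authors' full codec; the example autocorrelation values are assumptions chosen so the result is easy to check by hand.

```python
import numpy as np

def levinson_durbin(r, order):
    """Textbook Levinson-Durbin recursion.

    r     : autocorrelation sequence, r[0..order]
    returns (a, k, err): LP coefficients (a[0] = 1), reflection
    coefficients, and the final prediction error power.
    """
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = float(r[0])
    k = np.zeros(order)
    for m in range(1, order + 1):
        # acc = r[m] + sum_{i=1}^{m-1} a[i] * r[m-i]
        acc = r[m] + a[1:m] @ r[m - 1:0:-1]
        k[m - 1] = -acc / err
        # order-update: a[i] += k * a[m-i] for i = 1..m
        a[1:m + 1] = a[1:m + 1] + k[m - 1] * a[m - 1::-1][:m]
        err *= 1.0 - k[m - 1] ** 2
    return a, k, err

# AR(1)-like example: r[m] = 0.5**m, so a -> [1, -0.5, 0], err -> 0.75
a, k, err = levinson_durbin(np.array([1.0, 0.5, 0.25]), 2)
```

For this autocorrelation the second reflection coefficient is zero, confirming that a first-order predictor already captures the model.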
The aim of this research is to compare traditional and modern methods for obtaining the optimal solution, using dynamic programming and intelligent algorithms to solve project management problems.
It shows the possible ways in which these problems can be addressed, drawing on a schedule of interrelated and sequential activities. It clarifies the relationships between the activities in order to determine the beginning and end of each activity, the duration and cost of the total project, and the time used by each activity, and to determine the objectives sought by the project through planning, implementation, and monitoring so as to stay within the assessed budget.
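Determining the beginning and end of each activity from its predecessors is the forward pass of standard critical-path scheduling, which can be sketched briefly. The activity names, durations, and dependency structure below are hypothetical; the abstract does not give a concrete network.

```python
def forward_pass(durations, preds):
    """Earliest start/finish of each activity and total project duration.

    Assumes activities are listed so that every predecessor appears
    before the activities that depend on it.
    """
    es, ef = {}, {}
    for act in durations:
        es[act] = max((ef[p] for p in preds.get(act, [])), default=0)
        ef[act] = es[act] + durations[act]
    return es, ef, max(ef.values())

# Hypothetical network: C waits for A and B; D waits for C.
durations = {"A": 3, "B": 2, "C": 4, "D": 1}
preds = {"C": ["A", "B"], "D": ["C"]}
es, ef, total = forward_pass(durations, preds)
```

A matching backward pass over latest start/finish times would then expose the slack of each activity and hence the critical path.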
This work deals with the separation of benzene and toluene from a BTX fraction. The separation was carried out by adsorption on molecular sieve zeolite 13X in a fixed bed. The concentrations of benzene and toluene in the effluent streams were measured using gas chromatography. The effect of flow rate in the range 0.77–2.0 cm3/min on benzene and toluene extraction from the BTX fraction was studied: increasing the flow rate decreases the breakthrough and saturation times. The effect of bed height in the range 31.6–63.3 cm on benzene and toluene adsorption from the BTX fraction was studied: increasing the bed height increases the break-point values. The effect of benzene concentration in the range 0.0559–0.2625 g/…
In this paper we present a method to analyze five types with fifteen wavelet families for eighteen different EMG signals. A comparison study is also given to show the performance of the various families after the results are refined with a back-propagation neural network. This will help researchers with the first step of EMG analysis. Huge sets of results (more than 100 sets) are produced and then classified, discussed, and used to reach the final conclusion.