Companies compete intensely with one another today, so they must focus on innovation to develop competitive products. Lean product development is an ideal way to develop products, foster innovation, maximize value, and reduce development time. Set-Based Concurrent Engineering (SBCE) is a proven lean product development approach that builds on the creation of a number of alternative designs at the subsystem level. These designs are improved and tested concurrently, and the weaker alternatives are eliminated gradually until the optimal solution remains. SBCE has been implemented extensively in the automotive industry, and there are a few case studies in the aerospace industry. This research describes the use of trade-off curves as a lean tool supporting the SBCE process model in the CONGA project, using NASA simulation software version 1.7c and the CONGA demonstration program (DEMO program). The tool helps designers and engineers locate the design solution that satisfies the customer requirement, extract the nearest alternative solutions from previous projects that meet the requirement for a low-noise engine at an aerospace company, and identify the infeasible region in which no prototype should be attempted before the manufacturing process begins, thereby reducing rework, time, and cost.
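Trade-off curves of this kind can be sketched quickly from historical design data. The fragment below is purely illustrative (parameter names, values, and the noise requirement are assumptions, not CONGA data): previous-project points are fitted with a curve, and the feasible region meeting a customer noise requirement is read off, with everything outside it treated as the infeasible region.

```python
import numpy as np

# Hypothetical trade-off data from previous projects: engine bypass ratio
# versus certification noise level (all values illustrative).
bypass_ratio = np.array([4.0, 5.0, 6.0, 7.0, 8.0, 9.0])
noise_epndb  = np.array([98.0, 95.5, 93.2, 91.4, 90.0, 88.9])

curve = np.poly1d(np.polyfit(bypass_ratio, noise_epndb, 2))  # quadratic trade-off curve

requirement_epndb = 92.0                      # assumed customer noise requirement
candidates = np.linspace(4.0, 9.0, 51)
feasible = candidates[curve(candidates) <= requirement_epndb]
print(f"feasible bypass-ratio region: {feasible.min():.1f} .. {feasible.max():.1f}")
# designs outside this interval fall in the infeasible region and need no prototype
```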
In this paper, a method is proposed to increase the compression ratio of color images by dividing the image into non-overlapping blocks and applying a different compression ratio to each block depending on the importance of the information it contains. In regions that contain important information, the compression ratio is reduced to prevent loss of that information, while in smooth regions containing no important information a high compression ratio is used. The proposed method shows better results when compared with classical methods (wavelet and DCT).
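A minimal sketch of this block-adaptive idea, assuming grayscale input, block variance as the importance measure, and DCT-coefficient quantization as the compression mechanism (the paper's actual importance criterion is not specified in this abstract):

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, q_step):
    """Quantize the block's DCT coefficients; a larger step loses more detail."""
    coeffs = dctn(block, norm="ortho")
    return idctn(np.round(coeffs / q_step) * q_step, norm="ortho")

def adaptive_compress(img, block=8, var_thresh=100.0, q_fine=4.0, q_coarse=32.0):
    out = img.astype(float).copy()   # borders not divisible by `block` pass through
    h, w = img.shape
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            b = img[y:y + block, x:x + block].astype(float)
            # high-variance (important) blocks get fine quantization, smooth
            # blocks get coarse quantization and hence a higher compression ratio
            q = q_fine if b.var() > var_thresh else q_coarse
            out[y:y + block, x:x + block] = compress_block(b, q)
    return out
```

A color image would be handled by applying the same scheme per channel (or on the luminance channel after a color-space transform).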
Realizing and understanding semantic segmentation is a demanding task not only for computer vision but also in earth-science research. Semantic segmentation decomposes compound architectures into individual elements: the most common objects in civil outdoor or indoor scenes must be classified and then enriched with semantic information about each object, making it a method for labeling and clustering point clouds automatically. Classifying three-dimensional natural scenes requires a point cloud dataset as the input data representation, and working with 3D data raises many challenges, such as the small number, low resolution, and limited accuracy of available three-dimensional datasets. Deep learning is now the most popular approach to these problems.
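As a small illustration of the input-preparation step such pipelines need (file name and sample count are placeholders, not details from this work), a raw cloud can be sampled to a fixed size and normalized before being fed to a deep model:

```python
import numpy as np

def prepare_cloud(points, n_samples=2048):
    """Randomly sample a fixed number of points and normalize to the unit sphere."""
    idx = np.random.choice(len(points), n_samples, replace=len(points) < n_samples)
    cloud = points[idx]
    cloud -= cloud.mean(axis=0)                    # center at the origin
    cloud /= np.linalg.norm(cloud, axis=1).max()   # scale into the unit sphere
    return cloud

raw = np.loadtxt("scene.xyz")[:, :3]   # hypothetical N x 3 (x, y, z) point cloud
net_input = prepare_cloud(raw)
print(net_input.shape)                 # (2048, 3), a fixed-size model input
```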
A Multiple System Biometric System Based on ECG Data
Password authentication is a popular approach to system security and an important security procedure for gaining access to a user's resources. This paper describes a password authentication method that uses a Modified Bidirectional Associative Memory (MBAM) algorithm for both graphical and textual passwords, aiming at greater efficiency in speed and accuracy. Across 100 tests, the method authenticated users with 100% accuracy for both graphical and textual passwords.
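The MBAM modification itself is not detailed in this abstract, but a minimal sketch of the standard Bidirectional Associative Memory it builds on (all encodings and patterns below are illustrative) shows the association mechanism:

```python
import numpy as np

def bipolar(bits):
    """Map {0,1} vectors to {-1,+1}, the usual BAM encoding."""
    return np.where(np.asarray(bits) > 0, 1, -1)

class BAM:
    """Standard Bidirectional Associative Memory with Hebbian storage."""
    def __init__(self, pairs):
        self.W = sum(np.outer(bipolar(x), bipolar(y)) for x, y in pairs)

    def recall_y(self, x):
        """One forward pass: recall the pattern associated with input x."""
        return np.where(bipolar(x) @ self.W >= 0, 1, 0)

# associate a binary-encoded password with a user identity pattern
pairs = [([1, 0, 1, 0, 1, 1], [1, 0, 0, 1]),
         ([0, 1, 1, 0, 0, 1], [0, 1, 1, 0])]
bam = BAM(pairs)
print(bam.recall_y([1, 0, 1, 0, 1, 1]))  # -> [1 0 0 1]: the stored pair is recalled
```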
Research is a central component of neurosurgical training and practice and is increasingly viewed as a quintessential indicator of academic productivity. In this study, we focus on identifying the current status and challenges of neurosurgical research in Iraq.
An online PubMed/Medline database search was conducted to identify all articles published by Iraq-based neurosurgeons between 2003 and 2020. Information was extracted for the following parameters: authors, year of publication, author's affiliation, author's specialty, article type, article citations, and journal name.
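The exact search strategy is not reproduced in this abstract; a comparable date-limited query via Biopython's Entrez interface (the query string, email, and retmax below are assumptions) could look like:

```python
from Bio import Entrez  # Biopython wrapper around the NCBI E-utilities

Entrez.email = "you@example.org"  # NCBI requires a contact address; placeholder
query = ('(neurosurgery[Title/Abstract]) AND (Iraq[Affiliation]) '
         'AND ("2003"[PDAT] : "2020"[PDAT])')
handle = Entrez.esearch(db="pubmed", term=query, retmax=500)
record = Entrez.read(handle)
print(record["Count"], record["IdList"][:10])  # hit count and first PubMed IDs
```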
In this paper, a fast lossless compression method for medical images is introduced. It is based on splitting the image into blocks according to their nature, using polynomial approximation to decompose the image signal, and then applying run-length coding to the residue part of the image, which represents the error introduced by the polynomial approximation. Finally, Huffman coding is applied as a last stage to encode the polynomial coefficients and the run-length codes. The test results indicate that the suggested method achieves promising performance.
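A simplified sketch of the decomposition and run-length stages (row-wise linear fitting stands in for the paper's block-adaptive polynomial scheme, and a true lossless pipeline would also quantize the coefficients before computing the residue):

```python
import numpy as np

def poly_residue(block, degree=1):
    """Fit a polynomial to each image row; return coefficients and integer residue."""
    x = np.arange(block.shape[1])
    coeffs = np.array([np.polyfit(x, row, degree) for row in block.astype(float)])
    approx = np.array([np.polyval(c, x) for c in coeffs])
    residue = np.round(block - approx).astype(int)   # approximation error
    return coeffs, residue

def run_length(seq):
    """Run-length code a 1-D integer sequence as (value, count) pairs."""
    seq, out, run = list(seq), [], 1
    for prev, cur in zip(seq, seq[1:]):
        if cur == prev:
            run += 1
        else:
            out.append((prev, run))
            run = 1
    out.append((seq[-1], run))
    return out

block = np.tile(np.arange(8), (8, 1)) + np.random.randint(0, 2, (8, 8))  # toy block
coeffs, residue = poly_residue(block)
runs = run_length(residue.ravel())
# a Huffman coder would then encode `coeffs` (after quantization) and `runs`
```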
In this paper, an algorithm is introduced through which more data can be embedded than with regular spatial-domain methods. The secret data is first compressed using Huffman coding, and the compressed data is then embedded using a Laplacian sharpening method. Laplacian filters are used to determine effective hiding places: based on a threshold value, the pixels with the highest filter responses are selected for embedding the watermark. The aim of this work is to increase the capacity of the embedded information by using Huffman coding and, at the same time, to increase the security of the algorithm by hiding data at the strongest and least noticeable edge locations.
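A minimal sketch of the embedding step, assuming a grayscale cover image, a Huffman-compressed bit payload, and LSB substitution at the strongest Laplacian responses (the paper's exact embedding rule and its extraction synchronization are not specified in this abstract):

```python
import numpy as np
from scipy.ndimage import laplace

def embed(cover, bits, threshold):
    """Embed bits in the LSBs of the pixels with the strongest Laplacian response."""
    response = np.abs(laplace(cover.astype(float)))
    ys, xs = np.where(response > threshold)       # candidate edge pixels
    order = np.argsort(-response[ys, xs])         # strongest responses first
    assert len(bits) <= len(order), "not enough edge pixels for this payload"
    stego = cover.copy()
    for bit, i in zip(bits, order):
        y, x = ys[i], xs[i]
        stego[y, x] = (stego[y, x] & 0xFE) | bit  # replace the least significant bit
    return stego
# the extractor must recompute the same positions; since LSB changes can
# perturb the filter response slightly, the real method must keep them in sync
```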
Visible light communication (VLC) is an emerging wireless technology for next-generation high-speed data transmission. It has the potential for capacity enhancement thanks to its characteristically large bandwidth. Addressing signal processing and suitable transceiver design for VLC applications, an amplification-based optical transceiver is proposed in this article. The transmitter consists of a driver and a laser diode as the light source, while the receiver contains a photodiode and a signal-amplifying circuit. The design is proposed for its simplicity: it replaces the trans-impedance and transconductance circuits of conventional modules with a simple amplification circuit and an interface converter.
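A back-of-the-envelope model of such a receiver chain (all component values below are assumed, not taken from the article) illustrates how a plain amplification stage converts the photodiode current into a usable output voltage:

```python
# Illustrative VLC link calculation; every value here is an assumption.
tx_power_mw = 10.0        # laser diode optical output
channel_loss_db = 20.0    # free-space / geometric loss
responsivity = 0.5        # photodiode responsivity in A/W, typical for silicon
amp_gain = 1000.0         # voltage gain of the amplifying circuit
load_ohm = 50.0           # load resistance converting current to voltage

rx_power_mw = tx_power_mw * 10 ** (-channel_loss_db / 10)
photo_current_ma = responsivity * rx_power_mw        # mA, since power is in mW
out_voltage_v = amp_gain * (photo_current_ma / 1000) * load_ohm
print(f"received {rx_power_mw:.3f} mW -> {out_voltage_v:.2f} V at amplifier output")
```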
Currently, with the huge increase in modern communication and network applications, transmission speed and the storage of data in compact form are pressing issues. An enormous number of images are stored and shared among people every moment, especially in the social-media realm, yet even with these marvelous applications the limited size of transmitted data remains the main restriction; essentially all of these applications use the well-known Joint Photographic Experts Group (JPEG) standard techniques. For the same reason, the construction of universally accepted standard compression systems is urgently required to play a key role in this immense revolution. This review is concerned with different image compression techniques built around the JPEG standard.
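The quality/size trade-off that JPEG-based applications exploit is easy to demonstrate (the input file name is a placeholder):

```python
from io import BytesIO
from PIL import Image

img = Image.open("photo.png")            # placeholder input image
sizes = {}
for q in (95, 75, 50, 25):               # JPEG quality settings
    buf = BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=q)
    sizes[q] = buf.tell()                # encoded size in bytes
print(sizes)                             # lower quality -> smaller payload
```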