1. Justify by citing examples the use of Truman's triangle in classifying technology across generations

Truman's triangle is a relationship triangle, developed by Truman Reynolds, that attempts to compare three attributes of a network, object or service.

The principle behind the triangle is that when three attributes are evaluated, any two (randomly chosen) attributes will always disagree with, or not favor, the third. By simply adding a "not" to the third attribute, this relationship is clearly seen. Truman's triangle is widely used in network environments, owing to the fact that a network is not cheap, yet is expected to be fast and good. The figure below shows Truman's triangle.

[Figure: Truman's triangle, with GOOD, CHEAP and FAST at its vertices]

A typical example of the application of Truman's triangle in designing computer architecture: a computer that is expected to be fast and good is usually "not" cheap (workstation desktops); likewise, a computer that is expected to be cheap yet good is obviously "not" going to be fast (netbooks).

References: please find attached.

2. State Moore's Law and use it to justify improvements in technology across generations

Intel co-founder Gordon E. Moore formulated two laws that govern the improvements in computing power seen today.

The two laws are:

1. The number of transistors that can be placed inexpensively on an integrated circuit doubles approximately every two years.
2. The capital cost of a semiconductor fab also increases exponentially over time.

In his actual words, he was quoted as saying: "The complexity for minimum component costs has increased at a rate of roughly a factor of two per year...

Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer."

Some computer hardware specialists consider Moore's Law particularly applicable to the construction and use of electronics.
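The doubling rule above can be expressed as a short sketch. This is a minimal illustration, not measured data: the 2,300-transistor count of the 1971 Intel 4004 is the only historical figure used, and every projected value follows from the assumed two-year doubling period.

```python
def projected_transistors(year, base_year=1971, base_count=2300, doubling_period=2):
    """Project a transistor count assuming a doubling every `doubling_period` years."""
    return base_count * 2 ** ((year - base_year) / doubling_period)

for year in (1971, 1981, 1991, 2001):
    print(year, round(projected_transistors(year)))
# 1971 -> 2,300; 1981 -> 73,600; 1991 -> 2,355,200; 2001 -> 75,366,400
```

Real product lines deviate from these exact figures; the point is the order-of-magnitude growth over each decade.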

The capabilities, and thus the usefulness, of most digital electronic devices are tied to the law. Aspects such as processing speed, memory capacity and digital resolution improve at the exponential rate predicted by the law. This has held true for 40 years, significantly driving technological and social change in that time, and the pace is expected to continue for at least another decade. From the era of single-core processors, we now have quad-core, Core i3 and Core i7 processors, among others, each boasting millions of transistors and released at two- to three-year intervals.

The figure above shows Intel's improvements to their cores from 1971 to 2004. It can be observed that the number of transistors on the cores has doubled repeatedly over the years. This has led to improvements in computer systems, driven by market demand for faster data processing and energy efficiency at a very affordable price. On the other hand, while this doubling every two years might appear more economical for consumers, the impact is felt by manufacturers, as the prices of materials required for advancing the technology (e.g., photoresists and other polymers and industrial chemicals) are derived from natural resources such as petroleum, and so are affected by the cost and supply of those resources. This phenomenon led to Moore's second law, which holds that over time the capital cost of semiconductor fabs will increase exponentially.

References
1. http://en.wikipedia.org/wiki/Moore%27s_law
2. http://glassvisage.hubpages.com/hub/Moores-Law
3. http://www.postcarbon.org/article/299173-fight-of-our-lives-moore-s-law#
4. http://singularityhub.com/2011/05/04/moores-law-lives-intel-announces-launch-of-22-nanometer-3d-transistor-video/

3. What is cloning and how has it helped to build compatible computers or a computer family?

In computing, cloning is the act of mimicking hardware or software so that it functions the same way as the original, or better, across several platforms. Cloning a system is intended to achieve compatibility with emerging hardware or software, with the aims of reducing cost, promoting competition, standardization and availability across all platforms. A clone is made to work like the original by taking the specific functionality provided by one system and making it available to multiple systems at the same capacity or better.

An example is the OpenOffice word processor, which is intended to supplant MS Word. In 1981, IBM introduced a new line of PCs called the IBM PC. To break down the monopoly, companies like Compaq decided to make clones of the IBM PC using components that were readily available on the market. The challenge, however, was the BIOS, which could not simply be copied without infringing copyright. This was later overcome by reverse-engineering the BIOS and producing a better, more efficient one.

Much later, other companies such as HP and Dell followed suit. The result was better computers at better prices. The use of the term "PC clone" to describe IBM PC-compatible computers fell out of use in the 1990s; the class of machines it described is now simply called PCs. Cloning has made it possible for individuals to design PCs that meet their specific requirements simply by purchasing the various components readily available on the market and coupling them together to form a PC.

Today, more than 6 million people globally use cloned computer systems for their daily business. This has proven to be a more efficient way to build systems over time and has resulted in the development of high-end computers at reasonable prices, as well as steep competition among manufacturers.

References
1. http://en.wikipedia.org/wiki/Clone_%28computing%29
2. http://www.techsoup.org/learningcenter/software/page6684.cfm

4. What constitutes the ALU?

ALU stands for "Arithmetic Logic Unit." An ALU is an integrated circuit within a CPU or GPU that performs arithmetic and logic operations. Arithmetic instructions include addition, subtraction and shifting operations, while logic instructions include Boolean comparisons such as AND, OR, XOR and NOT. ALUs are designed to perform integer calculations. Therefore, besides adding and subtracting numbers, ALUs often handle the multiplication of two integers, since the result is also an integer. However, ALUs typically do not perform division, since the result may be a fraction, or "floating point", number.

Instead, division operations are usually handled by the floating-point unit (FPU), which also performs other non-integer calculations. While the ALU is a fundamental component of all processors, the design and function of an ALU may vary between processor models. For example, some ALUs only perform integer calculations, while others are designed to handle floating-point operations as well. Some processors contain a single ALU, while others include several arithmetic logic units that work together to perform calculations. Regardless of how an ALU is designed, its primary job is to handle integer operations; therefore, a computer's integer performance is tied directly to the processing speed of the ALU.
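The arithmetic and logic instruction classes described above can be demonstrated directly with Python's integer operators (a minimal sketch; the operand values are arbitrary):

```python
a, b = 0b1100, 0b1010  # 12 and 10

# arithmetic instructions
print(a + b)   # addition: 22
print(a - b)   # subtraction: 2
print(a << 1)  # shift left: 24
print(a * b)   # integer multiplication: 120

# logic instructions (Boolean/bitwise)
print(bin(a & b))     # AND: 0b1000
print(bin(a | b))     # OR:  0b1110
print(bin(a ^ b))     # XOR: 0b110
print(bin(~a & 0xF))  # NOT, masked to 4 bits: 0b11

# division can leave the integers: 12 / 10 yields a float,
# which is why hardware delegates it to the FPU
print(a / b)
```

Every operation except the final division stays within the integers, mirroring the ALU/FPU split described above.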

Most of a processor's operations are performed by one or more ALUs. An ALU loads data from input registers; an external control unit then tells the ALU what operation to perform on that data, and the ALU stores its result in an output register. The control unit is responsible for moving the processed data between these registers, the ALU and memory.

Fig 1. A simple 1-bit ALU

Fig 1 above illustrates a simple 1-bit ALU, with its constituents clearly separated into three (3) parts: the logical unit, the decoder and the full adder. It can be observed from the diagram that there are inputs A, B, F0, F1 and Carry In.

It can also be noticed that there are 3-input AND gates and a 4-input OR gate. A 3-input AND gate outputs a 1 only if all 3 inputs are 1s, and the 4-input OR gate outputs a 1 except when all of its inputs are 0s. The F inputs control the enable lines: no matter which of the four possible combinations of F inputs is applied, only one (and each time a different) enable line will be "turned on." Thus, the function of the decoder subpart is to determine which of the 4 operations will be performed. A and B are used as the regular inputs for all operations. The full adder computes the sum, and its output is ANDed with the corresponding enable line. The logical unit is simply a collection of Boolean operations: AB, A+B and NOT B. As with the full adder, each of their outputs is ANDed with the corresponding enable line. On the far right, all of the outputs are ORed together.

However, because of the enable lines, only one of those four inputs can be a 1 at any time. The diagram represents only a 1-bit ALU; in practice, an ALU of 8 bits or more is needed for useful operations.
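The decoder/enable-line scheme described above can be modelled behaviourally in a short sketch. This is an illustration of the idea in Fig 1, not a gate-level reproduction; in particular, the assignment of F0/F1 combinations to the four operations is an assumption made here for clarity.

```python
def decoder(f0, f1):
    """2-to-4 decoder: exactly one enable line is 1 for each F combination."""
    return (
        (1 - f0) & (1 - f1),  # enable AND   (F0=0, F1=0)
        (1 - f0) & f1,        # enable OR    (F0=0, F1=1)
        f0 & (1 - f1),        # enable NOT B (F0=1, F1=0)
        f0 & f1,              # enable adder (F0=1, F1=1)
    )

def full_adder(a, b, carry_in):
    """Sum and carry-out of three 1-bit inputs."""
    s = a ^ b ^ carry_in
    c = (a & b) | (carry_in & (a ^ b))
    return s, c

def alu_1bit(a, b, f0, f1, carry_in=0):
    """Each unit's output is ANDed with its enable line, then all are ORed."""
    en_and, en_or, en_not, en_add = decoder(f0, f1)
    s, c = full_adder(a, b, carry_in)
    out = (en_and & a & b) | (en_or & (a | b)) | (en_not & (1 - b)) | (en_add & s)
    return out, c & en_add  # carry-out is only meaningful when adding

print(alu_1bit(1, 1, 1, 1))  # 1 + 1    -> (0, 1): sum 0, carry 1
print(alu_1bit(1, 0, 0, 0))  # 1 AND 0  -> (0, 0)
print(alu_1bit(1, 0, 0, 1))  # 1 OR 0   -> (1, 0)
```

A wider ALU chains such cells, feeding each stage's carry-out into the next stage's carry-in.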