In computing, floating-point arithmetic (FP) is arithmetic using a formulaic representation of real numbers as an approximation, so as to support a trade-off between range and precision. For this reason, floating-point computation is often found in systems which include very small and very large real numbers, which require fast processing times.
A number is, in general, represented approximately to a fixed number of significant digits (the significand) and scaled using an exponent in some fixed base; the base for the scaling is normally two, ten, or sixteen.
A number that can be represented exactly is of the following form: significand × base^exponent, where the significand is an integer, the base is an integer greater than or equal to two, and the exponent is also an integer. The term floating point refers to the fact that a number's radix point (decimal point, or, more commonly in computers, binary point) can "float"; that is, it can be placed anywhere relative to the significant digits of the number. This position is indicated as the exponent component, and thus the floating-point representation can be thought of as a kind of scientific notation. A floating-point system can be used to represent, with a fixed number of digits, numbers of very different orders of magnitude. The result of this dynamic range is that the numbers that can be represented are not uniformly spaced; the difference between two consecutive representable numbers grows with the chosen scale.
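For example, Python's standard math module exposes both the significand/exponent decomposition and the growing gaps between neighboring representable numbers (math.ulp requires Python 3.9+); a minimal sketch:

```python
import math

# A binary float is significand * 2**exponent; math.frexp recovers
# such a decomposition for any finite value.
m, e = math.frexp(6.25)          # 6.25 == 0.78125 * 2**3
print(m, e, math.ldexp(m, e))    # 0.78125 3 6.25

# The spacing between consecutive representable numbers grows with scale:
print(math.ulp(1.0))             # 2.220446049250313e-16
print(math.ulp(2.0**52))         # 1.0  -- the gap reaches 1 near 2**52
print(math.ulp(2.0**53))         # 2.0  -- and doubles at each power of two
```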
Over the years, a variety of floating-point representations have been used in computers.
However, since the 1990s, the most commonly encountered representation is that defined by the IEEE 754 Standard. The speed of floating-point operations, commonly measured in terms of FLOPS, is an important characteristic of a computer system, especially for applications that involve intensive mathematical calculations.
A floating-point unit (FPU, colloquially a math coprocessor) is a part of a computer system specially designed to carry out operations on floating-point numbers. A number representation specifies some way of encoding a number, usually as a string of digits.
There are several mechanisms by which strings of digits can represent numbers. In common mathematical notation, the digit string can be of any length, and the location of the radix point is indicated by placing an explicit "point" character (dot or comma) there. If the radix point is not specified, then the string implicitly represents an integer and the unstated radix point would be off the right-hand end of the string, next to the least significant digit.
In fixed-point systems, a position in the string is specified for the radix point. So a fixed-point scheme might be to use a string of 8 decimal digits with the decimal point in the middle, whereby "00012345" would represent 0001.2345. In scientific notation, the given number is scaled by a power of 10, so that it lies within a certain range, typically between 1 and 10, with the radix point appearing immediately after the first digit. The scaling factor, as a power of ten, is then indicated separately at the end of the number.
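A small sketch contrasting the two conventions (the variable names are ours, and the 8-digit string follows the example above):

```python
import math

# Fixed point: 8 decimal digits with an implied point in the middle.
digits = "00012345"
fixed_value = int(digits) / 10**4       # implied point after 4 digits
print(fixed_value)                      # 1.2345

# Scientific notation: the same value scaled to lie in [1, 10),
# with the power of ten carried separately.
exponent = math.floor(math.log10(fixed_value))
significand = fixed_value / 10**exponent
print(f"{significand} x 10^{exponent}") # 1.2345 x 10^0
```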
Floating-point representation is similar in concept to scientific notation. Logically, a floating-point number consists of: a signed digit string of a given length in a given base (the significand), and a signed integer exponent. To derive the value of the floating-point number, the significand is multiplied by the base raised to the power of the exponent, equivalent to shifting the radix point from its implied position by a number of places equal to the value of the exponent: to the right if the exponent is positive or to the left if the exponent is negative.
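A worked example of that rule, using a hypothetical helper named decode:

```python
# Value = significand * base**exponent; a positive exponent shifts the
# radix point right, a negative exponent shifts it left.
def decode(significand: float, base: int, exponent: int) -> float:
    return significand * base**exponent

print(decode(1.2345, 10, 3))    # ~1234.5   (point moved 3 places right)
print(decode(1.2345, 10, -2))   # ~0.012345 (point moved 2 places left)
```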
In storing such a number, the base (10) need not be stored, since it will be the same for the entire range of supported numbers, and can thus be inferred. The occasions on which infinite expansions occur depend on the base and its prime factors. The way in which the significand (including its sign) and exponent are stored in a computer is implementation-dependent. In this binary expansion, let us denote the positions from 0 (leftmost bit, or most significant bit) to 32 (rightmost bit).
It is used to round the 33-bit approximation to the nearest 24-bit number (there are specific rules for halfway values, which is not the case here).
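A sketch of that rounding step, assuming round-to-nearest, ties-to-even (the IEEE 754 default) and a hypothetical helper round_significand:

```python
# Round an integer significand from `bits` bits down to `target` bits,
# using round-to-nearest, ties-to-even. (A carry out of the top bit,
# which would require bumping the exponent, is not handled here.)
def round_significand(sig: int, bits: int, target: int) -> int:
    shift = bits - target
    truncated = sig >> shift
    remainder = sig & ((1 << shift) - 1)
    halfway = 1 << (shift - 1)
    if remainder > halfway or (remainder == halfway and truncated & 1):
        truncated += 1          # round up (or to even on an exact tie)
    return truncated

# A 33-bit approximation of pi's significand, rounded to 24 bits:
sig33 = int(3.141592653589793 * 2**31)   # 33 bits, since 2 <= pi < 4
print(bin(sig33), bin(round_significand(sig33, 33, 24)))
```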
When this is stored in memory using the IEEE 754 encoding, this becomes the significand s. The significand is assumed to have a binary point to the right of the leftmost bit. It can be required that the most significant digit of the significand of a non-zero number be non-zero (except when the corresponding exponent would be smaller than the minimum one). This process is called normalization.
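A minimal sketch of how such an encoding can be inspected, using Python's standard struct module (the helper name fields is ours):

```python
import struct

def fields(x: float):
    """Split a double into its sign, biased-exponent and significand fields."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF        # 11-bit biased exponent
    significand = bits & ((1 << 52) - 1)   # 52 stored bits; leading 1 implicit
    return sign, exponent, significand

print(fields(1.0))    # (0, 1023, 0): exponent 0 + bias 1023, hidden leading 1
print(fields(-2.5))   # (1, 1024, 0x4000000000000): -1.01b * 2**1
```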
Therefore, it does not need to be represented in memory, allowing the format to have one more bit of precision. This rule is variously called the leading bit convention, the implicit bit convention, the hidden bit convention, or the assumed bit convention. The floating-point representation is by far the most common way of representing an approximation of real numbers in computers. However, there are alternatives, such as fixed-point representation, logarithmic number systems, rational arithmetic, and arbitrary-precision arithmetic. In 1914, Leonardo Torres y Quevedo designed an electro-mechanical version of Charles Babbage's Analytical Engine, and included floating-point arithmetic.
The first commercial computer with floating-point hardware was Zuse's Z4 computer, designed in 1942–1945. The Pilot ACE had binary floating-point arithmetic implemented in software, but with a one megahertz clock rate, the speed of its floating-point and fixed-point operations was initially faster than that of many competing computers. The mass-produced IBM 704 followed in 1954; it introduced the use of a biased exponent. For many decades after that, floating-point hardware was typically an optional feature, and computers that had it were said to be "scientific computers", or to have "scientific computation" (SC) capability (see also Extensions for Scientific Computation (XSC)).
It was not until the launch of the Intel i486 in 1989 that general-purpose personal computers had floating-point capability in hardware as a standard feature. Initially, computers used many different representations for floating-point numbers. The lack of standardization at the mainframe level was an ongoing problem by the early 1970s for those writing and maintaining higher-level source code; these manufacturer floating-point standards differed in the word sizes, the representations, and the rounding behavior and general accuracy of operations.
Floating-point compatibility across multiple computing systems was in desperate need of standardization by the early 1980s, leading to the creation of the IEEE 754 standard once the 32-bit (or 64-bit) word had become commonplace.
This standard was significantly based on a proposal from Intel, which was designing the i8087 numerical coprocessor; Motorola, which was designing the 68000 around the same time, gave significant input as well.
In 1989, mathematician and computer scientist William Kahan was honored with the Turing Award for being the primary architect behind this proposal; he was aided by his student Jerome Coonen and a visiting professor, Harold Stone.
A floating-point number consists of two fixed-point components, whose range depends exclusively on the number of bits or digits in their representation. Whereas the components linearly depend on their range, the floating-point range linearly depends on the significand range and exponentially on the range of the exponent component, which attaches an outstandingly wider range to the number.
On a typical computer system, a double-precision (64-bit) binary floating-point number has a coefficient of 53 bits (including 1 implied bit), an exponent of 11 bits, and 1 sign bit. The number of normalized floating-point numbers in a system (B, P, L, U), where B is the base, P is the precision of the significand in base-B digits, and L and U are the smallest and largest exponents, is 2(B − 1)B^(P−1)(U − L + 1). There are also representable values that are not normalized: namely, positive and negative zeros, as well as denormalized numbers. The IEEE standardized the computer representation for binary floating-point numbers in IEEE 754 (a.k.a. IEC 60559) in 1985. This first standard is followed by almost all modern machines. It was revised in 2008. The standard provides for many closely related formats, differing in only a few details.
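As a quick illustration of the counting formula above, here is a small sketch for a toy system (the helper name normalized_count is ours):

```python
# Count of normalized numbers in a system (B, P, L, U):
# 2 * (B-1) * B**(P-1) * (U - L + 1)
#   = sign choices * leading-digit choices * trailing-digit choices
#     * exponent choices.
def normalized_count(B: int, P: int, L: int, U: int) -> int:
    return 2 * (B - 1) * B**(P - 1) * (U - L + 1)

print(normalized_count(2, 3, -1, 2))   # tiny binary system: 32 values
```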
Five of these formats are called basic formats and others are termed extended formats; three of these are especially widely used in computer hardware and languages: single precision, double precision, and double extended. Increasing the precision of the floating-point representation generally reduces the amount of accumulated round-off error caused by intermediate calculations. Any integer with absolute value less than 2^24 can be exactly represented in the single-precision format, and any integer with absolute value less than 2^53 can be exactly represented in the double-precision format.
Furthermore, a wide range of powers of 2 times such a number can be represented. These properties are sometimes used for purely integer data, to get 53-bit integers on platforms that have double-precision floats but only 32-bit integers.
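For example, in Python (whose float is an IEEE 754 double):

```python
# Every integer with |n| < 2**53 is exact in a double; beyond that,
# the gaps between representable values exceed 1.
print(float(2**53) == 2**53)              # True
print(float(2**53 + 1) == 2**53 + 1)      # False: rounds back to 2**53
print(float(2**53) + 1 == float(2**53))   # True: the +1 is lost
```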
The standard specifies some special values and their representation: positive infinity (+∞), negative infinity (−∞), a negative zero (−0) distinct from ordinary zero, and "not a number" values (NaNs). Comparison of floating-point numbers, as defined by the IEEE standard, is a bit different from usual integer comparison. Negative and positive zero compare equal, and every NaN compares unequal to every value, including itself. Finite floating-point numbers are ordered in the same way as their values in the set of real numbers.
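These comparison rules can be observed in any IEEE 754 environment; for example, in Python:

```python
import math

nan = float("nan")
print(nan == nan)               # False: every NaN is unequal, even to itself
print(-0.0 == 0.0)              # True: negative and positive zero are equal
print(math.copysign(1, -0.0))   # -1.0: yet the sign of -0 is still there
print(float("inf") > 1e308)     # True: infinities order above finite values
```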
Floating-point numbers are typically packed into a computer datum as the sign bit, the exponent field, and the significand or mantissa, from left to right. For the IEEE 754 binary formats (basic and extended) which have extant hardware implementations, they are apportioned as follows: single precision has 1 sign bit, 8 exponent bits, and 23 stored significand bits; double precision has 1, 11, and 52; quadruple precision has 1, 15, and 112.
While the exponent can be positive or negative, in binary formats it is stored as an unsigned number that has a fixed “bias” added to it. Values of all 0s in this field are reserved for the zeros and subnormal numbers ; values of all 1s are reserved for the infinities and NaNs.
Normalized numbers exclude subnormal values, zeros, infinities, and NaNs. In the IEEE binary interchange formats, the leading 1 bit of a normalized significand is not actually stored in the computer datum. It is called the "hidden" or "implicit" bit. Because of this, the single-precision format actually has a significand with 24 bits of precision, the double-precision format has 53, and quad has 113. For example, for a number with unbiased exponent 1, the sum of the exponent bias (127) and the exponent (1) is 128, so this is represented in the single-precision format with an exponent field of 10000000.
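That bias arithmetic is easy to verify with the struct module; for instance, pi = 1.57… × 2^1, so its single-precision exponent field should hold 1 + 127 = 128:

```python
import struct

bits = struct.unpack(">I", struct.pack(">f", 3.14159265))[0]
stored_exponent = (bits >> 23) & 0xFF   # 8-bit biased exponent field
print(stored_exponent)                  # 128
print(stored_exponent - 127)            # 1, the unbiased exponent of pi
```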
This interpretation is useful for visualizing how the values of floating-point numbers vary with the representation, and allows for certain efficient approximations of floating-point operations by integer operations and bit shifts. For example, reinterpreting a float as an integer, taking the negative (or rather subtracting from a fixed number, due to the bias and implicit 1), then reinterpreting as a float yields an approximation of the reciprocal.
Explicitly, ignoring the significand, taking the reciprocal is just taking the additive inverse of the unbiased exponent, since the exponent of the reciprocal is the negative of the original exponent.
Hence one actually subtracts the exponent from twice the bias, which corresponds to unbiasing, taking the negative, and then biasing. For the significand, near 1 the reciprocal is approximately linear: 1/x ≈ 2 − x. More significantly, bit shifting allows one to compute the square (shift left by 1) or take the square root (shift right by 1).
This leads to approximate computations of the square root; combined with the previous technique for taking the inverse, this allows the fast inverse square root computation, which was important in graphics processing in the late 1980s and 1990s. This can be exploited in some other applications, such as volume ramping in digital sound processing. This holds even for the last step from a given exponent, where the significand overflows into the exponent: with the implicit 1, the number after 1.11…1 is exactly 2.0 at the next exponent. Thus as a graph the function is made of linear pieces (as the significand grows for a given exponent) connecting the evenly spaced powers of two (where the significand is 0), with each linear piece having twice the slope of the previous one: each piece takes the same horizontal space, but twice the vertical space of the last.
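The fast inverse square root mentioned above can be sketched in Python, emulating float32 bit-reinterpretation via the struct module; the magic constant 0x5F3759DF is the one popularized by Quake III Arena, and the helper name is ours:

```python
import struct

def fast_inv_sqrt(x: float) -> float:
    """Approximate 1/sqrt(x) via the classic bit-level trick (float32)."""
    i = struct.unpack(">I", struct.pack(">f", x))[0]
    i = 0x5F3759DF - (i >> 1)       # the shift halves (and negates) the
                                    # exponent; the constant fixes bias
                                    # and significand error
    y = struct.unpack(">f", struct.pack(">I", i))[0]
    return y * (1.5 - 0.5 * x * y * y)   # one Newton step refines the guess

print(fast_inv_sqrt(4.0))   # ~0.4991, close to the exact 0.5
```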
Because the exponent is convex up, the value is always greater than or equal to the actual (shifted and scaled) exponential curve through the points with significand 0; by a slightly different shift one can more closely approximate an exponential, sometimes overestimating, sometimes underestimating. Conversely, interpreting a floating-point number as an integer gives an approximate shifted and scaled logarithm, with each piece having half the slope of the last, taking the same vertical space but twice the horizontal space.
Since the logarithm is convex down, the approximation is always less than the corresponding logarithmic curve; again, a different choice of scale and shift yields a closer approximation. In most run-time environments, positive zero is usually printed as "0" and negative zero as "-0". As with any approximation scheme, operations involving "negative zero" can occasionally cause confusion. Subnormal values fill the underflow gap with values where the absolute distance between them is the same as for adjacent values just outside the underflow gap.
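That logarithmic reading of the bit pattern is easy to check; the following sketch (the helper name approx_log2 is ours) reinterprets a float32 as an integer and rescales it:

```python
import math
import struct

# Reinterpreting a float32's bits as an integer gives a shifted, scaled
# log2: bits/2**23 - 127 is a piecewise-linear approximation of log2(x).
def approx_log2(x: float) -> float:
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits / 2**23 - 127

for x in (1.0, 2.0, 3.0, 10.0):
    print(x, approx_log2(x), math.log2(x))
# Exact at powers of two; elsewhere it under-estimates, as argued above.
```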
This is an improvement over the older practice of just having zero in the underflow gap, where underflowing results were replaced by zero (flush to zero). Modern floating-point hardware usually handles subnormal values (as well as normal values), and does not require software emulation for subnormals. The infinities of the extended real number line can be represented in IEEE floating-point datatypes, just like ordinary floating-point values such as 1, 1.5, etc.
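Gradual underflow and the infinities are directly observable; for instance, in Python:

```python
import math
import sys

print(sys.float_info.min)   # 2.2250738585072014e-308, smallest normal double
print(math.ulp(0.0))        # 5e-324, the smallest subnormal double
tiny = sys.float_info.min
print(tiny / 2)             # a subnormal: halving underflows gradually
print(tiny / 2 > 0)         # True: not flushed to zero
print(float("inf") + 1.0)   # inf: infinities behave like ordinary values
```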
They are not error values in any way, though they are often (but not always, as it depends on the rounding) used as replacement values when there is an overflow. Upon a divide-by-zero exception, a positive or negative infinity is returned as an exact result.
In general, NaNs will be propagated, i.e. most operations involving a NaN will result in a NaN. There are two kinds of NaNs: the default quiet NaNs and, optionally, signaling NaNs. A signaling NaN in any arithmetic operation (including numerical comparisons) will cause an "invalid" exception to be signaled.
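Python's floats expose the quiet-NaN propagation directly (signaling NaNs are not surfaced at this level):

```python
import math

nan = float("nan")
# Quiet NaNs propagate through arithmetic rather than raising exceptions,
# so one invalid intermediate result poisons the final result:
print(nan + 1.0)        # nan
print(math.sqrt(nan))   # nan
print(math.isnan(nan))  # True: the reliable way to detect one
```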