An error is any issue that arises unexpectedly and causes a computer to not function properly.
Computers can encounter either software errors or hardware errors.
Software errors are the most common types of errors on a computer and are often fixed with software updates or patches.
Hardware errors are defects in the hardware inside the computer or in devices connected to it.
An error is a message shown to the user, whereas a bug is a problem in the code that caused the error.
A data error may be triggered by human error during data encoding, by failed sensor calibration that records inaccurate readings, or by other glitches.
A modelling error usually occurs in the second step of the modelling process, where the modeller makes simplifying assumptions or chooses equations that are not suitable for the problem, causing the model's results in Step 3 to deviate drastically from reality.
An implementation error arises when computational scientists make logical errors in implementing the model; if such an issue goes unnoticed, the results can have disastrous consequences.
A computational error means that somewhere in the process a value was incorrectly added, subtracted, multiplied, or divided.
Numerical methods provide practical procedures for obtaining numerical solutions of problems to a specified degree of accuracy.
Scientific computing is shaped by the fact that nothing is exact.
A mathematical formula that would give the exact answer with exact inputs might not be robust enough to give an approximate answer with (inevitably) approximate inputs.
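As a minimal illustration of this point, here is a hedged Python sketch (the coefficient values are chosen only for demonstration): the textbook quadratic-root formula is exact algebraically, yet in floating point the smaller root loses most of its accuracy to cancellation, while an algebraically equivalent rearrangement stays robust.

```python
import math

# Solve x^2 + b*x + c = 0 with b = 1e8, c = 1; the exact roots are near -1e8 and -1e-8.
b, c = 1e8, 1.0
disc = math.sqrt(b * b - 4.0 * c)

naive_small_root = (-b + disc) / 2.0         # subtracts two nearly equal numbers
stable_small_root = (2.0 * c) / (-b - disc)  # equivalent form that avoids the cancellation

print(naive_small_root)   # well off the true root -1e-8 due to cancellation
print(stable_small_root)  # close to -1e-8
```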
Accuracy: how closely the computed value agrees with the true value.
Precision: how closely individual computed values agree with each other.
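A small numeric sketch can make the distinction concrete (the measurement values below are made up purely for illustration): accuracy compares values against the true value, while precision compares the values against each other.

```python
from statistics import mean, pstdev

true_value = 10.0
accurate_but_imprecise = [9.4, 10.6, 9.8, 10.2]        # centered on 10.0, widely spread
precise_but_inaccurate = [12.01, 12.02, 11.99, 12.00]  # tightly clustered, far from 10.0

for name, readings in [("accurate/imprecise", accurate_but_imprecise),
                       ("precise/inaccurate", precise_but_inaccurate)]:
    accuracy_error = abs(mean(readings) - true_value)  # distance of the average from the truth
    spread = pstdev(readings)                          # spread of the readings around their own mean
    print(f"{name}: accuracy error = {accuracy_error:.3f}, spread = {spread:.3f}")
```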
Many computer languages allow floating-point numbers to be represented in exponential form, as a decimal fraction multiplied by a power of 10.
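For example, in Python (used here only as one such language), the same value can be written as a decimal fraction times a power of 10 and displayed in exponential notation:

```python
x = 0.00031415
y = 3.1415e-4        # same value written as a decimal fraction times a power of 10

print(x == y)        # True
print(f"{x:e}")      # 3.141500e-04, exponential (scientific) notation
print(f"{x:.4e}")    # 3.1415e-04, four digits after the decimal point
```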
A digit is said to be significant if it is a non-zero digit, a zero lying between two non-zero digits, or a zero that is retained to indicate precision rather than merely used as a placeholder.
In the decimal representation of an approximate number, the nth digit after the decimal point is said to be correct if the absolute error does not exceed half a unit in the nth place.
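This rule can be sketched in code; the helper below is an illustrative assumption rather than a standard routine, checking that the absolute error is at most 0.5 × 10^-n for each decimal place n:

```python
def correct_decimal_places(exact, approx, max_places=15):
    """Count decimal places n with |exact - approx| <= 0.5 * 10**-n (capped at max_places)."""
    error = abs(exact - approx)
    n = 0
    while n < max_places and error <= 0.5 * 10.0 ** -(n + 1):
        n += 1
    return n

# 3.14 approximates pi with error ~0.0016: below 0.005 (half a unit in the
# 2nd decimal place) but above 0.0005, so two decimal digits are correct.
print(correct_decimal_places(3.14159265, 3.14))   # 2
print(correct_decimal_places(3.14159265, 3.142))  # 3
```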
Absolute error is the absolute value of the difference between the exact answer and the computed answer.
Absolute error can be obtained using the mathematical expression: Absolute Error = |correct - result|.
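A one-line sketch of this expression in Python (the names correct and result are illustrative):

```python
correct = 3.141592653589793   # exact (reference) answer
result = 3.14                 # computed answer

absolute_error = abs(correct - result)
print(absolute_error)         # about 0.00159
```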
Relative error, in contrast to absolute error, is the difference divided by the absolute value of the exact answer (assumed to be nonzero), and is often expressed as a percentage.
Relative error can be obtained using the mathematical expression: Relative Error = (Absolute Error / |correct|) x 100%.
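Continuing the same illustrative values, a sketch of the relative-error expression, with the exact answer assumed nonzero:

```python
correct = 3.141592653589793   # exact (reference) answer, assumed nonzero
result = 3.14                 # computed answer

absolute_error = abs(correct - result)
relative_error = absolute_error / abs(correct) * 100  # expressed as a percentage
print(f"{relative_error:.4f}%")                       # about 0.0507%
```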