Software bug
A software bug is a defect in the design or implementation of computer software that causes it to behave in unintended ways. A computer program with many or serious bugs may be described as buggy.
The effects of a software bug range from minor (such as a misspelled word in the user interface) to severe (such as frequent crashing).
In 2002, a study commissioned by the US Department of Commerce's National Institute of Standards and Technology concluded that "software bugs, or errors, are so prevalent and so detrimental that they cost the US economy an estimated $59 billion annually, or about 0.6 percent of the gross domestic product".[1]
Since the 1950s, some computer systems have been designed to detect or auto-correct various software errors during operations.
History
Terminology
Mistake metamorphism (from Greek meta = "change", morph = "form") refers to the evolution of a defect through the software development lifecycle: a "mistake" committed by an analyst in the early stages of the cycle is transformed into a "defect" in the final stage, software deployment.[2]
Different stages of a mistake in the development cycle may be described as mistake,[3]: 31 anomaly,[3]: 10 fault,[3]: 31 failure,[3]: 31 error,[3]: 31 exception,[3]: 31 crash,[3]: 22 glitch, bug,[3]: 14 defect, incident,[3]: 39 or side effect.
Examples
Software bugs have been linked to disasters.
- Software bugs in the Therac-25 radiation therapy machine were directly responsible for patient deaths in the 1980s.[4]
- In 1996, the European Space Agency's US$1 billion prototype Ariane 5 rocket was destroyed less than a minute after launch due to a bug in the on-board guidance computer program.[5]
- In 1994, an RAF Chinook helicopter crashed, killing 29. The crash was initially blamed on pilot error, but was later thought to have been caused by a software bug in the engine-control computer.[6]
- Buggy software caused the early 21st century British Post Office scandal.[7]
Controversy
The use of bug to describe a software defect is sometimes contentious. Some suggest that the term should be abandoned and replaced with defect or error.
Some contend that bug implies that the defect arose on its own and push to use defect instead, since it more clearly connotes that the problem was caused by a human.[8]
Some contend that bug may be used to cover up an intentional design decision. In 2011, after receiving scrutiny from US Senator Al Franken for recording and storing users' locations in unencrypted files,[9] Apple called the behavior a bug. However, Justin Brookman of the Center for Democracy and Technology directly challenged that portrayal, stating "I'm glad that they are fixing what they call bugs, but I take exception with their strong denial that they track users."[10]
Prevention
Preventing bugs as early as possible in the software development process is a target of investment and innovation.[11][12]
Language support
Newer programming languages tend to be designed to prevent common bugs that arise from weaknesses of existing languages. Lessons learned from older languages such as BASIC and C are used to inform the design of later languages such as C# and Rust.
Languages may include features such as a static type system, restricted namespaces and modular programming. For example, in a statically typed, compiled language (like C):
float num = "3";
is syntactically correct, but fails type checking since the right side, a string, cannot be assigned to a float variable. Compilation fails – forcing this defect to be fixed before development progress can resume. With an interpreted language, a failure would not occur until later at runtime.
Some languages exclude features that easily lead to bugs, at the expense of slower performance – the principle being that it is usually better to write simpler, slower correct code than complicated, buggy code. For example, Java does not support pointer arithmetic, which is generally fast but considered dangerous because it makes it relatively easy to introduce a major bug.
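For illustration only (not drawn from the cited sources), a minimal C sketch of how easily pointer arithmetic can go wrong:

#include <stdio.h>

int main(void) {
    int values[4] = {1, 2, 3, 4};
    int *p = values;

    /* The intent is to zero the array, but the loop walks the pointer
       one element past the end. Nothing in C stops this: the write to
       *(p + 4) is undefined behavior and may silently corrupt memory. */
    for (int i = 0; i <= 4; i++) {   /* should be i < 4 */
        *(p + i) = 0;
    }

    printf("%d\n", values[0]);
    return 0;
}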
Some languages include features that add runtime overhead in order to prevent some bugs. For example, many languages include runtime bounds checking and a way to handle out-of-bounds conditions instead of crashing.
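C itself performs no bounds checking, but the idea can be sketched with a hypothetical checked-access helper that reports an out-of-bounds index instead of reading invalid memory (illustrative only):

#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical helper: refuses to read out of bounds and reports the
   condition to the caller instead of letting the program misbehave. */
static bool checked_get(const int *arr, size_t len, size_t index, int *out) {
    if (index >= len) {
        return false;                     /* out-of-bounds condition handled */
    }
    *out = arr[index];
    return true;
}

int main(void) {
    int data[3] = {10, 20, 30};
    int value;

    if (checked_get(data, 3, 5, &value)) {
        printf("value = %d\n", value);
    } else {
        printf("index out of range\n");   /* graceful handling, not a crash */
    }
    return 0;
}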
A compiled language allows for detecting some typos (such as a misspelled identifier) before runtime, which is earlier in the software development process than for an interpreted language.
Techniques
Programming techniques such as programming style and defensive programming are intended to prevent typos. For example, a bug may be caused by a relatively minor typographical error (typo) in the code. The following code executes function foo() only if condition is true:
if (condition) foo();
But this code always executes foo():
if (condition); foo();
A convention that tends to prevent this particular issue is to require braces for a block even if it has just one line.
if (condition) { foo(); }
Enforcement of conventions may be manual (i.e. via code review) or via automated tools.
Specification
Some contend that writing a program specification, which states the intended behavior of a program, can prevent bugs.
Some contend that formal specifications are impractical for anything but the shortest programs, because of problems of combinatorial explosion and indeterminacy.
Software testing
One goal of software testing is to find bugs.
Measurements during testing can provide an estimate of the number of likely bugs remaining. This becomes more reliable the longer a product is tested and developed.[citation needed]
Agile practices
Agile software development may involve frequent software releases with relatively small changes. Defects are revealed by user feedback.
With test-driven development (TDD), unit tests are written while writing the production code, and the production code is not considered complete until all tests complete successfully.
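A minimal sketch of the idea in C, with assert standing in for a unit-testing framework; the add function and its tests are hypothetical examples:

#include <assert.h>

/* Production code under test (hypothetical example). */
static int add(int a, int b) {
    return a + b;
}

/* Tests written alongside the production code; the code is not
   considered complete until they all pass. */
int main(void) {
    assert(add(2, 3) == 5);
    assert(add(-1, 1) == 0);
    assert(add(0, 0) == 0);
    return 0;   /* all tests passed */
}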
Static analysis
Tools for static code analysis help developers by inspecting the program text beyond the compiler's capabilities to spot potential problems. Although in general the problem of finding all programming errors given a specification is not solvable (see halting problem), these tools exploit the fact that human programmers tend to make certain kinds of simple mistakes often when writing software.
Instrumentation
Tools to monitor the performance of the software as it is running, either specifically to find problems such as bottlenecks or to give assurance as to correct working, may be embedded in the code explicitly (perhaps as simple as a statement saying PRINT "I AM HERE"), or provided as tools. It is often a surprise to find where most of the time is taken by a piece of code, and this removal of assumptions might cause the code to be rewritten.
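A minimal C sketch of explicit instrumentation: a trace statement plus a simple timer around a suspected bottleneck (the function name is a hypothetical stand-in):

#include <stdio.h>
#include <time.h>

static void suspected_bottleneck(void) {
    /* Stand-in for the code being measured. */
    for (volatile long i = 0; i < 10000000; i++) { }
}

int main(void) {
    printf("I AM HERE\n");                 /* simple trace statement */

    clock_t start = clock();
    suspected_bottleneck();
    clock_t end = clock();

    printf("elapsed: %.3f s\n",
           (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}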
Open source
Open source development allows anyone to examine source code. A school of thought popularized by Eric S. Raymond as Linus's law says that popular open-source software has more chance of having few or no bugs than other software, because "given enough eyeballs, all bugs are shallow".[13] This assertion has been disputed, however: computer security specialist Elias Levy wrote that "it is easy to hide vulnerabilities in complex, little understood and undocumented source code," because, "even if people are reviewing the code, that doesn't mean they're qualified to do so."[14] An example of an open-source software bug was the 2008 OpenSSL vulnerability in Debian.
Debugging
Debugging can be a significant part of the software development lifecycle. Maurice Wilkes, an early computing pioneer, described his realization in the late 1940s that “a good part of the remainder of my life was going to be spent in finding errors in my own programs”.[15]
A program known as a debugger can help a programmer find faulty code by examining the inner workings of a program such as executing code line-by-line and viewing variable values.
As an alternative to using a debugger, code may be instrumented with logic to output debug information to trace program execution and view values. Output is typically to a console, window, log file, or hardware output (e.g. an LED).
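A minimal C sketch of instrumenting code with debug output; the DEBUG_LOG macro is hypothetical, not part of any standard library:

#include <stdio.h>

/* Hypothetical debug macro: prints file, line, and a named value,
   and compiles away to nothing when NDEBUG is defined. */
#ifndef NDEBUG
#define DEBUG_LOG(msg, value) \
    fprintf(stderr, "%s:%d: %s = %d\n", __FILE__, __LINE__, (msg), (value))
#else
#define DEBUG_LOG(msg, value) ((void)0)
#endif

int main(void) {
    int total = 0;
    for (int i = 1; i <= 3; i++) {
        total += i;
        DEBUG_LOG("total", total);   /* trace intermediate values */
    }
    printf("%d\n", total);
    return 0;
}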
Some contend that locating a bug is something of an art.
It is not uncommon for a bug in one section of a program to cause failures in a different, apparently unrelated part of the system,[citation needed] making it difficult to track down; for example, an error in a graphics rendering routine may cause a file I/O routine to fail.
Sometimes, the most difficult part of debugging is finding the cause of the bug. Once found, correcting the problem is sometimes easy if not trivial.
Sometimes, a bug is not an isolated flaw, but represents an error of thinking or planning on the part of the programmers. Often, such a logic error requires a section of the program to be overhauled or rewritten.
Some contend that as a part of code review, stepping through the code and imagining or transcribing the execution process may often find errors without ever reproducing the bug as such.
Typically, the first step in locating a bug is to reproduce it reliably. If unable to reproduce the issue, a programmer cannot find the cause of the bug and therefore cannot fix it.
Some bugs are revealed by inputs that may be difficult for the programmer to re-create. One cause of the Therac-25 radiation machine deaths was a bug (specifically, a race condition) that occurred only when the machine operator very rapidly entered a treatment plan; it took days of practice to become able to do this, so the bug did not manifest in testing or when the manufacturer attempted to duplicate it. Other bugs may stop occurring whenever the setup is augmented to help find the bug, such as running the program with a debugger; these are called heisenbugs (humorously named after the Heisenberg uncertainty principle).
Since the 1990s, particularly following the Ariane 5 Flight 501 disaster, interest in automated aids to debugging rose, such as static code analysis by abstract interpretation.[16]
Often, bugs come about during coding, but faulty design documentation may cause a bug. In some cases, changes to the code may eliminate the problem even though the code then no longer matches the documentation.
In an embedded system, the software is often modified to work around a hardware bug since it's cheaper than modifying the hardware.
Management
Bugs are managed via activities like documenting, categorizing, assigning, reproducing, correcting and releasing the corrected code.
Tools are often used to track bugs and other issues with software. Typically, different tools are used by the software development team to track their workload than by customer service to track user feedback.[17]
A tracked item is often called a bug, defect, ticket, issue, or feature, or, for agile software development, a story or epic. Items are often categorized by aspects such as severity, priority and version number.
In a process sometimes called triage, choices are made for each bug about whether and when to fix it based on information such as the bug's severity and priority and external factors such as development schedules. Triage generally does not include investigation into cause. Triage may occur regularly and generally consists of reviewing new bugs since the previous triage, and possibly all open bugs. Attendees may include the project manager, development manager, test manager, build manager, and technical experts.[18][19]
Severity
Severity is a measure of the impact a bug has.[20] This impact may be data loss, financial loss, loss of goodwill, or wasted effort. Severity levels are not standardized, but differ by context such as industry and tracking tool. For example, a crash in a video game has a different impact than a crash in a bank server. Severity levels might be crash or hang, no workaround (user cannot accomplish a task), has workaround (user can still accomplish the task), visual defect (a misspelling for example), or documentation error. Another example set of severities: critical, high, low, blocker, trivial.[21] The severity of a bug may be a separate category from its priority for fixing, or the two may be quantified and managed separately.
A bug severe enough to delay the release of the product is called a show stopper.[22][23]
Priority
Priority describes the importance of resolving the bug in relation to other bugs. Priorities might be numerical, such as 1 through 5, or named, such as critical, high, low, and deferred. The values might be similar or identical to severity ratings, even though priority is a different aspect.
Priority may be a combination of the bug's severity with the level of effort to fix. A bug with low severity but easy to fix may get a higher priority than a bug with moderate severity that requires significantly more effort to fix.
Patch
Bugs of sufficiently high priority may warrant a special release which is sometimes called a patch.
Maintenance release
A software release that emphasizes bug fixes may be called a maintenance release – to differentiate it from a release that emphasizes new features or other changes.
Known issue
It is common practice to release software with known, low-priority bugs or other issues. Possible reasons include but are not limited to:
- A deadline must be met and resources are insufficient to fix all bugs by the deadline[24]
- The bug is already fixed in an upcoming release, and it is not of high priority
- The changes required to fix the bug are too costly or affect too many other components, requiring a major testing activity
- It may be suspected, or known, that some users are relying on the existing buggy behavior; a proposed fix may introduce a breaking change
- The problem is in an area that will be obsolete with an upcoming release; fixing it is unnecessary
- "It's not a bug, it's a feature"[25] A misunderstanding exists between expected and actual behavior or undocumented feature
Implications
The amount and type of damage a software bug may cause affects decision-making, processes and policy regarding software quality. In applications such as human spaceflight, aviation, nuclear power, health care, public transport or automotive safety, since software flaws have the potential to cause human injury or even death, such software will have far more scrutiny and quality control than, for example, an online shopping website. In applications such as banking, where software flaws have the potential to cause serious financial damage to a bank or its customers, quality control is also more important than, say, a photo editing application.
Other than the damage caused by bugs, some of their cost is due to the effort invested in fixing them. In 1978, Lientz et al. showed that the median project invests 17 percent of its development effort in bug fixing.[26] In 2020, research on GitHub repositories showed the median is 20%.[27]
Cost
In 1994, NASA's Goddard Space Flight Center managed to reduce their average number of errors from 4.5 per 1000 lines of code (SLOC) down to 1 per 1000 SLOC.[28]
Another study in 1990 reported that exceptionally good software development processes can achieve deployment failure rates as low as 0.1 per 1000 SLOC.[29] This figure is cited in literature such as Code Complete by Steve McConnell,[30] and the NASA study on Flight Software Complexity.[31] Some projects even attained zero defects: the firmware in the IBM Wheelwriter typewriter, which consists of 63,000 SLOC, and the Space Shuttle software with 500,000 SLOC.[29]
Benchmark
To facilitate reproducible research on testing and debugging, researchers use curated benchmarks of bugs:
- the Siemens benchmark
- ManyBugs[32] is a benchmark of 185 C bugs in nine open-source programs.
- Defects4J[33] is a benchmark of 341 Java bugs from 5 open-source projects. It contains the corresponding patches, which cover a variety of patch types.
Types
Some notable types of bugs:
Design error
A bug can be caused by insufficient or incorrect design based on the specification. For example, given that the specification is to alphabetize a list of words, a design bug might occur if the design does not account for symbols, resulting in incorrect alphabetization of words containing symbols.
Arithmetic
Numerical operations can result in unexpected output, slow processing, or crashing.[34] Such a bug can arise from a lack of awareness of the qualities of the data storage, such as a loss of precision due to rounding, numerically unstable algorithms, or arithmetic overflow and underflow. It can also arise from a lack of awareness of how calculations are handled by different programming languages; for example, division by zero may throw an exception in some languages and return a special value such as NaN or infinity in others.
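A few of these behaviors can be seen in a short C sketch (illustrative only, assuming a typical 32-bit int and IEEE 754 doubles):

#include <stdio.h>
#include <limits.h>

int main(void) {
    /* Arithmetic overflow: signed overflow is undefined behavior in C;
       on typical two's-complement hardware the value silently wraps. */
    int big = INT_MAX;
    printf("INT_MAX + 1 may wrap to %d\n", big + 1);

    /* Loss of precision: 0.1 has no exact binary representation, so
       repeated addition drifts away from the mathematically exact 1.0. */
    double sum = 0.0;
    for (int i = 0; i < 10; i++) {
        sum += 0.1;
    }
    printf("ten additions of 0.1 give %.17f\n", sum);

    /* Division by zero: IEEE 754 floating point yields infinity rather
       than an error; integer division by zero would be undefined. */
    double zero = 0.0;
    printf("1.0 / 0.0 = %f\n", 1.0 / zero);

    return 0;
}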
Control flow
A control flow bug, also known as a logic error, is characterized by code that does not fail with an error but does not behave as expected; examples include infinite loops, infinite recursion, an incorrect comparison in a conditional (such as using the wrong comparison operator), and the off-by-one error.
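For example, a classic off-by-one error in C (an illustrative sketch): the loop is intended to sum all five elements of the array but silently skips the last one.

#include <stdio.h>

int main(void) {
    int scores[5] = {10, 20, 30, 40, 50};
    int total = 0;

    /* Off-by-one: the condition should be i < 5 (or i <= 4).
       As written, the last element is never added, yet the code
       runs without any error message. */
    for (int i = 0; i < 4; i++) {
        total += scores[i];
    }

    printf("total = %d\n", total);   /* prints 100 instead of 150 */
    return 0;
}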
Interfacing
- Incorrect API usage.
- Incorrect protocol implementation.
- Incorrect hardware handling.
- Incorrect assumptions of a particular platform.
- Incompatible systems. A new API or communications protocol may seem to work when two systems use different versions, but errors may occur when a function or feature implemented in one version is changed or missing in another. In production systems which must run continually, shutting down the entire system for a major update may not be possible, such as in the telecommunication industry[35] or the internet.[36][37][38] In this case, smaller segments of a large system are upgraded individually, to minimize disruption to a large network. However, some sections could be overlooked and not upgraded, and cause compatibility errors which may be difficult to find and repair.
- Incorrect code annotations.
Concurrency
- Deadlock – a task cannot continue until a second finishes, but at the same time, the second cannot continue until the first finishes.
- Race condition – multiple simultaneous tasks compete for a shared resource, and the outcome depends on unpredictable timing (see the sketch after this list).
- Errors in critical sections, mutual exclusions and other features of concurrent processing. Time-of-check-to-time-of-use (TOCTOU) is a form of unprotected critical section.
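As an illustration of a race condition, a minimal C sketch using POSIX threads (illustrative only; requires a POSIX system and linking with -lpthread): two threads increment a shared counter without synchronization, so updates are lost.

#include <stdio.h>
#include <pthread.h>

static long counter = 0;   /* shared, unprotected state */

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        counter++;         /* read-modify-write: not atomic */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* Expected 2000000, but lost updates usually make it smaller.
       Guarding counter++ with a pthread_mutex_t would fix the race. */
    printf("counter = %ld\n", counter);
    return 0;
}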
Resourcing
- Null pointer dereference.
- Using an uninitialized variable.
- Using an otherwise valid instruction on the wrong data type (see packed decimal/binary-coded decimal).
- Access violations.
- Resource leaks, where a finite system resource (such as memory or file handles) becomes exhausted by repeated allocation without release.
- Buffer overflow, in which a program tries to store data past the end of allocated storage. This may or may not lead to an access violation or storage violation. These are frequently security bugs.
- Excessive recursion which—though logically valid—causes stack overflow.
- Use-after-free error, where a pointer is used after the system has freed the memory it references (see the sketch after this list).
- Double free error.
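A minimal C sketch of a use-after-free error (illustrative only; the behavior is undefined and may appear to work, crash, or corrupt data):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *name = malloc(16);
    if (name == NULL) {
        return 1;
    }
    strcpy(name, "example");

    free(name);               /* the memory is returned to the allocator */

    /* Use after free: the pointer still holds the old address, but the
       memory it references is no longer valid. This is undefined
       behavior; it may appear to work, crash, or corrupt other data. */
    printf("%s\n", name);

    /* A common defensive practice is to set pointers to NULL right
       after freeing them, so later misuse fails fast and visibly. */
    return 0;
}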
Syntax
- Use of the wrong token, such as performing assignment instead of an equality test. For example, in some languages x=5 will set the value of x to 5 while x==5 will check whether x is currently 5 or some other number. Interpreted languages allow such code to fail. Compiled languages can catch such errors before testing begins (see the sketch below).
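A minimal C sketch of the assignment-versus-equality mistake; most modern compilers will at least warn about it:

#include <stdio.h>

int main(void) {
    int x = 3;

    /* Intended: if (x == 5). The single '=' assigns 5 to x and the
       condition evaluates the assigned value (5, i.e. true), so the
       branch is always taken and x is silently changed. */
    if (x = 5) {
        printf("x is 5\n");
    }

    printf("x = %d\n", x);   /* now prints 5, not 3 */
    return 0;
}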
Teamwork
- Unpropagated updates; e.g. programmer changes "myAdd" but forgets to change "mySubtract", which uses the same algorithm. These errors are mitigated by the Don't Repeat Yourself philosophy.
- Comments out of date or incorrect: many programmers assume the comments accurately describe the code.
- Differences between documentation and product.
In politics
[edit]"Bugs in the System" report
The Open Technology Institute, run by the group New America,[39] released a report, "Bugs in the System", in August 2016 stating that U.S. policymakers should make reforms to help researchers identify and address software bugs. The report "highlights the need for reform in the field of software vulnerability discovery and disclosure."[40] One of the report's authors said that Congress has not done enough to address cyber software vulnerability, even though Congress has passed a number of bills to combat the larger issue of cyber security.[40]
Government researchers, companies, and cyber security experts are the people who typically discover software flaws. The report calls for reforming computer crime and copyright laws.[40]
The Computer Fraud and Abuse Act, the Digital Millennium Copyright Act and the Electronic Communications Privacy Act criminalize and create civil penalties for actions that security researchers routinely engage in while conducting legitimate security research, the report said.[40]
In popular culture
- In video gaming, the term "glitch" is sometimes used to refer to a software bug. An example is the glitch and unofficial Pokémon species MissingNo.
- In both the 1968 novel 2001: A Space Odyssey and the corresponding film of the same name, the spaceship's onboard computer, HAL 9000, attempts to kill all its crew members. In the follow-up 1982 novel, 2010: Odyssey Two, and the accompanying 1984 film, 2010: The Year We Make Contact, it is revealed that this action was caused by the computer having been programmed with two conflicting objectives: to fully disclose all its information, and to keep the true purpose of the flight secret from the crew; this conflict caused HAL to become paranoid and eventually homicidal.
- In the English version of Nena's 1983 song 99 Luftballons (99 Red Balloons), as a result of "bugs in the software", a group of 99 red balloons is mistaken for an enemy nuclear missile launch, requiring an equivalent launch response and resulting in catastrophe.
- In the 1999 American comedy Office Space, three employees attempt (unsuccessfully) to exploit their company's preoccupation with the Y2K computer bug using a computer virus that sends rounded-off fractions of a penny to their bank account—a long-known technique described as salami slicing.
- The 2004 novel The Bug, by Ellen Ullman, is about a programmer's attempt to find an elusive bug in a database application.[41]
- The 2008 Canadian film Control Alt Delete is about a computer programmer at the end of 1999 struggling to fix bugs at his company related to the year 2000 problem.
See also
- Anti-pattern
- Automatic bug fixing
- Bug bounty program
- Glitch removal
- Hardware bug
- ISO/IEC 9126, which classifies a bug as either a defect or a nonconformity
- List of software bugs
- Orthogonal Defect Classification
- Racetrack problem
- RISKS Digest
- Single-event upset
- Software defect indicator
- Software regression
- Software rot
- VUCA
References
[edit]- ^ "Software bugs cost US economy dear". June 10, 2009. Archived from the original on June 10, 2009. Retrieved September 24, 2012.
- ^ "Testing experience : te : the magazine for professional testers". Testing Experience. Germany: testingexperience: 42. March 2012. ISSN 1866-5705. (subscription required)
- ^ a b c d e f g h i 610.12-1990: IEEE Standard Glossary of Software Engineering Terminology. IEEE. December 31, 1990. doi:10.1109/IEEESTD.1990.101064. ISBN 978-0-7381-0391-4.
- ^ Leveson, Nancy G.; Turner, Clark S. (1 July 1993). "An Investigation of the Therac-25 Accidents". Computer. 26 (7). IEEE Computer Society: 18–41. doi:10.1109/MC.1993.274940. eISSN 1558-0814. ISSN 0018-9162. LCCN 74648480. OCLC 2240099. S2CID 9691171.
- ^ "ARIANE 5 Flight 501 Failure Report by the Inquiry Board". The European Space Agency. Ariane 501 Inquiry Board report (33–1996). July 23, 1996.
- ^ Simon Rogerson (April 2002). "The Chinook Helicopter Disaster". IMIS Journal. 12 (2). Archived from the original on September 15, 1993. Retrieved May 27, 2024. Alt URL
- ^ "Post Office scandal ruined lives, inquiry hears". BBC News. February 14, 2022.
- ^ "News at SEI September 1999". SEI Interactive. 2 (3). Carnegie Mellon University: Software Engineering Institute. September 1, 1999.
- ^ Gregg Keizer (April 21, 2011). "Apple faces questions from Congress about iPhone tracking". Computerworld.
- ^ Gregg Keizer (April 27, 2011). "Apple denies tracking iPhone users, but promises changes". Computerworld.
- ^ Dorota Huizinga; Adam Kolawa (September 2007). Automated Defect Prevention: Best Practices in Software Management. Wiley-IEEE Computer Society Press. ISBN 978-0-470-04212-0.
- ^ McDonald, Marc; Musson, Robert; Smith, Ross (2007). The Practical Guide to Defect Prevention. Microsoft Press. p. 480. ISBN 978-0-7356-2253-1.
- ^ "Release Early, Release Often" Archived May 14, 2011, at the Wayback Machine, Eric S. Raymond, The Cathedral and the Bazaar
- ^ "Wide Open Source" Archived September 29, 2007, at the Wayback Machine, Elias Levy, SecurityFocus, April 17, 2000
- ^ "Maurice Wilkes Quotes". QuoteFancy. Retrieved April 28, 2024.
- ^ "PolySpace Technologies history". christele.faure.pagesperso-orange.fr. Retrieved August 1, 2019.
- ^ Allen, Mitch (May–June 2002). "Bug Tracking Basics: A beginner's guide to reporting and tracking defects". Software Testing & Quality Engineering Magazine. Vol. 4, no. 3. pp. 20–24. Retrieved December 19, 2017.
- ^ Rex Black (2002). Managing The Testing Process (2nd ed.). Wiley India Pvt. Limited. p. 139. ISBN 978-8126503131. Retrieved June 19, 2021.
- ^ Chris Vander Mey (2012). Shipping Greatness - Practical Lessons on Building and Launching Outstanding Software, Learned on the Job at Google and Amazon. O'Reilly Media. pp. 79–81. ISBN 978-1449336608.
- ^ Soleimani Neysiani, Behzad; Babamir, Seyed Morteza; Aritsugi, Masayoshi (October 1, 2020). "Efficient feature extraction model for validation performance improvement of duplicate bug report detection in software bug triage systems". Information and Software Technology. 126: 106344. doi:10.1016/j.infsof.2020.106344. S2CID 219733047.
- ^ "5.3. Anatomy of a Bug". bugzilla.org. Archived from the original on May 23, 2013.
- ^ Jones, Wilbur D. Jr., ed. (1989). "Show stopper". Glossary: defense acquisition acronyms and terms (4 ed.). Fort Belvoir, Virginia: Department of Defense, Defense Systems Management College. p. 123. hdl:2027/mdp.39015061290758 – via Hathitrust.
- ^ Zachary, G. Pascal (1994). Show-stopper!: the breakneck race to create Windows NT and the next generation at Microsoft. New York: The Free Press. p. 158. ISBN 0029356717 – via archive.org.
- ^ "The Next Generation 1996 Lexicon A to Z: Slipstream Release". Next Generation. No. 15. March 1996. p. 41.
- ^ Carr, Nicholas (2018). "'It's Not a Bug, It's a Feature.' Trite – or Just Right?". wired.com.
- ^ Lientz, B. P.; Swanson, E. B.; Tompkins, G. E. (1978). "Characteristics of Application Software Maintenance". Communications of the ACM. 21 (6): 466–471. doi:10.1145/359511.359522. S2CID 14950091.
- ^ Amit, Idan; Feitelson, Dror G. (2020). "The Corrective Commit Probability Code Quality Metric". arXiv:2007.10912 [cs.SE].
- ^ "An Overview of the Software Engineering Laboratory" (PDF). Software Engineering Laboratory Series (SEL-94-005). December 1994.
- ^ a b Cobb, Richard H.; Mills, Harlan D. (1990). "Engineering software under statistical quality control". IEEE Software. 7 (6): 46. doi:10.1109/52.60601. ISSN 1937-4194. S2CID 538311 – via University of Tennessee – Harlan D. Mills Collection.
- ^ McConnell, Steven C. (1993). Code Complete. Redmond, Washington: Microsoft Press. p. 611. ISBN 978-1556154843 – via archive.org. (Cobb and Mills 1990)
- ^ Gerard Holzmann (March 5, 2009). "Appendix D – Software Complexity" (PDF). Final Report: NASA Study on Flight Software Complexity (Daniel L. Dvorak (Ed.)). NASA Office of Chief Engineer Technical Excellence Program.
- ^ Le Goues, Claire; Holtschulte, Neal; Smith, Edward K.; Brun, Yuriy; Devanbu, Premkumar; Forrest, Stephanie; Weimer, Westley (2015). "The ManyBugs and IntroClass Benchmarks for Automated Repair of C Programs". IEEE Transactions on Software Engineering. 41 (12): 1236–1256. doi:10.1109/TSE.2015.2454513. ISSN 0098-5589.
- ^ Just, René; Jalali, Darioush; Ernst, Michael D. (2014). "Defects4J: a database of existing faults to enable controlled testing studies for Java programs". Proceedings of the 2014 International Symposium on Software Testing and Analysis – ISSTA 2014. pp. 437–440. CiteSeerX 10.1.1.646.3086. doi:10.1145/2610384.2628055. ISBN 9781450326452. S2CID 12796895.
- ^ Anthony Di Franco; Hui Guo; Cindy Rubio-González (November 23, 2017). A comprehensive study of real-world numerical bug characteristics. 2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE. doi:10.1109/ASE.2017.8115662.
- ^ Kimbler, K. (1998). Feature Interactions in Telecommunications and Software Systems V. IOS Press. p. 8. ISBN 978-90-5199-431-5.
- ^ Syed, Mahbubur Rahman (2001). Multimedia Networking: Technology, Management and Applications: Technology, Management and Applications. Idea Group Inc (IGI). p. 398. ISBN 978-1-59140-005-9.
- ^ Wu, Chwan-Hwa (John); Irwin, J. David (2016). Introduction to Computer Networks and Cybersecurity. CRC Press. p. 500. ISBN 978-1-4665-7214-0.
- ^ RFC 1263: "TCP Extensions Considered Harmful" quote: "the time to distribute the new version of the protocol to all hosts can be quite long (forever in fact). ... If there is the slightest incompatibly between old and new versions, chaos can result."
- ^ Wilson, Andi; Schulman, Ross; Bankston, Kevin; Herr, Trey. "Bugs in the System" (PDF). Open Policy Institute. Archived (PDF) from the original on September 21, 2016. Retrieved August 22, 2016.
- ^ a b c d Rozens, Tracy (August 12, 2016). "Cyber reforms needed to strengthen software bug discovery and disclosure: New America report – Homeland Preparedness News". Retrieved August 23, 2016.
- ^ Ullman, Ellen (2004). The Bug. Picador. ISBN 978-1-250-00249-5.
External links
[edit]- "Common Weakness Enumeration" – an expert webpage focus on bugs, at NIST.gov
- BUG type of Jim Gray – another Bug type
- Picture of the "first computer bug" at the Wayback Machine (archived January 12, 2015)
- "The First Computer Bug!" – an email from 1981 about Adm. Hopper's bug
- "Toward Understanding Compiler Bugs in GCC and LLVM". A 2016 study of bugs in compilers