-1- UTA CSE
The Mythical Man-Month: Essays on
Software Engineering
by
Frederick P. Brooks, Jr.
•Brooks => manager of architecture and operating system (OS360) development for the IBM 360; he was also involved in the design and implementation of the IBM Stretch.
•OS360 is an important product since it introduced several innovative
ideas such as:
•Device independent I/O
•External Library Management
•This book is for professional managers of software development
projects.
•Brooks believes large program development management suffers
from problems related to division of labor.
•Brooks says the critical problem is preserving the conceptual integrity of the product itself => he proposes methods for preserving this unity in Chpt's 2-7.
•Chpt 1: The Tar Pit
•Large system programming is like a tar pit:
•The more you struggle, the more you sink in it.
•No one thing seems to be the difficulty, but as a whole, the
programmers get trapped in it and the project dies (e.g., we can take
one paw or hand out of the tar pit, but not our whole body).
•A programming system product must be:
–1) General (for input, as well as algorithms).
–2) Thoroughly tested so it can be depended upon. This increases costs by as much as 3 times over untested products.
–3) Documented so it can be used, fixed, and extended.
–4) Finally, a programming product becomes a component of a software
system (it works with other programs and can be called or interrupted) =>
I/O interfaces become very important => a programming product must be
tested in all possible combinations with other system components. This
is very difficult => A programming system product as a system
component costs at least 3 times as much as the same program as a
programming product.
–5) Programming product must be within allotted budget.
•A programming system product therefore costs about 9 times as much as a simple stand-alone program (3× to make it a tested product × 3× to make it a system component).
•Chpt 2: The Mythical Man-Month
•What one factor has caused the demise of more software projects than any other?
•Poor scheduling and time estimation!
•Problems with scheduling and timing estimations in a software
project:
–1) Estimating the completion time of a software project is typically very difficult.
–2) In estimating completion time we often assume that effort and progress are the same, but that is not true; in particular, we tend to interchange MAN and MONTH.
–3) Scheduled progress is poorly monitored.
–4) When software projects get behind the scheduled time, the typical
solution is to add more MAN-MONTH or manpower to the software project.
•In software projects we always assume that everything will go well!
•In a non-software engineering project we often run into unforeseen problems that are intractable, and therefore managers tend to over-estimate time requirements to compensate for them.
•In software engineering, however, we tend to think of programming as something tractable and well documented. This assumption is not correct: software always contains bugs, and although individual bugs are fixable, they are very difficult to detect, which makes the problem almost intractable.
•The second problem with estimating completion time of software
projects is with the unit of time used to estimate the completion of
the project (i.e., MAN-MONTH).
•There is a fallacy in this unit: it is true that the cost of the project increases in proportion to the MAN-MONTHS expended, but progress is not achieved in proportion to the number of MAN-MONTHS used.
•Therefore, it is a false assumption that by adding more MAN-MONTHS you can complete the project earlier.
•MAN and MONTH are in fact interchangeable only if the project consists of several independent efforts (INDEPENDENT PARTITIONED TASKS) and the team members do not have to communicate with each other.
•e.g., for perfectly partitioned tasks:
    Men   Months   Man-Months
     5      4         20
    10      2         20
•(×2 men, ÷2 months: doubling the men halves the months)
•In fact, one of the problems with software engineering projects is
that as we add more MAN-MONTHS, the amount of communication
among the men is going to increase rapidly.
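The "rapid increase" can be made precise: among N people there are N(N-1)/2 distinct pairs, so potential communication paths grow quadratically with team size. A minimal sketch:

```python
def communication_pairs(n):
    """Number of distinct pairs among n people: n(n-1)/2."""
    return n * (n - 1) // 2

# Doubling the team roughly quadruples the communication paths:
for n in (5, 10, 20):
    print(n, communication_pairs(n))  # 5 -> 10, 10 -> 45, 20 -> 190
```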
•Therefore, by adding more MAN-MONTHS, we are, in fact, increasing
the completion time of the software project because the
communication is going to affect the completion time of the project.
•Thus, the assumption that MAN and MONTH are interchangeable is
not correct.
•Another very important factor that affects our estimation of
completion time of software projects is the system testing and
debugging process.
•Brooks' rule of thumb for estimating the completion time of software
projects:
–1/3 time for planning;
–1/6 time for coding;
–1/4 time for component tests;
–1/4 time for system test with all components in hand.
•Note that the testing and debugging takes about 50% of the time,
while coding, which people often worry about, only takes about 1/6
of the time.
•Actually, coding is the easiest phase to accurately estimate!
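Brooks' fractions can be applied mechanically to any total schedule; a minimal sketch (the 12-month figure is a hypothetical example, not from the text):

```python
from fractions import Fraction

# Brooks' rule-of-thumb schedule fractions.
BROOKS_SPLIT = {
    "planning": Fraction(1, 3),
    "coding": Fraction(1, 6),
    "component test": Fraction(1, 4),
    "system test": Fraction(1, 4),
}

def allocate(total_months):
    """Split a total schedule according to Brooks' fractions."""
    return {phase: total_months * f for phase, f in BROOKS_SPLIT.items()}

schedule = allocate(12)  # a hypothetical 12-month project
# planning: 4, coding: 2, component test: 3, system test: 3
# -> testing consumes 6 of 12 months, i.e. half the schedule
```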
•Underestimating the time it takes for system testing is very dangerous because it comes at the end of the project:
–the project is already fully funded and has the most manpower allocated to it,
–the customer is ready to receive the product,
–any delay at that point increases the cost of the system dramatically,
–and it reduces the confidence of the customer.
•One other problem with scheduling software projects is that software managers often estimate completion time according to the customer's desired date.
•Instead of making the schedule realistic, they try to match what the customer wants and therefore get into trouble later.
•Brooks' Law:
–Adding manpower to a late software project makes
it later.
•The number of months of a project depends upon its sequential
constraints. The maximum number of men depends upon the
number of independent subtasks. From these two quantities one
can derive schedules using different number of men and months.
•Let MMc be the average effort expended by a pair of persons working on the project in communicating with each other (determined by the nature of the project, not by N). Then:
–Assume no organizational isolation, i.e., complete pairwise communication among team members.
•MMn = Effort required without communication
•MM = Total effort including communication:
•Note: this model is appropriate for task-force, committee, and democratic organizations; not so for others.
ΣMMc = Pairs × MMc = (N(N-1)/2) × MMc
MM = MMn + (N(N-1)/2) × MMc = N × T
T = MMn/N + ((N-1)/2) × MMc
T ≈ MMn/N + (N/2) × MMc   (if N >> 1; let H = MMc/2)
T ≈ MMn/N + H × N
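The effort and time model can be evaluated directly; the numbers below (MMn = 100, MMc = 1) are illustrative, matching the worked example that follows:

```python
def total_effort(mmn, mmc, n):
    """MM = MMn + N(N-1)/2 * MMc: base effort plus pairwise communication."""
    return mmn + n * (n - 1) / 2 * mmc

def completion_time(mmn, mmc, n):
    """T = MM / N: calendar time with N workers in parallel."""
    return total_effort(mmn, mmc, n) / n

# With communication cost, adding workers eventually slows the project:
for n in (5, 10, 15, 20, 30):
    print(n, round(completion_time(100, 1, n), 2))
# 5 -> 22.0, 10 -> 14.5, 15 -> 13.67, 20 -> 14.5, 30 -> 17.83
```

Note the minimum near N = 14: beyond it, each extra worker adds more communication effort than working capacity.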
•FINDING THE OPTIMUM TASK STAFFING
–(Time to completion minimized):
T = MMn/N + (N/2) × MMc
dT/dN = -MMn/N^2 + MMc/2 (set to 0 for minimum T):
MMn/N^2 = MMc/2
N_opt = sqrt(2 × MMn/MMc) = 1.41 × sqrt(MMn/MMc)
•Example: a 100 Man-Month task requires an average of 1 Man-Month of communication effort per pair of project personnel. N_opt = ?
•With the 100 MM task-effort estimate, wouldn't it be a shocker if no consideration of communication was made!
N_opt = 1.41 × sqrt(MMn/MMc) = 1.41 × sqrt(100/1) ≈ 14.1 persons
T_opt = 100/14.1 + (14.1 × 1)/2 ≈ 14.14 months
ΣMM = T_opt × N_opt ≈ 14.14 × 14.1 ≈ 200 Man-Months
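The example's numbers can be checked by evaluating the formulas directly; this sketch just plugs in MMn = 100 and MMc = 1:

```python
import math

mmn = 100.0  # task effort, man-months
mmc = 1.0    # communication effort per pair, man-months

n_opt = math.sqrt(2 * mmn / mmc)       # = 1.41 * sqrt(100) ≈ 14.1 persons
t_opt = mmn / n_opt + n_opt * mmc / 2  # ≈ 14.14 months
mm_total = t_opt * n_opt               # ≈ 200 man-months

print(round(n_opt, 1), round(t_opt, 2), round(mm_total))  # 14.1 14.14 200
```

Total effort comes out at 200 man-months against the 100 originally estimated: communication doubled the cost.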
•Chpt 3: The Surgical Team
•Many program managers believe that they are better off having a very small number of sharp programmers on their team than a large number of mediocre programmers.
•However, the problem with such an organization is that with so few people in a group you cannot build large system applications and system programs.
•For example, OS360 took about 5,000 MAN-YEARS to complete, so a team of only 200 programmers would have needed 25 years to complete the project!
•In reality it took about 3 years, and at times more than 1,000 men were working on the project.
•Now assume that instead of hiring 200 programmers we hire 10 very smart and productive ones. Say they achieve an improvement factor of 7 in productivity for programming and documentation, and we also achieve an improvement factor of 7 in reduced communication among the programmers.
•Even then, the 5,000 MAN-YEAR job needs about 10 years to complete (5,000/(10×7×7)).
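The arithmetic of this comparison, as a quick sketch (figures are the ones assumed in the notes):

```python
# Hypothetical comparison from the text: a 5,000 man-year job.
task_man_years = 5000

# 200 average programmers, ignoring communication overhead:
years_with_200 = task_man_years / 200          # 25 years

# 10 sharp programmers: 7x productivity, and a further 7x gain
# from reduced communication (the text's assumed factors):
years_with_10 = task_man_years / (10 * 7 * 7)  # ~10.2 years

print(years_with_200, round(years_with_10, 1))
```

Even under these generous assumptions, a tiny elite team still needs a decade; hence the need for the organizational solution below.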
•Solution:
•Large programming jobs must be broken into smaller subtasks and
each programmer will work on a subtask.
•But as a whole, the team must act like a surgical team, in other
words, there would be a team leader or chief programmer (surgeon),
who is in charge of the whole thing and other team members try to
help him complete the project (surgery).
•Chief Programmer (and Co-Pilot)
–Administrator (handles money, people, space, machines, etc.)
»Secretary
–Editor (Supports the Chief Programmer's documentation preparation)
»Secretary
–Program Clerk (Maintains code and all technical records)
–Toolsmith (Serves the chief programmer's needs for utilities, proc's,
macros, etc.)
–Tester (Devises system component tests from the functional spec's,
Assists in day to day debugging, etc.)
•Difference between a surgical team and a team of collaborating
colleagues:
–In the surgical team, the whole project is the brainchild of the surgeon and
he/she is in charge of the project. The surgeon (or chief programmer) is
in charge of keeping the conceptual integrity of the project.
–In a collaborative team, on the other hand, everyone is equal and
everyone can have input into how the project should change and how the
integrity should be preserved, and that sometimes causes chaos.
•Scaling up: How does one apply the surgical team organization to a
project which involves hundreds or thousands of team members?
–The obvious choice is to use a hierarchical organization in which the large
system is broken into many subsystems and there are surgical teams
assigned to each subsystem.
–Of course, there has to be coordination between the subsystems, and that coordination takes place between the subsystem team leaders (a few chief programmers).
•Chpt 4: Conceptual Integrity
•Conceptual integrity is probably the most important aspect of large
system programming.
•It is better to have one good idea and carry it through the project
than having several uncoordinated good ideas.
•It is important that all the aspects of the system follow the same
philosophy and the same conceptual integrity throughout the
system. Thus, by doing so, the user or the customer only needs to
be aware of or know one concept.
•The conceptual integrity of the whole system is preserved by having
one system architect who designs the system from top-to-bottom.
•It should be noted that the system architect should stick with the
design of the system and should not get involved with the
implementation.
•This is because implementation and design are two separate phases
of the project.
•When we talk about the architect and architecture of a system we are
referring to the complete and detailed specifications of all the user
interfaces for the software system.
•The implementation of the system, on the other hand, is what goes
on behind the scenes, and what is actually carried out in coding and
in programming.
–An example is a programming language such as FORTRAN. The
architecture of FORTRAN specifies the characteristics of
FORTRAN, its syntax and its semantics. However, we can have
many implementations of FORTRAN on many different
machines.
•A system architecture is based on user requirements.
•An architect defines the design spec's while a builder defines the
implementation spec's.
•How is the architecture enforced during the implementation?
–The manual is the chief product of the architect; it carries the design spec's.
–Meetings to resolve problems and discuss proposed changes.
–Product testing.
•Chpt 7: Why Did the Tower of Babel Fail?
•Originally, the people of Babel spoke the same language and they
were all united.
•Therefore, they were successful in initiating and implementing the
Tower of Babel to reach God.
•However, later, God made them speak different languages and the
project failed!
•The Tower of Babel failed not because of technological limitations or lack of resources. It failed because of communication problems and the disorganization that followed from the lack of communication.
•The same problem can be applied to large software engineering
projects. Many large system programs fail because of
communication problems among the programmers and among the
implementers.
•How does one cope with the communication problem?
•Besides telephone calls, meetings, E-mail and other forms of
communication, keeping a workbook for the project is extremely
important.
•The workbook essentially contains all the documentation relevant to
the project (Objectives, External Spec's, Internal Spec's, etc.).
•It especially contains documentation about the changes made to the
project as the project progresses.
•This will be a tool where the team members can go to find out what
the newest changes are and what other people are doing.
•It should be noted that as projects become larger and the number of people involved grows, the project workbook increases in size and we have to impose some kind of structure on it. For example, we may use a tree-structured organization, or indexing and numbering systems, in order to find what we are looking for.
•One of the problems to deal with is how to keep track of changes of
the project in the workbook. There are two things that need to be
done:
–1) All the changes should be clearly highlighted in the documents and
this can be done, for example, by a vertical bar in the margin for any line
which has been changed.
–2) There should be a brief summary of changes attached, which
highlights or explains briefly the significance of the changes done to the
project.
•Organization:
–Organization is a means to control communication in a project.
–We have already talked about tree organization as well as matrix organization, and about the communication involved in each structure.
–For example, in the tree organization we see a clear division of
responsibilities and the communication becomes limited among
subordinates and the supervisors.
•Chpt 11: Plan to Throw One Away
•Building a pilot model or a pilot system is an intermediate step
between the initial system implementation (or prototyping) and the
final large-scale system.
•Building a pilot model is necessary in order to overcome the
difficulties unforeseen when going from a small-scale project to a
large-scale project.
•The pilot system should be thrown away and should not be given to
the customer as a final model. This is because the pilot model often
has bugs in it and would reduce the customer's confidence in the
producer.
•Software Maintenance:
–The major difference between software maintenance and hardware
maintenance is that in hardware maintenance we simply repair or replace
components and the architecture and the interface to the user doesn't
change.
–However, in software maintenance we make design changes to the
software and typically add new functions and because of this, the
interface to the user changes.
–Also, in software maintenance, fixing problems and adding new functions introduces new bugs, which require further testing.
–What typically happens is that as we add more functions, we add more bugs, which in turn require more fixes and modifications. Therefore, software maintenance is a very difficult job.
•Chpt 14: Hatching a Catastrophe
•Milestones must be clearly defined, very sharp and easily
identifiable.
•PERT charts are very important in keeping track of the critical tasks
in the whole project.
•By identifying the critical path in a PERT chart we know which tasks
cannot afford even one day of delay in their completion.
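The critical path is simply the longest-duration path through the task dependency graph. A minimal sketch over a hypothetical task set (names and durations are invented for illustration):

```python
from functools import lru_cache

# Hypothetical task graph: task -> (duration in days, prerequisite tasks).
tasks = {
    "spec":   (5,  []),
    "design": (10, ["spec"]),
    "code":   (15, ["design"]),
    "test":   (10, ["code"]),
    "docs":   (8,  ["design"]),
    "ship":   (2,  ["test", "docs"]),
}

@lru_cache(maxsize=None)
def earliest_finish(name):
    """Earliest finish of `name`: its duration plus the latest
    earliest-finish among its prerequisites (longest-path recursion)."""
    duration, preds = tasks[name]
    return duration + max((earliest_finish(p) for p in preds), default=0)

# The project cannot finish before the critical path does; any slip on
# spec -> design -> code -> test -> ship delays the whole project.
print(earliest_finish("ship"))  # 42 days
```

Tasks off the critical path ("docs" here) have slack; those on it cannot afford a single day of delay.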
•Chpt 15: The Other Face
•Read chapter 15 for details on how to document a large program.
DESIGN METHODOLOGIES
•Design Notations :
–Software :
»Data Flow Diagrams
»Structure charts
»Procedure Templates
»Pseudocodes
»Structured Flowcharts
»Structured English
»Decision Tables
–Hardware
»Block Diagrams
»Schematic and wiring diagrams
»Layout diagrams
»Detailed drawings (FAB, detail, PCB masks)
•Design Strategies (software and hardware)
–No generally applicable recipes.
–Conceptual gap: system requirements ----> primitive building blocks
–Many design alternatives
–(S/W): Space-time trade-offs. (H/W): Thermal, structural & maintenance-access analysis.
–Iteration
–Divide and Conquer : Partition the problem into smaller
manageable pieces.
•Design Techniques (S/W) :
–Stepwise refinement
–Levels of Abstraction
–Structured Design
–Integrated Top Down Development
–Bottom Up development
–Jackson Structured Programming
DESIGN TECHNIQUES
•TOP DOWN APPROACH
–Strong system requirements
–The successive decomposition of a large problem into
smaller sub problems until suitable primitive building
blocks are reached.
•BOTTOM UP APPROACH
–Restricted resources and weak system requirements.
–The basic primitives are used to build useful and more
powerful problem oriented building blocks.
Source: https://www.researchgate.net/publication/220689892_The_Mythical_Man-Month_Essays_on_Software_Engineering