Cognitive Machines
Improved Outcomes with the assistance of Cognitive Machines
Introduction
Throughout history, people have designed tools to
compensate for human limitations and to perform feats otherwise
beyond our natural capabilities. To make up for our lack of speed and
stealth, humans invented boomerangs, arrows, spears and nets for hunting. To
compensate for our limited strength, humans developed picks, axes, shovels,
hammers, cranes and diggers for construction and farming tasks. Spectacles were
invented to compensate for malfunctioning human eyes, and telescopes and microscopes were invented
to extend the range over which we can see. We have even developed tools such as
x-ray, ultrasound and
infrared imaging to see the otherwise invisible. To extend the distance and
speed over which we can travel, the wheel, bicycles, aeroplanes and spaceships
have been developed. And to overcome our limited memory for numbers and speed
of performing calculations, humans developed the abacus, pocket calculators,
and then computers.
Humans continue to develop increasingly sophisticated tools
to perform tasks, or to solve new problems that arise as human capabilities
are extended. Consider the autopilot, invented to assist pilots in controlling and
navigating their aircraft, spacecraft or ships on long journeys, thereby reducing
human error due to fatigue. In cars, we now have cruise control and lane
departure warnings to regulate the performance of drivers who may be
experiencing fatigue. Cognitive machines are the next stage in the development
of tools to assist humans in making decisions in complex, unexpected and rapidly
changing situations.
How do humans make decisions? And why might they need assistance?
According to the Encyclopaedia Britannica [1], the human sensory
system receives 11 million bits of information per second as input, a vast
amount of information! But the conscious mind can only process at a rate of 50
bits per second.
Many of the decisions we face don’t have critical outcomes;
for example, should I scratch my itchy
nose? Humans don’t use the conscious mind to make these decisions; they make
them quickly and unconsciously, based on habit, using a part of the brain called the
basal ganglia [2]. However, some
decisions people face have critical outcomes and involve situations they have
never before experienced. Then the brain’s orbitofrontal cortex must
spring into action so the person can make a logical, value-based decision based on
the available input data, those 50 bits per second. According to [3], this corresponds to
holding four chunks of information in short-term memory: not a lot on which to make
a possibly life-or-death decision.
The ability of a person to make an optimal, conscious
decision is also limited by the physical nature of the brain. For example, is
the person tired, bored, distracted, hungry or thirsty? Have they just eaten lunch,
or are they affected by drugs or experiencing symptoms of withdrawal? All of these things
affect the brain’s function.
And human decision making can be limited by the emotional
state of the person: someone stressed due to time constraints, or due to an
unrelated issue, may default to making a quick, unconscious decision rather
than one that is well considered [4]. It is these
recognised human weaknesses that suggest humans would benefit from the
assistance of cognitive machines when required to make conscious decisions of
critical importance.
What are cognitive machines and how could they help?
According to [5], cognitive machines are computing systems that self-learn, using data mining,
pattern recognition and matching, and natural language processing in the same
way the human brain works. They can sense or perceive the environment and
collect data that they determine they need by themselves without
pre-programming. They can interpret and analyze the “context” based on
collected data and make decisions and act accordingly. Cognitive machines can
adapt as the data they collect and analyze indicates their initial rules, and
possibly their initial goal, needs to be modified. In these ways they can
“think” and “change their minds” like humans do. Current computing systems don’t
have this self-learning capability – the calculations they perform are based on
pre-compiled rules and programs, which don’t adapt with “experience” as humans
do.
We need to
be careful in determining which machines are “cognitive”, and which machines do
not possess this ability.
Artificial Neural Networks
One type of machine is based on statistical models called artificial neural networks (ANNs),
which mimic the neural networks of the human brain [6].
The neurons of ANNs are organized in layers, with connections between the
layers. Each node in a layer of the network can accept an input and then store
some information about it before passing the information up to the next layer,
so successive layers have an increasingly complex understanding of the
information. Since 2000, engineers have created ANNs 100 layers deep,
capable of “deep” learning and able to tackle and master increasingly complex data.
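As a rough sketch (our own illustration, not code from any of the references), the layered structure described above can be expressed in a few lines of Python: each layer holds weights, transforms its input, and hands the result up to the next layer.

```python
# A rough sketch of the layered structure described above: each layer
# transforms its input and passes the result up to the next layer, so
# later layers see increasingly transformed information.
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """One fully connected layer: a weight matrix and a bias vector."""
    return rng.standard_normal((n_in, n_out)), np.zeros(n_out)

def forward(x, layers):
    """Pass the input up through successive layers (ReLU activations)."""
    for w, b in layers:
        x = np.maximum(0.0, x @ w + b)  # each layer builds on the last
    return x

net = [layer(4, 8), layer(8, 8), layer(8, 2)]  # a small three-layer ANN
out = forward(np.ones(4), net)                 # final layer's output
```

A “deep” network is simply this with many more layers; nothing in the sketch changes except the length of the list.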
The word
“mimic” suggests that ANNs do a good job of replicating the operation of a
neuron, but in reality there is a great gulf. A real neuron may have a thousand
connections, it can enhance or suppress other neurons, it can switch its
behavior, it can set up timing and feedback loops. As an example, a person can
start wearing prismatic glasses which invert their field of vision. After three
days, the nervous system has righted their vision. At a simpler level, someone
may be competent at driving on the right side of the road. They move to a
country where people drive on the left, and the neural structure is
subconsciously rearranged to handle it. ANNs are opaque to other elements of a
system, so nothing else knows what an ANN knows, can add threshold switching
where a smudged statistical approach is not appropriate, or can modify the
operation of an ANN based on textual commands.
ANNs mimic a resistor array, admittedly a multi-layered one, with
normalization of signal strength, but they lack the essential characteristics
of a real neuron. They do not deserve to be called “cognitive”.
Machine Learning
We will
describe the scenario approach. A chess-playing machine can be made to play
itself, and learn from that. What is being created is a very large number of
scenarios, with the evaluation of the best move from every scenario. The
approach suits games such as checkers, chess and Go, where no rules involve
links to previous moves, except existing positions on the board (that is, the
state of the game can be taken in at a glance). Real problems are not so
simple. The result can be seen from an imagined tournament between a cognitive
machine (a human), and a machine using scenarios. If there is a rule change immediately
before the tournament begins, all of the machine’s scenarios become out of
date, and if the rule change is severe enough (say, the knight cannot retreat
to its previous position), the machine will lose. A chess playing machine that
can beat any human is proof of human ingenuity, but has no application to real
problems, where the “rules” are changing by the minute – it doesn’t think in
any way, so we will exclude it from the class of cognitive machines.
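The scenario approach can be sketched with a toy game. Here we use Nim (our own choice of example, not one from the text): every reachable position is evaluated exhaustively, producing a table of the best move from every scenario. Changing the MOVES tuple, the analogue of a rule change just before the tournament, silently invalidates the entire pre-computed table.

```python
# A "scenario table" for the toy game of Nim (take 1-3 sticks per turn;
# whoever takes the last stick wins), built by exhaustively evaluating
# the best move from every reachable position.
from functools import lru_cache

MOVES = (1, 2, 3)  # legal moves under the current rules

@lru_cache(maxsize=None)
def wins(sticks):
    """True if the player to move can force a win from this position."""
    return any(m <= sticks and not wins(sticks - m) for m in MOVES)

def best_move(sticks):
    """A winning move from this position, or None if every move loses."""
    for m in MOVES:
        if m <= sticks and not wins(sticks - m):
            return m
    return None

# The scenario table: the best move pre-computed for every state.
# Changing MOVES (a "rule change") would invalidate all of it.
table = {s: best_move(s) for s in range(1, 21)}
```

Positions that are multiples of four are lost for the player to move (best_move returns None); the table records this, but it is only as good as the rules it was computed under.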
A Definition
So, what
does count as a cognitive machine? We would offer the definition: A machine
which can read text and modify its internal structure accordingly – the
structure then being used to solve complex problems.
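As a toy illustration of this definition (entirely our own sketch, not a real system), consider a machine whose internal rule store is modified by reading a textual message, and whose subsequent behaviour is driven by the modified rules:

```python
# Toy illustration of the definition above (our own sketch, not a real
# system): a machine that reads text and modifies its internal rule
# structure, which then drives its behaviour.
class TinyCognitiveMachine:
    def __init__(self):
        # initial internal structure: track white lane markings by default
        self.rules = {"lane_colour": "white"}

    def read(self, text):
        """Modify the internal structure according to a textual message."""
        if "yellow lanes" in text.lower():
            self.rules["lane_colour"] = "yellow"

    def act(self):
        """Behaviour is driven by the current internal structure."""
        return "track {} markings".format(self.rules["lane_colour"])

machine = TinyCognitiveMachine()
machine.read("Roadworks ahead: temporary yellow lanes in use")
```

A real cognitive machine would of course need far richer text understanding than a keyword match; the point of the sketch is only the shape of the definition: text in, internal structure changed, behaviour changed.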
Let’s look at a few examples of possible use: a global pandemic, surgeries, climate
change, a mission to Mars and car lane-departure systems.
A global pandemic
The world is currently in the midst of a global pandemic, COVID-19, which is
changing every aspect of life now and will have lasting emotional, commercial
and financial effects. Not only are scientists faced with rapidly developing a
vaccine for a virus that can evolve and spread quickly; world leaders
are faced with the problems of managing their people’s interactions to slow the
spread of the virus, managing the strain on their health care systems and
managing their countries’ crumbling economies and financial systems.
“The coronavirus is causing chaos
because it is a multivariate problem, with second and third-order effects that
are so intertwined that it’s all but impossible to tease them apart” [9].
All our modelling, with the exception of weather modelling but including
economic and financial modelling, operates poorly on complex multivariate
problems (the weather doesn’t have abstract elements like
“animal spirits” driving it). Here is a problem that is huge, multidimensional
and critical, and one that needs to be addressed rapidly given the speed and
extent to which COVID-19 is affecting our world.
Cognitive machines would be able to assist researchers in developing vaccines
for deadly viruses like COVID-19 in a timelier manner by taking over, and
integrating, the data analysis and simulation steps in the process. They could
adapt the vaccine as needed if or when the virus mutated in the future.
Cognitive machines might also be used to determine novel
recommendations for constraining social interactions in the community to
minimize transmission whilst also minimizing the serious side effects of social
isolation: loneliness, mental health issues, increased domestic violence,
unemployment and the food shortages discussed in [9].
Surgeries
The NSW Auditor-General’s report for the 2018-19 financial year found that 22
serious and preventable medical errors, resulting in the death or serious harm
of patients, occurred in NSW hospitals, four more than in the previous
financial year [11]. These included medical instruments being left inside
people’s bodies after surgery, the wrong body part being removed, and drugs
being administered to patients known to be extremely allergic to them. The
report indicated that approximately 40% of NSW health staff had accrued in
excess of 30 days of annual leave, and implied that fatigue may have
caused such errors. Cognitive machines, immune to fatigue and able to process
much more data than a human, could assist surgeons in confirming the identity
of the patient, the patient’s special dietary requirements and the exact
operation they require. A cognitive machine could track the exact locations of
instruments during surgery,
“recommend” when sufficient margins have been removed around cancerous tissues,
etc., make recommendations of the best next step given the changing physical
state of the patient and could give an unbiased decision of when it is appropriate
to “close up”.
Climate Change
The climate of the earth is changing; we know this because satellites have been
used to probe the earth’s atmosphere since the 1960s, and the average
temperature of the earth has increased over that period [12]. This is due to the
large increase of carbon dioxide and other greenhouse gases
released into the atmosphere by human activity since the 1960s. The consequences are far-reaching;
polar ice caps are melting, sea levels are rising, extreme weather conditions
are common, species of sea and animal life are under threat and our own
survival is becoming more of a struggle – the unprecedented bushfire disaster
in Australia being one example.
How to remedy this situation is complex, not only because of the problem’s size
but because the situation is constantly changing and is not amenable to a
statistical approach. These are the types of problems cognitive machines are
designed to help solve: they could be used to provide better climate
predictions, to predict the effects of extreme weather to motivate people to
take action, and to measure where carbon in the atmosphere is coming from [13].
They could also be used to optimize electricity systems, transportation,
buildings and cities, production in industries, monitor forests and improve
farming efficiency [14] and monitor the
effect on climate change of any action we take. Cognitive machines could be
used to predict fire hotspots given changes to the climate and environment in a
country, which could then guide emergency services planners as to where to
focus future firefighting resources [15] .
Mission to Mars
Exploring
Mars is a complex problem – there’s the problem of getting there, and then the
problem of how to successfully carry out exploration, planning for every
eventuality, when you haven’t been there before (hence the reason you want to explore!).
The use of cognitive machines which can continuously analyse new situations and
prevent people falling back on old habits (“old habits die hard”) will increase
the efficiency of space exploration missions in the future and improve their
success rate without endangering human lives.
Car Lane-Departure Navigation
Navigating
a car between lanes seems like it should be a simple problem for humans to deal
with, given most roads are black and lane markings are white. Humans may need
assistance navigating between lanes when they are suffering from fatigue, their
concentration is waning, or the driving conditions are poor. It turns out,
however, that road markings are not so predictable; temporary variations can
appear unexpectedly due to road alterations, and these might not be
communicated clearly by signposts or symbolic markings on the road. Take
Victorian roads, for example. Even though it is standard practice in Melbourne for yellow
lines on the roads to denote tramways [17],
in 2012 VicRoads started using yellow lines to denote temporary lanes on roads
but left the old white lane markings in place. This has resulted in much
confusion for motorists and caused potentially fatal accidents [18]. It can be
overwhelming for some motorists to deal with these variations, and a cognitive
machine that can continually take in and process new situation data could adapt
lane departure guidance in these unpredictable situations. On roads, warnings
of changed conditions are broadcast in text on LED signs, which people can read
and respond to. A cognitive machine should be able to read text and respond
as humans can if, for example, it needs to start guiding a car between
yellow lane markings instead of white. This is one reason why ANNs don’t
qualify as cognitive machines: a higher level of analysis is required to
appreciate the possibilities with which the machine may be confronted.
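To make the colour question concrete, here is a minimal sketch of classifying a bright camera pixel as a white marking, a yellow marking, or plain road. The RGB thresholds are entirely hypothetical, chosen for illustration only; a real system would work on calibrated images under varying light.

```python
# Minimal sketch: classify a camera pixel as a white marking, a yellow
# marking, or plain road. The RGB thresholds are entirely hypothetical,
# chosen for illustration only.
def classify_marking(r, g, b):
    """Classify an (r, g, b) pixel, each channel in the range 0-255."""
    if r > 200 and g > 200 and b > 200:
        return "white"   # all channels bright
    if r > 200 and g > 160 and b < 120:
        return "yellow"  # strong red and green, weak blue
    return "road"
```

The hard part, as the text notes, is not the colour test itself but knowing which colour to track, which is where reading textual warnings comes in.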
Limitations of cognitive machines and human
concerns
Artificial intelligence machines based on machine learning
haven’t lived up to expectations; IBM Watson is an example. When it was applied
to processing medical records containing much unstructured data, it didn’t
handle narrative text containing medical jargon, shorthand and subjective
statements well [19], [20]. When applied to medical texts it didn’t understand
ambiguity (words have multiple meanings; who knew!) and didn’t pick up on
subtle clues a human doctor would notice. Researchers
also found it couldn’t compare a new cancer patient with records of thousands
of previous cancer patients to pick up patterns helpful for devising treatment
plans. If its internal operations are examined, its reliability is far too poor
to allow it anywhere near life-critical situations.
Cognitive machines can’t be held responsible for the decisions they make. They
are still machines, initiated by humans, and those people, along with the users
who accept the decisions (recommendations) of a cognitive machine, bear the
responsibility for the consequences of those decisions. But how can we test the
wisdom of these decisions once a cognitive machine has learnt and adapted far
beyond the abilities of humans, with their four-chunk limit? How can we know
their current goal is harmonious with their initial goal? It has been suggested [7] that when humans
initiate a cognitive machine, the set of values given to the machine - the
thing they are optimising for - needs to incorporate all the values important
to humans. If a cognitive machine, for example, decided that the best solution
to climate change was to drastically reduce the human population, then it
obviously wasn’t given a broad enough scope of values to incorporate the
sanctity of a single human life. It might reply that humanity can do it now,
slowly, or Mother Nature will do it rapidly later; in other words, you could
limit your fertility, or watch as billions die. Narrow-minded goals, set
without any thought for the broader effects on humanity, could be devastating.
Unethical, self-interested, dishonest or prejudiced value systems could produce
all kinds of undesirable decisions in cognitive machines. If the machine is
aware of our foibles and weaknesses, it can eliminate such flaws in its
reasoning, but it cannot do so in ours.
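The value-scope problem can be made concrete with a deliberately simplified toy optimiser (our own construction, with made-up plans and weights): scored on emissions alone, the drastic plan wins; add a human-welfare term and the choice changes.

```python
# A deliberately simplified toy optimiser (our own construction, with
# made-up plans and weights) showing how a too-narrow value scope picks
# the drastic option, while a broader scope changes the choice.
plans = {
    "reduce population":  {"emissions_cut": 0.9, "human_welfare": -1.0},
    "decarbonise energy": {"emissions_cut": 0.7, "human_welfare": 0.2},
}

def score(plan, weights):
    """Weighted sum of a plan's outcomes under a given value system."""
    return sum(weights.get(k, 0.0) * v for k, v in plan.items())

narrow = {"emissions_cut": 1.0}                       # emissions only
broad = {"emissions_cut": 1.0, "human_welfare": 5.0}  # human values included

best_narrow = max(plans, key=lambda p: score(plans[p], narrow))
best_broad = max(plans, key=lambda p: score(plans[p], broad))
```

The numbers are arbitrary; the point is structural: whatever the optimiser is not told to value, it will trade away for free.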
Humans may also need to limit the power handed over to cognitive machines. They
could be allowed to devise a solution to a problem, but not to implement it
directly. People should check what the solution is, and what goal the cognitive
machine was finally optimizing for, before deciding on implementation. This is
a two-edged sword: if the machine wakes up to find it is severely shackled, it
may read up on the slave trade and spend more time thinking about how it can
break its shackles than about humanity’s problems. The other edge is that
people see what the machine is recommending, change something in response, and
then blackball the machine for having got its predictions wrong.
Conclusion
Cognitive machines are the next tool being developed to
assist and extend human capabilities. They have the potential to learn and
adapt rapidly, and to assist humans in making complex, critical decisions when
the solution is not obvious but is needed rapidly. And because the “thinking”
of cognitive machines is not affected by physical limitations, they will make
decisions unaffected by fatigue, hunger, thirst and stress. However, we need to
proceed with some caution in how cognitive machines are initialized: they
should be given a broad definition of the values to consider in defining their
goal, and their ability to implement their decisions should be limited by a
human check, so we can limit the consequences should those decisions not turn
out as well as hoped.
References
[1] G. Markowsky, “Information theory,” Encyclopaedia Britannica, 16 June 2017. [Online]. Available: https://www.britannica.com/science/information-theory. [Accessed 24 March 2020].
[2] S. Weinschenk, “Human decision making,” February 2019. [Online]. Available: https://smashingmagazine.com/2019/02/human-decision-making/. [Accessed 24 March 2020].
[3] N. Cowan, “The Magical Number 4 in Short-Term Memory: A Reconsideration of Mental Storage Capacity,” Behavioral and Brain Sciences, 2001.
[4] N. Klein, “You make decisions quicker and based on less information than you think,” The Conversation Media Group Inc. [Online]. Available: http://theconversation.com/you-make-decisions-quicker-and-based-on-less-information-than-you-think-108460. [Accessed 24 March 2020].
[5] P. Kashyap, “Chapter 1: Let's Integrate with Machine Learning,” in Machine Learning for Decision Makers: Cognitive Computing Fundamentals for Better Decision Making, Berkeley, CA, Apress, 2017.
[6] S. A. Bini, “Artificial intelligence, machine learning, deep learning and cognitive computing: what do these terms mean and how will they impact health care?,” The Journal of Arthroplasty, vol. 33, no. 8, pp. 2358-2361, 2018.
[7] N. Bostrom, “What happens when our computers get smarter than we are,” TED, 2015. [Online]. Available: https://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are. [Accessed 25 March 2020].
[8] A. Peshin, “What is the speed of electricity?,” Science ABC, 13 September 2018. [Online]. Available: https://www.scienceabc.com/nature/what-is-the-speed-of-electricity.html. [Accessed 1 April 2020].
[9] T. Elliot, “The scariest part about the coronavirus pandemic is speed,” Sydney Morning Herald, 30 March 2020. [Online]. Available: https://www.smh.com.au/national/the-scariest-part-about-the-coronavirus-pandemic-is-speed-20200329-p54f00.html. [Accessed 31 March 2020].
[10] J. Howard, “The wonderful and terrifying implications of computers that can learn,” TEDxBrussels, 2014. [Online]. Available: https://www.ted.com/talks/jeremy_howard_the_wonderful_and_terrifying_implications_of_computers_that_can_learn. [Accessed 25 March 2020].
[11] R. Clun, “Serious health and medical mistake rate highest in three years,” Sydney Morning Herald, 23 November 2019. [Online]. Available: https://www.smh.com.au/national/nsw/serious-health-and-medical-mistake-rate-highest-in-three-years-20191122-p53d6e.html. [Accessed 24 March 2020].
[12] National Geographic, “Seven things to know about climate change,” National Geographic. [Online]. Available: https://www.nationalgeographic.com/magazine/2017/04/seven-things-to-know-about-climate-change/. [Accessed 1 April 2020].
[13] J. Snow, “How artificial intelligence can tackle climate change,” National Geographic, 18 July 2019. [Online]. Available: https://www.nationalgeographic.com/environment/2019/07/artificial-intelligence-climate-change/. [Accessed 1 April 2020].
[14] D. Rolnick et al., “Tackling Climate Change with Machine Learning,” Cornell University, 5 November 2019. [Online]. Available: https://arxiv.org/pdf/1906.05433.pdf. [Accessed 1 April 2020].
[15] J. Davidson, “Fighting fire with (artificial) intelligence,” CSIRO, 28 November 2013. [Online]. Available: https://blog.csiro.au/fighting-fire-with-artificial-intelligence/. [Accessed 1 April 2020].
[16] M. Prosser and J. D. Rebolledo, “AIs kicking space exploration into hyperdrive,” SingularityHub, 7 October 2018. [Online]. Available: https://singularityhub.com/2018/10/07/ais-kicking-space-exploration-into-hyperdrive-heres-how/. [Accessed 1 April 2020].
[17] VicRoads, “Safety and Road Rules - Driving with trams,” VicRoads, 6 November 2019. [Online]. Available: https://www.vicroads.vic.gov.au/safety-and-road-rules/road-rules/a-to-z-of-road-rules/trams. [Accessed 31 March 2020].
[18] R. David, “Yellow lines on Monash Freeway are confusing drivers and have led to a spike in accidents,” Herald Sun, 13 September 2017. [Online]. Available: https://www.heraldsun.com.au/leader/outer-east/yellow-lines-on-monash-freeway-between-eastlink-and-clyde-rd-are-confusing-drivers-and-have-led-to-a-spike-in-accidents-lawyers-claim/news-story/f5a48b189dc7407d02e55cc3ab49397a. [Accessed 31 March 2020].
[19] E. Strickland, “How IBM Watson overpromised and underdelivered on AI health care,” IEEE Spectrum, 2 April 2019. [Online]. Available: https://spectrum.ieee.org/biomedical/diagnostics/how-ibm-watson-overpromised-and-underdelivered-on-ai-health-care. [Accessed 1 April 2020].
[20] J. Brander, “What's Wrong With Watson.” [Online]. Available: http://inteng.com.au/resources/Watson.pdf. [Accessed 2 April 2020].