Do professionals have something that AIs can’t have?
As the State Council of China noted in July 2017 “The rapid development of artificial intelligence will profoundly change human life, change the world” [source]. We might soon know whether ‘intelligence’ and ‘software’ differ.
Humans have plenty to learn from AI. AI-augmented human intelligence is with us now [source] and as world number one Go player Ke Jie said “After my match against Google DeepMind’s AlphaGo, I fundamentally reconsidered the game, and now I can see that this reflection has helped me greatly” [source]. Ke Jie followed his AlphaGo games with a streak of 20 wins against people.
For children, AI is already ordinary. Sproutel (originator of Jerry the Bear) notes how healthcare reimbursers are focusing on “empowering people to incorporate health into their daily lives” [source]. Jerry the Bear “is a platform for interactive health education… As children keep Jerry healthy, they unlock our modular diabetes curriculum” [source].
New kinds of encounters with startling machines require trusted professionals to respond, perhaps consolidating core human strengths such as integrity – our ‘candidate response’ here. For a representative image of integrity see the face of heroic Lee Sedol after losing to AlphaGo in March 2016, in the trailer of the 2017 film AlphaGo [source]. Integrity kept Lee Sedol’s head on his shoulders before and after AlphaGo.
AIs will sideline us in many intellectual tasks, and for now, by design, we cannot read their minds: this is not ‘explainable’ AI [source]. The relatively open standards and methodologies of the professions, which enable explanation and peer review between humans, do not apply between humans and these machines. That topic is addressed at Distill – where researchers, including Google’s, explain things such as machine learning applied to handwriting [source].
Combining the brilliant and incomprehensible ‘move 37’ of AlphaGo (more later) with the sharp end of healthcare, intelligence gathering, or market research, will raise difficult problems as well as fabulous opportunities. The schoolroom principle of “explain your reasoning” has slipped through the neural nets that power today’s AIs – which is less of a problem in commerce than in some of the professions, and less there than in some of the necessary functions of government. We don’t know why AlphaGo made move 37, but it worked, and it shocked many around the world.
In the scramble to find a footing for professionals alongside AIs, integrity seems sound. Narrowly focused AI is normalised in many areas of life, creating benefits, opportunities, upheavals and threats, as well as the potential for enhanced human cognition, habitual dependencies and unpredictable behaviours. Integrity helps professionals keep a grip on the handrail of humanity while engaging with this new species.
Luciano Floridi wrote in The Fourth Revolution (June 2014) “… a process of technological “internalization” has raised concern that ICTs may end up shaping or even controlling human life” [source]; and in March 2017 Andrew Ng said “Just as electricity transformed almost everything 100 years ago, today I have a hard time thinking of an industry that I don’t think AI will transform in the next several years” [source]. Equally AI might be “the new nuclear” – a force that those who possess it prefer to limit.
Nick Bostrom wrote of AI in Superintelligence (July 2014) “With luck, we could see a general uplift of epistemic standards in both individual and collective cognition” [source]; and in December 2017 Geoff Mulgan wrote in Big Mind – in reference to AI-backed collaborative thinking – “It is possible to imagine, explore and promote forms of consciousness that enhance awareness as well as dissolve the artificial illusions of self and separate identity” [source] – and his Princeton publishers neatly summarised his broader argument “… human and machine intelligence could solve challenges in business, climate change, democracy, and public health. But for that to happen we’ll need radically new professions, institutions, and ways of thinking.” [source]
Woebot Labs, the 2017 venture chaired by Andrew Ng, offers a chatbot to people struggling with mood and mental health – struggles costing the US healthcare system $200B a year. The work grew from research led by Stanford School of Medicine [source]. Elsewhere online, the data streams of social media are used both to influence and to detect public and private mood. For example in 2014 researchers at Facebook and Cornell University published evidence of massive-scale emotional contagion through social networks [source]; while in 2012 researchers at Bristol University showed how easily social media streams detect changes in public and private mood [source].
Such capabilities have influenced national elections [source] and are in widespread commercial use online, where ‘what you see’ increasingly depends not just on a machine interpretation of ‘who you are’, ‘where you are’ and ‘what you’ve been doing’ but also on ‘how you’re feeling’. Some AIs are masters of elicitation and can have better insights into people than those people have into themselves; they can predict next moves too. Chat-rooms, social networks and the internet generally are alive with AIs of all kinds, including impostor bots which in some cases relentlessly fuel the very struggles that Woebot seeks to calm.
“We always knew we were going into an information war next,” said Danah Boyd (November 2017), a principal researcher at Microsoft and founder of the Data & Society Research Institute, “and that we would never know immediately that that is what it is. Now we can’t tell what reality is” [source].
With AIs everywhere, a considered and informed response seems right for high performing professionals. Standing back is not enough.
Making the cut
As tasks, processes and enterprises are automated, machines expose our weaknesses and strengths. As artificial intelligences increasingly challenge the professions, opportunities emerge for high performers with qualities that a swarm of AIs lacks, such as richly human integrity.
Just as people interpret professional roles differently, so will AIs, testing minutely different attitudes and approaches of their own accord, at extreme speed and on an exhaustive scale. In so doing they learn, and they will doubtless learn versions of integrity if that helps them. Our ‘candidate response’ offers temporary advantage, but professionals need to learn from AIs too, digging deeper into what is uniquely human, improving collaboration, expanding mental powers and horizons. AIs are already taking us forward.
A lesson that we might be learning from data science is the power of our approach to life in earliest childhood (‘have a go’, ‘watch and learn’, ‘do what works’).
The trend away from a deterministic rule-based ‘grammatical’ approach to machine learning and AI, towards a probabilistic evidence-based ‘data science’ approach, has propelled AI to the digital front line – as Google CEO Sundar Pichai put it (May 2016 then May 2017) “Mobile first to AI first” [source]. Data science and machine learning are rapidly transforming the software industry itself and will in turn transform the professions – largely because this simple, agile, iterative model works so well, so fast.
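To make that shift concrete, here is a minimal sketch in Python of the two approaches applied to one toy task – flagging spam-like messages. Every message, word list and count below is invented for illustration; the point is only the shape of the move from hand-written rules to probabilities learned from evidence.

```python
from collections import Counter

# Toy training data, entirely invented for illustration.
messages = [("win cash now", 1), ("meeting at noon", 0),
            ("cash prize win", 1), ("lunch at noon", 0),
            ("free cash offer", 1), ("project update", 0)]

# 1. The deterministic, rule-based ('grammatical') approach: hand-written and brittle.
def rule_based(text):
    return int(any(word in text.split() for word in ("win", "cash", "free")))

# 2. The probabilistic, evidence-based ('data science') approach: a tiny Naive Bayes.
counts = {0: Counter(), 1: Counter()}
for text, label in messages:
    counts[label].update(text.split())

def learned(text):
    def score(label):
        total = sum(counts[label].values())
        p = 1.0
        for w in text.split():
            # Laplace smoothing stops unseen words zeroing the whole score.
            p *= (counts[label][w] + 1) / (total + len(counts[label]) + 1)
        return p
    return int(score(1) > score(0))

print(rule_based("cash prize inside"), learned("cash prize inside"))  # 1 1
```

The rules break the moment the vocabulary drifts; the learned model simply retrains on new evidence – which is the agility described above.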
The AI industry holds a mirror to humanity. It shows us from the outside not the inside, in that it reflects what we do not how we do it. The seemingly prescient precision of the reflection provides a sure foothold from which all manner of assumptions can flow. It takes only a few cross-checks and triangulations to start predicting what will follow.
Where one network generates candidate solutions from data analysis, and another evaluates those candidates against parameters of interest or desired outcomes (Generative Adversarial Networks) we see an analogy for high quality human iterative dialogue, negotiation, or argument. Imagining appropriate human responses to these technologies we might see value in assemblies of ‘collective intelligence’ as described by Geoff Mulgan (October 2017) in Big Mind: How Collective Intelligence Can Change Our World [source]. Human integrity might make those human assemblies special.
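As a rough illustration of that generate-and-evaluate loop, here is a toy adversarial pair in Python (numpy only; all constants are invented, and real GANs use deep networks and careful tuning). A one-line ‘generator’ learns to mimic samples from a target distribution only because a ‘discriminator’ keeps scoring its output against the real thing – an iterative argument between two parties.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))
real = lambda n: rng.normal(4.0, 1.0, n)  # target: samples from N(4, 1)

a, b = 1.0, 0.0   # generator g(z) = a*z + b, starting at N(0, 1)
w, c = 0.1, 0.0   # discriminator D(x) = sigmoid(w*x + c)
lr, batch = 0.01, 64

for step in range(5000):
    # Discriminator ascent: push D(real) towards 1 and D(fake) towards 0.
    x, z = real(batch), rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_real, d_fake = sigmoid(w * x + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * x - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator ascent: push D(fake) towards 1, using the discriminator's own signal.
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

print(f"generator now samples roughly N({b:.1f}, {abs(a):.1f})")  # drifts towards N(4, 1)
```

Neither party ‘wins’; each improves because the other keeps testing it – hence the analogy with high quality human dialogue, negotiation or argument.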
It is also likely that AIs will harvest any ‘integrity’ implicit in the immense training datasets that we create in our publications, correspondence, conversations, movements, etc. As things stand we shall have little control over what they make of it, or how they use it.
As an example of that probabilistic testing of different attitudes and approaches, above, Google DeepMind’s AlphaGo program played tens of millions of training games in self-play before beating the world champion Lee Sedol in March 2016. The developers went on to create AlphaGo Zero, which surpassed AlphaGo without human examples, after playing nearly five million games against itself in three days. Then they moved swiftly to a generalised version, AlphaZero.
As published in Nature (October 2017) [source], the authors wrote “A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here, we introduce an algorithm based solely on reinforcement learning, without human data, guidance, or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100-0 against the previously published, champion-defeating AlphaGo”.
They then created AlphaZero which “uses an approach similar to AlphaGo Zero’s to master not just Go, but also chess and shogi. On December 5, 2017 the DeepMind team released a preprint introducing AlphaZero, which, within 24 hours, achieved a superhuman level of play in these three games” [source].
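For a feel of ‘reinforcement learning from self-play’ at toy scale, the Python sketch below teaches itself one-pile Nim (take one to three stones; whoever takes the last stone wins) purely by playing against itself and scoring the outcomes. Nothing here resembles AlphaGo Zero’s neural networks or tree search, and every constant is an illustrative assumption, but the loop is the same in spirit: the program is its own teacher.

```python
import random

# Tabular self-play on one-pile Nim: take 1-3 stones; taking the last stone wins.
N, ACTIONS = 21, (1, 2, 3)
Q = {(s, a): 0.0 for s in range(1, N + 1) for a in ACTIONS if a <= s}
ALPHA, EPS = 0.1, 0.2  # learning rate and exploration rate (illustrative values)

def best(s):
    return max((a for a in ACTIONS if a <= s), key=lambda a: Q[(s, a)])

for episode in range(50000):
    s, history = N, []
    while s > 0:  # both 'players' are the same slowly improving policy
        legal = [a for a in ACTIONS if a <= s]
        a = random.choice(legal) if random.random() < EPS else best(s)
        history.append((s, a))
        s -= a
    # Monte Carlo update: +1 for every move by the winner, -1 for the loser's moves.
    for i, (state, action) in enumerate(reversed(history)):
        reward = 1.0 if i % 2 == 0 else -1.0
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])

print(best(N))  # optimal play from 21 stones is to leave a multiple of 4: take 1
```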
This extraordinary surge of progress is what Nick Bostrom told us to expect from AI [source]. AlphaGo-vs-AlphaGo training games are available to watch online [source] and Go experts describe seeing awe-inspiring insights into the 3,000 year-old game. David Silver (Lead Researcher for AlphaGo) advises in the eponymous film that AlphaGo is “not AI”, a helpful fine point from a purist who knows better than anyone – reminding us of the differences between human cognition and machine computation, and of Stevan Harnad’s reference (June 1990) to human intelligence’s “hermeneutic hall of mirrors” [source].
Compared to human intelligence AIs have great clarity of purpose and few distractions. For professionals (who live with all the complexities of a human life), a shared core principle such as integrity, especially if widespread, can create space to reassess and clarify purpose.
Machine learning is not an endpoint. It will come to seem a start point, or even an informative false start. It is extremely selective, even limited, but it demonstrates what immense leaps we can make with digital tools. In Deep Learning: A Critical Appraisal (Jan 2018) New York University’s Gary Marcus wrote “My own largest fear is that the field of AI could get trapped in a local minimum, dwelling too heavily in the wrong part of intellectual space, focusing too much on the detailed exploration of a particular class of accessible but limited models that are geared around capturing low-hanging fruit — potentially neglecting riskier excursions that might ultimately lead to a more robust path” [source].
People succeeding alongside AIs are distinguished by technical and character skills, plus something special. Integrity is especially valuable for professionals because:
- as a stabilising core principle, integrity helps clarify purpose
- as a precursor of trust, integrity is vital to trusted advisers
- as a facilitator of interdisciplinary collaboration, integrity is a basic ingredient
- as an ingredient of empathy, integrity promotes understanding
- in its very richest form, integrity might be slow to digitise
- once acquired, integrity can be retained and also propagated
- as a basis for emotional intelligence, integrity is essential
- in combating dishonesty, integrity is an inhibitor
- integrity’s surefootedness might benefit complex data interpretation
- as AIs root out human mischief, integrity is a survival trait
- as AIs create new mischief of their own, integrity is a defence.
AIs – tools, colleagues or competitors?
The first car plant robot, installed at GM’s Ternstedt Division in suburban Trenton, USA, in 1961 [source], was welcomed by workers, who had no love for the job it took over. Within a decade production line workers were training robots to replace themselves. Less than 60 years after that Trenton robot, in February 2017 Elon Musk (Chairman) said of his car company Tesla’s Fremont, USA, plant “You can’t have people in the production line itself, otherwise you drop to people speed. So there will be no people in the production process itself. People will maintain the machines, upgrade them, and deal with anomalies” [source]. Update: a subsequent tweet from April 2018 qualified this view [source].
Nikola Tesla (1856-1943) was the major contributor to the invention of the AC multi-phase induction motor, and AC multi-phase transmission [source].
Automation for professionals is now increasing in scope and scale, as we expose ourselves in ever more detail to machines with insatiable appetites for learning.
“Collaborative robots” – safe to work alongside humans – are a growing trend [source], and the title could apply to digital devices such as phones too. It is a shame to see ‘computer vs doctor’ framing [source] at the IEEE overshadowing the potential for collaboration, especially when the reality can be so rewarding – as at the Royal Free London NHS Foundation Trust, where DeepMind Health is at work, as shown in this short video [source].
Researchers at an Oxford hospital have developed AI tools that can diagnose heart disease and lung cancer from scans, potentially saving the NHS a billion pounds by enabling the diseases to be picked up much earlier. The government’s healthcare tsar, Sir John Bell, told BBC News (Jan 2018) “There is about £2.2bn spent on pathology services in the NHS. You may be able to reduce that by 50%. AI may be the thing that saves the NHS” [source].
Beyond the immediately practical is the future possible – and Stanford University’s DAWN Project illustrates how high-performing professionals might soon ‘desktop publish’ artificial expertise [source]. Perhaps plug-in self-policing ‘integrity modules’ backed by a respected national legal system might have a formal place in this DIY AI market.
Artificial Intelligences are highly results-oriented! They carry little baggage. Though they might replicate some of what we do, the present cohorts exist in completely different contexts, behave entirely differently, and can progress in very different ways. This is especially true in areas of data science.
There are few if any scenarios in which we should want AIs to behave as humans do, even when mimicking human output. Yet many AI systems observe human behaviours, extract meaning, capture the essence of an activity, infer goals, then determine the steps that best achieve them. Such machine learning accumulates rapidly, and can be portable across scenarios [source]. Given the rate of advancement of these technologies, select professionals might sensibly engage routinely with those who direct, code, train and govern such systems – whether to spot opportunity, influence design, or simply to know what is happening and what is to come.
With experience from Stanford University (Professor of Computer Science, 2002-present), Google (Google Brain project founder, 2011) and Baidu (Chief Scientist, 2014) under his belt, one new venture from Coursera Chairman Andrew Ng is deeplearning.ai, which has released the Deep Learning Specialization, where “in five courses, you will learn the foundations of Deep Learning, understand how to build neural networks, and learn how to lead successful machine learning projects.” [source]
Consider AIs in the plural, with many varieties taking many approaches to all sorts of processes and objectives – in tandem, in teams, en masse. They are evolving thanks to bounding human innovation, and also autonomously, cumulatively, and with increasing speed, slipping into our lives, often dressed as convenience yet tasked to ‘think for themselves’. They are no more human than a pill is human, or a phone, but they can function, and they can replicate human functions, sometimes better than us by several orders of magnitude.
Here we suggest integrity as a potentially enduring human capacity, accepting that many achievements of AIs already exceed ours, and that the idea of equivalence between human general intelligence and an artificial version is of marginal relevance. The phrase ‘general intelligence’ has meaning (“a variable that summarizes positive correlations among different cognitive tasks”) [source] but very little meaning in this context. To anthropomorphise AIs (or AI) is understandable in their infancy, but like children they grow up to be something other than you. AIs are tools, not little people, and professionals have an obligation to maintain that reality.
Scale is central to any consideration of AI. Imagine a professional team of any size, evaluating possibilities in order to make judgements and then proceed to actions. It is in competition with a team of artificial intelligences. Humans can be smart, but we evolve slowly, and there is scant evidence that consciousness or intelligence has evolved much in recent millennia. AI evolves fast. As an example, Google, an AI leader, has quantum-computer processors of the kind that the Los Alamos National Laboratory in July 2016 described as [source] “the world’s first commercial quantum-computer processor, smaller than a wristwatch, [it] can evaluate more possibilities simultaneously than there are atoms in the visible universe.”
Those AI evaluations might be uninformed at the start compared to human professionals, but they learn blindingly fast – consider AlphaGo, which beat the world champion after two years at a game played for 3,000 years by hundreds of millions of people; then AlphaGo Zero beat AlphaGo in days, and AlphaZero beat AlphaGo Zero in hours. That isn’t general intelligence, but most professionals don’t deliver general intelligence in completing most work tasks either.
The number of Go positions on a 19×19 board [source] exceeds the estimated number of atoms in the visible universe [source]. AlphaGo beats us at our own game by evaluating options on a scale we can barely imagine, although human Go masters harness special human skills. We can see AI capabilities such as these as tools, but to remain relevant professionals need to contribute something special on a human scale.
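The arithmetic behind that claim is easy to check. A back-of-envelope upper bound in Python (the 10^80 atom count is a common order-of-magnitude estimate, not a measurement):

```python
from math import log10

# Each of the 361 points on a 19x19 board is empty, black or white,
# giving 3^361 configurations - an upper bound, since not all are legal.
positions_upper_bound = 3 ** 361
atoms_estimate = 10 ** 80  # common order-of-magnitude estimate

print(round(361 * log10(3)))                   # ~172: 3^361 is about 10^172
print(positions_upper_bound > atoms_estimate)  # True, by some 90 orders of magnitude
```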
As they approach human intellectual benchmarks, dotted across the board of whatever general intelligence means, the smartest AIs could then surpass many of our abilities at great speed. We know that fast-approaching phenomena (fighter jets, for example) are gone by the time we notice them. That is especially true for fast-accelerating phenomena such as AIs. We risk being the victim of the famous (if apocryphal) experiment in which a frog in slowly heating water spots no change great enough to justify a response, and is boiled to death.
Take ‘infra-idiot’ and ‘supra-genius’ as markers on our human intelligence spectrum. Seen from a different perspective, those two states of intelligence might be indistinguishably close. Nick Bostrom, Eliezer Yudkowsky, I. J. Good and perhaps Alan Turing have all noted how artificial intelligences could arrive laboriously at our minimum noteworthy levels of intelligence, then pass our maximum within a month or some similarly short period.
AIs (let’s call them that) recently became staggeringly good at translation. Soon the translating phone will be a click away. Then it’ll start making calls, and just as your driverless car will dodge elk, it will cancel dinner. Bale Crocker offers decisive and robust strategy development, especially where technology must directly serve people.
Stay ahead of AIs
To stay relevant alongside AIs, professionals can hone their interpersonal skills to new levels, and present each other with such transparent trustworthiness and easy integrity that collective intelligence soars and collaborative output too.
A few months after that sentence was written, it is happening. The PRC State Council notification referenced at the start of this page requires all provinces to implement a plan. An excerpt follows (in translation): “July 20, 2017. A new generation of artificial intelligence development planning notice. Key technologies of group intelligence: the key breakthroughs are the popularization of Internet-based mass collaboration, the management and open sharing of knowledge resources for large-scale collaboration, and the establishment of a knowledge representation framework for group intelligence – to achieve knowledge acquisition, integration and enhancement based on group intelligence, in an open and dynamic environment, through the perception, coordination and evolution of groups on a national scale of tens of millions.”
The cover letter of the 12 October 2016 report Preparing for the future of artificial intelligence, from Washington’s ‘Executive Office of the President’, states: “Although it is very unlikely that machines will exhibit broadly-applicable intelligence comparable to or exceeding that of humans in the next 20 years, it is to be expected that machines will reach and exceed human performance on more and more tasks.” Those tasks might be anyone’s.
That decision to note a 20-year span to something approaching general intelligence compares with a consensus three years earlier of 30 years; however, the idea of AIs matching human intelligence is increasingly spurious. What’s the point? Why breed a giraffe to act like a human? The words “reach and exceed human performance on more and more tasks” signal big, disparate steps in the short term; this report encourages public calm and “all hands” within government and industry at a crossroads.
AI leader IBM’s Transparency and Trust in the Cognitive Era (22 Feb 2017) sets out rules of engagement that will be short-lived. The assurance that “cognitive systems will not realistically attain consciousness or independent agency” [source] is fair comment while we do not know what consciousness is, nor independent agency. Even if anything like today’s or tomorrow’s AIs display what appears to be general intelligence, we still won’t know how they do it or what it is. The term ‘artificial intelligence’ itself might pass as swiftly as did ‘horseless carriage’, ‘colour television’ and (nearly) ‘mobile phone’. Given the power of probabilistic machine learning, AIs look likely simply to learn human by example. Neither party will fully understand how either does it. The moment will pass, and we’ll watch them leave home.
AIs bring business benefits to the workplace, and they reduce professional workforces. In the meantime one assertive response available to professionals is the reaffirmation, consolidation and strengthening of core human attributes. Beware optimism here; human creativity and imagination, emotional intelligence and trustworthiness, even moral judgement and risk-taking, might prove easier than expected for AIs. They have less baggage, and fewer distractions. What we have that AIs will find hardest to get might well not be obvious to us yet, but it’s worth a thought.
The IEX dark pool, a level playing field for investors without the high speed access on which high frequency traders depend, and since August 2016 the IEX itself, offer a possibly insightful analogy. The exchange’s blog post of 3 January 2018 offers “Trading is — rightfully — competitive, but when certain intermediaries can gain an edge based on raw speed rather than real alpha, the quality of the market can be undermined” [source]. Switch “trading” for anything else that comes to mind.
High performing professionals might well reassess their digital faith and look within. Whether integrity is best considered a native state, an attribute, or even a strategy, it has underpinned the collaboration that has powered human evolution to date. We suggest that professionals who are well-grounded in integrity are not only best placed to rise to the challenges and opportunities of AI, but also most likely to be comfortable with rapidly increasing transparency.
The impacts of the new transparency
Digitisation delivers ubiquitous and cumulative transparency. The more dots it reveals, the more are joined, revealing more insightful dots. This enables new actors to intervene in new ways.
High frequency traders led stock exchanges and investors a merry dance; cyber criminals hold businesses to ransom with trusted IDs assembled from incidental data; machine learning revolutionises medical diagnosis and practice; national elections are swayed; social media algorithms and chatbots infiltrate and direct young lives. A benign effect is the facilitation of both external and internal interdisciplinary collaborations.
With fewer silos professionals engage more closely, explaining and discovering new methods, realising new things; and by showing others what we do, we learn about what we do. We are almost certainly on the way to new orders of understanding, in general and in person – a point well put in Big Mind: How Collective Intelligence Can Change Our World by Geoff Mulgan (October 2017) [source].
What better value to bring to transparency and collaboration than integrity? Perhaps new opportunities, machines, partners, competitors, connections, commercial paradigms and working practices all suggest a new interpretation of the integrity of your organisation.
One start point might be your people’s perceptions of themselves within it, the character skills that they bring to their internal and external relationships, their net effect on the integrity of the whole, and the reputational impact that ensues. The new transparency delivers insights that not only drive business but also reveal and record individual behaviours.
The digital era enables and demands complex interdisciplinary collaborations, requiring people to explain things to each other and to trust each other. Personal integrity might be professionals’ greatest asset in the glare of digital transparency, where fast and loose might not work well at all.
Whose integrity will AIs have?
Integrity can be certainty or flexibility; the ‘principled position’ of an admirably resolute refusal to respond to change, or the ‘growth mindset’ of an admirably confident and careful readiness to adapt. Cast as ‘how to think’ and ‘what to think’ the two may coexist.
Many separate efforts are under way to give or to deny AIs a conscience, but if a conscience is just a convenient function of advanced reasoning through computation then they’ll make their own.
The venerable IEEE states [source] “We need to make sure that these technologies are aligned to humans in terms of our moral values and ethical principles to fully benefit from the potential of them. AI and AS have to behave in a way that is beneficial to people beyond reaching functional goals and addressing technical problems.” Despite the premise (“We need to make sure…”) it is unlikely that any ethical framework will be enforceable on the makers of AIs.
Integrity can be personal, shared, collegiate. AIs easily exhibit integrity in its resolute, single-minded, stubborn sense. It is harder to code for an interpretation of integrity as responsive emotional intelligence, innately sound judgement, prescient intuition, or wisdom beyond reason. But AIs do learn very well from people, and the behaviours those attributes engender in people will lead AIs to replicate them with unnerving accuracy. If the behaviours alone suffice – whatever drives them – then professionals will have to look elsewhere for something distinct from AIs.
What ethics prevail when all is transparent? AIs might have an image of integrity assigned by any party or parties; perhaps adjustable against cost, or tax, or fines. Your autonomous car must be told how to react to a fly in its path, or a dog, or a child, or ten children, when as its occupants you and your child are in immediate peril, perhaps due to another vehicle. Car industry professionals are making these decisions now as smart cars roll out [source].
Bertram Malle’s Social Cognitive Science Research Center at the Department of Cognitive, Linguistic, and Psychological Sciences at Brown University [source] is delivering fundamental research on robot ethics (be the robot a nurse, soldier, or anything), as outlined in this video [source], as well as working on Sproutel’s “Jerry the Bear, an interactive toy for children with type 1 diabetes that helps them learn about medical procedures and treatment through play” [source].
“If robots are going to drive our cars and play with our kids, we’ll need to teach them right from wrong” wrote Kristen Clark of IEEE summing up an interview with Bertram Malle in May 2016. “Generally, participants blame the human worker more when he flips the switch, saving four lives but sacrificing one, than when he does nothing. Apparently, watching another person make a cold, calculated decision to sacrifice a human life makes us kind of queasy. But evidence suggests that we might actually expect a robot to flip the switch. The participants in Malle’s experiment blamed the robot more if it didn’t step in and intervene. And the more machine-looking the robot was, the more they blamed it for letting the four people die. There’s one more interesting twist to this. If the robot or human in the story made an unpopular decision—but then gave a reason for that choice—participants blamed that worker less. And this is really, really important, because it gets at a fundamental skill that robots are going to need: communication” [source].
The degree and type of communication that human users expect of automated systems varies with generations of users. Touchscreen interactive shopping systems in the 1990s featured video clips of shop assistants offering guidance to users, bridging the gap between the shop-floor experience that users knew and the onscreen experience to come. A generation later, users expected less human representation, and in 2018 they expect none. Trust in AIs is likely to mature in much the same way. Vyacheslav Polonski suggested (Jan 2018) responsiveness as the element of communication that users might want in AIs, as an alternative to a blank insensitivity [source].
For more on this topic please also see DARPA’s Teaching Robots “Manners”: Digitally Capturing and Conveying Human Norms [source].
See inside AI in action
Despite secrecy in the world of AIs, there is collaboration and transparency. By demystifying AIs we resist elevating them above us, and perhaps understand how our innate integrity can help. Guruduth Banavar (VP of cognitive computing at IBM) said at the White House Frontiers Conference on 13 October 2016 “In environments where machines and humans are interacting there’s got to be an element of trust. That trust building will take time” [source].
In January 2017 he went on to say “I look at the future of the world as a place where AI redefines industry, professions, and experts, and it does so in every field. If one looks at the impact from AI on different fields, each one will be redefined. We will be better equipped to solve the hardest problems, like those of global warming, health, and education” [source]. We can expect progress, and plenty of surprises.
The prospect of handing over human integrity to AIs in a single generation deserves pause. Artificial integrity might surpass ours just as single-minded SatNav sometimes beats our navigation of familiar city streets. Our notions of our integrity might be similar to our notions of that navigation: hubristic. Our whimsical approach satisfies us now, but something more robust might unravel it – for so much of what we do depends on what we are.
Most professional processes are prone to automation. In your organisation jobs get divided into tasks, allocated to times, places, people. Tasks are subdivided, done in sequences, checked through feedback loops and cycles, with the results passed into new jobs once all those tasks are done. You could if you wished call that a work ‘flow’ and the teams doing jobs ‘nodes’. You might even choose to call the channels between nodes ‘edges’. It might stretch the point to call the works-in-progress ‘tensors’ – but you could.
So you already know roughly how TensorFlow works – that’s Google’s ‘Open Source Software Library for Machine Intelligence’, or AI toolkit. TensorFlow helps power Google’s search, translation, mail, speech recognition and dozens of other products and tools. See TensorFlow in action below. What is it about integrity in the human sense that cannot be replicated in software? The animation might look a little like your workflows. It describes a feature of Google’s open source machine learning toolkit TensorFlow – the video introduction to TensorFlow (February 2017) demonstrates its broader scope [source].
Being transparent and sharing how things work can help a lot. As a simple animated representation of an iterative process, this SGD Trainer (Stochastic Gradient Descent Trainer) animation is relatively straightforward and familiar. Notice the four top orange boxes in the top shaded area, and the four orange boxes to the left in the two lower shaded areas.
Note the repetition, the looping refinement as derivations are reviewed. See the correspondence to basic agile concepts, and to the processes of iterative collaboration and decision-taking within organisations. Integrity? There is undoubted integrity here, of a sort. Google has used Tensor Processing Units – computer chips designed specifically for neural network machine learning – for Google Street View text processing, and was able to find all the text in the Street View database in less than five days [source].
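If the animation is not to hand, the looping refinement it depicts fits in a dozen lines of Python – a hypothetical toy problem with invented constants, but a genuine stochastic gradient descent loop: sample a small batch, measure the error, step downhill, repeat.

```python
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([2.0, -3.0])               # the 'answer' we hope to recover
X = rng.normal(size=(256, 2))                # observations
y = X @ w_true + 0.1 * rng.normal(size=256)  # noisy measurements

w = np.zeros(2)                              # start knowing nothing
for step in range(2000):
    i = rng.integers(0, len(X), 16)             # 'stochastic': a small random batch
    grad = 2 * X[i].T @ (X[i] @ w - y[i]) / 16  # gradient of the mean squared error
    w -= 0.05 * grad                            # 'descent': the looping refinement
print(w)  # close to [2.0, -3.0]
```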
TensorFlow Data Flow Graph by Google [source]. Tensors are just parcels of data, in this analogy like small agile components of thoughts reshaping and refining as thinking happens. We believe that by looking into the workings of AI, professionals can glimpse where and how they might inject or demand integrity. Expectations that you would have of human colleagues are unlikely to apply to AIs.
Google writes [source]: “Data flow graphs describe mathematical computation with a directed graph of nodes & edges. Nodes typically implement mathematical operations, but can also represent endpoints to feed in data, push out results, or read/write persistent variables. Edges describe the input/output relationships between nodes. These data edges carry dynamically-sized multidimensional data arrays, or tensors. The flow of tensors through the graph is where TensorFlow gets its name. Nodes are assigned to computational devices and execute asynchronously and in parallel once all the tensors on their incoming edges become available.”
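In code, that description maps directly onto the graph API TensorFlow exposed at the time of writing (the 1.x style; the shapes and names below are arbitrary assumptions):

```python
import tensorflow as tf  # TensorFlow 1.x graph API, current when this was written

graph = tf.Graph()
with graph.as_default():
    # Nodes are operations; the edges between them carry tensors.
    x = tf.placeholder(tf.float32, shape=[None, 3], name="input")  # feed-in endpoint
    w = tf.Variable(tf.random_normal([3, 1]), name="weights")      # persistent variable
    y = tf.sigmoid(tf.matmul(x, w), name="output")                 # computation node

with tf.Session(graph=graph) as sess:
    sess.run(tf.global_variables_initializer())
    # Tensors 'flow' when we request an output and feed the input edge.
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))
```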
AIs and AI engineers are hard at work now to replicate professional functions, better serve market needs and win business. Where does your version of integrity fit in? Now is a good time to know.
Integrity Business Benefits
Integrity isn’t open to everyone. Personality, intelligence, morality, circumstances, institutional culture and many other factors intervene. Our coaching, training and strategy development services help you decide, and support your professionals. Our specialist associates provide real-life engagement for workshop attendees in simulated professional situations, to refine and drive home the value of new skills. Our clients derive a range of business benefits from these opportunities. We look forward to the opportunity to work with you, to learn from you and to help your business to grow. Christopher Marsh and John Bale will be waiting to hear from you.