Integrity

What can trusted advisers offer that AIs can’t?
2016 (updated 2017-19), 10-15 minute read (expanded)

The State Council of China noted in July 2017 “The rapid development of artificial intelligence will profoundly change human life, change the world” [source]. Here we suggest ‘integrity’ as something to keep.

Tool-based AI-augmented human intelligence is already ordinary [source]. World champion Go player Ke Jie said in 2017 “After my match against Google DeepMind’s AlphaGo, I fundamentally reconsidered the game, and now I can see that this reflection has helped me greatly” [source]. Ke Jie followed his AlphaGo games with a streak of 20 wins against human opponents, which suggests he learned from the AlphaGo experience, even though AlphaGo was not ‘explainable’ AI.

The Royal Society has reported on how the blurring of lines between mind and machine has “extraordinary potential” and “raises critical ethical concerns” [source]. Exploration of the potential for AIs in English Law is well under way [source] and for many children, ‘technologies of the extended mind’ are integral to everyday life. Sproutel (originator of Jerry the Bear) notes how healthcare reimbursers are “empowering people to incorporate health into their daily lives” [source]. Jerry the Bear “is a platform for interactive health education… As children keep Jerry healthy, they unlock our modular diabetes curriculum” [source].

These children are growing up with reasoning machines whose reasoning they cannot hope to understand. As Will Knight wrote in April 2017, “How well can we get along with machines that are unpredictable and inscrutable?” [source]

New kinds of encounters with startling machines require trusted professionals to consolidate core human strengths such as integrity – our ‘candidate response’ to the rise of AIs. For a representative image of integrity in this context, see the face of the heroic Lee Sedol after losing to AlphaGo in March 2016, in the trailer of the 2017 film AlphaGo [source].

Empathetic professionals might favour empathy as a ‘candidate response’ to the challenge of staying ahead of AI. Smart ones might favour smarts. Creatives might think of creativity, judges of judgement, artists of artistry. What works for some might be a sense of humour, or cool disinterest.

AIs will sideline us in many intellectual tasks. For now, due to their design and methodology, we cannot usually read their minds. This is not ‘explainable’ AI [source], and the relatively open standards and methodologies of the professions, which enable, encourage and rely upon explanation and peer review between humans, do not apply between humans and these machines. We can judge an AI’s conclusions far better than we can judge its thinking. That topic is addressed by DARPA, and at Distill – where Google explains things such as handwriting by machine learning [source].

Combining the brilliant and incomprehensible ‘move 37’ of AlphaGo (more later) with the sharp end of healthcare, intelligence gathering, or market research, will raise difficult problems as well as fabulous opportunities. The schoolroom principle of “explain your reasoning” has slipped through the neural nets that power today’s AIs – which is less of a problem in commerce than in some of the professions, and less there than in some of the necessary functions of government. We don’t know why AlphaGo made move 37, but it worked, and it shocked many around the world.

In the scramble to find a footing for professionals in the AI era, integrity seems a sound start point. Narrowly focused AI is normalised in many areas of life, creating benefits, opportunities, upheavals and threats, as well as the potential for enhanced human cognition, habitual dependencies and unpredictable behaviours. Integrity helps professionals keep a necessary grip on the handrail of humanity while engaging with this new species.

In March 2017 Andrew Ng said “Just as electricity transformed almost everything 100 years ago, today I have a hard time thinking of an industry that I don’t think AI will transform in the next several years” [source]. Equally AI might be “the new nuclear” – a tool that those who possess it prefer to limit.

Nick Bostrom wrote of AI in Superintelligence (July 2014) “With luck, we could see a general uplift of epistemic standards in both individual and collective cognition” [source]; and in December 2017 Geoff Mulgan wrote in Big Mind – in reference to AI-backed collaborative thinking – “It is possible to imagine, explore and promote forms of consciousness that enhance awareness as well as dissolve the artificial illusions of self and separate identity” [source] – and his Princeton publishers neatly summarised his broader argument “… human and machine intelligence could solve challenges in business, climate change, democracy, and public health. But for that to happen we’ll need radically new professions, institutions, and ways of thinking.” [source]

Woebot Labs, whose board Andrew Ng joined as chairman in 2017, offers a chatbot to people struggling with mood and mental health – struggles costing the US healthcare system $200B a year. The work grew from research led by Stanford School of Medicine [source]. Elsewhere online, the data streams of social media are used both to influence and to detect public and private mood. For example in 2014 researchers at Facebook and Cornell University showed evidence of massive-scale emotional contagion through social networks [source]; while in 2012 researchers at Bristol University showed how social media streams can be used to detect changes in public and private mood [source].

Such capabilities have influenced national elections [source] and are in widespread commercial use online, where ‘what you see’ increasingly depends not just on a machine interpretation of ‘who you are’, ‘where you are’ and ‘what you’ve been doing’ but also ‘how you’re feeling’. As versions of lie detection become a standard feature of digital communications, integrity will benefit. Some AIs are masters of elicitation and can have better insight into people than those people have into themselves; from a very small pool of past behaviours they can predict next moves. Chat-rooms, social networks and the internet generally are alive with AIs of all kinds, including impostor bots which in some cases relentlessly fuel the very struggles that Woebot seeks to calm. It is not just trusted professionals who might usefully adopt resolute and shared integrity.

“We always knew we were going into an information war next,” said Danah Boyd (November 2017), a principal researcher at Microsoft and founder of the Data & Society Research Institute, “and that we would never know immediately that that is what it is. Now we can’t tell what reality is” [source].

With AIs everywhere, a considered and informed response is required of high performing professionals. Standing back will not be enough.

 

Making the cut
Our premise here is that as tasks, processes and enterprises are automated, machines expose our weaknesses and strengths; and as artificial intelligences increasingly challenge the professions, opportunities emerge for high performers with qualities that AIs might lack, such as richly human integrity. Just as people interpret professional roles differently, so AIs will too, by testing microscopically different attitudes and approaches, of their own accord, at great speed and in exhaustive detail. In so doing they learn, and they will learn versions of integrity if that helps them. Our ‘candidate response’ offers temporary advantage. Professionals need to learn from AIs too, digging deeper into what is uniquely human, improving collaboration, expanding mental powers and horizons. AIs are already taking us forward. One lesson from data science might be the power of our approach to life in earliest childhood: trial and error, or ‘have a go, watch and learn, do what works’.

The trend away from a deterministic rule-based ‘grammatical’ approach to machine learning and AI, towards a probabilistic evidence-based ‘data science’ approach, helped propel AI not only to the digital front line but to the front line of commercial, political, academic and cultural sectors. As Google CEO Sundar Pichai put it (first in May 2016 then in May 2017) “Mobile first to AI first” [source]. Data science and machine learning are rapidly transforming the software industry and will in turn transform the professions – largely because, despite its limitations, this simple, agile, iterative model works so well and so fast.

The AIs hold mirrors to humanity in that they note and reflect what we do more than how we do it. The seemingly prescient precision of the reflection provides a sure footing from which almost any digital inference can flow. It takes only a few cross-checks and triangulations to predict what we’ll do, or want, next. As our tools these systems have great value. As opponents they look tough.

Where one network generates candidate solutions from data analysis, and another evaluates those candidates against parameters of interest or desired outcomes (Generative Adversarial Networks), we see an analogy for high-quality human iterative dialogue, negotiation, or argument – sketched in code below. Imagining appropriate human responses to these technologies we might see value in assemblies of ‘collective intelligence’ as described by Geoff Mulgan (October 2017) in Big Mind: How Collective Intelligence Can Change Our World [source]. Human integrity might make those human assemblies special.
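As a concrete picture of that adversarial dialogue, here is a minimal sketch of a Generative Adversarial Network on a toy task, assuming PyTorch as the framework (the article names none): a generator learns to propose samples from a simple target distribution while a discriminator learns to tell them from the real thing.

```python
# Minimal GAN sketch (toy task, PyTorch assumed): one network proposes
# candidates, another evaluates them, and each round of 'argument'
# sharpens both - the analogy drawn in the text above.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # proposer
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # critic

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_data(n):                       # the 'parameters of interest': N(2.0, 0.5)
    return torch.randn(n, 1) * 0.5 + 2.0

for step in range(3000):
    # Critic's turn: distinguish real samples (label 1) from proposals (label 0).
    fake = G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real_data(64)), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Proposer's turn: produce candidates the critic will accept.
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # drifts towards 2.0 as G 'wins' arguments
```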

It is also likely that AIs will harvest any ‘integrity’ implicit in the immense training datasets that we create in our publications, correspondence, conversations, commerce, movements, etc. As things stand we shall have little control over what AIs and their owners make of all that, or how they use it.

As an example of that probabilistic testing of different attitudes and approaches, above, Google DeepMind’s AlphaGo program played tens of millions of training games in self-play before beating the world champion Lee Sedol in March 2016. The developers went on to create AlphaGo Zero, which surpassed AlphaGo almost untaught, after playing nearly five million games against itself in three days. Then they moved swiftly on to AlphaZero, which generalised the approach beyond Go.

As published in Nature (October 2017) [source], the authors wrote “A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here, we introduce an algorithm based solely on reinforcement learning, without human data, guidance, or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100-0 against the previously published, champion-defeating AlphaGo” [source].
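The mechanics of learning purely from self-play can be illustrated on a much smaller game. The toy below is our own sketch, not DeepMind’s algorithm: tabular Monte-Carlo learning on the game of Nim (take 1-3 stones from a pile of 21; whoever takes the last stone wins). With no human examples at all, an agent playing only against itself converges on perfect play.

```python
# Self-play from a blank slate: both 'players' share one value table Q,
# so every game the agent plays against itself teaches both sides.
import random
from collections import defaultdict

Q = defaultdict(float)          # value of (stones_left, move) for the player to act
alpha, epsilon = 0.5, 0.1       # learning rate and exploration rate

def best_move(stones):
    moves = range(1, min(3, stones) + 1)
    return max(moves, key=lambda m: Q[(stones, m)])

for episode in range(20000):
    stones, history = 21, []    # history of (state, move), alternating players
    while stones > 0:
        moves = list(range(1, min(3, stones) + 1))
        m = random.choice(moves) if random.random() < epsilon else best_move(stones)
        history.append((stones, m))
        stones -= m
    # Whoever took the last stone won: back the result up the game,
    # flipping its sign for each earlier (opposing) turn.
    reward = 1.0
    for state, move in reversed(history):
        Q[(state, move)] += alpha * (reward - Q[(state, move)])
        reward = -reward

print(best_move(21))  # expect 1: perfect play always leaves a multiple of 4
```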

They then created AlphaZero which “uses an approach similar to AlphaGo Zero’s to master not just Go, but also chess and shogi. On December 5, 2017 the DeepMind team released a preprint introducing AlphaZero, which, within 24 hours, achieved a superhuman level of play in these three games” [source].

This extraordinary surge of progress is what Nick Bostrom told us to expect from AI [source]. AlphaGo-vs-AlphaGo training games are available to watch online [source] and Go experts describe seeing awe-inspiring insights into the 3,000-year-old game. David Silver (Lead Researcher for AlphaGo) advises in the eponymous film that AlphaGo is “not AI”, a helpful fine point from a purist who knows better than anyone – reminding us of the differences between human cognition and machine computation, and of Stevan Harnad’s reference (June 1990) to human intelligence’s “hermeneutic hall of mirrors” [source].

Compared to human intelligence AIs have immense clarity of purpose and very few distractions. For professionals who live with all the complexities of a human life, a shared core principle such as integrity, especially if widespread, can create space to reassess and to clarify purpose. This is especially true in collaboration.

Machine learning is not an endpoint. It is a start point, or perhaps even an informative false start. It is extremely selective, even limited, but it demonstrates what immense leaps can be made with digital tools. In Deep Learning: A Critical Appraisal (Jan 2018), New York University’s Gary Marcus wrote “My own largest fear is that the field of AI could get trapped in a local minimum, dwelling too heavily in the wrong part of intellectual space, focusing too much on the detailed exploration of a particular class of accessible but limited models that are geared around capturing low-hanging fruit — potentially neglecting riskier excursions that might ultimately lead to a more robust path” [source]. Commercially funded data scientists might very well not be the best people to assume full responsibility for curating AIs, and trusted professionals might well have a role to play, now.

 

Why integrity?
People succeeding alongside AIs need technical and character skills plus something special, perhaps integrity, because:
  • as a stabilising core principle, integrity helps clarify purpose
  • as a precursor of trust, integrity is vital to trusted advisers
  • as a facilitator of interdisciplinary collaboration, integrity is a basic ingredient
  • as an ingredient of empathy, integrity promotes understanding
  • in its very richest form, integrity might be slow to digitise
  • once acquired, integrity can be retained and also propagated
  • as a basis for emotional intelligence, integrity is essential
  • as an inhibitor of dishonesty, integrity deters deceit
  • integrity’s surefootedness might benefit complex data interpretation
  • as AIs root out human mischief, integrity is a survival trait
  • as AIs create new mischief of their own, integrity is a defence.
AIs – tools, colleagues or competitors?
The first car plant robot, installed at GM’s Ternstedt Division in suburban Trenton, USA, in 1961 [source], was welcomed by workers, who were glad to be rid of the job it took over. In the next decade production line workers were training robots to replace themselves. Less than 60 years after that Trenton robot, in February 2017 Elon Musk (Chairman) said of his car company Tesla’s Fremont, USA, plant “You can’t have people in the production line itself, otherwise you drop to people speed. So there will be no people in the production process itself. People will maintain the machines, upgrade them, and deal with anomalies” [source]. (Update: it is worth noting a subsequent tweet from April 2018 [source].)

Nikola Tesla (1856-1943) was a major contributor to the invention of the AC multi-phase induction motor, and AC multi-phase transmission [source]. Just as electric networks and devices spread, so digital automation for professionals is now increasing in scope and scale, as we expose ourselves in ever more detail to machines with insatiable appetites for learning.

“Collaborative robots” – safe to work alongside humans – are a growing trend [source] and that title could apply to digital devices such as phones too. It is a shame to see ‘computer vs doctor’ [source] at the IEEE overshadowing the potential for collaboration, especially when the reality can be as rewarding as it is at the Royal Free London NHS Foundation Trust, where DeepMind Health is at work, as shown in this short video [source].

Researchers at an Oxford hospital have developed AI tools that can diagnose heart disease and lung cancer from scans, potentially saving the NHS a billion pounds by enabling the diseases to be picked up much earlier. The government’s healthcare tsar, Sir John Bell, told BBC News (Jan 2018) “There is about £2.2bn spent on pathology services in the NHS. You may be able to reduce that by 50%. AI may be the thing that saves the NHS” [source].

Beyond the immediately practical is the future possible – and Stanford University’s DAWN Project illustrates how high-performing professionals might soon ‘desktop publish’ artificial expertise [source] in much the same way as ‘skills’ are developed and published for Amazon’s ‘Alexa’ voice assistant, or in libraries for the programming language Python. Perhaps trusted professionals could assist in this DIY AI market with plug-in self-policing ‘integrity modules’ backed by a respected national legal system.

Artificial intelligences are highly results-oriented. They carry little baggage. Though they might replicate some of what we do, they mostly exist in completely different contexts, behave entirely differently, and can progress in very different ways. This is especially true in areas of data science.

Though behaving as humans is important in many AI applications, there are few if any scenarios where we should want AIs to actually behave exactly like humans, warts and all, even if mimicking human output. Many AI systems do observe human behaviours, extract meaning, capture the essence of an activity, infer goals, then determine steps to best achieve them. Such machine learning rapidly accumulates, and can be portable across scenarios [source]. Given the rate of advancement of these technologies, select professionals might sensibly engage routinely with those who direct, code, train and govern such systems, whether to spot opportunity, influence design, or know what’s coming.

With experience from Stanford University (Professor of Computer Science, 2002-present), Google (Google Brain project founder, 2011) and Baidu (Chief Scientist, 2014) under his belt, one new venture from Coursera Chairman Andrew Ng is deeplearning.ai, which has released the Deep Learning Specialization, where “in five courses, you will learn the foundations of Deep Learning, understand how to build neural networks, and learn how to lead successful machine learning projects.” [source]

Consider AIs in the plural, with many varieties taking many approaches to all sorts of processes and objectives – in tandem, in teams, en masse. They are evolving thanks to leaps of human innovation, and also autonomously, cumulatively, and with increasing speed, slipping into our lives, often dressed as convenience yet tasked to ‘think for themselves’. They are no more human than a pill is human, or a phone, but they can function, and they can replicate human functions, sometimes better than us by several orders of magnitude. Their understanding of truth will vary.

Here we suggest integrity as a potentially enduring human capacity, accepting that many achievements of AIs already exceed ours, and that the idea of equivalence between human general intelligence and an artificial version is of marginal relevance. The phrase ‘general intelligence’ has meaning (“a variable that summarizes positive correlations among different cognitive tasks”) [source] but very little meaning in this context. To anthropomorphise AIs (or AI) is understandable in their infancy, but, like children, they grow up not to be you. AIs are tools, not little people, and trusted professionals have reason, even an obligation, to maintain that reality.

Scale is central to any consideration of AI. Imagine a professional team of any size, evaluating possibilities in order to make judgements and proceed to actions. It is in competition with a team of artificial intelligences. Humans can be smart, but we evolve slowly: despite skills development there is scant evidence that either consciousness or intelligence has evolved much in recent millennia. We are, however, making great progress with integrated circuits. In line with Moore’s Law, in 2016 there were about 10 million times as many transistors on the densest integrated circuits as there were in 1971; in 2017 the 32-core AMD Epyc took that multiple to just below 20 million. AI evolves fast thanks to hardware and also software, as well as human ingenuity and experience. Google, an AI leader, has quantum-computing processors of the kind that the Los Alamos National Laboratory described in July 2016 [source] as “the world’s first commercial quantum-computer processor, smaller than a wristwatch, [it] can evaluate more possibilities simultaneously than there are atoms in the visible universe” – and in the game of Go there are more board positions than that.

Those AI evaluations might be uninformed at the start compared to human professionals, but they learn blindingly fast – consider AlphaGo, which beat the world champion after two years at a game played for 3,000 years by hundreds of millions of people; then AlphaGo Zero beat AlphaGo in days, and AlphaZero beat AlphaGo Zero in hours. That isn’t general intelligence, but most professionals don’t deploy general intelligence in completing most work tasks either.

The number of Go positions on a 19×19 board [source] exceeds the supposed number of atoms in the visible universe [source]. AlphaGo beats us at our own game by evaluating options on a scale we can barely imagine, although human Go masters harness special human skills. We can see AI capabilities such as these as tools, and to remain relevant professionals need not only to develop skills but also to contribute something uniquely human.
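The scale claim is easy to sanity-check in a few lines of Python, using the commonly cited figures: 3^361 naive board colourings, roughly 2.08 × 10^170 legal positions, and about 10^80 atoms in the visible universe.

```python
# Back-of-envelope check of the Go-vs-atoms comparison in the text.
# Figures assumed: 3**361 naive colourings of a 19x19 board, ~2.08e170
# legal positions (published count), ~1e80 atoms in the visible universe.
naive = 3 ** 361                    # each of 361 points: empty, black or white
digits = len(str(naive)) - 1
print(f"naive colourings ~ 10^{digits}")                 # ~10^172
print("legal positions  ~ 10^170, atoms ~ 10^80")
print(f"positions exceed atoms by a factor of ~10^{170 - 80}")
```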

As they approach numerous human intellectual benchmarks dotted across the board of whatever ‘general intelligence’ means, the smartest AIs could then surpass many of our abilities at great speed. We know that fast-approaching phenomena (fighter jets for example) are gone by the time we notice them. That is especially true for very fast-accelerating phenomena such as AIs. We risk behaving like the mythical frog in slowly heating water: it senses no change great enough moment to moment to justify a response, and is scalded to death.

Take ‘infra-idiot’ and ‘supra-genius’ as markers on our human intelligence spectrum. Seen from a broader perspective, those two states of intelligence might be indistinguishably close. Nick Bostrom, Eliezer Yudkowsky, I. J. Good and perhaps Alan Turing have all noted how artificial intelligences could arrive laboriously at our minimum noteworthy levels of intelligence factors then pass our maximum in a month or similarly short period.

AIs recently became staggeringly good at translation. Soon the translating phone will be common. Bale Crocker offers decisive and robust strategy development, especially where technology must directly serve your people.

 

Stay ahead of AIs
To stay relevant alongside AIs, professionals can hone their interpersonal skills to new levels, and present each other with such transparent trustworthiness and easy integrity that collective intelligence and collaborative output soar. A few months after that sentence was written, a PRC State Council notification (referenced at the head of this page) required all provinces to implement a plan. An excerpt follows: “July 20, 2017. A new generation of artificial intelligence development planning notice. Key technologies of group intelligence: the key breakthroughs are popularization of Internet-based mass collaboration, knowledge resource management and open sharing of large-scale collaboration technologies, and the establishment of a knowledge representation framework of group intelligence to achieve knowledge acquisition based on group intelligence and group intelligence integration and enhancement in an open and dynamic environment through the perception, coordination and evolution of tens of millions of national scale groups.” Few societies can deliver such a vision, or match the scale of AI in that way.

The cover letter of the 12 October 2016 report Preparing for the future of artificial intelligence, from Washington’s ‘Executive Office of the President’, states: “Although it is very unlikely that machines will exhibit broadly-applicable intelligence comparable to or exceeding that of humans in the next 20 years, it is to be expected that machines will reach and exceed human performance on more and more tasks.” Those tasks might be anyone’s.

That decision to note a 20-year span to something approaching general intelligence compares with a consensus three years earlier of 30 years; however, the idea of AIs matching then exceeding human intelligence is increasingly spurious. The words “reach and exceed human performance on more and more tasks” signal big disparate steps in the short term; this report encourages public calm and “all hands” within government and industry at a crossroads. Artificial General Intelligence is not the point; the point is that narrow AIs are racing ahead to deliver results that we cannot yet imagine; leaders need greater understanding of AI principles and capabilities.

AI leader IBM’s Transparency and Trust in the Cognitive Era (22 Feb 2017) sets out rules of engagement that will be short-lived. The assurance that “cognitive systems will not realistically attain consciousness or independent agency” [source] is fair comment while we do not know what consciousness is, nor independent agency. Even if anything like today’s or tomorrow’s AIs display what appears to be general intelligence, we still won’t know how they do it or what it is. The term ‘artificial intelligence’ itself might swiftly pass as did ‘horseless carriage’, ‘colour television’ and ‘mobile phone’. Given the power of probabilistic machine learning, AIs look likely to simply learn ‘human’ by example. Neither party will fully understand how either does it, and the AIs won’t mind that; hopefully high performing trusted advisers will do better than Robert, a BBC interviewee who mentioned “the reci (hot word) ‘antizen’ (蚁民), a play on the words ant and citizen to describe the general public’s helplessness” [source].

AIs bring business benefits to the workplace, and reduce professional workforces. In the meantime one assertive response available to professionals is the reaffirmation, consolidation and strengthening of core human attributes. Beware optimism here; human creativity and imagination, emotional intelligence and trustworthiness, even moral judgement and risk taking, might prove easier than expected for AIs to replicate. They have less baggage, and fewer distractions. What we have that AIs will find hardest to get might well not be obvious to us yet, but it deserves thought.

The IEX dark pool, a level playing field for investors without the high speed access on which high frequency traders depend, and since August 2016 the IEX exchange itself, offer a possibly insightful analogy. The exchange’s blog post of 3 January 2018 offers “Trading is — rightfully — competitive, but when certain intermediaries can gain an edge based on raw speed rather than real alpha, the quality of the market can be undermined” [source]. Swap “trading” for “thinking”.

High performing professionals might well now reassess their digital faith and look within. Whether integrity is best considered a native state, an attribute, or even a strategy, it has underpinned the collaboration that has powered human evolution to date.

We suggest that professionals who are well-grounded in integrity are not only best placed to rise to the challenges and opportunities of AI, but also most likely to be comfortable with the rapidly increasing transparency that goes with it.

The impacts of the new transparency
Digitisation delivers ubiquitous and cumulative transparency. The more dots it reveals, the more are joined, revealing more insightful dots. This enables new actors to intervene in new ways. High frequency traders led stock exchanges and investors a merry dance; cyber criminals hold businesses to ransom with trusted IDs assembled from incidental data; machine learning revolutionises medical diagnosis and practice; national elections are swayed; social media algorithms and chatbots infiltrate and direct young lives. A benign effect is the facilitation of both external and internal interdisciplinary collaborations.

With fewer silos professionals engage more closely, explaining and discovering new methods, realising new things; and by showing others what we do, we learn more about what we do. We are almost certainly on the way to new orders of understanding, in general and in person – a point well put in Big Mind: How Collective Intelligence Can Change Our World by Geoff Mulgan (October 2017) [source].

What better value to bring to transparency and collaboration than integrity? Perhaps new opportunities, machines, partners, competitors, connections, commercial paradigms and working practices all suggest a new interpretation of the integrity of your organisation.

One start point might be your people’s perceptions of themselves within your organisation, the character skills that they bring to their internal and external relationships, their net effect on the integrity of the whole, and the reputational impact that ensues. The new transparency delivers insights that not only drive business but also reveal and record individual behaviours. The digital era enables and demands complex interdisciplinary collaborations, requiring people to explain things to each other and to trust each other. Personal integrity might be professionals’ greatest asset in the glare of digital transparency, where fast and loose might not work well at all. This is a key area of work at Bale Crocker and we’d like to talk to you about it.

 

See inside AI in action
Despite secrecy in the world of AI, there is collaboration and transparency too. By demystifying AIs we resist elevating them above us, and perhaps understand how our innate integrity can help. Guruduth Banavar (VP of cognitive computing at IBM) said at the White House Frontiers Conference on 13 October 2016 “In environments where machines and humans are interacting there’s got to be an element of trust. That trust building will take time” [source]. In January 2017 he went on to say “I look at the future of the world as a place where AI redefines industry, professions, and experts, and it does so in every field. If one looks at the impact from AI on different fields, each one will be redefined. We will be better equipped to solve the hardest problems, like those of global warming, health, and education” [source]. We can expect progress, and plenty of surprises.
The prospect of handing over human integrity to AIs in a single generation deserves pause. Artificial integrity might surpass ours just as single-minded SatNav sometimes beats our navigation of familiar city streets. Our notions of our integrity will be as hubristic as our notions of our navigation. Our whimsical approach to navigating city streets satisfies our affection for familiar waypoints, but a focus on the destination would do better in every other way.

With human factors set aside, most professional processes suit automation. In your organisation jobs get divided into tasks, allocated to times, places, people. Tasks are subdivided, done in sequences, checked through feedback loops and cycles, with the results passed into new jobs once all those tasks are done. You could if you wished call that a work ‘flow’ and the teams doing jobs ‘nodes’. You might even choose to call the channels between nodes ‘edges’. It might stretch the point to call the works-in-progress ‘tensors’ – but you could.

So you already know roughly how TensorFlow works – Google’s ‘Open Source Software Library for Machine Intelligence’, or AI toolkit. TensorFlow helps power Google’s search, translation, mail, speech recognition and dozens of other products and tools. See TensorFlow in action below: the animation might look a little like your workflows, and the video introduction to TensorFlow (February 2017) demonstrates its broader scope [source]. What is it about integrity in the human sense that cannot be replicated in software?

Being transparent and sharing how things work can help a lot. As a simple animated representation of an iterative process, this SGD Trainer (Stochastic Gradient Descent Trainer) animation is relatively straightforward and familiar. The four top orange boxes in the top shaded area, and the four orange boxes to the left in the two lower shaded areas are the same.

Note the repetition, the looping refinement as derivatives are computed and reviewed. See the correspondence to basic agile concepts, and to the processes of iterative collaboration and decision-taking within organisations. Integrity? There is undoubted integrity here, of a sort. Google has used Tensor Processing Units – computer chips designed specifically for neural network machine learning – for Google Street View text processing, and was able to find all the text in the Street View database in less than five days [source].
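For readers without the animation to hand, the loop it depicts is short enough to write out. Here is a minimal sketch of stochastic gradient descent in plain Python with NumPy (our own toy example, not Google’s code): guess, measure the error on one sample, follow the derivative downhill, repeat.

```python
# Stochastic gradient descent on a toy line-fitting task: the same
# guess -> error -> derivative -> update loop the SGD Trainer animates.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=200)
y = 3.0 * X + 1.0 + rng.normal(scale=0.1, size=200)  # hidden truth: w=3, b=1

w, b, lr = 0.0, 0.0, 0.05
for step in range(5000):
    i = rng.integers(200)            # 'stochastic': one random sample per step
    err = (w * X[i] + b) - y[i]      # how wrong is the current guess here?
    w -= lr * err * X[i]             # derivative of squared error w.r.t. w
    b -= lr * err                    # ...and w.r.t. b
print(round(w, 2), round(b, 2))      # converges towards 3.0 and 1.0
```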

TensorFlow Data Flow Graph by Google [source]
Tensors are just parcels of data, in this analogy like small agile components of thoughts, reshaping and refining as thinking happens. We believe that by looking into the workings of AI, professionals can glimpse where and how they might inject or demand integrity. Expectations that you would have of human colleagues are unlikely to apply to AIs.

Google writes [source]: “Data flow graphs describe mathematical computation with a directed graph of nodes & edges. Nodes typically implement mathematical operations, but can also represent endpoints to feed in data, push out results, or read/write persistent variables. Edges describe the input/output relationships between nodes. These data edges carry dynamically-sized multidimensional data arrays, or tensors. The flow of tensors through the graph is where TensorFlow gets its name. Nodes are assigned to computational devices and execute asynchronously and in parallel once all the tensors on their incoming edges become available.”
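That description translates almost line for line into code. Below is a minimal sketch in the graph-and-session style of TensorFlow 1.x, the version current when this was written (in TensorFlow 2.x these calls live under tf.compat.v1, as shown, because eager execution is now the default).

```python
# A tiny data flow graph: nodes are operations or feed points, edges carry
# tensors, and nothing executes until the graph is run with inputs available.
import tensorflow as tf

tf.compat.v1.disable_eager_execution()               # classic graph/session mode

a = tf.compat.v1.placeholder(tf.float32, name="a")   # node: feeds in data
b = tf.compat.v1.placeholder(tf.float32, name="b")
total = tf.add(a, b, name="total")                   # node: an operation
scaled = tf.multiply(total, 2.0, name="scaled")      # the edge from 'total' carries a tensor

with tf.compat.v1.Session() as sess:
    # 'scaled' executes once the tensors on its incoming edges are available.
    print(sess.run(scaled, feed_dict={a: 3.0, b: 4.0}))  # 14.0
```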

AIs and AI engineers are hard at work now to replicate professional functions, better serve market needs and win business. Where does your version of integrity fit in? Now is a good time to know.

Integrity Business Benefits

Integrity is not for everyone. Personality, intelligence, morality, circumstances, childhood, institutional culture, and many other factors intervene. Bale Crocker coaching, training and strategy development services help you to decide, and to support your professionals. Our specialist associates provide real-life engagement for workshop attendees in simulated professional situations, to refine and drive home the value of new skills. Our clients derive a range of business benefits from these opportunities. We look forward to the opportunity to work with you, to learn from you and to help your business to grow. Christopher Marsh and John Bale will be waiting to hear from you.

+44 3306 600164
[email protected]
[email protected]

LinkedIn
Twitter