Tim Squirrell is a PhD candidate in Science and Technology Studies at the University of Edinburgh. His research focusses on the construction and negotiation of authority and expertise on the internet, particularly in fitness and nutrition communities.

Debating Artificial Intelligence, Algorithms and Automation

Debates about AI, algorithms and automation are increasingly common as the world becomes ever more dependent upon various technologies to keep functioning, and as new technologies throw up difficult ethical questions which we might not yet be equipped to deal with. This post deals with a number of themes in these debates, including: issues with respect to labour, particularly universal basic income; implications for education; the ingrained biases of, and other problems with, algorithms; who takes responsibility when machines go wrong; whether machines can be considered to have moral status; and the issues pertaining to superintelligence.

Examples

TH fears the development of independent artificial intelligence.

THW prohibit all research aiming to create sentient artificial intelligences

THW ban all further research into artificial intelligence that can independently learn and develop

THW give highly advanced artificial intelligence the same rights as humans

THW programme self-driving cars to prioritise the number of lives saved when faced with unavoidable collisions as opposed to prioritising the safety of the driver.

THW prohibit the use of predictive algorithms in criminal trials

THS the creation of Personal Artificially Intelligent Robotic Romantic Partners

THW ban self-learning sex robots that aim to be highly gratifying and realistic

THS the creation and use of autonomous killing robots (can identify and engage targets without further human intervention)

THW, assuming it was technologically possible, replace all human soldiers on the battlefield with robots

THBT no further research should be done on the creation of Lethal Autonomous Robots

Given the existence of Strong AI, this house would grant them the ability to disobey orders. (Strong Artificial Intelligence refers to (hypothetical) machines or robots with comparable intellectual abilities to humans'.)

TH welcomes the continuing automation of labour

Issues with respect to labour

1.     Threat to human dignity:

a.     Replacing people in positions that require respect and care, like customer services, therapists, nursemaids, soldiers, judges and police officers.

b.     Why? We require authentic feelings of empathy from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated.

c.     But machines may also be impartial – there are conditions under which we might prefer automated judges and police precisely because they have no personal agenda at all.

d.     But this might just be impossible – see arguments about ingrained biases.

2.     Threat to employment:

a.     It is more technically feasible to automate predictable physical activities than unpredictable ones – 78% of predictable activities could be automated with currently demonstrated technology, compared with 25% of unpredictable ones.

3.     Entrenchment of economic inequality:

a.     Our economic system is based on compensation for contribution to the economy/society, often assessed using an hourly wage. The majority of companies are still dependent on hourly work for their products and services. AI allows companies to drastically cut their reliance on a human workforce, so revenues go to fewer people. Individuals who have ownership in AI-driven companies will make all the money.

4.     Increasingly important for people to become more flexible in their ability to switch between jobs and careers

a.     History tells us that technology and automation tend to create more jobs than they destroy

                                               i.     Example: ATMs were supposed to destroy bank teller jobs. The number of tellers in each branch went down, but ATMs reduced the cost of running a branch, allowing banks to open more branches in response to customer demand.

1.     Rather than destroying jobs, automation redefines them around the things machines can't do.

                                             ii.     Problem is that this relies on this kind of industrial revolution being the same as all prior ones.

1.     It may be different – affects all industries, and also the new industries which emerge are less labour-intensive (e.g. Instagram had 12 employees when Facebook bought it for $1bn).

                                            iii.     We also don’t know what these jobs are yet – difficult to predict.

                                            iv.     US Dept of Labor predicts that around 65% of school children today will be employed in jobs that don’t yet exist.        

                                              v.     Oxford paper in 2013 – nearly half of all jobs today are at risk of automation within 10-15 years.

                                            vi.     About 35% of jobs are vulnerable to automation in Britain; 49% in Japan.

b.     Flexibility important – probably becoming more important to focus on core competencies than content – e.g. critical thinking, creative thinking, effective communication and effective interaction.

                                               i.     This also makes the case for employing people based not on academic qualifications but on capacity and willingness to learn.

1.     Quite difficult to measure – relies on standardised testing or holistic analysis which is difficult to do and is likely to allow more biases to creep in

2.     But if it does work, it could allow us to eradicate the kind of glass floor granted to individuals by virtue of being born into privilege.

c.     The one example of an industry where jobs have not expanded or kept pace with technological change is manufacturing, but this may be more to do with business cycles and offshoring to China.

5.     Deep learning means that machines are likely to take jobs we didn’t previously anticipate:

a.     Analysing images like X-Rays and CT scans. Really good at image recognition – using training data. Potential to make healthcare more accurate and efficient.

                                               i.     Can see if blobs on a CT scan are blood vessels, artefacts or malignant nodules. 50% better at classifying malignant tumours and a false negative rate of 0 (compared with 7% for humans).

                                             ii.     Empowers practitioners, turning average doctors into experts – increases their capacity to do work.

1.     Helpful in developing world where there is a shortage of specialists.

6.     What determines vulnerability to automation is whether the work is routine, not whether the task is manual or cognitive.

a.     Vulnerable: Workers in transport and logistics; office support (receptionists and security guards); sales and services (cashiers, counter clerks, accountants).

b.     Worry about job bifurcation: middle-skill jobs decline, but both low-skill and high-skill jobs expand.

                                               i.     Economy bifurcates into two groups doing non-routine work: highly paid, skilled workers (architects, senior managers) and low-paid, unskilled workers (cleaners, burger-flippers).

                                             ii.     Stagnation of median wages in the west suggests this is already having an effect (but hard to disentangle impact of offshoring)

c.     The same big data that allows companies to improve marketing and customer-service operations also gives them the raw material to train machine-learning systems to perform the jobs of more people.

                                               i.     “E-discovery” software can search legal documents quicker than clerks or paralegals.

1.     But rather than making paralegals redundant, it’s just made discovery less costly and increased demand for it.

2.     Judges are more willing to allow discovery, because it’s cheaper and easier.

                                             ii.     Sport and market journalism can be automated.

7.     Impacts on the Developing World

a.     Automation may have a much bigger impact in developing countries than in rich ones because much of what they provide is essentially embodied labour: cheap goods made by low-wage workers, cheap services such as operating call-centres, or doing domestic and construction work overseas.

                                               i.     If automation makes rich countries more self-sufficient in these areas, they’ll have less need for the products and services that have been driving exports and growth in the developing world

                                             ii.     Automation could erode the comparative advantage of a lot of the developing world

b.     Rich countries own the technologies and patents associated with robots and AI, and stand to benefit if they cause a surge in productivity.

c.     Automation could deny poorer countries the opportunity for economic development through industrialisation.

                                               i.     Industrial automation meant that manufacturing employment in China and India peaked at 15%, compared with Britain’s 45%.

                                             ii.     It may mean that emerging economies in Africa and South America will find it harder to achieve economic growth by moving workers from fields to factories.

                                            iii.     Without manufacturing jobs to build a middle class, they may end up with high income inequality (and all the problems that come with that) baked into their core economic structure.

Universal Basic Income

Examples:

THBT progressive parties should advocate and campaign for the introduction of UBI

THW provide a universal basic income

THW replace means tested welfare with a regular, unconditional and universal Basic Income paid by the State to all residents

THBT individuals have a right to a basic income regardless of capacity or willingness to work

 

UBI is a dramatic simplification of the welfare system that involves paying everyone a fixed amount, regardless of their situation, and doing away with all other welfare payments.

1.     People who are not working, or are working part-time, are not penalised if they decide to work more, because their welfare payments don’t decline as their incomes rise.

2.     Gives people more freedom to decide how many hours they wish to work

a.     Principle that time is the most important commodity that we have, and that we ought to be in control of it to as great an extent as possible

3.     Might encourage people to retrain by providing them with a small guaranteed income while they do so

4.     If technology does supplant employment to the extent some predict, then UBI will be a way to (a) keep the consumer economy going and (b) support the non-working population

a.     A lack of demand would lead to stagnation and deflation, whilst concentrating wealth at the top of the chain and preventing the economy from growing in such a way that this could ever change.

5.     Broad support across the political spectrum, from both libertarians (who want to do away with government interference into people’s lives) and lefties (who want to ensure that people don’t starve no matter what, and ideally don’t want people’s livelihood to be contingent on their participation in the economy).

a.     Welfare systems are often byzantine and dehumanising

                                               i.     They have barriers to claims which are either deliberate or just artefacts of the way the system has been created – estimated ~5bn pounds of unclaimed benefits in the UK every year.

                                             ii.     They’re run by either state agencies, which are often deeply inefficient, or outsourced to private contractors who have targets and profit incentives – they try to reduce the claimant count as much as possible.

1.     This means people constantly have to prove themselves – see the way that paraplegic individuals in the UK until recently had to be reassessed every few months to show they were unfit for work.

2.     These are also people who are already disconnected from and feel rejected by the state, and this additional imposition upon them is worse than it would be on a person who has the time and emotional energy to spare.

b.     Welfare systems which are targeted at specific subgroups are subject to capture and lobbying interests

                                               i.     Support for benefits correlates with how many people receive that benefit, and as such specific benefits which help disadvantaged people are the most likely to be the first ones cut – see their lack of numbers, money and political capital (often disabled people just can’t advocate for themselves because of the constraints on their physical and emotional energy).

                                             ii.     Pensions stay constant or go up (triple lock) even at times when it’s not really affordable because of government incentives to pander to the groups who keep them in power

                                            iii.     Lobbying means that the groups with the most political capital and willingness to harass and harangue ministers and civil servants are those who get the best deal out of the social welfare system – and these tend to be the rich and well-connected

c.     In contrast, UBI is less likely to be cut on the whim of a government – just like pensions, and the non-means tested benefits we give to elderly people (e.g. winter fuel allowance, free TV licence, free bus pass).

                                               i.     This means that you’re not subject to sudden fluctuations in your benefits

1.     Allows you to plan for the future

2.     Means you’re not living in constant fear of being sanctioned

3.     You don’t have to spend time demonstrating that you’re looking for work in the exact way they want you to

4.     Means you can put some money aside to account for financial shocks (e.g. death of a family member, sudden illness, car breakdown)

                                             ii.     BUT UBI is likely to be used as a political football

1.     Becomes a big contentious issue in elections – no party will ever want to cut it, and anyone who does so will likely become extremely unpopular

2.     This means it’s likely to stay at unsustainable levels for too long, even if it needs to be cut back for the good of the country’s finances

3.     The problem with the “we’ll just tax the rich more” solution is that (a) they might leave and (b) if it doesn’t work, it means that your country’s finances continue to deteriorate and you risk having credit ratings agencies downgrade your debt, making it harder for the government to borrow money to finance future projects.

4.     This means it may not be used or changed in a sensible way.

6.     Problems

a.     Could stifle complaints about technology causing disruption and inequality, allowing geeks to go on inventing the future unhindered

b.     A UBI that replaced existing welfare budgets would be steeply regressive.

                                               i.     In the USA, if you took all existing social, pension and welfare spending and divided it equally, each citizen would get about $6000 a year ($6200 in Britain) at PPP (a back-of-envelope check on the scale involved follows this list).

                                             ii.     This would reduce income for the poorest and give the rich money they don’t need.

                                            iii.     Means-testing would undermine the simplicity of UBI, and therefore its low administrative cost.

1.     One way of reducing the cost would be to make it contingent on going and collecting it each month, adding a small administrative hurdle that would discourage those for whom it is a pittance.
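As a rough sanity check on the scale of those figures, using only the numbers above plus a US population of roughly 320 million: $6,000 × 3.2 × 10⁸ ≈ $1.9 trillion a year. That is the size of the pot a fully redistributive UBI of this kind would be reallocating, which is why small design choices about level and eligibility translate into enormous fiscal differences.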

c.     A less elegant but more practical approach might be negative income tax or earned-income tax credits.

d.     Concern that it would discourage some people from retraining, or working at all

                                               i.     Empirical studies suggest that it encourages people to reduce their working hours slightly rather than stop working altogether

1.     Work gives people things over and above their salary: community, enjoyment of their job, a feeling of purpose, time out of the house

e.     Incompatible with open borders and free movement of workers:

                                               i.     Without restrictions on immigration or entitlement it might attract a lot of free riders from abroad and cause domestic taxpayers to flee.

7.     Experimentation and examples:

a.     Finland is experimenting this year: 2,000 unemployed people are receiving €560 every month, regardless of whether they look for work or not. There is no requirement to report on how they spend the money, and they will continue to receive it even if they find a job.

                                               i.     Designed to simplify the benefits system and reduce unemployment.

                                             ii.     Currently, jobless people can refuse low-income or short-term work if they fear that they will have their benefits significantly cut because of their increased income

                                            iii.     Unemployment rate of 8.1%, hasn’t changed since 2014, 213,000 people have no jobs. Average private sector worker makes €3500 a month.

b.     Glasgow and Fife councils are currently designing trials of a UBI initiative, but haven’t settled on an income level or the scale.

                                               i.     Glasgow is the poorest local authority in Scotland, with a third of children in the city living in poverty.

                                             ii.     In Fife, more than 34% of workers earn less than the living wage of £7.85 per hour.

c.     In June 2016, the Swiss voted strongly against implementing a UBI on a national scale.

d.     A poll in May 2016 suggested that two-thirds of British people support it.

Implications for Education

MOOCs (Massive Open Online Courses) had a lot of buzz around 2010 or so: they use short video lectures, discussion boards and auto-grading of coursework, and are often free (e.g. Khan Academy). Other examples include Coursera, Udacity and The Great Courses. Millions have used them, though the buzz has since died down.

1.     High drop-out rates due to low investment of time and capital, as well as a lack of deadlines or sense of urgency

2.     Ability to socialise is diminished in that discussion boards just aren’t the same as going to the pub

3.     Useful for people who are unable to leave home or live in rural areas

a.     But qualifications less likely to be recognised by universities or employers; may require you to go a bit further to prove yourself

                                               i.     Particularly problematic when many employers are using algorithms and automation to filter out applications and invite people to interview, and these filters tend to use e.g. GCSEs or other standard qualifications as proxies for job-worthiness.

4.     Useful for reskilling

a.     But requires you to be IT and internet-literate, something which may not be the case for many people who need to reskill

b.     Particularly useful if you’re interested in programming, which is (a) where a lot of jobs are, (b) well-paid, (c) doesn’t necessarily require you to leave your home or community, and (d) easy to teach through digital media

5.     AI potential for adaptive learning – software that tailors courses for each student individually, presenting concepts in the order they will find easiest to understand and enabling them to work at their own pace.

a.     Work best in areas where large numbers of pupils have to learn the same material and a lot of data can be collected.

                                               i.     Being used in Brazil (Geekie) – guides pupils through the high-school syllabus in thousands of schools.

b.     Allow teachers to act as mentors rather than lecturers.

Education since 1945 has emphasised specialisation, so students know more and more about less and less. As knowledge becomes obsolete more quickly, the most important thing will be learning to relearn.

1.     Introduction of nanodegrees which can be undertaken in a few months and completed alongside a full-time job.

a.     Only costs a few hundred dollars, firms are more willing to pay in order to get the returns.

b.     Would be aided by more cooperation between government, training providers and employers over certification.

2.     Apprenticeships not necessarily the alternative to academic schooling.

a.     They take 5-7 years, which doesn’t make sense given the rapidity with which required skills are changing.

b.     Increase in companies (Siemens) setting up “earn-and-learn” programmes which give apprentices degrees from local community colleges as well as no student debt.

3.     Increase in importance of soft skills like perseverance, sociability and curiosity, which correlate with adaptability. Relies on the argument that character is a skill rather than a trait.

Moral Calculus of machines

1.     Our moral calculuses and instincts are skewed by circumstances:

a.     Trolley Problem – most people’s instincts tell them to pull the switch in order to save the five, but not to push the fat person in front of the trolley in order to save the five.

b.     Our determination of what is right or wrong becomes complex when we mix in emotional issues related to family, friends, tribal connections, and the details of the actions that we take.

c.     The difficulty of doing the right thing does not arise from our not knowing what it is, but from our being unwilling to pay the price that the right action demands.

2.     Robot morality:

a.     An ethical or moral sense for machines could be built on a utilitarian base.

                                               i.     Metrics could be coarse-grained (save as many people as possible), nuanced (Nobel laureates and children first), or detailed (evaluate each individual by education, criminal history, social media mentions, etc.) – see the sketch at the end of this section.

b.     Special circumstances: doctors don’t euthanise patients to spread the wealth of their organs, even if there’s a net positive with respect to survivors.

                                               i.     People in certain professions (lawyers, religious leaders, military personnel – people who establish special relationships with individuals) have to conform to separate codes of ethics concerning the needs and rights of those they interact with.

c.     Likely, then, that autonomous cars will sacrifice the few to save the many. Will seek the “best” outcomes independent of whether or not they themselves are comfortable with the actions.

d.     Crucial for machines to be able to explain their moral decisions to us, in the same way it’s necessary for medical robots to explain the reasoning behind diagnoses.

e.     Problem: what constitutes the “best” outcome is itself contingent on our moral intuitions. Which characteristics should be favoured the most highly? There is just no consensus amongst humanity about the moral thing to do – see the entire field of moral philosophy, or literally any film about some kind of moral dilemma.

                                               i.     This means that the programmer becomes sovereign – their biases become encoded in the machine, and they decide which characteristics are weighted most highly.

3.     Argument that machines should be programmed using decision trees instead of neural networks and genetic algorithms.

a.     This is because decision tree algorithms obey modern social norms of transparency and credibility, allowing us to see exactly how they’ve come to take a particular action or make a particular judgement.

                                               i.     Genetic algorithms and neural nets are much more epistemically inaccessible to humans, because they rely upon processes that are difficult to explain and use thousands of instances in order to produce the “optimal” algorithm.
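To make the "utilitarian base" in 2(a) concrete, here is a minimal Python sketch of the kind of coarse-grained metric an autonomous vehicle might apply. Everything in it (the `Outcome` type, the weighting parameter, the numbers) is a hypothetical illustration, not a description of any real system:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible result of a swerve/brake decision (hypothetical)."""
    description: str
    passengers_saved: int
    pedestrians_saved: int

def utility(o: Outcome, pedestrian_weight: float = 1.0) -> float:
    # Coarse-grained metric: simply count lives saved. The weight parameter
    # is where a "nuanced" or "detailed" metric would encode contested value
    # judgements, e.g. favouring children or penalising jaywalkers.
    return o.passengers_saved + pedestrian_weight * o.pedestrians_saved

outcomes = [
    Outcome("swerve into barrier", passengers_saved=0, pedestrians_saved=3),
    Outcome("brake in lane", passengers_saved=1, pedestrians_saved=1),
]

print(max(outcomes, key=utility).description)  # -> swerve into barrier
```

Note that whoever chooses `pedestrian_weight` (and the many other weights a richer metric would need) is exactly the sovereign programmer of point 2(e): their intuitions get baked into every decision the car makes.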

Algorithms - Ingrained biases, vested interests and mistakes

1.     Basing intelligence on misinformation – if the input is wrong or misleading, there’s nothing that can be done about it. You can’t “tweak” the algorithm to make it work, because Garbage In will always lead to Garbage Out.

a.     There is every indication that we will base our intelligence on misinformation in many instances, because we’re not capable of perfectly accounting for all variables – and the entire history of science suggests that we are usually very wrong about complicated problems and phenomena.

                                               i.     Important – the kind of problems we are likely to encounter are unpredictable by definition. We don’t know when we’re wrong, so there’s no way for us to be able to account for this when we’re building machines.

                                             ii.     What does this look like?

1.     Racial biases in recidivism algorithms because you’re basing it off of a racist justice system

2.     Flash crashes in stock markets

3.     War crimes – civilians killed because accidentally identified as terrorists or enemy combatants

                                            iii.     Training phase for machines – where they learn to detect the right patterns and act according to their input.

1.     Once a system is fully trained, it can go into test phase, where it’s hit with more examples and we see how it performs.

2.     The training phase can't cover all possible examples that a system may deal with in the real world. Systems can be fooled in ways that humans wouldn't be – random dot patterns can lead them to see things that aren't there (see the toy sketch after this list).

3.     We have to ensure that machines perform as planned, and that people can't overpower them and use them for their own ends (but also that they can't overpower humans).
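Here is a toy illustration of that training/test gap, using a hand-rolled nearest-centroid classifier (the data and model are invented for illustration; real vision systems are vastly more complex, but fail in the same spirit): the model does well on test data drawn from the same distribution it was trained on, yet it still confidently labels pure noise, because nothing in it represents "none of the above".

```python
import numpy as np

rng = np.random.default_rng(0)

# Training phase: learn from two well-separated classes of 2-D points.
class_a = rng.normal(loc=[0, 0], scale=0.5, size=(100, 2))
class_b = rng.normal(loc=[5, 5], scale=0.5, size=(100, 2))
centroids = {"a": class_a.mean(axis=0), "b": class_b.mean(axis=0)}

def classify(x):
    # Nearest-centroid rule learned from the training data.
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Test phase: new samples from the same distributions are handled well...
print(classify(rng.normal([0, 0], 0.5)))  # -> a
print(classify(rng.normal([5, 5], 0.5)))  # -> b

# ...but an input unlike anything seen in training still gets a label,
# with no signal whatsoever that the input is garbage.
print(classify(rng.uniform(-100, 100, size=2)))
```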

2.     Programmers are taught to programme to favour false positives – they'd rather have an algorithm that sends innocent people to jail than one that lets a murderer go free who might kill ten more people

3.     Bayesian data analysis reinforces biases in a way that you don't see them being reinforced: people are given results and then drift towards the mainstream biases they are fed.

a.     Bayesian networks work by updating your propensity to believe a certain thing in light of new evidence (a worked sketch follows at the end of this point).

                                               i.     They require you to have an initial propensity to believe a proposition, and then to weight incoming evidence based on how surprising it is in light of the prior evidence you’ve collected.

                                             ii.     So something that is really surprising but seems credible and leads you away from your initial hypothesis would be weighted highly, but something which is only a minor aberration wouldn’t do much to budge your epistemic disposition.

                                            iii.     This means that our beliefs will eventually tend towards an equilibrium, so long as no external circumstances change radically.

b.     Good example of this is in porn searches: we’re pointed towards videos that reinforce our pre-existing preferences, but also subtly tweak them towards what is “mainstream” or profitable. Means that sexual discovery isn’t actually as free-form as we’d like to think it is.

                                               i.     Problematic because the mainstream tends to be misogynistic, racist, transphobic etc.

c.     We’re also broadly unaware of our own biases (see implicit bias tests which show most people are more racist/sexist than they think, or look at the stats on police shootings of unarmed black men).

                                               i.     This means we’re unable to calibrate ourselves such that we’re relying on algorithms “the right amount”.
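A worked sketch of the updating mechanism in 3(a), applying Bayes' rule directly (the numbers are arbitrary illustrations): weak evidence barely budges a belief, surprising-but-credible evidence moves it a long way, and a steady stream of consistent evidence drives it towards an equilibrium.

```python
def update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

belief = 0.5  # initial propensity to believe the proposition

# A minor aberration barely budges the belief...
print(update(belief, 0.55, 0.45))  # -> 0.55

# ...while surprising-but-credible evidence moves it a long way.
print(update(belief, 0.95, 0.05))  # -> 0.95

# Repeated, consistent evidence tends towards an equilibrium (here, ~1.0).
for _ in range(10):
    belief = update(belief, 0.7, 0.3)
print(belief)  # -> 0.9997...
```

The bias problem is that the "evidence" fed into such a system (recommendations clicked, search results served) is itself selected by earlier outputs, so the equilibrium it converges on reflects the feed as much as the world.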

4.     Trying to discover why an algorithm is racist would be incredibly difficult.

a.     If the machine learning algorithm is based on a complicated neural network or a genetic algorithm produced by directed evolution, then it may prove nearly impossible to understand why or how the algorithm is judging applicants or defendants based on their race.

b.     A machine learner based on decision trees or Bayesian networks is much more transparent to programmer inspection, as the sketch below illustrates.

                                               i.     An auditor could then discover that the AI algorithm had been using the address information of applicants who were born or previously resided in predominantly poverty-stricken areas.
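A minimal sketch of why tree-style rules are auditable, using an invented loan example (the fields, thresholds and postcode list are all hypothetical): every decision comes with the exact chain of rules that produced it, so an auditor can spot a branch that acts as a proxy for race or poverty just by reading the trace.

```python
# Hypothetical hand-rolled decision rules for loan approval.
HIGH_POVERTY_POSTCODES = {"AB1", "CD2"}  # invented for illustration

def decide(applicant: dict) -> tuple[str, list[str]]:
    path = []
    if applicant["income"] < 20_000:
        path.append("income < 20000 -> high risk")
        return "reject", path
    path.append("income >= 20000 -> ok")
    if applicant["postcode"] in HIGH_POVERTY_POSTCODES:
        # This is exactly the kind of branch an auditor could flag as a
        # proxy variable -- visible here, effectively invisible inside a
        # neural network or an evolved algorithm.
        path.append("postcode in high-poverty list -> high risk")
        return "reject", path
    path.append("postcode -> ok")
    return "approve", path

decision, trace = decide({"income": 35_000, "postcode": "AB1"})
print(decision)           # -> reject
print(" | ".join(trace))  # the auditable reasoning behind the rejection
```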

5.     Some challenges of machine ethics are just like other challenges involved in designing machines: designing a robot arm to avoid crushing stray humans is no more morally fraught than designing a flame-retardant sofa. There are new programming challenges, but no new ethical challenges.

a.     When AI algorithms take on cognitive work with social dimensions, then the algorithm inherits the social requirements.

                                               i.     It would be really frustrating to find that no bank in the world would approve your seemingly excellent loan application, and nobody knows why, and nobody can find out even in principle.

1.     Maybe you have a first name that is strongly associated with deadbeats, but we could never know.

b.     Algorithms which take over social functions need to be predictable to those they govern.

                                               i.     Legal principle of stare decisis binds judges to follow past precedent wherever possible. This is important because the legal system has to be predictable so that, e.g. contracts can be written knowing how they will be executed.

                                             ii.     The job of the legal system isn’t necessarily to optimise society, but to provide a predictable environment within which citizens can optimise their own lives.

                                            iii.     This conflicts with the principles of engineers, who believe that binding the future to the past is bizarre when technology is always improving. This is one instance in which our social and technological principles conflict with each other.

c.     AI algorithms have to be robust against manipulation.

                                               i.     E.g. a machine vision system which scans luggage for bombs has to be robust against humans deliberately searching for exploitable flaws in the algorithm – like a shape which, placed next to a pistol, neutralises recognition.

6.     Most major tech companies are platform-based and create very little (or no) content (e.g. Google, Twitter, Facebook, Uber, AirBnB).

a.     This means that their business model is contingent upon making information as easily accessible as possible and as such they are reluctant to censor people (e.g. Twitter gains a lot of revenue from Trump still being on there).

                                               i.     People underplay in debates how important the “freedom of information movement” is to companies, instead citing hate speech as though it were a discursive halter.

b.     To demonstrate how much companies value information exchange: Google went into China, and has been blocked there since 2010. They would occasionally get requests from the government to block certain search terms (e.g. political scandals). After a couple of months, their Gmail service was aggressively hacked – and the hackers were targeting political activists. Google withdrew from China because they didn't want people to believe that the (filtered) results they were getting were unbiased, and because they couldn't guarantee email security for activists and didn't want activists relying on it.

                                               i.     The government asked Google to filter search results and they refused.

                                             ii.     They also wanted Google to store Chinese users’ data on China’s servers. Google also refused this.

7.     IBM uses the phrase “human-machine augmentation” when people ask them about robots taking over the world. This can be used to frame arguments as “look, AI just provides additional help” or “programmers really don’t want to answer ethical questions”.

a.     One of the largest concerns is that programmers – and business leaders – are unconcerned about the ethics of machine learning, and are unwilling to engage with philosophical and ethical concerns.

                                               i.     There’s a degree to which this is untrue – see Elon Musk being one of the biggest donors to the Centre for the Study of Existential Risk.

                                             ii.     But we probably should be concerned about the fact that we are training a generation of programmers who have no conception of serious ethics, and have the belief that they don’t need that understanding.

Responsibility – when things go wrong, who takes the blame?

1.     Modern bureaucrats often take refuge in established procedures that distribute responsibility so widely that no one person can be identified as being to blame for the catastrophes that result. This could be an even better refuge – a machine is provably disinterested, after all.

a.     Even if an AI system is designed with a user override, a bureaucrat who would be personally blamed if that override went wrong has a strong career incentive to defer to the AI and blame it for any negative outcomes.

b.     Problem: moral (and therefore legal) responsibility generally requires us to be in reason-sensitive control of a decision and its outcomes; that is, we have to have been able to make a different decision in response to evidence of some kind.

2.     If an AI makes a decision which takes a significant cognitive load off of a professional (e.g. diagnosing cancer, killing people in war, sentencing someone to life in prison), it’s hard to find who is to blame.

a.     The programmer can’t really be blamed, because by definition they couldn’t foresee all possible outcomes of an algorithm – that’s why the algorithm was built, so that it could take on more complex cognitive tasks.

b.     Likewise, the doctor/soldier/lawyer is harder to blame, because so much of the decision was out of their hands. If the algorithm was able to display all of its reasoning, then they could potentially evaluate it. But sometimes the decision has to be made in a split-second, or will be based on such a volume of literature that it would take weeks or months to be able to comb through it all to understand it. Again, this is in part why the algorithm was created – there’s just too much information in the world for any one person to handle.

c.     The AI itself also can’t really be blamed, because we don’t really think it has agency in the human sense.

3.     It’s important that we’re able to hold agents morally and legally responsible:

                    i.     For peace of mind – so we know what we could change so something doesn’t go wrong again in the future

                  ii.     For restitution – we have practices of punishment so that we can (a) deter future instances of wrongdoing and (b) allow those who were wronged to feel as though their pain is recognised

                 iii.     For damages – for civil suits to work, we have to be able to apply a locus of blame

                 iv.     For regulation – we only really allow new technologies into the market when we are certain that we are able to regulate them, because otherwise we don’t know what to do when something goes wrong.

                   v.     For investment – people don’t want to invest in new technologies if they can’t be certain of who is going to be held responsible if something goes wrong, because they could be financially liable

                 vi.     For insurance – if we don’t know who’s going to be legally liable for mistakes, then it’s difficult to (a) acquire insurance in the first place and (b) for insurance companies to work out what premiums they should charge – so they might be prohibitively expensive, or just not offer it at all

4.     Response to this might be that we can decide on a case-by-case basis who should be held responsible, in the same way we do with humans.

a.     But precedent and predictability are important for all of the above.

 

Narrow versus General Artificial Intelligence

 

1.     Current AI algorithms with human-equivalent or superior performance are characterised by a deliberately programmed competence in only a single, restricted domain. Deep Blue beat the world champion at chess, and Watson won at Jeopardy!, but neither could play checkers, drive a car or make a scientific discovery.

a.     This resembles every other kind of life except humans: a bee can build a hive and a beaver can build a dam, but not the other way round. A human can learn to do both, and this kind of cross-domain ability is pretty much unique to humans.

b.     In 2014 a bot named Eugene Goostman was claimed to have passed the Turing test for the first time, fooling a third of human raters into thinking they had been talking to a human being.

c.     Bots can channel unlimited resources into building relationships – never grow tired of emotional labour.

 

 

2.     The problems of narrow AI are reasonably easy to envisage. The problems of an AGI are not.

a.     We start to see this with Deep Blue: the programmers couldn't just preprogram a database containing all possible moves for every possible chess position. And if they had input what they considered the strongest move in any given situation, the resulting system wouldn't have been able to make stronger chess moves than its creators. So in creating a superhuman chess player, the human programmers necessarily sacrificed their ability to predict its behaviour – albeit only in this narrow, specific sense.

 

3.     It is a qualitatively different problem to design a machine that will operate safely across thousands of different contexts, including those not specifically envisioned by either designers or users. There may be no local specification of good behaviour, and all we’re left with is the ability to dictate broad goals.

a.     To build an AI which acts safely in many domains, we have to specify good behaviour in such terms as “do X, such that the consequence of X is not harmful to humans”.

                                               i.     This involves extrapolating the distant consequences of actions, and it's only effective if the machine explicitly extrapolates those consequences. In other words, the machine has to be able to foresee all the possible consequences of any action for our general rules to work (see the toy sketch at the end of this point).

b.     Imagine an engineer saying “I have no idea how this plane I built will fly safely, or indeed at all – it might flap its wings or inflate itself with helium or do something else I haven’t even imagined. I can assure you it is safe, though.”

                                               i.     There is no other guarantee of ethical behaviour we can place on a general intelligence.

c.     So verifying the safety of the system becomes a greater challenge because we must verify what the system is trying to do rather than being able to verify the system’s safe behaviour in all operating contexts.
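A toy sketch of why that verification problem is qualitatively different (all names and the consequence table are invented; `predict_consequences` is precisely the part that cannot actually be written for a general agent): in a narrow domain the table of consequences can be exhaustive, so safe behaviour can be verified directly, but in open-ended contexts the machine inevitably hits outcomes nobody modelled.

```python
def predict_consequences(action: str) -> list[str]:
    # In a narrow domain this table can be exhaustive. For a general agent
    # acting across thousands of contexts, no such table can exist -- which
    # is the point of section 3 above.
    table = {
        "move chess piece": ["opponent responds"],
        "reroute power": ["hospital loses electricity"],
    }
    return table.get(action, ["<unmodelled consequence>"])

def is_safe(action: str) -> bool:
    # "Do X, such that the consequence of X is not harmful to humans":
    # only checkable insofar as the consequences can actually be foreseen.
    harmful = {"hospital loses electricity", "<unmodelled consequence>"}
    return all(c not in harmful for c in predict_consequences(action))

print(is_safe("move chess piece"))     # True: narrow, fully modelled domain
print(is_safe("reroute power"))        # False: a foreseen harm
print(is_safe("invent new chemical"))  # False, but only because we treat
                                       # the unknown as unsafe by default
```

The only workable stance for such a wrapper is to refuse anything it cannot model – which is exactly the stance a general intelligence, by design, must constantly violate.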

 

Can machines have moral status?

 

Our dealings with beings possessing moral status are not exclusively a matter of instrumental rationality: we also have moral reasons to treat them in certain ways, and to refrain from treating them in certain other ways.

1.     If X has “moral status”, it means that “because X counts morally in its own right, it is permissible/impermissible to do things to it for its own sake”.

a.     What attributes ground moral status?

                                               i.     Sentience – the capacity for phenomenal experience or qualia, such as the capacity to feel pain and suffer

                                             ii.     Sapience – a set of capacities associated with higher intelligence, such as self-awareness and being a reason-responsive agent

b.     Things with sentience but not sapience could be considered “marginal humans”, who have somewhere between minimal and full moral status.

c.     If an AI has sentience or sapience of a kind similar to that of a normal human adult, then it would have full moral status, equivalent to that of human beings.

                                               i.     Principle of Substrate Non-Discrimination: if two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status.

1.     Rejecting this position would amount to embracing a position similar to racism – substrate lacks fundamental moral significance in the same way and for the same reason as skin colour does.

                                             ii.     Principle of Ontogeny Non-Discrimination: if two beings have the same functionality and the same conscious experience, and differ only in how they came into existence, then they have the same moral status

1.     We don’t believe that causal factors such as family planning, assisted delivery, IVF, gamete selection, etc – which introduce an element of deliberate choice and design in the creation of human persons – have any necessary implications for the moral status of the progeny. So we can’t discriminate just because it’s designed.

2.     Current AI systems have no moral status – we can change, copy, terminate, delete or use computer programmes as we please. The moral constraints which we are subject to in our dealings with them are grounded in our responsibilities to other humans and animals.

a.     Insofar as moral duties stem from moral status considerations, we ought to treat an artificial mind in just the same way as we ought to treat a qualitatively identical natural human mind in a similar situation.

3.     The problem of non-sentient sapience:

a.     Artificial intellects are constituted differently to human intellects (not having brains and similar cognitive architectures) but may still exhibit human-like behaviour or possess the behavioural dispositions normally indicative of personhood. So it might be possible for a machine to be sapient, but not be sentient or have conscious experiences of any kind. If it were possible, it would raise the question whether a non-sentient person would have any moral status – and if so, whether it would be the same moral status as a sentient person.

4.     The AI’s subjective rate of time may deviate drastically from the rate that is characteristic of a biological human brain.

a.     E.g. if we uploaded a sentient being to a computer, and then ran the upload programme on a faster computer, this could cause the upload (if linked to an input device like a video camera) to perceive the external world as if it had been slowed down.

                                               i.     This isn’t the same as being mistaken about the flow of time, but is a physical property.

b.     Problem: in cases where the duration of an experience is morally significant, should that duration be measured in objective or subjective time?

                                               i.     If a fast AI and a human are both in pain, is it more urgent to alleviate the AI's pain, on the grounds that it experiences a greater subjective duration of pain?

                                             ii.     Principle of Subjective Rate of Time: in cases where the duration of an experience is of basic normative significance, it is the experience’s subjective duration that counts.

5.     Reproduction brings up new ethical issues.

a.     Rapid reproduction – given access to computer hardware, an AI could duplicate itself very quickly. The AI copy would be identical to the original, and so would be born completely mature, and the copy could begin making copies of its own immediately.

b.     Our current ethical norms about reproduction include some version of a principle of reproductive freedom – it is up to each individual or couple to decide for themselves whether to have children and how many children to have.

                                               i.     Another norm is that society must step in to provide the basic needs of children in cases where their parents are unable or refusing to do so.

c.     If an AI desires to reproduce very rapidly, and ends up with members of the upload clan who can’t pay the electricity bill or pay the rent for the computational processing and storage needed to keep them alive, does the welfare state need to kick in?

                                               i.     If the population grows faster than the economy, then resources will run out, at which point uploads will either die or their ability to reproduce will be curtailed.

d.     The point here is to recognise the extent to which our ordinary normative precepts are implicitly conditioned on the obtaining of various empirical conditions, and the need to adjust these precepts accordingly when applying them to hypothetical futuristic cases in which their preconditions are assumed not to obtain.

6.     Genetic algorithms work by creating many instances of a system at once, of which only the most “successful” survive and combine to form the next generation of instances. This happens over many generations and is a way of improving the system. The unsuccessful instances are deleted. At what point might we consider genetic algorithms a form of mass murder?
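A minimal sketch of the generate-score-select-delete loop being described (the parameters and fitness function are invented; real systems evolve programmes or network weights rather than bit-strings): each generation, the unsuccessful instances are simply discarded, which is what gives the mass-murder question its bite if those instances ever matter morally.

```python
import random

random.seed(0)

def fitness(genome: list[int]) -> int:
    return sum(genome)  # "success" here = number of 1s in the bit-string

def offspring(a: list[int], b: list[int]) -> list[int]:
    cut = random.randrange(1, len(a))
    child = a[:cut] + b[cut:]        # crossover: combine two survivors
    i = random.randrange(len(child))
    child[i] ^= 1                    # mutation: flip one random bit
    return child

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]      # the most "successful" instances survive
    population = survivors + [       # the other 40 are deleted and replaced
        offspring(random.choice(survivors), random.choice(survivors))
        for _ in range(40)
    ]

print(fitness(max(population, key=fitness)))  # approaches 20 over generations
```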

 

What are the issues surrounding Superintelligence?

 

1.     Superintelligence = smarter-than-human AI.

a.     Achievable by creating an AI sufficiently intelligent to understand its own design, which could then redesign itself and create a more intelligent successor system, which would do the same again in a positive feedback cycle.

b.     Could also be achievable by increasing processing speed. Fastest observed neurons fire 1000 times per second; fastest axon fibres conduct signals at 150 metres/second, a half millionth of the speed of light. It’s physically possible to build a brain which computes a million times as fast as a human brain without shrinking its size or rewriting its software. If a human mind were thus accelerated, a subjective year of thinking would be accomplished for every 31 physical seconds in the outside world. This would be a weak superintelligence in that it would think like a human but faster.
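Sanity-checking that arithmetic: a year is about 3.15 × 10⁷ seconds, so at a millionfold speedup 3.15 × 10⁷ / 10⁶ ≈ 31.5 – a subjective year of thought for every 31 or so seconds of wall-clock time. Similarly, 150 m/s ÷ 3 × 10⁸ m/s = 5 × 10⁻⁷, i.e. axons conduct at about half a millionth of the speed of light, which is the headroom the argument appeals to.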

2.     Capabilities:

a.     Human type achievements: patent new inventions, publish groundbreaking research papers, make money on the stock market, lead political power blocks.

b.     Civilisation type achievements: invent capabilities that futurists commonly predict for human civilisations a century or millennium in the future, like molecular nanotechnology or interstellar travel.

c.     Species type achievements: changes of cognitive architecture might produce insights that no human-level mind would be able to find, or even represent, after any amount of time.

3.     Stakes become global/cosmic scale – humanity could be extinguished and replaced by nothing we would regard as worthwhile.

a.     Existential risk: a risk where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.

b.     A positive outcome for superintelligence could preserve Earth-originating intelligent life and help fulfil its potential.

c.     Consider the good-story bias: our intuitions about which future scenarios are plausible and realistic are shaped by what we see on TV and read in novels – so our intuitions are likely biased towards overestimating the probability of scenarios that make for a good story, because they’re more familiar and feel more real.

                             i.     A scenario in which humanity suddenly goes extinct without warning may be much more probable than one in which human heroes repel an invasion of robots.

d.     AIs have no fixed characteristics – they won’t necessarily be “good” or “evil”.

                             i.     It depends on the AI design we’re talking about.

                           ii.     AI may be inherently impossible to control, in that it will have the intelligence to overcome any human barriers and can rewrite its own source code to be anything it wants to be.

                          iii.     A self-modifying mind is likely to have a stable utility function based on its initial design, and so that initial design is vital in terms of lasting effects.

1.     Bayesian branches of AI seem more amenable to predictable self-modification than genetic or neural programming.

2.     Therefore this matters for contemporary AI research.

                          iv.     A superintelligent AI may have a different ethical perspective to us: people today are unlikely to be seen as ethically perfect by future civilisations, in part because of our failure to recognise ethical problems they deem relevant.

1.     Given this, it might be bad to create a mind that is stable in ethical dimensions along which human civilisations tend to exhibit directional change. E.g. if the Ancient Greeks had done this, we might have been stuck with slavery forever.

2.  So we probably shouldn’t try to invent a “super” version of what our own civilisation considers to be ethics – instead, we have to try to understand the structure of ethical questions in the same way we understand the structure of chess.


"Just Talk": The Limits of Tackling Stigma

An Introduction to the Strong Programme in the Sociology of Scientific Knowledge