Rare Opportunity or History Revisited? The Pitfalls and Prospects of Ethical AI in Light of Public Ethical Responses to the Telegraph

This article undertakes a comparative ethical analysis of the types of public expectations and concerns related to the development of two technologies: the telegraph and artificial intelligence. For each technology I provide a historical survey of public ethical expectations and concerns, followed by a survey of the outcomes of those expectations. Expectations and concerns of the telegraph-era public are drawn from popular and public literature and regulation of the period, whereas the expectations and concerns of contemporary public AI engagement are drawn from popular literature and public surveys, supported by a manual search and ranking of a number of ethics-related terms found in the raw feedback of the Stakeholder Consultation on the EU Commission High Level Expert Group Guidelines for Trustworthy AI. I then compare those results, highlighting the similarities and differences between the two technologies, in particular the positive expectations regarding economic and socially responsible use and the negative concerns regarding monopoly, regulation, and control. Finally, I argue that, taking the telegraph outcome as a guide, an ethical focus on accentuating positive expectations toward AI is more likely to produce definite results than concentrating upon prohibitory and negative approaches.


Introduction
Records of ethics as a human endeavor are at least 2500 years old, and perhaps older. Typically, the pursuit of ethics has been a matter of philosophers advancing a considered view of how humans in community should act. But alongside this, ethical results have been sought by the general public indirectly, as is currently the case with artificial intelligence. In this article I undertake a comparative ethical analysis of public ethical expectations and concerns regarding AI against public expectations and concerns which arose regarding an earlier technology, the telegraph. I have chosen the latter in particular because it ignited considerable public debate in its day, just as AI now does.
First, I consider the rise of the telegraph during the period from about 1850 to 1900 in European and North American society. Public ethical reaction to this new technology is explored from a number of angles. How extensive was the public reaction to the telegraph, and what form did it take? What were some of the major ethical concerns regarding the technology? What were the eventual results of this public ethical engagement? From there I move to the current situation regarding AI ethics as a demand of the public. After briefly surveying the history of AI ethics, I ask similar questions. How big is the public demand for AI ethics? What are its major concerns? Again, what have been the results so far of this public engagement?
I then compare the differences and similarities in the public engagement of the telegraph and AI and go on to make some suggestions regarding AI ethical effort, in light of the development of the telegraph. The goal is to soberly assess the potential for AI ethics as a public demand, i.e. to help peel away mere hype from practical ethical engagement and to spark ethical discussion through the comparison. Understanding the public reception of the telegraph can offer a fresh point of view to help us better reflect upon AI development, while offering a potential corrective to the hype surrounding AI, since, as I will show, 'we've been there before' with the telegraph. Finally, it can serve as an exercise in comparative ethics.
One may object that a precise and quantitative assessment of public ethical engagement is difficult, if not impossible, for either technology: on the one hand AI is too young a technology, its use on a public scale still too brief, and conversely the telegraph is too old a technology, for us to have precise assessments. In that case, such a comparative effort might be either premature or vague. Yet, taking a pragmatic view of ethics, in which ethics is not less than an accretion of consistent and successive actions and assessments in which the moral individual and society build one another up by turns, there is an advantage to be gained in understanding the social developments of morality and comparing such developments. A work of this length can at least lay some groundwork for the comparison in question and thus offer some preliminary indications of what we can, or cannot, expect as an outcome of the public ethical engagement of AI.

Telegraph Ethics
In exploring the public engagement of the telegraph as a new technology, I ask two concessions of the reader. The first regards the distinction between ethics and morality. The central issue I am concerned with here is: the interest of the general public in the issues surrounding the development, use, effects, and regulation of a technology in relation to its effects upon the general public. Thus, I will take morality as understood in the historic period under discussion and ethics as now popularly understood to be, practically speaking, synonymous. My main interest is the popular sense that 'something has to be done' about the technology because of the new issues, positive and negative, opened up by its development and use. The positive sense of 'what good can we do or achieve with this technology?' and the negative sense of 'what problems is this technology causing, and what ought we not do with it?' join public morality and ethics.
The second concession involves the differences in type of technology. The electric telegraph became largely a technology for communication between humans. But it did not begin that way, being originally developed for train signaling. 1 AI is more than a communication technology. But AI can also be regarded as a communication technology on a much more complex level: algorithms can assist other communication technologies, can help extract and communicate complex information in tandem with other technologies, and are popularly envisioned as standing in as surrogates, e.g. the various ChatGPTs, for human communicators. The differences in the purposes of the technologies should not alter the usefulness of the comparison, since the main interest is in exploring public engagements with new technologies as a potential for ethical progress, and in applying whatever insights might be found toward practical progress in AI ethics.

Scale of Public Reaction to the Telegraph
The first practical and commercial electrical telegraphs were invented in the 1830s and came into widespread commercial use in the 1840s. We have no exact way of measuring the scale of public moral reaction to the telegraph, but we can get a sense of it by considering the commentary which arose at the time in various social groups.
An opening comment upon the completion of the Atlantic telegraph cable helps capture some of the public perception of the technology:
The completion of the Atlantic Telegraph, the unapproachable triumph which has just been achieved in the extension of the submarine electrical Cable between Europe and America, has been the cause of the most exultant burst of popular enthusiasm that any event in modern times has ever elicited. So universal and joyful an expression of public sympathy betokens a profound emotion that will not immediately pass away. The laying of the Telegraph Cable is regarded, and most justly, as the greatest event in the present century. 2

The first such cables soon failed, but by 1866 they had been made permanent, and the public interest seems to have been undiminished by the failure. The later effort of 1866 was spoken of thus: "The importance of this, the latest and greatest success of the art of telegraphy, can scarcely be overrated, and it will, … rank among the greatest, because the most practically important, of the achievements of the age." 3 These are strong words. What was unique about the telegraph, however, compared to the many other technological inventions of the time, was that it was viewed as enhancing other technologies, as a sort of overlay upon them. This effect was viewed in a positive sense as a moral effect. Briggs and Maverick quote Lord Carlisle speaking of the 'moral links' between Europe and North America which were to be strengthened immeasurably by the 'material link' of the telegraph. 4 Gordon speaks of the sense of moral obligation of Peter Cooper, a Unitarian, businessman, and one of the prominent architects of the Atlantic telegraph. 5 Alfred Vail, an inventor and confederate of Morse, envisioned the telegraph as destined to produce "a greater amount of moral influence upon the community, if under proper guidance, than any discovery in this or any other past age of the world," but added that in the wrong hands the influence would result in an enormous amount of evil. 6 The expectation, couched in glowing and optimistic terms by many, was that the telegraph would weave its way over the whole globe, helping thought overcome time and space and human differences, and ultimately creating a global city of humanity with shared interests.
Thus, the telegraph was perceived as a moral project on a grand scale. But who was caught up in this public interest in it? Most prominently, journalists and newspapers were excited by the telegraph and were caught up in, and expanded, the hype. 7 Businessmen, and business in general, also embraced the new technology, particularly in the stock markets, quickly recognizing that advantages in the speed of information could be extremely profitable. 8 Politicians, likewise, embraced the telegraph, and social reformers such as Osborne Ward pressed for its installation at rates and places which would be a help to the working classes. 9 Much of the interest was on the side of the wealthy, but not all. The perspective of the poor is harder to gauge, but we have some sense of the interests and views of those who began to work within the technology. Telegraph employees were not highly paid. Horatio Alger's fictional Telegraph Boy, Frank Kavanagh, is shown as both poor and scorned socially by the higher social classes he encounters. On the other hand, the need for telegraph operators provided a new, if poorly paid, potential role for many women, giving them a chance to break out of existing stereotypes about female work roles. 10 In general, the telegraph was widely welcomed in sparsely populated and underdeveloped areas, 11 an outlook which persisted as late as the 1940s, as the author can attest based on talks with his own grandparents.
Other social classes viewed the telegraph with a spiritual and utopian outlook, including inventors and futurists. Many religious people were caught up in the idea of the spiritual type of action exhibited by the electricity of the telegraph. Henry Rogers, an English Congregationalist minister, envisioned the eventual success of the Atlantic telegraph as a 'great campaign' destined to be won, which would benefit the public in the gaining of the higher moral virtues of patience and perseverance which would be necessary to overcome the difficulties of disseminating the technology. 12 Supp-Montgomerie documents a rise in the Spiritualist movement which accompanied the spread of telegraphy, and which may have numbered as many as 11 million adherents in the US alone. 13 The beliefs of the movement closely tracked the possibilities evoked by telegraphy, sometimes literally, and blurred the boundaries between science, religion, and magic. 14

Public Ethical Engagement of the Telegraph
Clearly telegraphy and the telegraph, its possibilities and its immediate effects, caught the public attention on a grand scale. There was, as we have seen, an overarching sense, expressed in many variations, that 'the telegraph will change the human condition immeasurably.' This despite the early failures of the telegraph. And, whether misplaced or not, at the highest level it was a moral and ethical sense, an urge to break out into a higher value of the human condition.
It had a less vague ethical component as well, which the public debated vigorously. This is found in the literature of the time in questions about the technology, proscriptions for it, and reactions to the way it developed. I will categorize the issues as ethically positive and ethically negative. My interest is not in analyzing the technology ethically from a contemporary perspective, but in outlining the ethical/moral issues which seemed important enough to warrant discussion by the public of the time.

Positive Expectations
On the positive side we have seen that many, including its inventors and developers such as Vail, envisioned the telegraph as a positive moral influence upon the community in general, one which would lead to utopian or spiritual betterment. The actual working out of this positive influence in practice was somewhat vague, but the dream was there.
George Wilson, professor of technology at Edinburgh, outlining the history of telegraph development up to 1858, evokes the similarity with communication in nature, as well as the large-scale cooperation that humanity is capable of in an effort such as telegraphy: "The best interests of the world are bound up in its progress, and its mission is emphatically one of peace … it offers men a common speech in which all mankind can converse together." 15 In the beginning, at least, there was a hope that the enhanced ability to communicate, in terms of speed and distance, and the universalization of communication, would help bring about peace by resolving disputes at international scales before they turned into larger conflicts.
From another angle the telegraph was viewed as a herald of technology acting as a medium for human moral and social betterment. In this an economic component was married to a particular vision of social responsibility. The telegraph became a symbol for, as Carey notes, a justifying ideology for a new class of what might be called 'technologists,' i.e. professional engineers and researchers with deliberate plans to integrate technology with economic and industrial development in order to enhance humanity's future. 16 In time these people would become a major part of what C.P. Snow would later distinguish as one of the 'two cultures.' That culture was intent on bettering the world through technology, because as Snow argued: "… technology is rather easy … technology is the branch of human experience that people can learn with predictable results." 17

Negative Concerns
The question of how much the telegraph should cost, and whether it should be allowed as a monopoly, became major points of discussion. Du Boff argues that in the beginning the telegraph was viewed by many as a defense against monopoly. This did not last. Within a few decades one telegraph company, Western Union, had come to monopolize all telegraphy in the US, engendering a push to nationalize telegraphy. 18 In England the Journal of the Society of Arts called for nationalization, noting that the benefits of telegraphy had been 'neutralized' by the heavy prices imposed by telegraph companies.
Hand in hand with the cost question was the question of whether the technology should be government regulated and government owned. This worry was present from the first demonstrations of the new technology, as Vail noted. 19 Another related worry paralleled that of the monopolization of telegraph use: the monopolization and control of information. US politician Henry Clay saw this immediately. 20 This might be a temporal monopolization of information, since some types of messages got priority over other messages. It could also be a deliberate withholding of information in time or in scope by those, e.g. Australian newsrooms, who received the information first at the receiving end of telegrams. 21 The telegraph also changed the nature and scale of information. It commodified and objectified information, and resulted in leaner information stripped of local color as well as an overload of that information. 22 Information took on an ethical character in that its manipulation became subject to the many ethical issues common to all objects created by and exchanged among humans, including the issue of control noted above, but also hoarding, falsifying, theft, and overcharging.
We have already observed that the workers employed by the telegraph industry were not especially well paid and, in many cases, deprecated. The telegraph services were inoperable apart from an army of labour comprised predominantly of young boys. According to Downey, Western Union, with its American monopoly upon the telegraph, employed "the nation's single largest child-labor army." 23 This was well known and decried by social reformers of the period.
Beyond this, however, and with the introduction of women into the operator workforce, telegraphy was viewed as gendered and male, along with technology in general, which caused social resentment against women operators, while "in the techie magazines of the times … many authors alluded to a possible loss of a world they idealized, a world threatened by new modes of electrical communication." 24 The potential for publicly dangerous, illegal, or unethical messages bothered others. The use of the telegraph for illicit romance was a major concern, and marriage by telegraph, which became exceedingly popular, caused anxiety and public disapprobation. 25 Some, such as Henry David Thoreau, contrary to the utopians, questioned the purpose and speed of telegraphic development. Thoreau predicted that the telegraph would both multiply the flow of useless information and make it easier to do the unethical: "we are in great haste to construct a magnetic telegraph from Maine to Texas; but Maine and Texas, it may be, have nothing important to communicate … as if the main object were to talk fast and not to talk sensibly." 26 Frederick Hedge, in the same camp as Thoreau, discounted the hype also: "The electric telegraph is a cunning invention; but the art of writing, about which little noise was made at the time, was a greater advance in civilization, and a greater blessing to mankind." 27 Finally, there were some, as Carey notes, who saw soon enough that the telegraph, rather than contributing to a universal brotherhood of mankind, would facilitate the worst aspects of colonial control. 28 Other people, those who were controlled, evidently saw this as well, and in the British Indian Rebellion of 1857, telegraphs and the equipment which supported them attracted some of the most extreme destruction. 29

Results of Public Ethical Engagement of the Telegraph
The results of the public ethical engagement of the telegraph can be linked more or less to the ethical issues laid out above. Sometimes there appear to have been few or no results, despite considerable public hand-wringing and debate.

On the Positive Expectations
On the positive side the results are sometimes harder to discern, since reflective closure on open-ended goals is difficult. The ideal of the telegraph as a technological spark to a utopian and spiritual future seems to have remained always an ideal 'a little further on.' At least in terms of moral influence, and as other negative results shown below confirm, the telegraph arguably contributed to some social problems as much as it solved others, so that few would have been ready to characterize the beginning of the 20th century as a utopia. Utopia was the goal with the new technologies, but it was also always, as Iwan Rhys Morus puts it with regard to its many celebrations in the fairs and expositions of the late 19th century, "firmly in the future." 30 The universalization of communication as a goad to peace, hoped for in the telegraph, did not produce the looked-for results. Supp-Montgomerie quotes a remarkable headline about the Atlantic telegraph cable, representative of the euphoria of the time: "The World's Holiday. No more distance! No more war! …" 31 The numerous wars which arose in the latter half of the 19th century, from the American Civil War and the Franco-Prussian War to the Boer War, and then the First World War, showed how very premature these hopes were. The telegraph appears to have increased cooperation at national levels, both in the sense of empires and at smaller scales, only to have that cooperation harnessed to facilitate conflict at international levels. 32 The socially responsible use of the telegraph for the integration of human industry and technology for social betterment achieved better results. From the beginning we find calls for it to be adopted in civic infrastructure, such as in 1850s London as a means to quickly coordinate the efforts of fire departments. 33 It was also used to facilitate early forms of long-distance medicine and the collection of weather data. The improvement of the efficiency and safety of the railways, which had first adopted it, was a benefit from the beginning. These uses overlapped with economic benefits, as railways came to undergird the economy. Information exchange made possible by the telegraph led to the integration of hitherto regional businesses into larger enterprises, to consolidation and efficiency in transportation, 34 to the movement of money for private industry, 35 and to the expansion of banking globally. 36

On the Negative Concerns
The results of calls to deal with the monopolization, regulation, and availability of the telegraph differed from country to country. Calls for de-monopolization in Britain and many European countries were met by nationalization of the telegraph in those countries. In the USA public calls for nationalization had little result. There, Western Union quickly gained a national monopoly which it held for many decades, while lobbying hard against all public calls toward nationalization. 37 In Europe, on the initiative of France, a treaty was established which codified international rules for telegraph use in European states, as well as reducing and standardizing the tariff based upon the French franc. 38 In Britain the calls of the public eventually resulted in nationalization and uniformity of fees as well. 39 The public discussion over information commodification and control did not achieve results equal to those regarding cost and monopolization, except to the extent that it also engendered anti-monopolization regulation. Information was increasingly controlled, sometimes in subtle and sometimes in direct ways. Telegraph lines could be cut, as they were in the British Indian Rebellion of 1857 and at the outbreak of the American Civil War, but this was the exception. More subtly, newspapers in Australia resorted to copyrighting telegraphed news information, selectively publishing telegraphed news, and controlling its distribution. 40

Reformer pushback against the labour conditions of telegraph workers, including unionization, strikes, and calls for regulation of child telegraph messengers, resulted in unions and laws. But, as results, these were not distinctly separate from public efforts in other domains. Moreover, according to Downey, they were only fully successful in urban areas such as New York City, and were combined with largely illusory schemes on the part of Western Union to pretend to care about the career development of telegraph messenger boys. 41 With regard to social resentment of change and women operators, the latter sometimes joined in strikes, such as the large strikes of Western Union employees, demanding among other things equal pay for women. But these strikes mostly failed to achieve their objectives. 42 The concerns about dangerous, illegal, or immoral messages resulted in the administrative controls within the European treaty on telegraphy, 43 although their practical effect is hard to gauge. In the US, operators were segregated by gender in attempts to keep female operators from the 'corrupting influence' of male operators. 44 The concerns over morally illicit romances had little practical result, beyond giving rise to a class of romance novels which wove the female telegrapher experience into the morals of the day. 45 The more philosophically oriented public concerns over speed and purpose, such as Thoreau's, who viewed the telegraph as merely speeding up and increasing the unethical aspects of human action, do not seem to have issued in any definite results. The telegraph propagated widely and the information exchanged increased. There does not appear to have been any significant pause for reflection upon the intelligence and quality of the messages, or upon the opportunity and need for underlying ethical improvement that could make the best use of the telegraph.

Artifi cial Intelligence Ethics
In passing to the public ethical engagement of artificial intelligence, I again ask two concessions. The first is to recognize that, in terms of public perception of AI, the overlapping subfields which fall under the umbrella of AI are generally not the subject of public ethical interest. So, e.g., the lay public is not likely to respond widely to more technical terms such as machine learning, deep learning, or algorithm. The public do not well distinguish AI from general computer technology, but have tended to think of AI and computers together in terms of an artificial but human-like mind. This is amply demonstrated in older discussions from the latter half of the 20th century. In the US Congress House Committee on Appropriations of 1951, for example, on a government-rented tax computer supplied by IBM, we find this exchange:

Mr. Canfield. I think there has been some publicity about them. Is reference being made to them as a sort of 'seeing eye'? Is that true?

Mr. Williams. They call it 'a brain.' … Mr. Williams. I think what makes the machine so interesting, and why it is called a brain or thinking machine, is that it has the ability to transfer from this counter to another unit, called a storage component, or memory, a partial answer, such as taxable net income. 46

The second concession is that AI ethics is not practically separable from digital ethics and issues such as compliance and data use. The problematic proliferation of domains within engineering and technology ethics has been well discussed by Skaug Saetra and Danaher, 47 who locate AI ethics, data ethics, and digital ethics as domains under computer ethics. Nonetheless, I will take AI ethics to be separate from digital and data ethics because I am interested here in public ethical perception of AI in terms of those aspects of AI which capture the lay public's ethical attention.

Scale of Public Reaction to AI
The general idea of artificial intelligence had been brought before the western public mind before the 1950s. Work on artificial intelligence as we understand it began in the 1940s. Norbert Wiener elegantly summarized some of the ethical concerns in 1947: "the first industrial revolution … was the devaluation of the human arm by the competition of machinery … [the second] is similarly bound to devalue the human brain …." 48 The term artificial intelligence was coined by McCarthy in 1956, but there are references to the concept as early as 1838, under the term 'artificial brain.' In the earlier development of artificial intelligence, the meaning of the concept in the public imagination is clearly not separable from that of computers in general as 'thinking machines,' in the manner discussed in the US Congress Committee cited above. A newspaper article from 1966 speaks of thinking machines, quoting a computer scientist thus: "the basic problem lies in the layman's attempt to think of the computer in human terms," noting also that this tendency has resulted in a public fear of computers. 49 Thus, there were many popular terms for the concept of 'non-human mechanical intelligence,' and it was these which sparked the public imagination from the 1940s onward.
If we ask who was and is discussing artificial intelligence, we find a great variety. Researchers, including philosophers, scientists, and technology developers, have been discussing artificial intelligence in a professional capacity since at least the 1940s. This engagement has increased with time and with the successes in advancing artificial intelligence. A search for the term "artificial intelligence" in the popular Semantic Scholar search engine rises from a few research articles in the late 1950s to nearly 34,000 articles in 2021 alone. The term AI has overtaken all others recently, but older terms such as "thinking machine" are still extant; one can find that term in 73 articles published in 2020. If talk of AI among STEM fields seems obvious, interest from within the humanities and arts seems less so, but it is in fact very high, with political science, the arts, history, and other non-STEM fields making up a significant portion of research articles on AI.
AI is widely discussed by NGOs as well. This can be seen on the AI Initiatives page of the Council of Europe, 50 where, from a beginning of 1 in 2010, the number of frameworks or declarations on AI had risen to more than 500 as of 2021. A considerable number of businesses in developed countries have embraced AI, with McKinsey reporting from a 2020 survey of about 2400 participants that 50% indicated their companies had adopted AI in some form. 51 In general, the broader public engagement with the idea of AI appears to have increased. Fast and Horvitz have shown that it has increased sharply since 2009. 52 We should be cautious here, however, because automated searches for definite terms may not take into account more dated popular terms for artificial intelligence which are now forgotten. On the other hand, public engagement with the idea of AI has increased more relative to anticipated future benefits of AI than to existing benefits in existing technologies. 53 In other words, there is a strong futuristic-leaning component in public engagement with AI.

Public Ethical Engagement of AI
Ethical issues are seemingly numerous. A number of surveys of AI ethics point out many different issues. Hagendorff, for example, notes 22 different ethical issues which are being engaged in AI ethics guidelines. 54 But ethical concerns in the academic and research context may be far removed from public concerns, for several reasons. The current structure of the academic publishing system, and the professional need to publish which it creates, may present a skewed impression of the public importance of certain issues, and the system is in danger of falling prey to hype as well. The consulting firm Gartner has recently stated that digital ethics, driven by AI, is at the peak of what it calls the Hype Cycle. 55 Thus, my focus here is not upon AI ethics issues brought up in academic research, but upon those which appear most prominently as part of public concerns and expectations. My effort amalgamates insights from recent surveys which have attempted rankings of public concerns, e.g. Fast and Horvitz 56 and Schiff et al., 57 surveys on public opinion, Lockey et al. 58 and Pew, 59 and my own attempts to find historic references in popular literature dating back to and before the mid-20th-century beginnings of AI. Along with these I will refer to data gleaned from the raw feedback of the Stakeholder Consultation on the EU Commission High Level Expert Group Guidelines for Trustworthy AI.
60 The latter, some 562 pages, is available online and gives an interesting overview of the concerns and expectations of lay people regarding AI. A manual search of the feedback was carried out for each of the single words indicated, with the results shown in Figure I and Figure II. Each word was also verified according to its context, relative to being a negative ethical concern or a positive ethical expectation, and only one instance of a given word was counted for a single comment. In the case of concerns which might be located through multiple overlapping words, I often tried a number of related words and indicate here the word which gave the highest number of results. Thus, e.g., the words speed, pace, and fast resulted in 5, 3, and 12 instances respectively, so I include the 12 instances of fast in the results below.
There is a good case to be made for an approach which avoids predominantly academic treatments of AI ethics issues as much as possible, at least in an effort of comparative ethics. 61 My approach has been to treat as valid, as indications of public concern, only those instances of words within comments which were offered without explicit reference to academic affiliation. Non-academic HLEG consultation feedback came from private individuals, law associations, financial associations, professional associations, churches, NGOs, unions, and businesses. I take all of these to be members of the general public, insofar as they have a secondary and lay interest in ethical issues surrounding AI which differs from the focused, primary professional interest in ethics of academic AI ethics researchers. The HLEG draft guidelines requiring the consultation feedback do 'prime' the potential terms to some extent. I do not correct for this, but note that what was said in the first draft guidelines served at the same time as a base from which the interested public could indicate - and they often did, strongly - which ethical issues were missing from or only weakly present in the draft. I view this as balancing potential bias toward the particular concerns of the guideline draft writers. 64 Note that when a term had several possible variants for the same meaning in context, the search was carried out for the most general part of the term when the uniqueness of the term permitted, e.g. democra for democracy and democratic, and monopo for monopoly and monopolize.
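The stem-based counting rule just described - match on the most general part of a term, and count each comment at most once however many times the stem appears in it - can be sketched programmatically. The following is a minimal illustration, not the author's actual procedure; the comment texts are hypothetical, and context verification (concern versus expectation) would still be done manually.

```python
import re

def count_term(comments, stem):
    """Count comments containing a term stem (e.g. 'democra', 'monopo').

    Each comment is counted at most once, mirroring the rule that only
    one instance of a given word counts per comment.
    """
    pattern = re.compile(re.escape(stem), re.IGNORECASE)
    return sum(1 for text in comments if pattern.search(text))

# Hypothetical feedback comments, for illustration only.
comments = [
    "AI must not undermine democratic processes.",
    "Monopolies will monopolize data; democracy suffers.",
    "Regulation should keep pace with development.",
]

print(count_term(comments, "democra"))  # 2: first and second comments match
print(count_term(comments, "monopo"))   # 1: second comment counts only once
```

The stem match is what allows one search to cover 'democracy' and 'democratic', or 'monopoly' and 'monopolize', as described above.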

Positive Expectations
'Social responsibility', taken as a use of AI for public benefit, is one of the highest-ranking contemporary ethical concerns across various categories of the public according to Schiff et al. 65 The notion of social responsibility is very broad and arguably contains a number of perceived potential benefits of AI within its generalization. Moreover, as seen in Figure II, all of the searches for positive ethical expectations yielded at least some results, and the issues of economy, climate, and peace could be viewed as also falling under the umbrella of social responsibility, even though each can also be viewed separately. Direct mentions of public good can broadly be taken to be synonymous with social responsibility, and again there are some, although I retain the term used by Schiff et al. Within the umbrella of social responsibility, the highest-ranking word of those considered in examining the HLEG guidelines feedback for positive ethical expectations was 'economy'. Indeed, there were very few indications that anyone considered AI as potentially bad for the general economy, although its effect on particular groups in the economy, or on their ways of contributing to the latter, was regularly raised as a potential problem, under the term inequality.
This expected positive economic effect of AI has a history. The Glasgow Herald of May 1st 1986, for example, speaks of Britain's first artificial intelligence computer company bringing new jobs, observing that the company "will offer computer users systems which will allow one computer to 'talk' to another," and that though the number of jobs is small, "their value in terms of better services for the major employers is large." 66 Emphasis is on the economic betterment of the town and region, even though 'artificial intelligence' here describes something closer to prognostics. In the 1960s we find the expectation in vaguer notions of the unlimited potential of computers. 67 More recently the potential role of AI in preventing or mitigating global warming has been popularized, 68 even though the energy and computing infrastructure use of AI counterbalances this. My HLEG feedback search finds the term 'climate' to rank second. Some instances suggest that AI can be developed in an environmentally friendly way without exacerbating climate change problems, i.e. that AI will at least not make things worse. Other instances in the feedback are more active in tone, expecting that AI can be used to manage natural resources more sustainably than humans do, improve related human decisions, and streamline major contributors to global warming such as transportation. This mirrors other recent popular accounts of AI use for mitigating climate change related damage, e.g. that of Green. 69

References to 'peace' also appear in the HLEG feedback. There is a neutral view which hopes that AI can be developed for peaceful applications and according to peace efforts promoted in political frameworks such as those of the EU. Yet there is also a more proactive view which desires the 'learning aspect' of AI to be deliberately developed based upon peaceful models of human behaviour. This vision of AI for peace is supported by a number of popularizations, e.g. aiforpeace.org, which focuses on peaceful uses of AI, but also by more active plans to use AI to defuse potential war situations 70 or to use algorithms to help in mediation toward peace. 71 Utopian and futuristic notions of the ultimate ends of AI also occur. These have been popularized by those such as Kurzweil, who, in transhumanism, advances the idea of a technological singularity, a point at which technology - mainly AI - overtakes human capabilities and becomes self-developing and unstoppable, eventually godlike, with positive results. This 'real world' urge emerged in older science fiction such as the robot R. Daneel Olivaw in Asimov's early Foundation series, and in newer science fiction like Iain M. Banks' Culture novels, where advanced artificial intelligences guide humanity, overtly and covertly, to our benefit. As a counterpoint to the public perception of technology as a driver of secularism, the spiritualization of AI is also occurring online, where technological hopes are blending with religious hopes. 72 Supporting this, in Figure II the term 'improving' (human life) figures in third place in the HLEG feedback, as a general hope for AI in a sense related more to the idea of a golden age or utopia than to everyday social benefits.

Negative Concerns
Under negative concerns, one of the most consistent is the fear of AI eventually controlling humans. Such concerns predate the 1950s, as Halacy noted, and were widely popularized during the 1940s in response to electronic computers, particularly among fiction writers such as H. G. Wells, who wrote of the 'Giant Brain.' 73 These concerns are probably historically tied in with the public tendency to be indiscriminate in viewing the computer generally as a 'thinking machine.' Hughes, in 1966, for example, writes of the public "… fear that computers are challenging human beings for supremacy …" and quite clearly equates artificial intelligence with general computer operations. 74 The gradual increase of this public concern is suggested in the more recent study of Fast and Horvitz. 75 Contemporary negative concern regarding AI control of humans is bound up with the term AGI (artificial general intelligence). The latter ranked sixth in HLEG feedback, showing its relative importance, and I retain it over more ambiguous terms such as 'control,' which also occur in the feedback.
The highest ranked term by far, among those considered in the HLEG feedback under negative ethical concerns, is 'bias'. All parties were concerned about AI decisions being biased against, or for, certain groups of people. Unsurprisingly, recent high-profile cases of algorithmic bias, such as that of Amazon's hiring algorithm, 76 have kept this concern in the public consciousness. 'Regulation', understood as the AI field not being well regulated and requiring better regulation, ranked second. This was so despite my not counting instances of countervailing use of the term, as verified by context, i.e. the view that current regulation was entirely adequate and more regulation would be a nuisance. The latter view, in which regulation was not an ethical concern, was prominent among most corporate contributions to the feedback. This shows that the consultation feedback was indeed public in the sense of taking into account ethical concerns well beyond those of corporate special interests.
Concerns about 'democracy' also rank highly, in third place. The context of instances indicated that the concerns were sometimes passive, i.e., given that the HLEG guidelines were specifically developed for the EU, AI should be deliberately developed to uphold the democratic backgrounds and principles of EU members which are instantiated in EU policy initiatives. Yet the contexts of other instances indicated a proactive desire to address potentials for interference in democratic processes - e.g. deepfakes and filter bubbles - either from or by means of the largest corporate players in AI, particularly social media. That this concern is increasing is confirmed by popular discussion of research on the issue, see e.g. Haidt, 77 and by public opinion polls, e.g. Pew. 78 Fast and Horvitz indicate that a negative concern about work is rising in the public perception of AI. 79 This is supported by the instances of the term 'job' in my examination of the feedback. The context of the instances indicates considerable fear of AI replacing humans in workplaces. This is backed up in the rankings of public sector ethics topics which Schiff et al. present. 80 Closely related, the worry of work degradation, which has a long history in the context of the generally automated workplace - see e.g. the 'Plastac' factory scenes of Tati's 1958 film Mon Oncle - is also a public concern. Instances of the term 'labour' in the consultation feedback, correlated in context with the notion of degradation of working conditions due to AI-supported automation, ranked just below that of job loss.
In the mid-range ranks of instances of my search were worries about monopoly, speed of AI development, and military AI uses. Interest in each was about equal. There was a general worry that AI would contribute to the rise of 'monopolies', particularly in terms of data accumulation, but also that established monopolies prevented the fair use of AI. The overlapping of this concern with those of inequality and availability shows that unique terms do not give an exact picture, but rather a general impression of public opinion. Speed of AI development - using the term 'fast' - generally correlated with worries that regulation could not keep up, or that the HLEG guidelines must be dynamic enough to address the pace of AI development. But some contributors explicitly called for slowing down development to make it ethical and to rethink the technology's social effects. One contributor lamented - echoes of Thoreau - that AI and related technologies were developing so fast that a given iteration 'has come and gone' before we can assess it or the human skills developed in relation to it. 'Military' concerns were focused, variously, on ensuring that military uses incorporate explainability, that such uses should clearly and legally locate the responsibility of the initiator of the use, that such uses wrongfully abdicate life and death decisions to a machine, and that such uses should be completely prohibited.

On the Positive Expectations
Many countries have embraced AI recently, creating national bodies to advance it in hopes of 'economic' growth. 81 Driven by hype, like many tech domains, AI - or at least 'the idea of AI' - is a hot property. Forward-looking studies are effusive regarding AI economic benefits. One has AI raising global GDP by 14% by the year 2030. 82 Others, estimating current economic effects of AI, tend to support the forward-looking studies. Dawson et al., in 2022, studying AI-related US government expenditures over five years, found more than a billion USD in expenditures, particularly in the category of professional, scientific, and technical services. 83 Though a large number of contracts were with small vendors, the authors view the growth as 'healthy' because it came in response to specific needs, i.e. was not merely hype-based. For 2020, McKinsey reports a 22 percent increase in EBIT earnings due to AI over the year earlier. 84 Thus, AI is indeed a growing factor economically, even in emerging markets, with India, China, and countries in North Africa driving the adoption of AI. 85 Positive AI contributions to 'climate' change issues which go beyond the goals and proposals noted earlier are limited. AI is being incorporated into systems to predict or help understand climate change related phenomena better, the latter tending to concern the types and scale of climate degradation occurring. Emphasis is on mitigation, e.g. Toews, 86 rather than using AI to actively reverse climate change. As King and Lichtenstein 87 have argued, we already know what needs to be done. We even know what AI could do to help, for example in carbon capture techniques. 88 But knowing what to do theoretically and actually doing it are different.
Meanwhile there is every indication that AI has already become a major contributor to climate problems. First, because it is founded upon other systems, not least physical systems, 89 whose detrimental impacts in terms of energy and material use increase constantly even alongside gains in their efficiency. 90 Second, because the current dominant AI training paradigm focuses on mass computation resources and increasingly large data sets, so that the carbon footprint of AI is large, 91 growing, and negating positive uses of AI upon the issue.
Separating the notion of AI uses for 'peaceful' purposes from AI uses explicitly focused on bringing about peace is useful in considering results in this category. The former seem unsuccessful, given that they fall under a category of approach to AI ethics which is itself practically unsuccessful, i.e. the approach of not using AI in some circumstances. As Hagendorff, in 2022, argues, 92 talk of non-use of AI is nearly absent from discussions of AI ethics, public and private. This, coupled with the ease of integrating AI with modern weapons such as drones, is bypassing the notion of only using AI for peaceful purposes. Recent documents like the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy 93 make no mention of peace, instead pushing for a regulated international 'level playing field' for military AI. Explicit efforts to operationalize AI uses for peace are also sparse, however. It is unclear how recent efforts like those of the Israeli ministry of foreign affairs in using deep learning to deepfake political messages toward peace 94 differ substantially from similar conventional efforts toward peace.
The results of the more 'utopian' positive ethical vision for AI must wait upon the further future of human development. There are indications, however, that this vision has engendered a general acceptance of the integration of AI into human activity. A 2020 Australian public survey found that even though 61% of respondents have a low understanding of AI, only 7% reject it, with the remainder tolerating, accepting, approving, or embracing it. 95 A low understanding seems to overlap with a popular understanding, the latter of which is influenced by the more utopian ethical view of AI. Recent AI advances such as ChatGPT, which outwardly presents a veneer of human-like manipulation of language, are both feared in some quarters 96 and lauded as 'transformative' in others. 97 Either way, the sudden and intense public hype about such language models is uncomfortably close to past telegraphic public hype: it encapsulates the sense that 'this changes everything' for society. Combined with the fact that no pause is actually taken for a societal rethink of the development of the technology, this leads more toward a utopian than a dystopian reading. The lack of overt opposition in public sentiment toward the rapid integration of AI into social life now occurring arguably indicates an acceptance that the 'exciting transformations' which technologies such as ChatGPT herald are more utopian-looking than not, although the utopia is yet to come.
The result of a public focus on 'social responsibility' depends upon the aspect of social responsibility countenanced. Benefit to the public is hard to qualify or quantify without the benefit of hindsight. In a sense all the other concerns and expectations discussed here - excepting perhaps utopian hopes - feed into the notion of AI for public good. Based on the more specific positive expectations discussed above - economic, climate related, and peace tending - the actual results of the ideal of using AI in a socially responsible way are there, but mixed. Efforts such as Belgium's CitizenLab, for example, using AI to analyze citizen priorities proactively and encourage participative democratic practices, are direct candidates for a socially responsible use of AI. Yet, though encouraging, these results depend very heavily on strong user participation and quality of input, as Berryhill et al. admit. 98 With those caveats, definite positive results are harder to estimate, considering that secondary efforts are being promoted in parallel to the use of the technology. But that very integration of AI and social engagement may be promising, as I will discuss later.

On the Negative Concerns
Concerns regarding 'bias' have triggered various responses. There have been very public commitments to combat bias, e.g. UNESCO panel discussions and the Roman Catholic Church's Rome Call for AI Ethics. Multiple frameworks, such as the OECD AI Principles, have been developed, which lay out generic principles to follow in avoiding bias. Best practices guides have been advanced, e.g. Turner Lee et al., 99 attempting to get developers to think about who is impacted by AI systems. The most obvious results are technical efforts. These take a number of forms, but typically there is a focus on the transparency of the algorithm, the development of fairness metrics, data preprocessing techniques which identify sensitive attributes in data and remove or cancel them, and discovering the effect of particular sensitive attributes in order to balance model predictions after processing. 100 Though the proliferation of efforts is unquestionable, their success as results is less so. The Stanford AI Index Report of 2022 addresses engagement with bias among other factors. Damningly, the report finds that as AI language models grow larger, the bias generated within them increases. 101 Nor is it only bias in AI use that is increasing: bias in terms of who develops AI technology - lack of diversity - has not decreased significantly. 102 So here again, results are no better than mixed: bias is indeed being addressed technically and socially to some extent, but the public desire for results is not translating into less bias.

'Regulation' as an issue of concern seems to be producing some actual results. Various national AI regulatory efforts are in the works, including the Canadian AIDA (Artificial Intelligence and Data Act) and the UK National AI Strategy. The UK effort is in an early state. AIDA leaves many details to be developed in future regulation - notably the definition of 'high-impact systems' - and exempts both government and military from the Act. 103 The most advanced effort, the European Commission AI Act proposal, is moving through the adoption process, with tentative adoption in early 2024. The EU AI Act is clearly a first step which will have effects beyond the EU. But a number of aspects of its development tend to water down the results. These include: uncertainty as to what counts as an AI system, national level enforcement, and exemption of military AI use. The relation of the act as legal regulation to its purported foundations as an ethical endeavor is also questionable. The latter issue, indifferent in terms of the regulatory side, nonetheless calls into question the notion that the Act's adoption results from public ethical concern.
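The fairness metrics mentioned above in connection with the bias concern can be made concrete with a small example. The following sketch computes one common, deliberately simple metric - the demographic parity difference - and is not drawn from any of the cited frameworks or reports; the function name, decisions, and group labels are hypothetical.

```python
def demographic_parity_difference(outcomes, groups):
    """Absolute difference in favorable-outcome rates between two groups.

    outcomes: list of 0/1 model decisions (1 = favorable, e.g. 'hire').
    groups:   list of group labels, one per decision.
    A value of 0 would indicate parity under this metric.
    """
    rates = {}
    for g in set(groups):
        selected = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(selected) / len(selected)
    a, b = rates.values()
    return abs(a - b)

# Hypothetical decisions for two applicant groups, for illustration only.
outcomes = [1, 1, 0, 1, 0, 0, 0, 1]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(outcomes, groups))  # 0.75 - 0.25 = 0.5
```

Metrics of this kind underlie the post-processing balancing of model predictions described above, though real toolkits offer many competing definitions of fairness.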
US-based AI regulation at the national level has not proceeded beyond discussion, although some individual states have taken initiatives, e.g. focusing on specific uses of AI such as regulating automated employment decision tools to prevent bias in hiring. 104 There are clearly roadblocks at local levels, however. New York City Council's AI regulation effort has been called a 'spectacular failure,' for example, due to uncertainty in defining AI, inability to understand real world AI use, and administrative unwillingness to subject the automated systems to the scrutiny which would facilitate regulation. 105

Responses to public concerns about AI degrading 'democracy' are limited. The most obvious area to look for results is the demonstrated detrimental effect of AI use in social media. Insofar as social media giants can still be viewed as national level institutions, however - not perfectly evident in some cases - we can distinguish here between adverse effects of AI-driven social media internal to national areas and those caused by social media adoption which crosses national boundaries and plays into international tensions. The latter issue has produced results recently, in the banning of the social media app TikTok on government issued devices in India, Canada, the USA, and the EU. 106 Nonetheless, the precise scope of these bans and the reason for them - data collection - indicate that the results are not so much ethically driven as driven by national and regional efforts to project or reinforce political and economic power globally.
Efforts to rein in the adverse effects of AI on democracy internally have failed. Not only does the very nature of algorithm-augmented social media enable and promote the destructive tendencies of human behaviour for the worst, e.g. in misinformation, 107 but, as Lauer argues, 108 the generation of inaccurate and toxic information is an ethical problem rather than one that can be solved technologically, and social media giants have no interest in solving it because it is the core of their business model.
With regard to the 'job' concern, the public perception of the potential effect of AI on work has been focused upon one question: 'will the AI put me out of a job?' NGOs and the public sector have begun to address this in guidelines and frameworks, e.g. the HLEG AI guidelines briefly mention job loss. But there is little interest in it in the private sector, 109 the context in which, ironically, the concern is most at home. Moreover, recent empirical evidence, covering 33 OECD countries, shows that AI and robots do indeed increase unemployment. 110 It seems that, at least in terms of a definite engagement of the question in the form the public gives it, i.e. 'is the development of AI counterproductive in terms of human employment as such?' - Wiener's concern back in 1948 - there have been few results in terms of questioning the ongoing development of AI 'as such'. This is in keeping with Hagendorff's suggestion that the option of not using AI remains unconsidered. 111

Reflections upon the results of public concerns regarding AI automation degrading 'labour' conditions can be focused upon direct and indirect effects. Indirect effects include primarily the expanding practice of using human labour - low paid or unpaid, often sourced in the global south, 112 and often psychologically distressing - in order to train or guide AI. Direct effects include using AI to analyse workers' physical movement, speech, absences, or work tempo in order to correct it toward some purported most efficient level. Amazon leads negative headlines with regard to this concern, with Bao et al. 113 observing that AI-based work surveillance has led to layoffs of more than 10% of staff in some distribution centers due to purported inefficiency, while impinging upon privacy and stressing workers. But while the work surveillance industry is expanding rapidly, 114 there is no indication of pausing for deeper consideration. Recent European cases 115 seem to indicate that legal impetus is on the side of corporations in portraying workplace surveillance as necessary and reasonable.
The issue of AGI, of AI controlling humans, has sparked high-profile contemporary warnings, e.g. those of Hawking, Musk, and Bostrom. Such warnings were circulated much earlier, however, by figures such as Marvin Minsky. 116 The human control issue has already come to the fore in government and NGO efforts to regulate AI use, e.g. the stress upon human agency and oversight in the HLEG guidelines. And yet the relatively middling private sector ranking of this concern 117 indicates that the actual engagement of the issue is nowhere near its importance in the public perception of AI. The contemporary and historic public appears to think of the AI control issue in the forward-looking sense of what AI eventually means for human control of human life, whereas the private sector and NGOs regard the issue in terms of control of AI with regard to specific applications. The recent Pew survey, the majority of whose respondents were AI researchers or developers, indicates that sixty-eight percent do not think that principles promoting the public good are in the near future of AI, and that the major worry is that social control and profit seeking are the focus of AI development. 118 Thus, there is much hype and sci-fi talk regarding AGI, but no real action regarding the main danger: use of AI as a blind for social control.
The basket of mid-range ranked public ethical concerns - 'monopoly', 'speed of AI development', and AI use for 'military' purposes - unsurprisingly does not fare better than the higher ranked concerns. A negative result with regard to the monopoly concern is perhaps a foregone conclusion, because in large part the majority of AI development is either linked to or being carried out by corporations which - if taken together - are already effective monopolies with regard to the technology and data undergirding AI efforts in their current paradigm. As Niyazov observes, on an alternate assessment of economic power, sixty-nine of the top one hundred world economies would be corporations. 119 Of these, the half dozen or so which lead digital tech and data development are among the largest.
Likewise, the speed of AI development, driven by these monopolies and their access to data, shows no signs of pausing. This may not mean that the technological aspect of development can keep up a rapid pace. Despite an early quantifiable, technologically driven rapidity, 120 the systems underlying AI have limits which are becoming apparent, e.g. in phenomena such as 'dark silicon.' 121 In terms of ethical results more properly speaking, i.e. in terms of practical efforts to limit the speed of AI advance so as to reflect upon its uses, hype and hope are outstripping wisdom, as we have seen above with regard to economic growth and public sentiment.
Finally, we have seen that upcoming EU regulation has decisively exempted military AI uses. Despite laudable counter-efforts in some countries such as Belgium, 122 a short survey of recent developments indicates that most militaries with the means - including the world's largest - are either already developing or planning AI for direct military applications. Meanwhile, direct uses of AI in military operations have already occurred in the Israel-Palestine conflict, 123 possibly with AI-equipped killer drones in recent conflicts in Libya, 124 and in the Ukraine war, both directly on the battlefield and in AI-enabled analysis of tactics and strategy with an eye toward future military applications. 125

Comparing the Telegraph and AI
As shown in Tables I and II below, the two technologies display many similarities in terms of public ethical concerns and expectations and in the results of those concerns.

Similarities and Differences
Both technologies are similar in that their effects and potential disasters are more cumulative than instant - in comparison, say, to the effect of a badly built bridge - but public ethical concerns regarding the telegraph are somewhat easier to draw into a conspectus because the history of telegraphy as a practical mode of technology is complete. On the other hand, the telegraph could be seen as a progenitor of AI in some ways: the byways trodden in developing the former may have grooved ethical tracks for the latter.
In terms of differences, among the predominant public concerns on the AI side, the bias concern is foremost. Bias is directly related to the technical capabilities, purposes, and sources of training data used by algorithms, and does not translate well into telegraph terms. The democracy concern also has no clear equivalent in telegraph terms, though the telegraph affected administrative government considerably in the democracies of its time. Telegraphy led to some job losses for those who had previously carried information - e.g. by horse - but the issue does not seem to have been a public concern.
We have seen there was public concern about the availability - cost - of telegraph services, one which is not evidently present in public concern about AI. This may be because, at least currently and with regard to social AI uses, the development of the latter is largely driven by a paradigm of data acquisition which encourages, and profits by, widespread use. The nature of information itself, whether dangerous or unethical or - more philosophically - useless, and the notion of the telegraph enhancing the unethical tendencies in human nature, lack a strong equivalent in public AI concerns. Concerns of availability and illegal uses for the telegraph, and concerns of bias for AI, have led to mixed results. Other divergent public concerns, such as social resentment of change, commodification, control, and job loss, have not led to significant results, though some, such as concerns over illegal or immoral uses and AGI, have led to mixed results.
Public ethical concern for the telegraph does not seem to have included an environmental component. The telegraph involved "massive deforestation and habitat destruction, but this ecological impact was largely invisible to people who used the technology," 126 much as it is for those who use AI. 127 Public concern for AI includes this component as climate concern, but relatively weakly and with emphasis on the technology as a fix rather than on its environmental footprint. Moreover, there are no significant results accruing from this concern. It seems that out of sight promotes out of mind in both cases.
On the whole, however, the similarities are far more striking than the differences. The most significant results from the overlap occur in the economic expectation. Insofar as economic results are viewed as benefits by the public, ethical expectations in this regard arguably were - and are - being fulfilled. Further overlap occurs in telegraphic public expectations of socially responsible use of the telegraph, which mirror similar expectations of the AI public. For the telegraph these benefits were envisioned as civic infrastructure benefits, safety, and efficiency. For AI they are broadly perceived as generic 'economic' benefits as well as beneficial responses to pressing global problems. There is also overlap - or continuation - in the sense of the broader benefit of technological advancement and potential: a 'technology is the path to the future' view. Results from this social responsibility expectation are decidedly mixed, however, for both the telegraphic and the AI public.
The spiritual, utopian, and futuristic expectations regarding the telegraph parallel the current public's perception of AI, in 'singularities' and notions of benevolent AI, to an astonishing degree. In both cases the emphasis has continually shifted to the future. As for results, however, the utopia is never judged to have been achieved; it is always just over the horizon. Expectations of peace from the technology were more pronounced in the telegraphic public, but are still present in the AI public. In neither case have results toward peace been forthcoming.
With regard to negative concerns, the only common concern to have achieved results, albeit mixed, is that of regulation. And here again, remarkably, current AI results are echoing past telegraph results, with Europe leading and North America and the developed anglosphere, except for the US, only grudgingly following. Concerns regarding monopoly are common and important to both publics, but relatively fruitless in terms of results.
The concern of losing control of AI is also partially mirrored by the telegraphic public's concern over control of the telegraph, except that for the former the worry is a hypothetical takeover by an advanced general AI, an 'evil machine,' whereas for the latter the worry was control by corporations and government. Here, arguably, the telegraphic public was wiser than the contemporary AI public in understanding the real danger of control through a technology, i.e. that the technology facilitates human control of other humans. Indeed, the soft power exercised over the global south in the datafication economy feeding AI recalls the historic colonial control of the global south through the telegraph.
The labour concern shows definite similarities. Both technologies developed a component of underpaid and degraded labour: for the telegraph predominantly child labour, for AI the micro workers sustaining its use by social media and other tech giants. Those giants engage, more often than not, in 'ethics washing' in the very same way that telegraphic giants such as Western Union did; in neither case did concerns produce significant results.
Public concern over the purpose and speed of development of both technologies has not led to any evident pauses for ethical discussion of the technology by society. Even regulatory discussion has not been able to keep pace with AI development, with engagement essentially 'tacked on' after the fact.
Finally, public concerns about military uses of AI are an abject failure in terms of results, despite some small local successes. Insofar as the telegraphic public had such concerns, they were not widely voiced, although there is some overlap with the worries related to colonial control and expansion we saw above. Like contemporary militaries, telegraph-era militaries quickly embraced the new technology, and even the limited public concerns regarding colonial control were countered by opposing voices calling for expanded telegraph use in colonial expansion. Thus, not only was and is military use of these technologies not high on the public agenda, but insofar as it is, the concern appears to achieve nothing.

Insights from the Comparison
What can we learn from the above comparison? It would be easy to suggest simply that looking at areas of overlapping public ethical expectation and concern for the telegraph can show us where we are going wrong this time around with AI, so as to deliberately correct for it. On that reading we might say, for example, that the similarity between the monopoly concerns of the telegraph public and the monopoly concerns of the present AI public, together with the fact that the telegraphic public largely failed to address the issue then, should goad us into redoubling our efforts this time around. That might work, but it misses something about public engagement of ethical concerns, namely: public engagement is bound up with individual engagement. Just as the individual can only act practically along ethical principles or toward ethical ideals shaped by public sentiment - the individual is most ethical in company - so the public can only operationalize its ethical ideals if it can get the individuals within it to act practically toward them. And acting practically means, in the first place, acting more than not acting.
On that view, the negative ethical concerns of the telegraphic and AI publics face the same headwinds as does any strong individual orientation toward worry and concern, whether ethical or otherwise. Such concerns are endless: "be careful of this," "don't do that," "watch out for a, b, and c…." In short, a predominantly prohibitive ethics, individual or public, is practically a fool's errand.
This may explain why the results of the negative ethical concerns discussed are so dismal. They can be qualified as negative concerns because the sentiment behind them is one of prohibiting, policing, or halting the technologies in some way or other. But we are creatures of act, and it seems that when we come to creating technologies - an area for the play of action if ever there was one - we can't help ourselves, as Van den Eede puts it.128 Successes arising from negative ethical concerns in either technology are limited. They are those which yoke the prohibitive impulse to some definite act in which the individual or public can participate to some degree: regulation (the active creation of laws and guidelines), nationalization (the opening up of participation in the technology to all individuals so as to activate the public ethical ideal), and unionization (the active localized participation of individuals in communities of mutual support which offset harms experienced relative to the technology). None of these approaches completely stamps out the harms at which it ostensibly aims. They do, however, provide projects upon which the individual and public can act, so as to be drawn together into community.
But those are not the best successes. For both technologies, the best successes arise from the positive ethical expectations. The first of these is economic: insofar as individuals actively participate in some new creation or development of the technology which builds something, results accrue. Socially responsible uses accompany this as a result, wherein the individual, integrating their action with the positive social ideals at hand, explores different positive uses for the technology: in science, medicine, administration, transportation, etc.
What about peace and utopian expectations: are they not also positive? Yes, but they are also passive. They are states or outcomes rather than grounds of action. When mistakenly couched in abstract and universal terms they appear as unachieved. When understood in relative and more specific terms, i.e. relative to the positive and active public expectations, they appear as advanced just as far as we have actively advanced our economic and socially responsible activities.
Where does that leave us with regard to the current negative ethical concerns of the public about AI? If history is any guide, it suggests that we will not make much headway in results on those negative concerns where individual action cannot integrate with public action. Preventing job loss, preventing military use, preventing democratic disinformation, and so on, are not viable ways forward, because such actions end with their own 'success.' Insofar as such attempts are viewed through a fluid logic, stopping or ending actions passes little or nothing on to the process of future community. Just as we are not aware of crimes that have been prevented, we never really know the job losses or democratic disinformation that have been prevented. Efforts to tackle algorithmic bias are a good example here. Bias is not actually being prevented, as noted above, even by technical efforts. What is happening is that technical and related communities are growing up where reflection about bias will eventually transform our social understanding and active treatment of one another into something better. We will build our way out of bias. We will never prevent our way out of it.

Conclusion
This paper has been an effort in comparative ethics, comparing the telegraph and AI with regard to the public ethical engagement of those technologies. Beginning with a survey of the telegraph in the public imagination, I explored some of the positive expectations and negative public concerns around the technology and the results of those expectations and concerns, repeating this procedure for artificial intelligence. Differences and similarities were then considered to draw insights regarding the course of AI ethics.
C.I. Lewis, the American pragmatist, said of public morality: "having traditions is a tremendous economy. Largely, it accounts for human 'progress' and for 'civilization.' What one generation learns the hard way; later ones may come by without the initial grief and frustrations incident to finding out."129 In AI ethics we can avoid some grief by learning from the ethical social outcomes of the telegraph that a more negative and prohibitory path is not likely to produce results. A more positive and creative approach might.
We need to concentrate our efforts on building an AI technology integrated with community, rather than on technological or other efforts to police the technology. If having a positive sense of developing the technology in specific ways in public applications worked for the telegraph, then it will work for AI. Considering public disapprobation around the telegraph, what has clearly not yielded results are negatively focused concerns such as information control and commodification, control and war use, and concerns about immoral or frivolous uses. They are not likely to achieve results now for AI development. This does not mean giving up. It means concentrating efforts on urging positive developments of AI: in other words, encouraging an active public participation in the technology, and amplifying it so strongly as to 'suck all the air out of the room' which might otherwise be used for negative approaches.17

Figure
Figure I. Negative Ethical Concern Prevalence in HLEG Guidelines for Trustworthy AI, first draft public feedback.64
Figure II. Positive Ethical Expectation Prevalence in HLEG Guidelines for Trustworthy AI, first draft public feedback.
Borenstein et al., in their 2021 summary of the history of AI ethics, note that while Google Scholar citations for AI ethics have jumped sharply only very recently, nonetheless in fiction, in film, and in television "… popular culture was far more engaged in issues related to what we now call AI, … [so that] scholarly interest is merely catching up to popular culture in its focus on ethical issues and AI."62 If we can get some sense of what has fired the public imagination with regard to the ethics of AI over the longer term, this can be a basis for comparison with the telegraph.63
62 J. Borenstein et al., "AI Ethics: A Long History and a Recent Burst of Attention", Computer 54 [1] (2021).

Table 1.
Results of Expectations and Concerns of the Telegraphic Public

Table 2.
Results of Expectations and Concerns of the AI Public