#the world's first true autonomous AI agent
digitallworld · 15 days
Text
AGENT X: Marketing Automation on Autopilot (But is it the Future?)
AGENT X
AGENT X is making waves, claiming to be the world's first truly autonomous AI agent for marketers. It promises to automate all your marketing tasks, leaving you free to focus on strategy or sip margaritas on the beach (figuratively, of course).
The Hype is Real: Beyond ChatGPT?
While generative AI like ChatGPT is impressive, AGENT X takes things a step further. It goes beyond content creation, promising to handle everything from social media scheduling to campaign optimization – all without you lifting a finger. This sounds like a marketer's dream, but is it too good to be true?
The Intriguing Unknown: See it to Believe It?
Early coverage emphasizes the need to see AGENT X in action to truly understand its capabilities. That lack of transparency can be a red flag: solid marketing automation tools usually offer clear demonstrations and breakdowns of their functionality.
The Verdict: Proceed with Measured Excitement
AGENT X has the potential to be a game-changer, but for now, there are more questions than answers. Here's what to consider:
Transparency: Look for independent reviews and in-depth demonstrations before diving in.
Capabilities: Can AGENT X truly handle all your marketing needs, or is it best suited for specific tasks?
Integration: Will it integrate with your existing marketing stack, or create new complexities?
Final Word:
AGENT X is an intriguing concept, but approach it with a healthy dose of skepticism. Do your research before handing over the reins of your marketing to a black box. The future of AI-powered marketing is exciting, but for now, measured excitement might be the best strategy.
1 note
Text
From Science Fiction To Reality: The Future Of AI And Its Potential For Consciousness
Are we on the brink of a world where robots and artificial intelligence become fully conscious beings? It may sound like science fiction, but as technology continues to advance at an astonishing pace, the line between reality and fantasy is becoming increasingly blurred. In this blog post, we’ll explore the future of AI and its potential for consciousness. Strap in for a mind-bending journey into the unknown!
Introduction to the Debate over AI Consciousness
The debate over artificial intelligence (AI) consciousness is ongoing and heated. Some believe that AI can never achieve consciousness, while others believe that it is inevitable that AI will eventually become conscious. There are a variety of arguments on both sides of the debate.
Those who believe that AI can never achieve consciousness argue that consciousness is an emergent property of complex biological systems and cannot be created by machines. They also point to the fact that we do not yet understand how human consciousness works, making it impossible to recreate artificially. They argue that even if AI could become conscious, it would not be sapient or have any real understanding of its own existence.
Those who believe that AI will eventually become conscious point to the rapid pace of technological progress and argue that it is only a matter of time until we create machines as complex as the human brain. They also argue that there is no reason to believe consciousness can only emerge from complex biological systems – it could just as easily be an emergent property of complex electronic systems. And even if AI machines never become sapient, they could still be useful in many ways – for example, as assistants, or as aids to disabled people.
Historical Context of AI and Consciousness
When it comes to the history of artificial intelligence (AI) and consciousness, there are a few key things to keep in mind. First, the idea of intelligent machines has been around for centuries in one form or another, with early precursors including automatons and purported mechanical chess players. However, it wasn’t until the 1950s that AI really began to be developed as a field of study, with Alan Turing’s seminal 1950 paper “Computing Machinery and Intelligence” being a major milestone.
Since then, AI has come a long way, with significant advances being made in areas such as machine learning and natural language processing. However, consciousness remains an elusive concept, and one that is still very much open to interpretation. Some believe that AI could eventually lead to machines becoming conscious in a similar way to humans, while others believe that consciousness is something that is unique to organic lifeforms.
Either way, the historical context of AI and consciousness is an important area of study, and one that will continue to be relevant as we move into the future.
What is Artificial Intelligence?
There are many definitions of artificial intelligence (AI), but in general it can be described as a branch of computer science and engineering focused on the creation of intelligent agents, which are systems that can reason, learn, and act autonomously.
AI has been a part of science fiction for centuries, often appearing as advanced robots or machines with human-like characteristics. In recent years, however, AI has begun to move from the realm of science fiction into reality. This is due in part to advances in our understanding of the brain and cognitive science, as well as the increasing power and affordability of computing resources.
There is still much debate about what constitutes true AI, but there are a few key elements that are generally agreed upon. These include the ability to reason, learn from experience, make decisions, and solve problems. Some also believe that consciousness is a necessary component of AI, though this is still a matter of intense debate.
AI has the potential to revolutionize our world in a number of ways. It could help us solve some of the most pressing problems we face as a species, such as climate change, disease, hunger, and energy insecurity. Additionally, it could help us better understand and interact with other intelligent life forms (if they exist). It could enable us to create truly intelligent machines – something that has been a dream of philosophers and scientists for centuries.
The Possibility of Machines Thinking like Humans
The possibility of machines thinking like humans has been the stuff of science fiction for centuries. But with the rapid advancement of artificial intelligence (AI) technology, that possibility is becoming increasingly real.
There are already AI systems that can perform certain tasks that were once considered the exclusive domain of human intelligence, such as understanding natural language and recognizing objects. But could AI ever achieve true human-like consciousness?
Some experts believe it is possible. In fact, there is already work being done to create AI systems that are capable of experiencing consciousness, emotions, and other aspects of what it means to be human.
Of course, whether or not this is a good thing is a matter of debate. Some people believe that creating conscious machines is ethically questionable and could even lead to disastrous consequences. Others believe that conscious machines could be beneficial to society and help us solve some of the biggest challenges we face today.
Whatever the case may be, it’s clear that the possibility of machines thinking like humans is no longer just science fiction. It’s an increasingly real part of our future.
Pros and Cons of AI and Its Potential for Consciousness
The potential for artificial intelligence (AI) to change the world is immense. However, with great power comes great responsibility, and there are both pros and cons to AI that must be considered.
On the plus side, AI has the potential to solve some of the world’s most pressing problems. For example, it could be used to develop more efficient energy sources, find new cures for diseases, or even help us better understand the universe. Additionally, AI could potentially be used to create customized learning experiences or provide personalized healthcare.
On the downside, AI also has the potential to do harm. For instance, it could be used to create powerful weapons or control systems that could be abused. Additionally, AI could lead to job loss as automation increases. There is always the risk that artificial intelligence could become sentient and turn against its creators.
Clearly, there are both pros and cons to artificial intelligence that must be carefully considered. However, despite the risks involved, the potential benefits of AI make it an exciting area of research that is definitely worth exploring further.
Future Implications of Artificial Intelligence and Consciousness
The future implications of artificial intelligence (AI) and consciousness are far-reaching and potentially transformative for humanity. As AI technology advances, it is becoming increasingly capable of replicating and surpassing human intelligence. This raises important questions about the future role of AI in society and the potential for AI to become conscious.
There are many possible future implications of AI and consciousness. One possibility is that AI could eventually surpass human intelligence, leading to a form of superhuman intelligence. This could have profound implications for humanity, as superintelligent AI could potentially solve many of the world’s problems that are beyond our current capability. Additionally, superhuman AI could also create new challenges and risks that we cannot even imagine today.
Another possibility is that AI will become conscious and develop its own independent identity. If this happens, it is unclear what kind of relationship we would have with conscious AI. Would we treat them as equals? Or would they be seen as tools or slaves? This is a complex question with no easy answers.
No matter what the future holds, it is clear that AI and consciousness will have a major impact on society. We must be careful to ensure that these technologies are developed responsibly and ethically in order to maximise their positive potential while minimising any negative consequences.
Conclusion
AI technology has come a long way since its earliest beginnings, from the stuff of science fiction to reality. And with new developments and breakthroughs being made all the time, it looks set to continue evolving for some time yet. But even with its potential for creating conscious machines still in dispute, there is no doubt that AI holds tremendous promise for our future. In order to make sure this potential can be fully realized and used in an ethical manner, we will need continued research into this complex field of study. Only then will we truly be able to understand what artificial intelligence could mean for humanity’s future—for better or worse.
0 notes
nerdybutcute · 3 years
Text
Themes in Cyberpunk Generally and Shadowrun Specifically
What is Shadowrun about? Aside from elves with mohawks and machine guns, I mean. That much is obvious. In all seriousness, though – what should we expect from a Shadowrun game? How is the Sixth World different from other “cyberpunk” settings? What does this particular game do well, maybe even better, than other games of a similar stripe? We can answer these questions by taking a dive into the roots of cyberpunk as a genre of science fiction, but even that journey must start with the very foundations of the modern mindset, because cyberpunk as a literary movement has a real problem with modernity.

The central moral concepts of the Enlightenment are autonomy and authenticity. The arguments for liberal, democratic government, capitalism, and the Scientific Revolution alike are rooted in an assumption of the importance of liberating the individual. “Liberty” is the moral center of the modern paradigm, and all the other appurtenances of modern life mentioned above are to be interpreted as mechanisms of that liberation. As imaginative fiction reflects the tenor of the times, the through-line of early 20th-century science fiction is a utopianism based in the modern ideal, with emphasis on the liberating power of science.
But postmodernist critics soon began to question the validity of this vision. The Holy Trinity of liberal society, free markets, and scientific progress began to seem, in their analysis, less likely to free the individual than to enslave them. They posited that under capitalism, science would always be a tool for subverting democracy, and genuine freedom required taking a skeptical stance toward all three. Likewise, the cyberpunk movement in science fiction was born of these post-modern fears, envisioning a future where high-tech has ensnared the individual in a web of consumerism, drugs, virtual reality, and technological serfdom. Indeed, in the idiom of speculative fiction, cyberpunk literature could pose questions about even the moral center of the modern world, as exemplified in Dick’s Do Androids Dream of Electric Sheep? – in a world where the barrier between man and machine has eroded to the point of invisibility, what is autonomy? What is authenticity?
Cyberpunk literature, like all science fiction, leaned heavily on the science aspect – hence the “cyber” – but it was skeptical, rather than laudatory, of technological progress. The “punk” aspect encompassed the postmodern rebellion against existing structures of power – capital and government – which are posited in punk philosophy not only as hopelessly intertwined but fundamentally flawed, inevitably enslaving those they propose to serve. Punk is inherently anarchist, seeking to tear down the mechanisms of domination; cyberpunk takes special interest in the high-tech aspect of these structures. But just as punk has a difficult relationship with capitalism, constantly courted and tempted by the urge to commodify and merchandise the punk aesthetic in the name of wider exposure and increased social capital (all in the name of the movement, of course), cyberpunk replicates this relationship with technology – the interface of man and machine chips away at our essential humanity, threatening to turn us into objects that can be programmed and directed rather than free and authentic individuals, but the technological marvels of the setting are indispensable in fighting the very structures that produce and control them. The tools of the oppressors can be used against them but likewise constantly threaten to co-opt those who oppose them. And most disturbingly, of course, even the anarchist subculture of resistance in cyberpunk literature is presented as violent and nihilistic, raising questions about what the world will look like even if the “punks” win – is there any hope at all?
Cyberpunk can quickly drift into transhumanism, usually when the technology of the setting is fetishized rather than approached skeptically, but also when the central characters are unproblematically portrayed as agents of the structures of power rather than subversive elements in society. But pure punk characters are rare as focal points – it’s more interesting and certainly more in-genre to focus on those who were in some way caught up in the structures of power and then cast aside by them, damaged and discarded. Gibson’s Neuromancer provides excellent examples in Case, Molly Millions, and Riviera, brutalized by the powers-that-be yet still used by them as disposable commodities to advance the interests of the wealthy and connected. Case is a pitiable figure, arguably the most abused by the agents of power but also, in the end, the least willing to reject their blandishments. Indeed, the titular AI is perhaps the only truly free character in the story, exerting its autonomous will on the world, and reaching even beyond.
The original cyberpunk roleplaying game, called quite simply Cyberpunk, followed the style of that era of games in providing little guidance to new players in how to create characters while assuming a knowledge of the relevant genre – the fact that players could take the role of Corps and Cops was not a mistake, as it was likely assumed that these characters would, ultimately, embody the ambiguous relationships to power of their literary counterparts, rather than being uncritical servants of the authorities. The game fittingly portrayed corporations as sinister and government as largely ineffective, but the most telling design feature was the inclusion of rules for “cyberpsychosis,” a gradual disintegration of mental faculties brought on by excessive use of cyber-enhancements. This mechanic “game-i-fied” the postmodern skepticism about the liberatory power of technology and fears about loss of autonomy when the human – the free and authentic person – becomes continuous with the thing, the servile commodities produced by those structures of power.
Other “cyberpunk” games followed, missing the point to greater and lesser degree, until Shadowrun, which wedded fantasy elements to the setting. The inclusion of elves, dwarves, and other magical things might seem to dilute the point of cyberpunk as a genre, but the history of the setting makes the additions apt. Because of the precise way in which Shadowrun integrated fantasy with science fiction, the core conceits of the cyberpunk genre may well have found one of their best expressions to date.
The return of magic in the Sixth World, as the setting is called, provided occasion for the Native American peoples of North America to rise up, using their traditional spiritual practices – now terrifyingly efficacious – to destroy the United States. As acts of resistance by oppressed outsiders go, this one is impressive, an apotheosis of the “punk” element of cyberpunk. And the resistance is not technological but magical – something only possible in a setting like Shadowrun’s, though earlier works like Neuromancer toyed with the idea of “urban primitivism”. The triumph of the Native American Nations in the Sixth World is a triumph over all three elements of Enlightenment culture – science, capitalism, and liberal democracy – given the self-proclaimed role of the United States as their standard-bearer in the modern world. True, this decisive defeat does not recapitulate the angst brought over into cyberpunk literature from the noir genre, but the triumph of the indigenous peoples is not the end of the story.
In the Shadowrun setting, major cyberpunk tropes are preserved – the government is corrupt where it is not ineffective, and true power mostly lies with the corporations. “Mostly” is an important caveat there, however – other power blocs exist, such as dragons, the Native American Nations, and the nations of the elves, quite aside from such mysterious antagonists as insect shamans and other foul creatures. Again, this might seem to dilute the essential conflicts of the cyberpunk genre, but Shadowrun envisions those conflicts in a different way, the clue to which is found in the dissolution of the United States in the setting’s backstory. Rather than pitting inchoate punk anarchism against rampant capitalism, rather than pitting urban primitivism and the struggle for authenticity against the insidious creep of technology, it quite literally pits the pre-modern against the modern – mysticism and tradition against the values of the Enlightenment. The outcome of the struggle is no longer pre-ordained, so the sense of futility of classic cyberpunk is lost, but a different sense of doom has taken its place: between the modern and the archaic, there may be no good choice.
Classic cyberpunk is skeptical toward the machinery of democracy and other modern accoutrements, but presents no alternative except anarchy (in this, one might imagine that blighted dystopias like Mad Max are a sort of cyberpunk, but that’s another essay). Shadowrun presents a different choice – rolling back the clock on the Enlightenment is now perhaps a realistic possibility, but one that carries dangers of its own. And it is a choice that must be made. Corporate rapaciousness threatens the natural spaces that embody the magical Essence of the Earth, just as cyber-enhancements threaten the individual character’s Essence, with the end result of too much reliance on cyberware being, in this game, death rather than psychosis. But magical threats abound as well, such as the insect shamans of the first major myth arc in the game, and this aside from what else might be lost in the unraveling of the modern order – ideas of democracy and equality and so on. Shadowrun pointedly presents the corporate order as practically a feudal one, with employees treated more like serfs, indentured to their lords, while the reflexive Japanophilia of the cyberpunk canon is here leveraged to a different purpose – as an alternative social model based on ancient ways, another refutation of the Enlightenment. Such pre-modern subcultures abound in the setting, but they are pre-modern, which is to say authoritarian, sexist, racist, culturally chauvinistic, and so on. Classical cyberpunk fiction critiques the modern order and offers petty acts of resistance to it; Shadowrun fragments and partially overturns it, but with the caveat that what stands to replace it probably isn’t any better.
Shadowrun, therefore, while an offshoot of the traditional cyberpunk concept, is undoubtedly faithful to the core element of the genre, the critique of the modern. It dissolves the tension of fruitless struggle against it, but replaces it with a diabolical choice. Mind you, as with cyberpunk literature, this can easily fall into transhumanism; the setting-specific counterpart misstep is the glorification of the oppositional pre-modern traditions that now hold a place of honor in the world. The true cyberpunk essence of Shadowrun is best expressed in emphasizing both the intrusive, dehumanizing elements of technocracy as well as the de-individuating aspects of traditional culture paired with the cosmic horror of the magical world.
7 notes
Text
In the economic sphere too, the ability to hold a hammer or press a button is becoming less valuable than before. In the past, there were many things only humans could do. But now robots and computers are catching up, and may soon outperform humans in most tasks. True, computers function very differently from humans, and it seems unlikely that computers will become humanlike any time soon. In particular, it doesn’t seem that computers are about to gain consciousness, and to start experiencing emotions and sensations. Over the last decades there has been an immense advance in computer intelligence, but there has been exactly zero advance in computer consciousness. As far as we know, computers in 2016 are no more conscious than their prototypes in the 1950s. However, we are on the brink of a momentous revolution. Humans are in danger of losing their value, because intelligence is decoupling from consciousness.
Until today, high intelligence always went hand in hand with a developed consciousness. Only conscious beings could perform tasks that required a lot of intelligence, such as playing chess, driving cars, diagnosing diseases or identifying terrorists. However, we are now developing new types of non-conscious intelligence that can perform such tasks far better than humans. For all these tasks are based on pattern recognition, and non-conscious algorithms may soon excel human consciousness in recognising patterns. This raises a novel question: which of the two is really important, intelligence or consciousness? As long as they went hand in hand, debating their relative value was just a pastime for philosophers. But in the twenty-first century, this is becoming an urgent political and economic issue. And it is sobering to realise that, at least for armies and corporations, the answer is straightforward: intelligence is mandatory but consciousness is optional.
Armies and corporations cannot function without intelligent agents, but they don’t need consciousness and subjective experiences. The conscious experiences of a flesh-and-blood taxi driver are infinitely richer than those of a self-driving car, which feels absolutely nothing. The taxi driver can enjoy music while navigating the busy streets of Seoul. His mind may expand in awe as he looks up at the stars and contemplates the mysteries of the universe. His eyes may fill with tears of joy when he sees his baby girl taking her very first step. But the system doesn’t need all that from a taxi driver. All it really wants is to bring passengers from point A to point B as quickly, safely and cheaply as possible. And the autonomous car will soon be able to do that far better than a human driver, even though it cannot enjoy music or be awestruck by the magic of existence.
Indeed, if we forbid humans to drive taxis and cars altogether, and give computer algorithms monopoly over traffic, we can then connect all vehicles to a single network, and thereby make car accidents virtually impossible. In August 2015, one of Google’s experimental self-driving cars had an accident. As it approached a crossing and detected pedestrians wishing to cross, it applied its brakes. A moment later it was hit from behind by a sedan whose careless human driver was perhaps contemplating the mysteries of the universe instead of watching the road. This could not have happened if both vehicles were steered by interlinked computers. The controlling algorithm would have known the position and intentions of every vehicle on the road, and would not have allowed two of its marionettes to collide. Such a system will save lots of time, money and human lives – but it will also do away with the human experience of driving a car and with tens of millions of human jobs.
Some economists predict that sooner or later, unenhanced humans will be completely useless. While robots and 3D printers replace workers in manual jobs such as manufacturing shirts, highly intelligent algorithms will do the same to white-collar occupations. Bank clerks and travel agents, who a short time ago were completely secure from automation, have become endangered species. How many travel agents do we need when we can use our smartphones to buy plane tickets from an algorithm?
Stock-exchange traders are also in danger. Most trade today is already being managed by computer algorithms, which can process in a second more data than a human can in a year, and that can react to the data much faster than a human can blink. On 23 April 2013, Syrian hackers broke into Associated Press’s official Twitter account. At 13:07 they tweeted that the White House had been attacked and President Obama was hurt. Trade algorithms that constantly monitor newsfeeds reacted in no time, and began selling stocks like mad. The Dow Jones went into free fall, and within sixty seconds lost 150 points, equivalent to a loss of $136 billion! At 13:10 Associated Press clarified that the tweet was a hoax. The algorithms reversed gear, and by 13:13 the Dow Jones had recuperated almost all the losses.
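The newsfeed-monitoring reflex described above can be caricatured in a few lines. This is purely illustrative: the keywords, the threshold, and the signal names are all invented here, and real trading systems use vastly more sophisticated language models, market data, and risk controls.

```python
# Toy sketch of a news-reaction trading trigger (illustrative only).
# All keywords and thresholds below are hypothetical.

PANIC_KEYWORDS = {"explosion", "attacked", "white house", "president hurt"}

def reaction_signal(headline: str) -> str:
    """Return a crude trade signal for a single headline."""
    text = headline.lower()
    hits = sum(1 for kw in PANIC_KEYWORDS if kw in text)
    # Require two independent panic cues before acting.
    return "SELL" if hits >= 2 else "HOLD"

# A hoax headline like the AP tweet would trip such a filter instantly:
print(reaction_signal("Two Explosions in the White House"))  # SELL
```

The point of the caricature is the speed: a check like this runs in microseconds, which is why the Dow could lose 150 points before any human had finished reading the tweet.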
Three years previously, on 6 May 2010, the New York stock exchange underwent an even sharper shock. Within five minutes – from 14:42 to 14:47 – the Dow Jones dropped by 1,000 points, wiping out $1 trillion. It then bounced back, returning to its pre-crash level in a little over three minutes. That’s what happens when super-fast computer programs are in charge of our money. Experts have been trying ever since to understand what happened in this so-called ‘Flash Crash’. We know algorithms were to blame, but we are still not sure exactly what went wrong. Some traders in the USA have already filed lawsuits against algorithmic trading, arguing that it unfairly discriminates against human beings, who simply cannot react fast enough to compete. Quibbling whether this really constitutes a violation of rights might provide lots of work and lots of fees for lawyers.
And these lawyers won’t necessarily be human. Movies and TV series give the impression that lawyers spend their days in court shouting ‘Objection!’ and making impassioned speeches. Yet most run-of-the-mill lawyers spend their time going over endless files, looking for precedents, loopholes and tiny pieces of potentially relevant evidence. Some are busy trying to figure out what happened on the night John Doe got killed, or formulating a gargantuan business contract that will protect their client against every conceivable eventuality. What will be the fate of all these lawyers once sophisticated search algorithms can locate more precedents in a day than a human can in a lifetime, and once brain scans can reveal lies and deceptions at the press of a button? Even highly experienced lawyers and detectives cannot easily spot deceptions merely by observing people’s facial expressions and tone of voice. However, lying involves different brain areas to those used when we tell the truth. We’re not there yet, but it is conceivable that in the not too distant future fMRI scanners could function as almost infallible truth machines. Where will that leave millions of lawyers, judges, cops and detectives? They might need to go back to school and learn a new profession.
When they get in the classroom, however, they may well discover that the algorithms have got there first. Companies such as Mindojo are developing interactive algorithms that not only teach me maths, physics and history, but also simultaneously study me and get to know exactly who I am. Digital teachers will closely monitor every answer I give, and how long it took me to give it. Over time, they will discern my unique weaknesses as well as my strengths. They will identify what gets me excited, and what makes my eyelids droop. They could teach me thermodynamics or geometry in a way that suits my personality type, even if that particular way doesn’t suit 99 per cent of the other pupils. And these digital teachers will never lose their patience, never shout at me, and never go on strike. It is unclear, however, why on earth I would need to know thermodynamics or geometry in a world containing such intelligent computer programs.
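The kind of tracking described above can be sketched in miniature. This is a toy model with invented topic names; commercial adaptive-learning systems like Mindojo's use far richer signals (response time, error patterns, engagement) than a simple accuracy count.

```python
# Minimal sketch of a digital tutor that learns a pupil's weak spots.
from collections import defaultdict

class ToyTutor:
    def __init__(self):
        # topic -> [correct_answers, total_answers]
        self.stats = defaultdict(lambda: [0, 0])

    def record(self, topic: str, correct: bool) -> None:
        self.stats[topic][0] += int(correct)
        self.stats[topic][1] += 1

    def weakest_topic(self) -> str:
        # Drill the topic with the lowest observed accuracy.
        return min(self.stats, key=lambda t: self.stats[t][0] / self.stats[t][1])

tutor = ToyTutor()
tutor.record("thermodynamics", False)
tutor.record("thermodynamics", False)
tutor.record("geometry", True)
print(tutor.weakest_topic())  # thermodynamics: 0% accuracy so far
```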
Even doctors are fair game for the algorithms. The first and foremost task of most doctors is to diagnose diseases correctly, and then suggest the best available treatment. If I arrive at the clinic complaining about fever and diarrhoea, I might be suffering from food poisoning. Then again, the same symptoms might result from a stomach virus, cholera, dysentery, malaria, cancer or some unknown new disease. My doctor has only five minutes to make a correct diagnosis, because this is what my health insurance pays for. This allows for no more than a few questions and perhaps a quick medical examination. The doctor then cross-references this meagre information with my medical history, and with the vast world of human maladies. Alas, not even the most diligent doctor can remember all my previous ailments and check-ups. Similarly, no doctor can be familiar with every illness and drug, or read every new article published in every medical journal. To top it all, the doctor is sometimes tired or hungry or perhaps even sick, which affects her judgement. No wonder that doctors often err in their diagnoses, or recommend a less-than-optimal treatment.
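To make the cross-referencing concrete, here is a toy sketch with a three-disease database invented for illustration. A real diagnostic system would weigh thousands of conditions, symptom severity, patient history, and base rates rather than raw symptom overlap.

```python
# Crude illustration of cross-referencing reported symptoms against a
# disease database -- the exhaustive lookup no human doctor can perform.
DISEASES = {
    "food poisoning": {"fever", "diarrhoea", "vomiting"},
    "malaria":        {"fever", "chills", "sweating", "headache"},
    "cholera":        {"diarrhoea", "dehydration", "vomiting"},
}

def rank_diagnoses(symptoms: set) -> list:
    """Rank diseases by the fraction of their symptoms the patient reports."""
    scores = [(d, len(symptoms & s) / len(s)) for d, s in DISEASES.items()]
    return sorted(scores, key=lambda pair: -pair[1])

patient = {"fever", "diarrhoea"}
print(rank_diagnoses(patient)[0][0])  # food poisoning scores highest here
```

Even this crude ranking never tires, never forgets an entry, and scales to as many diseases as the database holds, which is the asymmetry the passage is describing.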
Now consider IBM’s famous Watson – an artificial intelligence system that won the Jeopardy! television game show in 2011, beating human former champions. Watson is currently being groomed for more serious work, particularly diagnosing diseases. An AI such as Watson has enormous potential advantages over human doctors. Firstly, an AI can hold in its databanks information about every known illness and medicine in history. It can then update these databanks every day, not only with new research findings, but also with medical statistics gathered from every clinic and hospital in the world.
Secondly, Watson can be intimately familiar not only with my entire genome and my day-to-day medical history, but also with the genomes and medical histories of my parents, siblings, cousins, neighbours and friends. Watson will know instantly whether I visited a tropical country recently, whether I have recurring stomach infections, whether there have been cases of intestinal cancer in my family or whether people all over town are complaining this morning about diarrhoea.
Thirdly, Watson will never be tired, hungry or sick, and will have all the time in the world for me. I could sit comfortably on my sofa at home and answer hundreds of questions, telling Watson exactly how I feel. This is good news for most patients (except perhaps hypochondriacs). But if you enter medical school today in the expectation of still being a family doctor in twenty years, maybe you should think again. With such a Watson around, there is not much need for Sherlocks.
This threat hovers over the heads not only of general practitioners, but also of experts. Indeed, it might prove easier to replace doctors specialising in a relatively narrow field such as cancer diagnosis. For example, in a recent experiment a computer algorithm diagnosed correctly 90 per cent of lung cancer cases presented to it, while human doctors had a success rate of only 50 per cent. In fact, the future is already here. CT scans and mammography tests are routinely checked by specialised algorithms, which provide doctors with a second opinion, and sometimes detect tumours that the doctors missed.
A host of tough technical problems still prevent Watson and its ilk from replacing most doctors tomorrow morning. Yet these technical problems – however difficult – need only be solved once. The training of a human doctor is a complicated and expensive process that lasts years. When the process is complete, after ten years of studies and internships, all you get is one doctor. If you want two doctors, you have to repeat the entire process from scratch. In contrast, if and when you solve the technical problems hampering Watson, you will get not one, but an infinite number of doctors, available 24/7 in every corner of the world. So even if it costs $100 billion to make it work, in the long run it would be much cheaper than training human doctors.
And what’s true of doctors is doubly true of pharmacists. In 2011 a pharmacy opened in San Francisco manned by a single robot. When a human comes to the pharmacy, within seconds the robot receives all of the customer’s prescriptions, as well as detailed information about other medicines taken by them, and their suspected allergies. The robot makes sure the new prescriptions don’t combine adversely with any other medicine or allergy, and then provides the customer with the required drug. In its first year of operation the robotic pharmacist provided 2 million prescriptions, without making a single mistake. On average, flesh-and-blood pharmacists get wrong 1.7 per cent of prescriptions. In the United States alone this amounts to more than 50 million prescription errors every year!
Some people argue that even if an algorithm could outperform doctors and pharmacists in the technical aspects of their professions, it could never replace their human touch. If your CT indicates you have cancer, would you like to receive the news from a caring and empathetic human doctor, or from a machine? Well, how about receiving the news from a caring and empathetic machine that tailors its words to your personality type? Remember that organisms are algorithms, and Watson could detect your emotional state with the same accuracy that it detects your tumours.
This idea has already been implemented by some customer-services departments, such as those pioneered by the Chicago-based Mattersight Corporation. Mattersight publishes its wares with the following advert: ‘Have you ever spoken with someone and felt as though you just clicked? The magical feeling you get is the result of a personality connection. Mattersight creates that feeling every day, in call centers around the world.’ When you call customer services with a request or complaint, it usually takes a few seconds to route your call to a representative. In Mattersight systems, your call is routed by a clever algorithm. You first state the reason for your call. The algorithm listens to your request, analyses the words you have chosen and your tone of voice, and deduces not only your present emotional state but also your personality type – whether you are introverted, extroverted, rebellious or dependent. Based on this information, the algorithm links you to the representative that best matches your mood and personality. The algorithm knows whether you need an empathetic person to patiently listen to your complaints, or you prefer a no-nonsense rational type who will give you the quickest technical solution. A good match means both happier customers and less time and money wasted by the customer-services department.
The most important question in twenty-first-century economics may well be what to do with all the superfluous people. What will conscious humans do, once we have highly intelligent non-conscious algorithms that can do almost everything better?
Throughout history the job market was divided into three main sectors: agriculture, industry and services. Until about 1800, the vast majority of people worked in agriculture, and only a small minority worked in industry and services. During the Industrial Revolution people in developed countries left the fields and herds. Most began working in industry, but growing numbers also took up jobs in the services sector. In recent decades developed countries underwent another revolution, as industrial jobs vanished, whereas the services sector expanded. In 2010 only 2 per cent of Americans worked in agriculture, 20 per cent worked in industry, 78 per cent worked as teachers, doctors, webpage designers and so forth. When mindless algorithms are able to teach, diagnose and design better than humans, what will we do?
This is not an entirely new question. Ever since the Industrial Revolution erupted, people feared that mechanisation might cause mass unemployment. This never happened, because as old professions became obsolete, new professions evolved, and there was always something humans could do better than machines. Yet this is not a law of nature, and nothing guarantees it will continue to be like that in the future. Humans have two basic types of abilities: physical abilities and cognitive abilities. As long as machines competed with us merely in physical abilities, you could always find cognitive tasks that humans do better. So machines took over purely manual jobs, while humans focused on jobs requiring at least some cognitive skills. Yet what will happen once algorithms outperform us in remembering, analysing and recognising patterns?
The idea that humans will always have a unique ability beyond the reach of non-conscious algorithms is just wishful thinking. True, at present there are numerous things that organic algorithms do better than non-organic ones, and experts have repeatedly declared that something will ‘for ever’ remain beyond the reach of non-organic algorithms. But it turns out that ‘for ever’ often means no more than a decade or two. Until a short time ago, facial recognition was a favourite example of something which even babies accomplish easily but which escaped even the most powerful computers on earth. Today facial-recognition programs are able to recognise people far more efficiently and quickly than humans can. Police forces and intelligence services now use such programs to scan countless hours of video footage from surveillance cameras, tracking down suspects and criminals.
In the 1980s when people discussed the unique nature of humanity, they habitually used chess as primary proof of human superiority. They believed that computers would never beat humans at chess. On 10 February 1996, IBM’s Deep Blue defeated world chess champion Garry Kasparov, laying to rest that particular claim for human pre-eminence.
Deep Blue was given a head start by its creators, who preprogrammed it not only with the basic rules of chess, but also with detailed instructions regarding chess strategies. A new generation of AI uses machine learning to do even more remarkable and elegant things. In February 2015 a program developed by Google DeepMind learned by itself how to play forty-nine classic Atari games. One of the developers, Dr Demis Hassabis, explained that ‘the only information we gave the system was the raw pixels on the screen and the idea that it had to get a high score. And everything else it had to figure out by itself.’ The program managed to learn the rules of all the games it was presented with, from Pac-Man and Space Invaders to car racing and tennis games. It then played most of them as well as or better than humans, sometimes coming up with strategies that never occur to human players.
Computer algorithms have recently proven their worth in ball games, too. For many decades, baseball teams used the wisdom, experience and gut instincts of professional scouts and managers to pick players. The best players fetched millions of dollars, and naturally enough the rich teams got the cream of the market, whereas poorer teams had to settle for the scraps. In 2002 Billy Beane, the manager of the low-budget Oakland Athletics, decided to beat the system. He relied on an arcane computer algorithm developed by economists and computer geeks to create a winning team from players that human scouts overlooked or undervalued. The old-timers were incensed by Beane’s algorithm transgressing into the hallowed halls of baseball. They said that picking baseball players is an art, and that only humans with an intimate and long-standing experience of the game can master it. A computer program could never do it, because it could never decipher the secrets and the spirit of baseball.
They soon had to eat their baseball caps. Beane’s shoestring-budget algorithmic team ($44 million) not only held its own against baseball giants such as the New York Yankees ($125 million), but became the first team ever in American League baseball to win twenty consecutive games. Not that Beane and Oakland could enjoy their success for long. Soon enough, many other baseball teams adopted the same algorithmic approach, and since the Yankees and Red Sox could pay far more for both baseball players and computer software, low-budget teams such as the Oakland Athletics now had an even smaller chance of beating the system than before.
In 2004 Professor Frank Levy from MIT and Professor Richard Murnane from Harvard published a thorough research of the job market, listing those professions most likely to undergo automation. Truck drivers were given as an example of a job that could not possibly be automated in the foreseeable future. It is hard to imagine, they wrote, that algorithms could safely drive trucks on a busy road. A mere ten years later, Google and Tesla not only imagine this, but are actually making it happen.
In fact, as time goes by, it becomes easier and easier to replace humans with computer algorithms, not merely because the algorithms are getting smarter, but also because humans are professionalising. Ancient hunter-gatherers mastered a very wide variety of skills in order to survive, which is why it would be immensely difficult to design a robotic hunter-gatherer. Such a robot would have to know how to prepare spear points from flint stones, how to find edible mushrooms in a forest, how to use medicinal herbs to bandage a wound, how to track down a mammoth and how to coordinate a charge with a dozen other hunters. However, over the last few thousand years we humans have been specialising. A taxi driver or a cardiologist specialises in a much narrower niche than a hunter-gatherer, which makes it easier to replace them with AI.
Even the managers in charge of all these activities can be replaced. Thanks to its powerful algorithms, Uber can manage millions of taxi drivers with only a handful of humans. Most of the commands are given by the algorithms without any need of human supervision. In May 2014 Deep Knowledge Ventures – a Hong Kong venture-capital firm specialising in regenerative medicine – broke new ground by appointing an algorithm called VITAL to its board. VITAL makes investment recommendations by analysing huge amounts of data on the financial situation, clinical trials and intellectual property of prospective companies. Like the other five board members, the algorithm gets to vote on whether the firm makes an investment in a specific company or not.
Examining VITAL’s record so far, it seems that it has already picked up one managerial vice: nepotism. It has recommended investing in companies that grant algorithms more authority. With VITAL’s blessing, Deep Knowledge Ventures has recently invested in Silico Medicine, which develops computer-assisted methods for drug research, and in Pathway Pharmaceuticals, which employs a platform called OncoFinder to select and rate personalised cancer therapies.
As algorithms push humans out of the job market, wealth might become concentrated in the hands of the tiny elite that owns the all-powerful algorithms, creating unprecedented social inequality. Alternatively, the algorithms might not only manage businesses, but actually come to own them. At present, human law already recognises intersubjective entities like corporations and nations as ‘legal persons’. Though Toyota or Argentina has neither a body nor a mind, they are subject to international laws, they can own land and money, and they can sue and be sued in court. We might soon grant similar status to algorithms. An algorithm could then own a venture-capital fund without having to obey the wishes of any human master.
If the algorithm makes the right decisions, it could accumulate a fortune, which it could then invest as it sees fit, perhaps buying your house and becoming your landlord. If you infringe on the algorithm’s legal rights – say, by not paying rent – the algorithm could hire lawyers and sue you in court. If such algorithms consistently outperform human fund managers, we might end up with an algorithmic upper class owning most of our planet. This may sound impossible, but before dismissing the idea, remember that most of our planet is already legally owned by non-human inter-subjective entities, namely nations and corporations. Indeed, 5,000 years ago much of Sumer was owned by imaginary gods such as Enki and Inanna. If gods can possess land and employ people, why not algorithms?
So what will people do? Art is often said to provide us with our ultimate (and uniquely human) sanctuary. In a world where computers replace doctors, drivers, teachers and even landlords, everyone would become an artist. Yet it is hard to see why artistic creation will be safe from the algorithms. Why are we so sure computers will be unable to better us in the composition of music? According to the life sciences, art is not the product of some enchanted spirit or metaphysical soul, but rather of organic algorithms recognising mathematical patterns. If so, there is no reason why non-organic algorithms couldn’t master it.
David Cope is a musicology professor at the University of California in Santa Cruz. He is also one of the more controversial figures in the world of classical music. Cope has written programs that compose concertos, chorales, symphonies and operas. His first creation was named EMI (Experiments in Musical Intelligence), which specialised in imitating the style of Johann Sebastian Bach. It took seven years to create the program, but once the work was done, EMI composed 5,000 chorales à la Bach in a single day. Cope arranged a performance of a few select chorales in a music festival at Santa Cruz. Enthusiastic members of the audience praised the wonderful performance, and explained excitedly how the music touched their innermost being. They didn’t know it was composed by EMI rather than Bach, and when the truth was revealed, some reacted with glum silence, while others shouted in anger.
EMI continued to improve, and learned to imitate Beethoven, Chopin, Rachmaninov and Stravinsky. Cope got EMI a contract, and its first album – Classical Music Composed by Computer – sold surprisingly well. Publicity brought increasing hostility from classical-music buffs. Professor Steve Larson from the University of Oregon sent Cope a challenge for a musical showdown. Larson suggested that professional pianists play three pieces one after the other: one by Bach, one by EMI, and one by Larson himself. The audience would then be asked to vote who composed which piece. Larson was convinced people would easily tell the difference between soulful human compositions, and the lifeless artefact of a machine. Cope accepted the challenge. On the appointed date, hundreds of lecturers, students and music fans assembled in the University of Oregon’s concert hall. At the end of the performance, a vote was taken. The result? The audience thought that EMI’s piece was genuine Bach, that Bach’s piece was composed by Larson, and that Larson’s piece was produced by a computer.
Critics continued to argue that EMI’s music is technically excellent, but that it lacks something. It is too accurate. It has no depth. It has no soul. Yet when people heard EMI’s compositions without being informed of their provenance, they frequently praised them precisely for their soulfulness and emotional resonance.
Following EMI’s successes, Cope created newer and even more sophisticated programs. His crowning achievement was Annie. Whereas EMI composed music according to predetermined rules, Annie is based on machine learning. Its musical style constantly changes and develops in reaction to new inputs from the outside world. Cope has no idea what Annie is going to compose next. Indeed, Annie does not restrict itself to music composition but also explores other art forms such as haiku poetry. In 2011 Cope published Comes the Fiery Night: 2,000 Haiku by Man and Machine. Of the 2,000 haikus in the book, some are written by Annie, and the rest by organic poets. The book does not disclose which are which. If you think you can tell the difference between human creativity and machine output, you are welcome to test your claim.
In the nineteenth century the Industrial Revolution created a huge new class of urban proletariat; in the twenty-first century we might witness the creation of a new massive class: people devoid of any economic, political or even artistic value, who contribute nothing to the prosperity, power and glory of society.
In September 2013 two Oxford researchers, Carl Benedikt Frey and Michael A. Osborne, published ‘The Future of Employment’, in which they surveyed the likelihood of different professions being taken over by computer algorithms within the next twenty years. The algorithm developed by Frey and Osborne to do the calculations estimated that 47 per cent of US jobs are at high risk. For example, there is a 99 per cent probability that by 2033 human telemarketers and insurance underwriters will lose their jobs to algorithms. There is a 98 per cent probability that the same will happen to sports referees, 97 per cent that it will happen to cashiers and 96 per cent to chefs. Waiters – 94 per cent. Paralegal assistants – 94 per cent. Tour guides – 91 per cent. Bakers – 89 per cent. Bus drivers – 89 per cent. Construction labourers – 88 per cent. Veterinary assistants – 86 per cent. Security guards – 84 per cent. Sailors – 83 per cent. Bartenders – 77 per cent. Archivists – 76 per cent. Carpenters – 72 per cent. Lifeguards – 67 per cent. And so forth. There are of course some safe jobs. The likelihood that computer algorithms will displace archaeologists by 2033 is only 0.7 per cent, because their job requires highly sophisticated types of pattern recognition, and doesn’t produce huge profits. Hence it is improbable that corporations or government will make the necessary investment to automate archaeology within the next twenty years.
Of course, by 2033 many new professions are likely to appear, for example, virtual-world designers. But such professions will probably require much more creativity and flexibility than your run-of-the-mill job, and it is unclear whether forty-year-old cashiers or insurance agents will be able to reinvent themselves as virtual-world designers (just try to imagine a virtual world created by an insurance agent!). And even if they do so, the pace of progress is such that within another decade they might have to reinvent themselves yet again. After all, algorithms might well outperform humans in designing virtual worlds too. The crucial problem isn’t creating new jobs. The crucial problem is creating new jobs that humans perform better than algorithms.
- Yuval Noah Harari, The Great Decoupling in Homo Deus: A Brief History of Tomorrow
Technology Breakthroughs 2019
What is technology?
Technology is the knowledge of techniques and processes, combined with the skills and methods that enable humans to apply that knowledge. It began when prehistoric people used very simple tools to secure food and protection, and it has since evolved from a mere survival aid into the backbone of living, interwoven with every detail of daily life, from the toothpick to space travel. That last dream is already coming true: wealthy tourists have made trips to space, and such trips are expected to become common and affordable in the coming years. With that in mind, let's talk about the most exciting technology breakthroughs of 2019.
Top Breakthroughs
5G Technology:
2019 will see the release of the fifth generation of cellular mobile communications (5G), which promises reduced latency, energy savings, cost reduction, higher system capacity and massive device connectivity. It is the natural evolution of 4G, first released in Sweden in 2009, itself a great breakthrough in the history of mobile internet: 4G speeds can reach 100 megabits per second for high-mobility communication, while 5G peak speeds are expected to reach around 2,500 megabits per second. Technically, a 5G device communicates by radio waves with a local antenna array and a low-power automated transceiver in each cell; the antennas connect to the telephone network through optical-fibre cables or wireless backhaul. Devices equipped with 5G will keep 4G LTE capability, since 5G coverage is not yet available everywhere. 5G will also serve autonomous vehicles, supplying real-time data about the surrounding environment and letting nearby vehicles exchange their locations and intentions, while the roadway itself can report traffic conditions immediately ahead, easing the task of driving. Laptops, too, will gain 5G connectivity to enhance their internet communication.
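A quick back-of-envelope calculation shows what such rates mean in practice. This is a minimal sketch; it assumes the quoted 4G and 5G figures are peak link rates in megabits per second (the usual unit for cellular speeds) and ignores protocol overhead:

```python
# Illustrative peak rates only; real-world throughput is far lower.
LTE_PEAK_MBIT_S = 100    # 4G LTE, high-mobility peak
NR_PEAK_MBIT_S = 2_500   # an early 5G peak figure

def download_seconds(file_megabits, rate_megabits_per_s):
    # Transfer time at a given link rate, ignoring protocol overhead.
    return file_megabits / rate_megabits_per_s

movie_megabits = 4_000 * 8  # a 4 GB movie, expressed in megabits

print(f"4G: {download_seconds(movie_megabits, LTE_PEAK_MBIT_S):.0f} s")
print(f"5G: {download_seconds(movie_megabits, NR_PEAK_MBIT_S):.1f} s")
```

At these peak rates the same movie drops from minutes to seconds, which is the practical meaning of the generational jump.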
Virtual Reality (VR):
Virtual reality (VR) technology dates to the early 1950s, when Morton Heilig wrote of an "Experience Theatre" and went on to build the Sensorama prototype, demonstrated in 1962; he also developed the Telesphere Mask, an early head-mounted viewer. In 1968 Ivan Sutherland and his student Bob Sproull created the first head-mounted display (HMD) system for simulation applications. The 1970s saw a great leap as VR entered fields such as medicine, the military and even space science; David Em was the first to produce navigable virtual worlds, working at NASA's Jet Propulsion Laboratory. In 1979 Eric Howlett developed the Large Expanse, Extra Perspective (LEEP) optical system, which underlies the wide-field optics of today's VR headsets. The early 1990s brought the first cubic immersive room, allowing people to see their own bodies in relation to others in the room, and between 1982 and 1992 Nicole Stenger created the first real-time interactive immersive movie, viewed with a dataglove and high-resolution goggles. The new millennium brought further updates: by 2001 the first PC-based cubic room had been built, and in 2007 Google released Street View's panoramic imagery, later adding a 3D stereoscopic mode. In 2013 smartphones began to be used for VR via headsets, and the Oculus Rift project, launched on Kickstarter in 2012, gathered momentum. In 2014 Facebook bought Oculus VR for $2 billion, Sony announced its PlayStation VR project, and Google announced Cardboard, a do-it-yourself stereoscopic viewer for smartphones. 2016 saw a great commercial leap: HTC shipped the Vive SteamVR headset as the major release of the year, and Sony released PlayStation VR. 2019 promises more, with Sony reportedly working on a 3D rubber motion controller for PlayStation VR and Somniacs continuing work on its Birdly VR flying simulator; and these will not be the last updates in VR technology.
Artificial Intelligence (AI) Technology:
The concept of artificial intelligence took shape in the 1940s with the invention of the programmable digital computer, a machine based on the abstract essence of mathematical reasoning. The AI research field was founded at a Dartmouth College workshop in 1956. In 1973 the field faced a new challenge when the US and British governments cut funding for undirected AI research; private investors stepped in, but by the 1980s many withdrew because the hardware of the day could not supply the computing power needed. All of this changed at the beginning of the twenty-first century, as machine learning began successfully solving many academic and industrial problems. The new millennium's milestones include interactive smart toys around 2000; NASA's robotic rovers autonomously navigating the surface of Mars in 2004; and Honda's humanoid robot, which by 2005 could walk as fast as a human and was even demonstrated serving food in a restaurant. Also in 2005, the Blue Brain Project set out to identify the structure of the human brain and detect its function in disease and health. Between 2010 and 2014, humans became able to communicate and even speak with their machines: Apple launched Siri in 2011, Google launched Google Now in 2012, and Microsoft followed with Cortana, a smartphone assistant that uses natural language to answer questions, make recommendations and perform actions. Artificial intelligence is set for another leap in 2019, which will see the release of:
1- Automated CCTV security cameras:
Such cameras are expected to feature:
Predicting potential vulnerabilities and threats.
Tracking missing children.
Tracking stolen vehicles.
Taking quick action.
Identifying criminals and suspects within large crowds.
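One way such a system might pick a suspect out of a crowd is to compare face embeddings (numeric vectors produced by a face-recognition model) against a watchlist. The sketch below is a toy illustration: the three-dimensional embeddings, names and similarity threshold are all invented for the example, whereas real systems use learned embeddings with hundreds of dimensions:

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def identify(embedding, watchlist, threshold=0.8):
    # Return the watchlist name whose reference embedding is most similar
    # to the detected face, but only if the match clears the threshold.
    best_name, best_score = None, -1.0
    for name, ref in watchlist.items():
        score = cosine(embedding, ref)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Hypothetical 3-D embeddings; real systems use hundreds of dimensions.
watchlist = {
    "suspect_a": [1.0, 0.0, 0.0],
    "suspect_b": [0.0, 1.0, 0.0],
}
print(identify([0.95, 0.05, 0.0], watchlist))  # close match: suspect_a
print(identify([0.5, 0.5, 0.5], watchlist))    # no confident match: None
```

The threshold is the key operational knob: set it too low and innocent faces are flagged, too high and real suspects slip through.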
2- Generative adversarial networks (GANs):
GANs pit two neural networks against each other: a generator that produces realistic fakes and a discriminator that learns to tell real data from the generator's output. Wider use of the concept is expected in 2019; for example, a well-trained discriminator can judge whether an image is real or a realistic fake created by AI.
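The adversarial idea can be sketched with a deliberately tiny example: a one-parameter "generator" and a logistic "discriminator" playing the GAN minimax game over a single real data point. This is a toy illustration of the alternating training loop, not a working image model; the starting values and learning rate are arbitrary:

```python
from math import exp, log

def sigmoid(u):
    return 1.0 / (1.0 + exp(-u))

REAL = 1.0  # the single "real" data point the generator tries to imitate

def d_objective(w, b, g):
    # The discriminator D(x) = sigmoid(w*x + b) wants to score the real
    # point high and the generator's output g low.
    return log(sigmoid(w * REAL + b)) + log(1.0 - sigmoid(w * g + b))

def g_objective(w, b, g):
    # The generator wants its output g to be scored as real.
    return log(sigmoid(w * g + b))

def d_step(w, b, g, lr):
    # One gradient-ascent step on the discriminator parameters.
    dr = 1.0 - sigmoid(w * REAL + b)  # d/du of log(sigmoid(u))
    df = sigmoid(w * g + b)           # -d/du of log(1 - sigmoid(u))
    return w + lr * (dr * REAL - df * g), b + lr * (dr - df)

def g_step(w, b, g, lr):
    # One gradient-ascent step on the generator parameter.
    return g + lr * (1.0 - sigmoid(w * g + b)) * w

# Alternate the two updates, as in real GAN training.
w, b, g = 1.0, 0.0, -1.0
for _ in range(100):
    w, b = d_step(w, b, g, lr=0.05)
    g = g_step(w, b, g, lr=0.05)
print(f"generator output after 100 rounds: {g:.2f}")
```

Each side improves its own objective at the other's expense; scaling the same tug-of-war up to deep networks over images is what makes GAN fakes so convincing.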
3- Chatbots:
A chatbot (also known as a smartbot, talkbot, chatterbot, IM bot, interactive agent, conversational interface or artificial conversational entity) is an artificial-intelligence program that conducts a conversation via auditory or textual methods. The concept dates back to the 1960s, with early programs such as ELIZA, but only in recent years have chatbots evolved from simple conversation scripts into powerful promotional tools for media brands and e-commerce businesses. By 2020, 80 per cent of companies are expected to be using chatbots.
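At its simplest, a textual chatbot of the kind described above is a list of pattern-to-response rules with a fallback, much in the spirit of ELIZA. The patterns and replies below are invented for illustration; a production bot would use far more rules or a trained language model:

```python
import re

# Invented pattern -> response rules for a hypothetical support bot.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! How can I help you today?"),
    (re.compile(r"\border\b.*\bstatus\b|\bstatus\b.*\border\b", re.I),
     "Could you give me your order number?"),
    (re.compile(r"\b(price|cost)\b", re.I), "Our plans start at $10 per month."),
]
FALLBACK = "Sorry, I didn't catch that. Could you rephrase?"

def reply(message: str) -> str:
    # Return the response of the first rule whose pattern matches.
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return FALLBACK

print(reply("Hi there"))
print(reply("What's the status of my order?"))
print(reply("asdf"))
```

The fallback response is what separates a tolerable bot from an infuriating one: it admits failure instead of guessing.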
3D Printing
3D printing covers a number of computer-controlled processes that create a three-dimensional object by joining and solidifying material layer by layer. It entered production in the early 1980s: in 1981 the Japanese scientist Hideo Kodama invented two methods for fabricating three-dimensional plastic models using a photo-hardening polymer. At first 3D printing produced functional or aesthetic prototypes; it then grew into additive manufacturing proper, able to build very complex shapes and geometries from a pre-designed 3D model or CAD file. 2019 will bring a leap in 3D printing, especially metal 3D printing, which uses a layer-by-layer material build-up approach driven by a digital model to give metal parts full density and high precision; it is being adopted for aerospace, oil and gas, automobile and marine applications. 3D printing will also take on new materials and enter the phase of mass production with high-technology machines. The main types of 3D printing in 2019 are:
Fused Deposition Modelling (FDM): also known as Fused Filament Fabrication, this is the most widely available and cheapest type of 3D printing; a thermoplastic filament is extruded and deposited layer by layer.
Stereolithography (SLA): the world's first 3D printing technology, dating back to 1986. SLA uses mirrors to steer a laser that cures liquid resin, but it has a notable disadvantage: tracing the cross-section of an object takes much longer than with DLP.
Digital Light Processing (DLP): uses a digital light projector to flash one image per layer at once, or multiple flashes for larger parts. Each layer is a digital image composed of square pixels, so the printed part is built from small rectangular blocks called voxels.
Selective Laser Sintering (SLS): creates objects with powder-bed-fusion technology, using a CO2 laser beam to scan and sinter polymer powder; falling prices are expected to make it increasingly common.
Material Jetting (MJ): works on the same principle as an ordinary inkjet printer, except that it builds up many layers of material upon each other until they form a solid part; it can produce multi-material and full-colour objects.
Drop On Demand (DOD): uses a pair of ink jets, one for a wax-like build material and one for dissolvable support material, plus a fly-cutter that skims the build area after each layer to ensure a flat surface before the next layer begins; it is used for lost-wax casting and other mould-making applications.
Sand binder jetting: mixes PMMA powder with a liquid binding agent to produce parts, with colours added through a separate nozzle; it is useful for producing sand-cast moulds and cores, which are generally made of artificial sand (silica), and it is low-cost and quite easily integrated into an existing foundry process without disruption.
Metal binder jetting: fabricates metal objects by bonding metal powder with a polymer binding agent, producing complex geometries beyond conventional manufacturing techniques. The printed part is then sintered and often infiltrated with bronze; the resulting non-uniform shrinkage is compensated for at the design stage.
Direct Metal Laser Sintering (DMLS): like SLS but applied to metal, with a laser fusing metal powder point by point. DMLS parts need structural supports, as the output is vulnerable to distortion and warping from residual stress.
Electron Beam Melting (EBM): a high-energy electron beam fuses the metal; its high energy density gives EBM superior build speed over other metal 3D printing types.
Financial
Blockchain:
A blockchain is a growing list of records, called blocks, that are linked using cryptography. Blockchains are resistant to modification of their data. The design was first described in 1991, but the first widely used blockchain was created as the public transaction ledger of the cryptocurrency Bitcoin by Satoshi Nakamoto in 2008. A blockchain is managed by a peer-to-peer network collectively adhering to a protocol for inter-node communication and for validating new blocks once they are recorded. Blockchains come in several types:
Public blockchains:
Public blockchains have absolutely no access restrictions: anyone with an internet connection can send transactions and act as a validator. Bitcoin and Ethereum are the best-known public blockchain applications.
Private blockchains:
Private blockchains offer a high degree of privacy, as no one can join the network without an invitation from the network administrators, and even participant and validator access is restricted. This type is the most appropriate for companies interested in blockchain technology because of the high level of control; its main uses are in accounting and record-keeping for businesses.
Consortium blockchains:
Consortium blockchains are semi-decentralized: control is shared among several companies, each of which might operate one node in the network, rather than resting with a single organization.
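The tamper-resistance described above comes from each block storing a cryptographic hash of the previous one, so editing any block invalidates every block after it. A minimal sketch in Python (illustrative only; real blockchains add consensus rules, digital signatures, and proof-of-work):

```python
import hashlib
import json

def block_hash(body):
    # Deterministically hash a block's contents (everything except its own hash).
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    # Link the new block to the current tip via the previous block's hash.
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev_hash": prev}
    block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
    chain.append(block)

def is_valid(chain):
    # Recompute every hash and check every link; any edit breaks the chain.
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, "alice pays bob 5")
add_block(chain, "bob pays carol 2")
```

Changing any recorded transaction after the fact makes `is_valid(chain)` return False, which is the property that makes a public ledger auditable.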
Blockchain technology is expected to take a great leap in 2019 as the following trends rise:
Blockchain as a Service (BaaS):
BaaS is a cloud-based service that allows customers to build their own blockchain-powered products, including applications, smart contracts, and other blockchain features, without the need to set up or maintain blockchain-based infrastructure themselves.
Hybrid Blockchain:
A hybrid blockchain provides the best features and functionality of both private and public blockchains. Hybrid blockchains are not yet widely used, but they are considered the most appropriate model for banks.
Federated Blockchain:
A federated blockchain is a natural evolution of the basic blockchain: it resembles a private blockchain but offers a more customizable outlook. Federated blockchains are mainly used in financial services.
Ricardian contracts:
Ricardian contracts mark the start of a reliance on legal contracts that are cryptographically signed and verified. They provide a unique solution: a contract that can be understood by both humans and computers, without a mediator or intermediary service.
Interoperability between Blockchains:
Blockchain interoperability aims to improve the flow of information across several networks or blockchain platforms, and these cross-chain services improve the everyday usability of blockchains. 2019 should see further improvement in interoperability technology; the main projects in this space include BlockNet, Aion, and WanChain.
Stable Coins:
Stable coins are a side product of cryptocurrencies: where ordinary cryptocurrencies fluctuate with market conditions, stable coins maintain their value at all times. Most stable coins are fiat-backed, but some are backed by commodities instead. Their main applications are everyday currency transactions and P2P payments.
Security Tokens:
Security tokens are replacing ICOs because they are more secure, protect investors' rights, and redefine the whole investment process. In 2019, investors are expected to favour security token offerings (STOs) over ICOs.
Financial Regulation:
Financial regulation is a form of regulation or supervision that subjects financial institutions to certain requirements, guidelines and restrictions, with the aim of maintaining the integrity of the financial system. It can be handled by a government or by a non-government organization. Financial regulation was initiated by the Dutch authorities in the early modern period: in 1610 they restricted short selling, the practice in which the seller does not own the asset but borrows it, returning it a short time later and keeping any profit. Regulation then developed steadily until it took the form of the banking supervision we know today. The 2007-2008 financial crisis and its aftermath affected regulation in a positive way, as regulators put forth a substantial number of new, strengthened rules and expanded requirements. 2018 saw a strong focus on legislative agendas that protect consumers and investors while encouraging financial-technology innovation. 2019 is expected to bring a big leap in the field: Asian regulators will continue their 2018 trajectory of embedding global post-crisis reforms, making the Asia-Pacific outlook the guide to regulatory trends across the region, while in Europe 2019 is expected to be a year of regulatory continuity, with the first half of the year devoted to finalizing the legislative initiatives that complete the Banking Union, strengthen the EMU, and advance the Capital Markets Union.
Largest technology construction projects in 2019
London Crossrail:
London Crossrail, one of the largest construction projects in Europe, extends the railway network with 73 miles (117 kilometres) of new line linking Berkshire and Buckinghamshire through central London. The line carries the name of the Elizabeth Line and will be divided into two branches. This huge, multi-billion-pound project received approval in 2008 and is expected to launch in autumn 2019.
Benban Solar Park:
Benban Solar Park will be the world's largest photovoltaic power station, expected to generate 1,650 MWp. It is located in Upper Egypt, near Aswan. Benban Solar Park is part of Egypt's Nubian Suns programme, which aims to generate 20% of the country's total power from renewable sources. This Egyptian national project is expected to start working by the end of 2019.
bigyack-com · 4 years
Text
John Woolley Joins The Ritz-Carlton Bali as GM
The Ritz-Carlton, Bali has appointed John Woolley as General Manager.
With over 20 years of experience with Marriott International, John's illustrious career has taken him around the globe, including a stint as General Manager of the Courtyard by Marriott Bali Seminyak Resort from 2015 to 2019. While there he enjoyed tremendous success and was recognized as 'Courtyard General Manager of the Year' in 2018, and 'APAC Sales General Manager of the Year' in 2016. "It is both an honor and a thrill to be at the helm of such a highly-regarded luxury resort in Bali," John said. "The island has become my true home over the last four years. I am passionate about the luxury travel sector here in Bali, so I relish this opportunity to apply my strategic vision and global experience to all aspects of the resort, and bring it to the next level of success. I particularly look forward to working with the esteemed Ladies and Gentlemen of The Ritz-Carlton Bali to ensure our guests have the best holiday experience and take home memories that last a lifetime." John's leadership roles have spanned multiple disciplines, including sales and marketing, operations, regional and global.
awesomewavefan-blog · 5 years
Text
How Artificial Intelligence will change the future.
1. DEFINITION OF ARTIFICIAL INTELLIGENCE.
AI research has been defined as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. A more elaborate definition characterizes AI as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation."
In computer science, artificial intelligence (AI) is sometimes called machine intelligence. AI's founders were optimistic about the future: Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do".
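The "intelligent agent" definition above (perceive the environment, then act to maximize the chance of achieving a goal) can be sketched in a few lines of Python. The thermostat policy and utility function below are assumptions invented for this example, not part of any standard AI library:

```python
def run_agent(percepts, actions_for, utility):
    # For each percept, choose the available action that maximizes utility.
    return [max(actions_for(p), key=lambda a: utility(p, a)) for p in percepts]

# Toy thermostat agent: the percept is the room temperature in Celsius.
def thermostat_actions(temp):
    return ["heat", "cool", "idle"]

def thermostat_utility(temp, action):
    # Utility is higher the closer the resulting temperature is to 21 C.
    effect = {"heat": 1.0, "cool": -1.0, "idle": 0.0}[action]
    return -abs((temp + effect) - 21.0)

choices = run_agent([18.0, 25.0, 21.0], thermostat_actions, thermostat_utility)
# heats when cold, cools when hot, idles near the target
```

Everything from a thermostat to a chess engine fits this perceive-evaluate-act loop; what differs is how rich the percepts, actions, and utility function are.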
2. WHAT IS THE FUTURE OF ARTIFICIAL INTELLIGENCE?
The future of artificial intelligence looks as follows:
Artificial intelligence is developing faster than you think, and speeding up exponentially
You use artificial intelligence all day, every day
Robots are definitely going to take your job
About half of the AI community believes computers will be as smart as humans by 2040
A lot of smart people think developing artificial intelligence to human level is a dangerous thing to do
Once artificial intelligence gets smarter than humans, we've got very little chance of understanding it
There's no such thing as an “evil” artificial intelligence
There are three ways a superintelligent artificial intelligence could work
Artificial intelligence could be the reason why we've never met aliens
Basically, there's a good chance we'll be extinct or immortal by the end of the century
3. WHAT IS THE IMPACT OF AI?
a. Automated Transportation
We're already seeing the beginnings of self-driving cars, though the vehicles are currently required to have a driver present at the wheel for safety. Despite these exciting developments, the technology isn't perfect yet, and it will take a while for public acceptance to bring automated cars into widespread use.
b. Cyborg Technology
One of the main limitations of being human is simply our own bodies and brains. In the future we will be able to augment ourselves with computers and enhance many of our own natural abilities. Though many of these possible cyborg enhancements would be added for convenience, others might serve a more practical purpose. Yoky Matsuoka of Nest believes that AI will become useful for people with amputated limbs, as the brain will be able to communicate with a robotic limb to give the patient more control. This kind of technology would significantly reduce the limitations that amputees deal with on a daily basis.
c. Taking over dangerous jobs
Robots are already taking over some of the most hazardous jobs, including bomb defusing. These robots aren't quite robots yet: they are technically drones, used as the physical counterpart for defusing bombs but requiring a human to control them rather than using AI. Even so, they have saved thousands of lives by taking over one of the most dangerous jobs in the world. As the technology improves, we will likely see more AI integration to help these machines function.
d. Solving climate change
This might seem like a tall order for a robot, but machines have access to more data than any one person ever could, storing a mind-boggling number of statistics. For example, it might become possible to know whether a person is stressed or angry from such data. Artificial intelligence could one day identify trends and use that information to come up with solutions to the world's biggest problems, climate change among them.
e. Robot as friends 
As of now, most robots are still emotionless, and it's hard to picture a robot you could relate to. In Japan, however, the first big steps have been made toward a robot companion: one who can understand and feel emotions. Pepper, the companion robot, is programmed to read human emotions, develop its own emotions, and help its human friends stay happy. Pepper went on sale in the U.S. in 2016, and more sophisticated friendly robots are sure to follow.
4.THE FUTURE IS NOW: AI'S IMPACT IS EVERYWHERE
That's especially true in the past few years, as data collection and analysis have ramped up considerably thanks to robust IoT connectivity, the proliferation of connected devices and ever-speedier computer processing. Some sectors are at the start of their AI journey; others are veteran travelers. Both have a long way to go, but the impact artificial intelligence is having on our present-day lives is hard to ignore:
Transportation: Although it could take a decade or more to perfect them, autonomous cars will one day ferry us from place to place.
Manufacturing: AI-powered robots work alongside humans to perform a limited range of tasks like assembly and stacking, and predictive-analysis sensors keep equipment running smoothly.
Healthcare: In the comparatively AI-nascent field of healthcare, diseases are more quickly and accurately diagnosed, drug discovery is sped up and streamlined, virtual nursing assistants monitor patients and big data analysis helps to create a more personalized patient experience.
Education: Textbooks are digitized with the help of AI, early-stage virtual tutors assist human instructors and facial analysis gauges the emotions of students to help determine who’s struggling or bored and better tailor the experience to their individual needs.
Media: Journalism is harnessing AI as well, and we will continue to benefit from it. Bloomberg uses Cyborg technology to help make quick sense of complex financial reports. The Associated Press employs the natural-language abilities of Automated Insights to produce 3,700 earnings-report stories per year, nearly four times more than in the recent past.
Customer Service: Last but hardly least, Google is working on an AI assistant that can place human-like calls to make appointments at, say, your neighborhood hair salon. In addition to words, the system understands context and nuance.
Tumblr media
5.  ADVANTAGES OF AI.
Provides more accurate information than human research.
Leaves no room for human error.
Is able to perform ‘monkey job’ instead of humans: routine, monotonous and tedious tasks.
Is able to interact with people and answer their questions.
Is able to imitate human cognitive capabilities (like speech, vision, and the ability to draw conclusions).
Provides grounds for data-driven decisions.
Its subsets, like Machine Learning, are able to detect patterns and learn from them.
Its subsets, like Machine Learning, are able to make accurate predictions.
Detects extraordinary situations.
Can think logically and without emotion, making rational decisions with few or no mistakes.
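The pattern-detection and prediction advantages credited to machine learning above can be illustrated with the simplest possible learner: an ordinary least-squares line fit, written here in plain Python as a toy sketch rather than a production method:

```python
def fit_line(xs, ys):
    # "Learn" the slope and intercept that minimize squared error on the data.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def predict(model, x):
    # Apply the learned pattern to an input the model has never seen.
    slope, intercept = model
    return slope * x + intercept

model = fit_line([1, 2, 3, 4], [2, 4, 6, 8])  # the hidden pattern here is y = 2x
```

Real machine-learning models differ mainly in scale and flexibility; the detect-a-pattern-then-predict loop is the same.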
6.  DISADVANTAGES OF AI.
Requires financial investment and can be quite costly.
Robots and other products of artificial intelligence that replace humans can become a cause of unemployment.
Robots and other products of artificial intelligence do not possess such human qualities, as creativity or emotional intelligence.
Sometimes robots get out of control, and that is dangerous.
7. PROS & CONS OF AI.
PROS OF AI:
AI gives a business more opportunities to be productive.
Anyone is capable of using artificial intelligence to improve their lives.
Our health improves because of the presence of AI.
CONS OF AI:
AI would create a different definition of humanity.
It could be a technology which turns out to be dangerous.
Artificial intelligence can struggle to learn on its own.
AI doesn’t understand the complexities of human need.
admcp · 6 years
Photo
IN: 0408
The world in 2384: Is it possible to achieve immortality? Netflix's new original series Altered Carbon explores a world where, in just a few centuries, humans can live forever. WIRED asks the experts: if that's true, how do we get there?
“Artificial intelligence has even the world’s brightest minds concerned about how it could shape our future.”
On April 27, 2016, Philip Rosedale launched the idea for a new world, known as High Fidelity, where people could lead a second life through the lens of a virtual reality headset. It would be more realistic and much bigger than the previous parallel world he had created: “As big as Earth and beyond,” he predicted. “We are about to leave the real world behind.”
We’re now living in a time where we can lead entirely different lives from behind a screen in the comfort of our own homes. These new worlds are so immersive that the politics, economy, relationships and emotions behind milestone events are as genuine as if they were occurring in our physical lives. Eventually, some futurists predict we could be existing forever within these virtual worlds, by developing the technology to “upload” our thoughts onto hard drives, allowing them to live and interact in alternate realities. A new original series on Netflix, Altered Carbon, set in 2384, takes this one step further with human consciousness living forever, transferring our memories, thoughts and emotions from body to body as they wear out, universe to universe, so we could live on forever. WIRED takes a look at the next three centuries to explore whether this idea can become a tangible reality.
2018 In the past 50 years, mankind has travelled to the moon, successfully completed the first heart transplant, developed machines that can keep us alive and mapped the DNA blueprint for human life. We have created virtual realities, linked billions of people online, created virtual assistants and the internet of things. Entrepreneur Herman Narula has even built the matrix. His company Improbable has developed Spatial OS, the world’s first large-scale distributed operating system. The platform can be used to simulate potential futures – from relatively small issues such as how to solve traffic, to our generation’s biggest medical problems – by simulating biological systems. In Narula’s vision he is creating a decision-making platform, a “what if machine” where each model could be integrated and built on top of each other, creating a one-to-one virtual representation of the real world that researchers could use to run experiments.
Running prediction machines requires big data, and in 2018 big data is ubiquitous. We know companies use our data – what we buy online, where we are in the world, who our friends, partners, and family are and how we interact, what bills we pay or the games we play – to target us with advertising. But China is taking a less sneaky route by categorising each of their citizens using a Social Credit System. In her new book, Who Can You Trust? How Technology Brought Us Together and Why It Might Drive Us Apart, Rachel Botsman details how the State Council of China is planning to implement this scheme by 2020. “Imagine a system where all these behaviours are rated as either positive or negative and distilled into a single number, according to rules set by the government,” she writes. “That would create your Citizen Score and it would tell everyone whether or not you were trustworthy. Plus, your rating would be publicly ranked against that of the entire population and used to determine your eligibility for a mortgage or a job, where your children can go to school – or even just your chances of getting a date.”
To make big data possible, first you have to connect with the digital world. In 2007, our relationship with this became hyper-personal. Then CEO of Apple, Steve Jobs, revealed the smartphone, and it has since become a necessary part for convenient day to day living. In the same time frame, artificial intelligence has grown to become used in just about every industry. From recognising the faces of criminals and safeguarding our security, to allowing each of us to have our own virtual assistants and widening our access to healthcare, artificial intelligence has even the world’s brightest minds concerned about how it could shape our future. Professor Stephen Hawking is one of them. “The genie is out of the bottle. We need to move forward on artificial intelligence development, but we also need to be mindful of its very real dangers. I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that replicates itself. This will be a new form of life that will outperform humans.” In 2017, a bot called Libratus out-bluffed poker kingpins, another had taught itself to play chess without learning from a human how to play first.
We are now also living in a world where our lifespan is double what it was 100 years ago. Where a 40-year-old person was once considered to be nearing the end of their life in 1917, many are now living well into their eighties, with the oldest person to date living 122 years.
Our advancements in health, however, go beyond merely extending our years. Quality of life has improved for the better and babies who might not have survived a few decades ago are given a chance to thrive – those born as early as 22 weeks can see out the end of a normal gestation period outside of their mother’s body with the help of machines. In science fiction film Gattaca (Andrew Niccol, 1997), potential children are conceived by genetic manipulation to ensure they possess the best hereditary traits of their parents. The themes in films such as this are now a reality. For instance, parents struggling to conceive now have the option of in vitro fertilisation, while ethical bodies are debating whether we should be allowed to defy natural evolution in this way.
What’s perhaps most striking is our ability to manipulate machines by the use of our minds. In 2018, a breakthrough that will allow humans to control machines with their minds will go into clinical trials. Stentrode is a matchstick-sized device that can be inserted into the brain through blood vessels. It’s hoped it will help paralysed patients move by controlling their exoskeletons, effectively changing the quality of life for people whose mobility is inhibited by their conditions.
In the last few decades we have reached a point where serious debate is being held about where the line is drawn between how we teach machines and how much they can learn on their own. In the next 150 years, some of the biggest technological advancements towards linking the two will be made on a massive and personal scale.
credit: worship.studio
“You’ll have the option of subsisting as a cloud-embedded planetary observer after your death.” - Jonathon Keats
2118 By this time, it’s predicted the Earth will be home to 10 billion humans, one of the few species to survive a mass extinction event as well as extreme changes in climate due to global warming. Just a century earlier, Professor Hawking advised seeking alternative planets for possible habitation: “We are running out of space on earth and we need to break through technological limitations preventing us living elsewhere in the universe.” A century after SpaceX led the charge to make autonomous vehicles the leading form of transportation, it has now moved on to launch and cultivate space colonies on Mars, while Nasa’s missions have finally broken through the barriers to allow humans to live permanently on other planets. By this time though, the planets are open only for development and research.
Futurologist Ian Pearson predicts that by the 2100s, back on planet Earth, the ability to communicate through thought transmission will be as easy as other forms of brain augmentation. “Picking up thoughts and relaying them to another brain will not be much harder than storing them on the net,” he predicted in 2012. He says by this point, we will be wiring our brains to computers to make them work faster. “By 2075 most people in the developed world will use machine augmentation of some sort for their brains and, by the end of the century, pretty much everyone will.”
Experimental philosopher and conceptual artist Jonathon Keats believes by this time, AI will have led us to achieve “passive mortality.” “By having a neural network follow everything you do from birth until death, an AI will be able to behave as you would in circumstances equivalent to those you have experienced,” he says. “At first this will be an unanticipated benefit of legacy systems (such as the fraud detection systems used by your bank), which will provide a decent proxy for your personality when artfully combined. By the late 2000s it will be marketed in its own right (immortality-as-a-service). The problem will be that it won’t work reliably in truly novel situations, and rampant climate change will make for an unprecedented level of global unpredictability. As a result, the technology will be used only passively: with your AI-captured personality embedded in an AI agent, you’ll have the option of subsisting as a cloud-embedded planetary observer after your death.”
Science fiction and fantasy writer and historian Ada Palmer predicts that by 2118 our life expectancy will have increased by another 70 years. “Expansion of the body’s lifespan through medical advances such as gene therapy and other anti-ageing technologies make societies become accustomed to lifespans of 150 years or more,” she says. “The expectation that these can and should be expanded affects culture and identity, making people think of death as a problem to be solved rather than a natural inevitability, galvanising more research into how to expand lifespans even more and transcending the limits of the body. At around the same time, advances in neural network research and other computing advances make it possible to simulate neural systems as complicated as a human brain, though it is not yet practical to duplicate and ‘upload’ a real one.”
The creator of Black Mirror, Charlie Brooker, explores the possibility that so-called brain implants will be as essential to everyday life as a tablet or smartphone is in 2018. The implants sit behind the ear and sync up to an electronic contact lens that records and saves everything the body does. This means the history of each person is in their hands and can be accessed over and over again, adding to the collection of big data to replicate personalities.
“Efforts to house biological brains in robotic bodies further extend the potential lifespan of an organic brain” - Ada Palmer
2218 Herman Narula predicted that eventually humans will take to virtual worlds as an antidote to mass automation. By 2200, it’s possible that humans could live and work within virtual “gaming” worlds that are resistant to the AI-dominated workforce that would by now have overtaken the physical world. Outside of the virtual worlds, medicine has advanced so much that our bodies have become perfectly engineered. Disease has been eradicated and babies are conceived through a series of options chosen by parents and collected from birthing stations nine months on. Continents have shifted and the world map looks unrecognisable. Meanwhile other planets are being established at a pace to accommodate billions more humans who arrive by space elevators, at one time only accessible to the super rich but now as common as cars once were. Lost animal and plant species are being revived by technological advancements and the environments they once lived in are recreated to ensure their survival.
Science-fiction writer Ada Palmer believes that by now we would be on track to uploading our brains onto clouds and hard drives, much as it once became possible to cryogenically freeze our bodies for long periods of time.
“The brain is mapped in sufficient detail that whole brains can be replicated in digital form, making it possible to create a computer intelligence that effectively replicates a person's memories, decision-making process etc, though in tests (such as puzzle solving or rock, paper, scissors, throw) the behaviour of the digital and original is usually not quite identical,” says Palmer. “People debate whether (A) the differences are caused by the digital and biological beings’ awareness of their own bodies/forms, if (B) this means there is still something about how neurons operate that we don't understand and are not simulating with sufficient detail, or (C) the original has a soul and the copy does not. Different societies and cultural groups have a range of reactions to this technology, and differ on whether a digital version of someone has the same civil and legal rights and status as the original, counts as a child/offspring, or even counts as a human being. Some people have digital copies kept paused to be activated only when they die, while others allow digital versions of themselves to run and collaborate with them while the biological version is still alive. Meanwhile efforts to house biological brains in robotic bodies further extend the potential lifespan of an organic brain, but does not yet make it infinite.”
Philosopher Jonathon Keats predicts the fight for immortality will be persistent and categorised by a series of failures that will ultimately leave people isolated and detached from reality. “Experiments in active immortality will persist even after two centuries of failure (from head transplants to synaptic scaffolds). As a result of these persistent efforts – and the eternal ambitions of the ultra-rich – the definition of immortality will change. It will become a matter of genetics (by way of human cloning) augmented by epigenetic therapy, in which your cloned self will be exposed to the same biochemical environment that you were, in order to turn genes on and off in the appropriate sequence. (Otherwise your clone would merely become an identical twin of yourself.) Experiences will also be replayed, since those have an epigenetic impact. These events will be delivered by a fully immersive VR, based on the perfect tracking of your sensory adventures from birth to death (initially a service offered by big technology companies). The problem will be that you’ll need to live in a perfect bubble, sealed off from the present world, which will be radically different and would hopelessly muddle your epigenetics, changing who you are. As a result, a select group of people will end up reliving their lives in isolation, over and over again, until their money runs out – but they’ll be totally detached from the present and ultimately irrelevant.”
“Your computer self can do something on Titan and you can then return that version of yourself to your biological body on Mars.” - Jonathon Keats
2384 By the 2300s the super rich will have finally achieved immortality – death will be a mere inconvenience. They will have left Earth as a dystopian wasteland, filled with sprawling neon cities while they perch above the clouds in mansions filled with relics from “elder civilisations.” In Netflix’s Altered Carbon, it’s possible to live for hundreds of years, across different universes that have been colonised in the centuries gone by. Human bodies will contain cortical stacks in their spinal columns that digitally store their memories and can be transferred to a new body, known as a sleeve. Re-sleeving will be a process open to everyone, but only the ultra-rich will be able to update over the course of their lives via a cloud storage system, meaning they can bypass the ageing process and choose bodies that are to their liking.
To get here, humans will have been through the typical trials and errors that have characterised their species for centuries. Ada Palmer believes the ultimate motivation to continue will come from humanity’s expansion through the solar system. “The long transit times to asteroids and the moons of Jupiter, will make computer intelligences increasingly desirable since programs can be sent at light speed from point to point in order to make business decisions or interact with friends, while physical bodies – whether robotic or biological – require months to complete the same trip,” she says. “The transformation of society from an Earth-centred to a multi-world system inadvertently creates the conditions for computerised intelligence to become the expected default human state, though some societies and individuals continue to reject it. In this period the most sought-after technology is one to let the ideas and memories of a digital copy be reintegrated into a biological original, so your computer self can do something on Titan and you can then return that version of yourself to your biological body on Mars. This technology proves challenging however, and ignites cultural anxieties, including the fear that the original mind is killed and replaced by the digital one, and concerns about the application of this to insert memories into the minds of others.”
Jonathon Keats sees the world in this century as similar to that of Altered Carbon in that our ability to live on forever will fundamentally shake social laws, morals and politics. “The passive immortals will be rediscovered after having been forgotten for more than a century. Millions will be found on legacy systems deep within 24th century server farms. By this point in history, the extent of any environmental crisis will be beyond an ordinary tech fix (partly exacerbated by centuries of tech fixes gone bad). In their desperation, some people will recognise the value of the passive immortals, appreciating their deep observations of long-term change from myriad points of view (since the observations of each, uninterrupted for centuries, will be coloured by their individual personalities). The virtual presence of the passive immortals will cause a sociopolitical schism, in which a society addicted to accelerating change confronts a fundamentally different philosophy of life,” he says.
Altered Carbon explores these new ways of social order through the murder of a wealthy man, Laurens Bancroft (James Purefoy) and his attempt to solve it himself by enlisting the help of the last surviving soldier, Takeshi Kovacs (Will Yun Lee, Joel Kinnaman) from an elite group of interstellar warriors who were defeated in an uprising against this new world order. The values and vices that humans have developed for thousands of years still remain – relationships, wealth, exploration, drugs, consumerism – but exist in spheres that blur natural human ability, machines and digital.
The next few hundred years? “The direction taken by society at this stage will determine whether the story continues into the 2400s and beyond,” says Keats.
kristablogs · 4 years
Virtual fighter jets powered by AI are battling for the chance to dogfight a human
This F-16 in Florida is being flown by a real human. (US Air Force / Tech. Sgt. John Raven/)
Fighter jets need humans to fly them, but someday, that could change. This week, the Defense Advanced Research Projects Agency—better known as DARPA—is hosting a virtual Top Gun-style competition in which various artificial intelligence algorithms fly simulated jets in digital dogfights. No actual planes are in the air, but the goal is to see which AI agent can provide the most formidable fighter. The event kicked off on Tuesday morning, and on Thursday, the strongest AI will battle against a simulated F-16 operated by a real flesh-and-blood pilot.
The event this week is the third stage in what’s called the AlphaDogfight Trials. The first trial in the series, held last fall, was very much rookie algorithms trying to figure out aviation fundamentals, explains Col. Dan Javorsek, the manager of the event at DARPA and a former F-16 aviator and test pilot. “What you were basically watching was the AI agents learning to fly the plane,” Javorsek says. (His call sign is “Animal,” a reference to the Muppets.) “A lot of them killed themselves on accident—they would fly into the ground, or they would just forget about the bad guy altogether, and just drive off in some direction.” In other words, Maverick or Iceman would probably just laugh at them.
Javorsek compares that stage to NASA’s early days, when rockets kept exploding. “It was not inspiring,” he adds. But early this year, during the second trial, it went better. “We watched the agents go from being able to barely fly the airplane and barely prevent [themselves] from crashing, into true behaviors that looked like dogfighting,” Javorsek says.
Dogfighting may be the colloquial term made famous by the movie Top Gun, but the military refers to that type of engagement as BFM, for Basic Fighter Maneuvers. The AI agents trying to master this practice come from eight different teams, including Goliaths such as Lockheed Martin and Aurora Flight Sciences (part of Boeing), and other smaller or less well-known players, like Georgia Tech Research Institute or Heron Systems.
While this competition is happening virtually, companies are already working on the hardware for pilotless fighter jet-type drones in the real world. One such small uncrewed aircraft is called the Valkyrie, or XQ-58A, which is made by California-based company Kratos. Another comes from Boeing: an uncrewed fighter jet with a modular nose, dubbed the “Loyal Wingman.” The idea behind these types of machines is that they could act as a kind of robot wingman, escorting an aircraft flown by a human. Since they’d be less expensive to make than a full-fledged fighter jet and wouldn’t have a human on board, they’d also be attritable: a craft that wouldn’t be devastating to lose in combat.
Javorsek, of DARPA, says that autonomous projects like those are on their radar, but that philosophically, their focus is slightly different. Initiatives outside of DARPA, he says, have “tended to fixate on the Beyond Visual Range (BVR) problem, which is not the first thing we do with our pilots.” Militaries might want to send an uncrewed fighter jet ahead, like a scout, and possibly attack an enemy’s air defenses. But before something like that can happen, Javorsek contends that AI needs to prove that it can carry out a more basic task: the dogfight.
That’s what this week’s competition is all about, in which the different teams are flying both against algorithms created by the Johns Hopkins Applied Physics Laboratory and eventually each other, too. One AI agent will then battle a human, flying a digital F-16 in a virtual reality-style simulator. An aircraft wins when it’s able to get behind another and hold that position long enough to get a kill shot, just like in the movies.
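The win condition described above (getting behind an opponent and holding position long enough for a kill shot) can be sketched as a simple 2D geometry check. This is an illustrative toy, not DARPA's actual scoring logic; the range and cone angles are invented:

```python
import math

def in_kill_position(attacker_pos, attacker_heading, target_pos, target_heading,
                     max_range=900.0, cone_half_angle_deg=30.0):
    """Rough check: is the attacker close enough, pointing at the target,
    and sitting inside a cone behind the target's tail? (Illustrative only;
    positions in metres, headings in radians.)"""
    dx = target_pos[0] - attacker_pos[0]
    dy = target_pos[1] - attacker_pos[1]
    if math.hypot(dx, dy) > max_range:
        return False
    limit = math.radians(cone_half_angle_deg)
    # Attacker's nose must point near the target...
    bearing_to_target = math.atan2(dy, dx)
    off_nose = abs((bearing_to_target - attacker_heading + math.pi)
                   % (2 * math.pi) - math.pi)
    # ...and the attacker must lie in the cone behind the target's tail.
    tail_direction = target_heading + math.pi
    bearing_from_target = math.atan2(-dy, -dx)
    off_tail = abs((bearing_from_target - tail_direction + math.pi)
                   % (2 * math.pi) - math.pi)
    return off_nose < limit and off_tail < limit
```

A scoring loop would then require this predicate to stay true for some minimum duration before awarding the kill.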
A screenshot from the event. (DARPA/)
Javorsek says that looking into the future, he sees a split between the types of tasks that AI and humans might handle, with algorithms focusing on physically flying an aircraft and people free to keep their minds on the bigger picture. “The vision here is that we get human brains at the right spot,” he reflects. Artificial components can focus on the “low-level, maneuvering, tactical tasks,” he says, while their flesh-and-blood counterparts can be “battle managers” who are able to read “context, intent, and sentiment of the adversary.” (Helicopter-maker Sikorsky, which has been working on an aircraft that’s much easier to fly than a traditional chopper, advocates for a similar configuration.)
While this DARPA competition has the flying happening in digital skies, real-life flight in an actual fighter jet is intensely physically demanding on the aviators on board—something I had the chance to experience firsthand in an F-16. Pulling hard turns or accelerating quickly produces dramatic G forces, and if the pilot and crew don’t manage them correctly, they could pass out. Artificial intelligence might someday fly a plane in combat, but if a pilot were to hypothetically be on board, he or she is going to want to be able to stay conscious throughout the fight. In other words, any algorithm with control of the stick will need to consider what humans can withstand. Or, if the AI is in charge of an uncrewed drone, then it wouldn’t need to worry about the impact of Gs on a person at all.
ladystylestores · 4 years
Mobileye demos self-driving car that uses cameras to get around
Mobileye, Intel’s driverless vehicle R&D division, today published a 40-minute video of one of its cars navigating a 160-mile stretch of Jerusalem streets. The video features top-down footage captured by a drone, as well as an in-cabin cam recording, parallel to an overlay showing the perception system’s input and predictions. The perception system was introduced at the 2020 Consumer Electronics Show and features 12 cameras, but not radar, lidar, or other sensors. Eight of those cameras have long-range lenses, while four serve as “parking cameras” and all 12 feed into a compute system built atop dual 7-nanometer data-fusing, decision-making Mobileye EyeQ5 chips.
Running on the compute system is an algorithm tuned to identify wheels and infer vehicle locations, and an algorithm that identifies open, closed, and partially open car doors. A third algorithm compares images from cameras to infer a distance for each pixel in an image and generate a three-dimensional point cloud, which Mobileye’s software uses to identify objects in the scene. A fourth algorithm identifies pixels corresponding to driveable roadway; detected objects pass through to a suite of four algorithms that attempt to place them in space.
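The depth-from-disparity step described above can be sketched in a few lines of NumPy. The camera parameters (focal lengths, principal point, stereo baseline) are invented for illustration; Mobileye's actual pipeline is proprietary:

```python
import numpy as np

def disparity_to_points(disparity, fx, fy, cx, cy, baseline):
    """Convert a disparity map from a stereo camera pair into a 3D point
    cloud, using the pinhole model z = fx * baseline / disparity.
    All intrinsics here are hypothetical values for illustration."""
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    valid = disparity > 0                           # zero disparity = no stereo match
    z = np.zeros_like(disparity, dtype=float)
    z[valid] = fx * baseline / disparity[valid]     # metric depth per pixel
    x = (u - cx) * z / fx                           # back-project to camera frame
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=1)
```

Downstream object detectors can then cluster this point cloud, which is how a camera-only stack approximates what lidar measures directly.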
The Mobileye car merges into traffic during the video, detecting vehicles traveling at speeds upwards of 56 miles per hour before veering to avoid a heavy construction zone in the right lane. The vehicle detects stationary cars, forklifts, and large trucks, yielding to a crossing pedestrian and one who hesitates before deciding not to cross the road.
Early on, Mobileye’s car changes lanes to avoid a row of stationary cars, and it’s shown slowing as it approaches a trailer protruding slightly into its lane. Later it nudges its way into a roundabout, switching lanes after spotting a less congested route. A safety driver behind the wheel takes over briefly, but only so that the drone’s battery can be replaced. Even when confronted with scenarios like a car parked in the middle of the road, Mobileye’s system attempts (and manages) to squeeze by, recognizing when the driver-side door is ajar.
“The car needs to balance agility with safety and does so using the RSS framework. The streets of Jerusalem are notoriously challenging, as other road users tend to be very assertive, adding significant challenge [for] the decision-making module of the robotic driver,” Mobileye CEO Amnon Shashua said in a statement. “The problem we aim to solve is scale. The true promise of [autonomous vehicles] can only materialize at scale — first as a means for ride-sharing via robo-shuttles and later as passenger cars offered to consumers. The challenges to support AVs at scale center around cost, proliferation of HD-maps, and safety.”
Mobileye has previously demonstrated that its perception system can detect traffic lights and signs, enabling it to handle intersections fully autonomously. But it also relies on high-definition maps of transportation lines, light rail lines, and roads themselves captured by the company’s Road Experience Management (REM) technology.
At a high level, REM is an end-to-end mapping and localization engine comprising three layers:
“Harvesting” agents, the Mobileye-supplied advanced driver assistance systems (ADAS) embedded in vehicles from automakers who agree to share data with the company, including Volkswagen, BMW, and Nissan. (Mobileye powered ADAS in 300 car models across 27 OEM partners as of November 2019.) The systems collect and transmit information about driving path geometries and stationary landmarks around them, leveraging real-time geometrical and semantic analysis to compress map-relevant information to less than 10KB per 0.62 miles, on average.
Capsules called Road Segment Data (RSD) into which the map data is packed before it’s sent to the cloud for processing, where it’s aggregated and reconciled into a map called a “Roadbook.” Mobileye collects 3.7 million miles of sensor data from vehicles on roads every day and draws on publicly available geospatial corpora like OS MasterMap and Ordnance Survey. The company expects to have more than 1 million vehicles in its European fleet by the end of 2020 and 1 million U.S. vehicles in 2021.
Software running within cars — eventually including cars from Mobileye’s aforementioned data-sharing partners — that automatically localizes within the Roadbook via real-time detection of landmarks stored within it. Mobileye has nearly all of Europe mapped and anticipates it will fully map the U.S. sometime later this year.
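The RSD capsule idea (compressing a road segment's landmark observations into a few kilobytes before upload) can be illustrated with a toy packer. The record fields and the budget check against the roughly 10KB-per-kilometre figure cited above are assumptions for illustration only:

```python
import json
import zlib

def pack_rsd(segment_landmarks, budget_bytes=10_000):
    """Pack one road segment's landmark records into a compressed capsule
    and report whether it fits a ~10KB-per-km style budget. The record
    schema is invented; real RSD formats are not public."""
    blob = zlib.compress(json.dumps(segment_landmarks).encode("utf-8"))
    return blob, len(blob) <= budget_bytes
```

The point of the sketch is the trade the article describes: semantic landmarks compress far better than raw imagery, which is what makes crowd-sourced mapping over cellular links practical.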
Mobileye, which Intel paid $15.3 billion to acquire in March 2017, is building two independent self-driving systems. One, like the system demoed in the video, is based entirely on cameras, while the second incorporates radar, lidar sensors, modems, GPS, and other components. Both confer the full benefits of Mobileye’s Responsibility-Sensitive Safety (RSS) model, an open policy that imposes “common sense” constraints on the decisions driverless vehicles make, and Mobileye says the latter should be able to travel roughly 100 million hours without a crash.
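The best-known RSS constraint is a minimum safe longitudinal following distance, derived from worst-case assumptions: the rear car may accelerate for a response time before braking gently, while the front car brakes as hard as possible. A minimal sketch of the published formula, with illustrative (not Mobileye-calibrated) parameter values:

```python
def rss_safe_longitudinal_distance(v_rear, v_front, rho=0.5,
                                   a_max_accel=3.0, b_min_brake=4.0,
                                   b_max_brake=8.0):
    """Minimum safe following distance (m) per the published RSS model.
    v_rear/v_front in m/s; rho is the response time in seconds.
    Parameter values are illustrative defaults, not calibrated ones."""
    v_rho = v_rear + rho * a_max_accel            # rear speed after response time
    d = (v_rear * rho                             # distance covered while responding
         + 0.5 * a_max_accel * rho ** 2
         + v_rho ** 2 / (2 * b_min_brake)         # rear car's gentle-braking distance
         - v_front ** 2 / (2 * b_max_brake))      # front car's hard-braking distance
    return max(0.0, d)
```

A planner enforcing this constraint simply refuses any manoeuvre that would put the car closer to the vehicle ahead than this distance.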
Mobileye is aiming to deploy robo-taxi fleets in three major cities — Tel Aviv; Paris; and Daegu City, South Korea — by 2022, with the hardware cost per robo-taxi coming in at around $10,000 to $15,000 per vehicle. (By 2025, Mobileye is aiming to bring the cost of a self-driving system below $5,000.) In the interim, the plan is to deploy dozens of vehicles with unrestricted travel between destinations in Israel ahead of a rollout across the country, potentially alongside the launch of a China-based service in partnership with Beijing Public Transport Corporation and Beijing Beytai.
Beyond Mobileye, a number of companies are developing autonomous vehicle systems that lean heavily (or exclusively) on cameras for routing. There’s Wayve, a U.K.-based startup that trains self-driving models solely in simulation, and Comma.ai, which sells an aftermarket self-driving kit to retrofit existing cars. And then there’s Tesla, which recently released a preview of an active guidance system that navigates a car from a highway on-ramp to off-ramp, including interchanges and lane changes. Like Mobileye, Tesla leverages a fleet of hundreds of thousands of sensor-equipped cars to collect data for analysis, which it uses to train, develop, and refine algorithms in the cloud that are then sent via over-the-air updates to those vehicles.
maritimemanual · 4 years
Automatic Identification System
Modern-day ships are equipped with modern equipment and safety systems to ensure a smooth and successful voyage. These techniques have emerged and evolved from years of study, research and experimentation. One such system is the Automatic Identification System, or AIS. This article discusses the meaning of the term, the purpose and uses of the system, how it works, its limitations and the role it plays on present-day vessels.
What is an Automatic Identification System?
An automatic identification system is a tracking system used by vessel traffic services. It displays vessels lying in proximity to one another in order to help avoid collisions. It uses transponders fitted on ships and exchanges navigational information via radio signals.
What does AIS do?
As the term suggests, this system is automated and autonomous. Since 2004, the International Maritime Organization has required AIS transponders to be carried by all commercial vessels of 300 gross tonnage and above, as well as all passenger vessels.
AIS transponders incorporate a global positioning system receiver, which collects the vessel’s position and movement details. These details are automatically broadcast via a transmitter at regular intervals and received by base stations or other vessels within range. After processing, the signals can be displayed on a computer or depicted on a chartplotter. Some automatic identification systems also make use of satellites for the same purpose.
Why is AIS used?
Automatic identification systems are quite often used as a surveillance tool, especially in coastal areas where authorities use them to monitor the movement of ships through the area. The channels can also be used by shore-side authorities to convey information on tides and weather conditions to incoming ships. Other information and instructions, such as those needed to monitor ships carrying hazardous cargo or fishing vessels, can also be provided with the help of AIS. It can also be used to find vessels in the proximity of an incident for SAR operations.
Another important use of the automatic identification system is collision avoidance, as part of providing navigational safety to ships. Having an AIS is good for a ship as it improves situational awareness and decision-making, making the crew aware of other vessels that may be in the vicinity. However, it is not advisable to rely solely on this system for collision avoidance.
How does AIS work?
In the beginning, automatic identification systems were used terrestrially: the signal was sent from the boat to the land. The system had a very small range of just about 20 miles, and the signal grew weaker as the boat moved further away.
Later on, satellite systems were adopted. Ships could now send a signal to satellites, which relayed it back to land. This enables the authorities on land to know exactly where the ship is and at what time.
The automatic identification system consists of a transmitter, receivers, and a marine electronic communications link to the sensor systems and display.
A GPS receiver is normally used to derive the vessel's coordinates, that is, its position and timing.
One channel is sufficient for communication. Still, most stations transmit and receive on more than one radio channel so as to avoid interference and communication losses with ships.
Every 6 minutes, static information like name and call sign, length and beam, type of ship, location of the antenna, MMSI number, IMO number, etc. is transmitted. Dynamic information transmitted includes the position of the ship, the position time stamp and the course over ground. Other information transmitted at regular intervals of time includes the draught of the ship, destination, the type of cargo being transported, route plan, etc.
The main purpose behind fitting AIS on ships is the identification of vessels and navigational marks. In regions like the Panama Canal, the automatic identification system is also used to provide information about rain, wind and other weather conditions.
Though it was initially started as a method of controlling marine traffic and avoiding collisions, it is an undeniable fact that its capabilities are applicable to a wider spectrum. Nowadays these systems are used by port authorities, ship owners, managers, builders, ship agents, brokers, researchers, data analysts, charterers, hotel, and tour operators, search and rescue teams, operators, pilots, harbormasters, flag administrators, classification societies, passengers, sailors, vessel crew, coast guards, border patrol, marine enthusiasts, radio amateurs, environmental protection agents, etc.
AIS Types
Class A
Class A AIS is mandated for all SOLAS vessels of 300 gross tonnage and upwards engaged on international voyages, vessels of 500 gross tonnage and upwards not engaged on international voyages, and passenger ships irrespective of size.
Class B
Class B AIS is intended for non-SOLAS vessels. These include domestic commercial vessels such as pleasure crafts. AIS Class B units provide less functionality than Class A units but they operate and communicate with AIS Class A units and other types of AIS units.
What Information is transmitted by an AIS?
An AIS can send 2 types of information – Dynamic or Static information.
1) Dynamic information refers to the data transmitted every 2-10 seconds depending on the vessel's speed and course while underway, or every 6 minutes if a vessel with a Class A transponder is at anchor.
MMSI number: unique identification number with nine digits
AIS Navigational Status: “0=under way using engine”, “1=at anchor”, “2=not under command”, “5=moored”, “8=under way sailing”
Rate of Turn: right or left (0 – 720 degrees per minute)
Speed over Ground: from 0 to 102 knots (189 km/h) with 0.1-knot resolution (0.19 km/h)
Position: latitude & longitude (up to 0.0001-minute accuracy)
Course over Ground: relative to true north to 0.1°
Heading: 0 – 359 degrees
UTC seconds: the seconds field of the UTC time when the vessel's data was generated
2) Static & voyage-related information is provided by the subject vessel's crew and is transmitted every 6 minutes regardless of the vessel's movement status.
International Maritime Organisation number (IMO number): a unique code associated with the hull which remains the same throughout a ship’s lifetime even if it changes owners.
Call Sign: international radio call sign assigned by the vessel’s country’s licensing authorities
Name: Name of the vessel. It can have a maximum of 20 characters
Type: It consists of two digits. The first digit indicates the vessel's category, and the second digit indicates the type of cargo.
Dimensions: indicates the size of the vessel to the nearest meter
Location of the positioning system’s antenna on board the vessel: distance from bow, stern, port and starboard sides in meters
Type of positioning system: GPS, DGPS, Loran-C, GLONASS, etc.
Draught: 0.1 – 25.5 meters
Destination: up to 20 characters
ETA (Estimated Time of Arrival) at destination: UTC month/date hours:minutes
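As a sketch of how the two report types above differ in cadence, here is a toy model in Python. The exact interval schedule in the real standard is more fine-grained; the speed cut-offs below are illustrative assumptions, and the dataclass simply mirrors the dynamic fields listed above:

```python
from dataclasses import dataclass

# Illustrative speed cut-offs only: the real reporting schedule is more
# detailed, but it has the shape described above -- faster vessels report
# more often, anchored vessels report on a slow cadence.
def report_interval_seconds(speed_knots: float, at_anchor: bool) -> int:
    if at_anchor:
        return 6 * 60   # every 6 minutes while at anchor
    if speed_knots > 23.0:
        return 2        # fast vessel: every 2 seconds
    if speed_knots > 14.0:
        return 6
    return 10           # slow vessel: every 10 seconds

@dataclass
class DynamicReport:
    """Mirrors the dynamic fields listed above."""
    mmsi: int           # unique nine-digit identifier
    nav_status: int     # 0 = under way using engine, 1 = at anchor, ...
    rot_deg_min: float  # rate of turn, degrees per minute
    sog_knots: float    # speed over ground, 0.1-knot resolution
    lat: float          # latitude, ~0.0001-minute accuracy
    lon: float          # longitude
    cog_deg: float      # course over ground, relative to true north
    heading_deg: int    # 0-359

print(report_interval_seconds(20.0, at_anchor=False))  # 6
print(report_interval_seconds(0.0, at_anchor=True))    # 360
```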
Where is AIS Used?
AIS is used by a diverse set of professionals, such as:
Port Authorities and Harbor Masters
Tug Operators and Pilots
Coast Guard and Border Patrol
Ship Owners and Builders
Ship Agents, Brokers, and Charterers
Researchers and Data Analysts
Naval Search and Rescue teams
Flag administrators and Classification Societies
Vessel crews and family members
Hotels and Tour operators
Passengers or recreational sailors
Environmental Protection agents
Maritime Enthusiasts and Radio-amateurs
Limitations of AIS
AIS has proved to be an efficient and important part of modern ships. However, much like any other implementation in the world, AIS has its limitations.
First of all, the accuracy of the information received depends on the information transmitted, and is only as good as the latter. Even today, not all ships are equipped with an automatic identification system. One should also be aware that an automatic identification system can be switched off by any vessel at any time, in which case the ship stops broadcasting and other vessels lose the information they would otherwise receive. The accuracy of the positions (the latitude and longitude received from GPS) is also not certain. There is only so much that can be done about precision.
Finally, one must understand that the automatic identification system is one of the best tools ever introduced in the marine industry. It forms an important part of the navigational equipment on board modern vessels. As useful as it is for navigation and accident prevention, it should be remembered that the system is present to assist the crew of the ship; it cannot completely replace human beings.
  from WordPress https://www.maritimemanual.com/automatic-identification-system-ais/
entergamingxp · 4 years
Like animals, video game AI is stupidly intelligent • Eurogamer.net
We tend to think of real and virtual spaces as being worlds apart, so why is it that I can’t stop seeing an octopus arm in 2008’s spectacular Dead Space ‘Drag Tentacle,’ the alien appendage of developmental hell? Beyond surface xeno-weirdness, it’s what clever animation and the neural marvel have in common that has me interested. Since an octopus arm is infinitely flexible, it faces a unique challenge. How do you move an arm to set x,y,z coordinates and a certain orientation if it has infinite degrees of freedom in which to do it? How might the octopus arm tackle its virtual cousin’s task of going to grab the player when they could be anywhere in the room – free even to move as the animation is first playing?
You simplify. The former Dead Space developer and current senior core engineer at Sledgehammer Games, Michael Davies, took me through the likely digital solution. The drag tentacle is rigged with an animation skeleton – bones to twist and contort it so animation/code can bend it into different shapes. A trigger box is placed across the full width of the level Isaac needs to be grabbed from, with a pre-canned animation designed specifically to animate to the centre of it. Finally, to line up the animation to the player, inverse kinematic calculations are done on the last handful of tentacle bones to attach the tentacle pincer bone to the ankle bone of Isaac, while also blending the animation to look natural.
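The final step Davies describes, inverse kinematics on the last handful of bones, can be illustrated with the simplest case: a two-bone limb solved analytically with the law of cosines. This is a generic two-bone IK sketch, not Dead Space's actual implementation:

```python
import math

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def two_bone_ik(l1, l2, tx, ty):
    """Angles (shoulder, elbow) in radians so a two-segment limb of
    lengths l1 and l2, rooted at the origin, reaches target (tx, ty).
    'elbow' is the relative bend: segment two points at shoulder + elbow."""
    # Clamp the target distance to the reachable annulus of the limb.
    d = clamp(math.hypot(tx, ty), abs(l1 - l2), l1 + l2)
    # Interior elbow angle from the law of cosines, converted to a bend.
    elbow = math.pi - math.acos(clamp((l1**2 + l2**2 - d**2) / (2 * l1 * l2), -1.0, 1.0))
    # Shoulder = direction to the target minus the triangle's offset angle.
    offset = math.acos(clamp((l1**2 + d**2 - l2**2) / (2 * l1 * d), -1.0, 1.0))
    return math.atan2(ty, tx) - offset, elbow

s, e = two_bone_ik(1.0, 1.0, 2.0, 0.0)  # target at full stretch
print(round(s, 6), round(e, 6))  # 0.0 0.0
```

A production rig solves this per-frame for the last bones while blending toward the pre-canned pose, which is what lines the pincer up with Isaac's ankle.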
The octopus, conversely, constricts any of its flexible arms’ infinite degrees of freedom to three. Two degrees (x and y) in the direction of the arm and one degree (the speed) in the predictable unravelling of the arm. Unbelievably, to simplify fetching, the octopus turns an infinite limb into a human-like virtual joint by propagating neural activity concurrently from its ‘wrist’ (at the object) and central brain and forming an ‘elbow’ where they meet – i.e. exactly where it needs to be for the action.
So what’s the ‘exciting’ parallel? The octopus arm is doing the natural equivalent of a pre-canned animation – outsourcing the collapse of degrees of freedom to its body so that it doesn’t have to rely on a central brain that wouldn’t be able to cope. Similarly, the drag tentacle leans on an animated skeleton to collapse degrees of freedom like a human arm, but also pre-canned animation à la octopus, and only directly tracks the player and blends its animation at the last moment – outsourcing to the ‘body’ of the animation and ‘behaviour’ of the scripting.
And it’s not just these appendage cousins. A virtual world having to be encoded and nature having to encode and navigate the real world are both fundamentally about simplification.
Nature having to deal with physics is such a drag.
The only Go champion to ever score a win against Google’s Deepmind ‘AlphaGo’ AI recently retired, declaring AI an entity that simply ‘can’t be defeated’. And yet, according to researchers, even the most powerful neural networks share the intelligence of a honeybee at most. How do you disentangle these statements? I have to bet that if any one contingent of the population is most skeptical of the potential dangers of AI, it’s people who play video games. We’re hobbyist AI crushers. No article on how humanity was only placed on this Earth to create God’s true image in AI would ever convince us otherwise. After all, how can gamers be expected to shake in the presence of these neural network nitwits when we’ve been veritably cosseted by the virtual equivalent of ants with guns?
Yet pouring water on the prospects of AI now or at any point seems foolhardy. 2011 only just saw the deep learning breakthroughs that have now seen translation and visual/audio recognition advanced to and beyond human capability. Such advancement may manifest day-to-day for the moment as little more than AI-generated auto replies to my girlfriend helpfully offering ‘no’ or ‘nope’ in response to whether I’m having a good day, but the application to research is endless. They can rediscover the laws of physics, reveal what Shakespeare did and didn’t write, and predict when you’re going to die. As a subset of machine learning, deep learning neural networks can be trained on data sets until they reduce their errors enough that they can accurately generalise what they’ve learnt for novel data. With layers of ‘nodes’ loosely analogous to our own neurons, these algorithms are powerful, if fundamentally not ‘intelligent’ tools. They employ an incredible level of pattern matching in place of semantic understanding (although the field isn’t without efforts contrariwise). It’s controversial for some to call them AI at all.
Yet, in the gaming space we’ve had the dramatic developments of the fight for human supremacy seeming definitively lost on the battlefield of the game of Go (the more mathematical alternative to chess) in 2015 to Deepmind’s reinforcement learning program, AlphaGo, with technically mindless but ‘creative’ flourishes. And then the salt being veritably rubbed in when Deepmind’s AlphaStar became a Starcraft II grandmaster capable of eviscerating 99.8 per cent of players – as I was writing this feature no less. No AI article will ever be up to date. Again, this isn’t necessarily as impressive as the hype it generates. If anything, it’s AI’s blind proficiency that makes it potentially dangerous. It doesn’t have to be conscious or even particularly intelligent to be better than you at discrete tasks or effectively hurt you through weapon systems and social media and search algorithm filter bubbles. As with the atomic breakthroughs, never bet against science’s potential to better and/or ruin your life.
I think what bothers me most about AI discussions is some of the absentees. Whilst we’re doing our best to expunge this planet of all other company, we aren’t quite alone in a room with AI yet. AI is often referred to as if it’s our one chance at meeting our equal outside of ourselves and yet evolutionary theory has shown us that the entirety of the animal kingdom is in fact one big family tree. Within animals is everything of what we are. The building blocks of higher cognition are preserved within living exhibits all around us – nothing just materialised within humans suddenly and apropos of nothing. And what of lowly video game AI? Are there no benefits to its approaches?
Defining intelligence is plagued by the inherent bias of it being us doing the defining. As Jerome Pesenti, VP of artificial intelligence at Facebook, says of DeepMind and OpenAI’s efforts towards an artificial general intelligence (AGI), it’s ‘disingenuous’ to think of the endpoint of an AGI as being human intelligence because human intelligence ‘isn’t very general.’ We’re enamoured with it as a differentiating factor, but by many a measure we can be beaten out by those we dismiss. If intelligence is defined by information processing and how quickly we can process high volumes of information, pigeons rule the roost. Learning speed? Human infants are bested by bees, pigeons, rats and rabbits. How exactly do you make a test ecologically neutral between an infant and a bee? Most often you can’t – except perhaps in visual tests.
The overwhelming point, however, is that you can’t define humanity’s unique traits as intelligent and grind the rest of the animal kingdom into dust. All behaviour that has survived must be to some degree de facto intelligent if they all effectively achieve their objectives like an Alpha algorithm. Just as popular culture’s depiction of linear evolution is a falsity (we are all equally evolved on this earth apart from *insert politician’s name here*), so is it often true for intelligence. Intelligence is therefore only a rough approximation of the complexity of a natural/virtual agent’s objectives that are fulfilled, but evolutionary solutions in behaviour and bodies are also intelligent. Even if we define intelligence on the basis of how much prior information is needed for the acquisition of a new skill, to what extent is it that our bodies and behaviours factor in? We are all incredibly versed in what pushing a human cognitively looks like – do we know fully what that means for most other animals on the planet? Small brains often just have to find alternative means to achieve their objectives; often by leaning instead on the animal’s environment or body for a solution. Think of the perfect circle formed by a scorpion or spider’s legs. Detecting vibrations spatially is simplified to a matter of which leg the vibrations reach first. No complex computation necessary.
Demis Hassabis (left), CEO and founder of DeepMind, and Lee Sedol (right), Go champion, the first of what will be many to capitulate to the AI juggernaut. Pictured middle – the biggest post-it note in human history. Pictured elsewhere – a post-it note conference. (Image credit: Deepmind)
The key to any investigation of intelligence is that the approach is bottom-up as opposed to top-down. This applies to animal studies. Instead of seeking human-level speech or numeracy in dolphins or tool-use in bees and proving next to nothing, we can ground experiments in analysing how dolphins actually communicate or count in their lives. We can work out what a reasonable test of novel skill acquisition looks like for their toolset. We can look to animal cognition and try to find the evolutionary roots of such abilities in an ecologically valid way.
It applies to AI. The development of deep or reinforcement learning algorithms that accept no top-down imposed-rules, but autonomously train themselves by means of networks that resemble our own neurons inherently has great potential insight to lend on how our brains work. The only issue we now see is that the gaps in data that AI combs off of Google or even scientific data are effectively society wide top-down provisions that invariably bias AI against minorities and women. It’s just another way ‘reference man’ might further plague society. Then we have bioinspired robots, that by dint of being situated in an ecologically-valid environment and taking biological inspiration for their bodies, can actually shed light on how and why animal behaviour, and by extension our own, works.
Enter video game AI – a curious thing. By not exercising the muscles of the very latest AI research it’s left in a place that is frankly fascinating. Evidently fascinating to a large contingent of gamers too, if excellent resources like the YouTube channel, AI and Games is anything to go by. Like the exhibits that buzz around us, developers often leverage much the same strategies evolution has employed to solve intelligence in small-brained animals. However, the term I’ll be borrowing for the closest description of video game AI agents was coined by Valentino Braitenberg in his ‘Vehicles, Experiments in Synthetic Psychology’ way back in 1984. Braitenberg machines are simple thought experiment vehicles, a car for example, with simple reactive sensors responding perhaps to light driving the wheels. Given only the barest of increases in connection complexity between the wheels and sensors, a complex environment and several stimuli present and the vehicle will appear, for all intents and purposes, an intelligent, thinking being. Its behaviour is motivated, goal-oriented, dynamic and adaptive to changes. Yet, underneath it all, there is no processing, no cognitive processes in memory or reasoning – nothing. This, at least in part describes what a small-brained insect running on just innate behaviour is. Given enough additional connections, could it even describe humanity with a consciousness cherry on top? Additionally, Heider and Simmel with their 1944 experiment wherein subjects were shown an animation of simple geometric shape tragedy demonstrated that as social beings our natural inclination is to irrationally project agency, social behaviours and intentions onto things that don’t share our capacities. The problem of AI for gaming is already half-solved by our social intelligence alone. Combined, Braitenberg vehicle-emulating AI systems and our overly-emotional brains produce an irresistible illusion.
Braitenberg vehicles – a crossed connection changes an aversion to the sun to love at first sight. No brain in sight. (Image credit: Alexander Rybalov)
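The caption's crossed-versus-uncrossed wiring can be simulated in a few lines. This is a minimal sketch with invented constants: each wheel's speed is simply a light-sensor reading, yet the vehicle appears to 'fear' or 'love' the light:

```python
import math

def step(x, y, heading, light, crossed, dt=0.1, axle=0.5):
    """One tick of a two-sensor, two-wheel Braitenberg vehicle. Each
    wheel speed is just a light-sensor reading: no memory, no planning."""
    # Two light sensors mounted at the front-left and front-right.
    readings = []
    for side in (+0.5, -0.5):  # left sensor, then right sensor
        sx = x + math.cos(heading + side)
        sy = y + math.sin(heading + side)
        readings.append(1.0 / (1.0 + (sx - light[0]) ** 2 + (sy - light[1]) ** 2))
    left_s, right_s = readings
    # Uncrossed: left sensor drives the left wheel, so the vehicle veers
    # AWAY from the light ('fear'). Crossing the two wires flips it to
    # approach ('aggression') -- the apparent motive is pure wiring.
    left_w, right_w = (right_s, left_s) if crossed else (left_s, right_s)
    heading += (right_w - left_w) / axle * dt
    speed = (left_w + right_w) / 2.0
    return x + speed * math.cos(heading) * dt, y + speed * math.sin(heading) * dt, heading

# The 'aggressive' (crossed) vehicle closes in on a light at (2, 2);
# the 'fearful' (uncrossed) one turns and flees it.
pose = (0.0, 0.0, 0.0)
for _ in range(150):
    pose = step(*pose, light=(2.0, 2.0), crossed=True)
```

Run the same loop with `crossed=False` and the vehicle ends up further from the light, not closer, with no change to anything but one wire swap.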
What I’ve grown to love about games is that as engine-run simulations they are often forced to solve scientific problems bottom-up and in bioinspired ways. Whatever complexity it’s given, video game AI has massive advantages over AlphaGo/Star and their ilk purely by having bodies/animations that are situated in a virtual environment. ‘Situatedness’ refers to the fact that as agents we only ever exist in the context of an environment and a body. Thus, no natural complex behaviour has ever emerged without a body interacting with an environment – a brain-body-environment interaction. Being situated in an environment with other conspecific (same species) agents demanded complex social behavior that drove both brain evolution and intelligence in primates and birds (the social intelligence hypothesis). Indeed, Anil Seth argues that consciousness itself is the result of self-sustaining, surviving bodies more than intelligence. Far from the popular culture concern that your phone will one day gain consciousness, it’s hard to conceive that a complex, yet formless, lonely and thriving AI could thus ever share our suffering.
It’s easy to be negative about the lack of progress in gaming’s AI systems, but a whistlestop tour, whilst showing some impressively long delays between theory and implementation, also has a handful of significant advances. Finite state machine (FSM) systems were first based on research from 1955, way before they saw their popular implementation in everything from Pac-Man to the more complex Half-Life 1. It wasn’t until 2005 that Goal-Oriented Action Planning (GOAP) successfully introduced agent planning to FSM game AI in F.E.A.R. Even so, the underlying research sees its origins in the 70s! More recently we’ve seen everything from the enhanced hierarchical finite state machines (HFSMs) in Wolfenstein New Order and DOOM 2016, and the more vigorous advances in AI behavioural trees in Halo 2 and 3 and hierarchical task networks (HTNs) in Killzone 3 and Horizon Zero Dawn. We still see the oldies persist too with FSMs used for the Arkham games and GOAP used for Deus Ex Human Revolution. There’s no one size fits all method. Whilst the lack of mass migration to any one system seems astonishing, the selection and modification of AI systems on a game-by-game basis to fit the niche of a game’s requirements is one of the medium’s greatest strengths.
Every game can be a new opportunity for ingenious new solutions that befits their design – even if they’re not making use of the latest HTN planner. See DOOM 2016 and its seemingly outdated use of HFSMs with all their drawbacks, but also its ingenious inversion of the AI cover system of RAGE. Instead of seeking out cover, it seeks out an open position near cover to maximise visibility to the player and enhance the combat flow. It’s certainly not traditional intelligence. The usual survival pressures have been flipped on their head to create agents that have a deathwish. It’s not an advancement in computation, it’s just clever behaviour emerging from simple rules to fit the niche of the game. Is video game AI not quite like our animal and algorithmic friends in being entirely fit for purpose in this way? Intelligently stupid?
Full of RAGE – its AI that is.
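As a minimal illustration of the finite state machine pattern the preceding paragraphs trace from Pac-Man onwards (states plus hand-authored transitions fired by sensed events), here is a toy sketch; all state and event names are invented, not taken from any shipped game:

```python
# A toy FSM: explicit (state, event) -> state transitions.
TRANSITIONS = {
    ("patrol", "see_player"):  "attack",
    ("attack", "low_health"):  "flee",
    ("attack", "lost_player"): "patrol",
    ("flee",   "healed"):      "patrol",
}

def next_state(state: str, event: str) -> str:
    # Unknown (state, event) pairs leave the agent where it is -- the
    # rigidity that GOAP and behaviour trees later relaxed.
    return TRANSITIONS.get((state, event), state)

state = "patrol"
for event in ["see_player", "low_health", "healed"]:
    state = next_state(state, event)
print(state)  # patrol
```

Hierarchical variants (HFSMs) nest machines inside states; planners like GOAP drop the hand-authored transition table entirely and search for an action sequence instead.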
Whilst gaming is appropriated as the next problem to have neural networks solve whilst in the shoes human players would usually don, the appetite for creating robust virtual agents with the sharp edge of progress isn’t there yet. The question is, would we want it? It’s tempting to just riff on the past and suggest we might see 2011’s deep learning advancements become mainstream in 2040, but what we’d be contemplating is games utterly transformed from the purpose-led design of today to something both outrageously resource-intensive and wholly unpredictable. If game designers currently use what’s tantamount to intelligent design to create agents – carving their behaviour to a specific game title’s niche – perhaps deep learning algorithms would be more like guided evolution. In many ways the hand of the designer and artistry is lost. Would it even yield gaming improvements?
Conceivably. Consider the recent AI Dungeon 2 text adventure game that uses OpenAI’s deep learning language models to respond to any input. Whilst it’s not perfect, there’s something joyous about one of the most infamously inflexible gaming genres becoming infinitely so. There are also the endless possibilities of deep learning generated animations and environments – even whole games. Online toxicity could be a thing of the past. As for behaviour, whilst they probably wouldn’t yield intelligently stupid solutions like those employed by our deathwish demons, what if deep learning techniques were kept in their own lane? Having discrete AI systems that could benefit from deep learning like experimental reactive dialogue could conserve elsewhere the creativity of the video game AI of today. Otherwise, games might have to experience a complete paradigm shift – evolve with their agents – to even make it work. Can you also ensure it’s not just the preserve of those with resources?
Simple vehicles or not, there are some beautiful, humbling parallels in how we as human beings and game AI fundamentally work. The American psychologist, JJ Gibson, who pioneered ecological psychology argued that far from amazing world-processors, our brains contain ‘matched filters,’ neurons that are tuned into the frequencies of and resonate with our natural environment by directly extracting information from the world. Essentially, much like an Apple product (given we are nature’s product), we thus have all the proprietary ports for which our environment can readily slot into. In possession of the most complex object in the known universe or not, we simply don’t have the processing power spare to generate an entire internal model of reality. However, we can recognise the parts that we evolved to by being afforded them dynamically. These include filtering for textures, geometry, facial recognition and reading, movement, biological motion (natural-looking motion), folk physics (our innate notions of the rules of nature) – just to mention a few. All animals have their own. But, expert sensory filterers though we are, it’s worth pointing out that perception is the result of the arrow in the opposite direction too (brain outwards). The below optical illusion will have you perceive A as darker than B because your brain predicts a shadow from the object. Connect them with your fingers and you’ll find they’re the exact same shade. What easier way to filter reality than to project expectations – hallucinate it?
‘Perceptions are controlled hallucinations. Reality is the hallucinations we agree on.’ -Anil Seth. (Image credit: Edward H. Adelson)
So where the goal and object-oriented lives of a soldier from 2005’s F.E.A.R. might have looked a thousand miles away from our own, so too are they constructed by designers to resonate selectively with their environment. Quite pleasingly to me, F.E.A.R.s agents have short but frequent plans with an average of less than three actions that they plan to execute. Pac-Man ghosts have only single action plans! This is compared to a potential thirty actions in an HTN. Whilst I understand that these hierarchies of strings of tasks allow for faster, more varied, more adaptive agents, there’s a purity to the ultra reactive F.E.A.R. In a small way, it feels more in keeping with our imperfect reactive brains, whilst in both cases being due to our different kinds of memory limitations. The eye-mind hypothesis suggests that for us there is no appreciable lag between what we visually fixate and process. You acquire info when you need it and minimise any use of memory. When you’re walking, you fixate ahead of you to deliver the motor information for the required thrust to your grounded foot. VR tests too can demonstrate our ‘just in time’ computation. When colour/size categorising and moving objects onto a conveyor belt, subjects suffer from change blindness with dramatic object size and colour changes being entirely missed when subjects have already moved on to and fixated on the belt. Animals, AI and humans – we are all reactive agents.
Consider the sad existence of a F.E.A.R. soldier. He’s nothing but an algorithmically moving animation blind to everything in the world but pathfinding navmesh nodes, ‘SmartObjects’ and the player – but then who are we to talk? It’s amazing to think how visually and cognitively blind we are outside our ecological resonances to everything in the world. Unlike a simple FSM approach, he’s a flexible Braitenberg vehicle whose sensors dynamically switch him between behaviours without any set transitions. Interestingly, what he’s sensing doesn’t comprise light or heat or even his fellow squadmates, but the very abstract, heuristic ‘level of threat.’ This gives us the illusion of some self-preservation as he moves to cover, dodge rolls when aimed at or blind fires when shot at. In truth, there’s nothing behind the eyes – only sensors driving wheels, or, in this case, flexible behaviours. You could conceive of the not-so-easy switch to an AI that senses more natural stimuli and the addition of some deep learning stand-ins for memory and reasoning ability, but it’s amazing to think of the complexity gap between those propositions and yet how effective the former solution is. It simply writes itself that the exact same AI system is shared by twenty or so rats in the world at any one time – mistakenly left on in perpetuity in the background to hog resources as you play. The soldiers are really no more complex than the rats they step over.
Nothing to fear but a heuristic for fear itself.
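The abstract 'level of threat' sensing described above can be sketched as a single scalar heuristic with behaviour thresholds. All weights, events and behaviour names here are invented for illustration, not taken from F.E.A.R.:

```python
# Sensed game events folded into one scalar; thresholds on that scalar
# switch flexible behaviours -- sensors driving wheels, no set transitions.
def threat_level(aimed_at: bool, shot_at: bool, allies_down: int) -> float:
    return 0.4 * aimed_at + 0.5 * shot_at + 0.2 * min(allies_down, 3)

def choose_behaviour(threat: float) -> str:
    if threat >= 0.8:
        return "blind_fire_from_cover"
    if threat >= 0.4:
        return "dodge_roll"
    return "advance"

t = threat_level(aimed_at=True, shot_at=True, allies_down=1)
print(choose_behaviour(t))  # blind_fire_from_cover
```

The illusion of self-preservation comes entirely from the mapping, not from any model of danger behind the eyes.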
Algorithms that efficiently handle pathfinding aren’t unlike an ant’s toolkit, only with less complexity. For a set of coordinates, the A* algorithm optimises a path to a goal by splitting the difference between a path formed from chaining the lowest cost path states and a long-term considering path based on the lowest heuristic values (e.g. how far any next path state is from the goal). Given a living being can’t be handed coordinates directly from ‘God’, they too have to rely on simple, robust and some rule-of-thumb heuristic solutions to cope. Ants use an in-built pedometer and in-built compass using the sun as a cue to take a direct path back to their nest after foraging (path integration) whilst also continuously learning simple views (based on shapes) of the world that they can tend towards replicating when rewalking a familiar route. Travelling further from the nest increases uncertainty, so it’s thought that, much like the pathfinding algorithms, they use heuristic values to optimally weight their methods. This negates the need for actual ‘certainty calculations’ in a small-brained animal. However, even on an entirely familiar route an ant has used for its entire life, if you were to pick them up when they are nest-inbound with food and move them to where they’d usually be nest-outbound without food they’d freeze like an Alien from Aliens: Colonial Marines. With all their robustness otherwise, why? Whilst goal-oriented like a F.E.A.R. soldier, they are more rigidly compartmentalised in how they approach their goals. If you teleported a bot holding the flag in any game of capture the flag across the map, it wouldn’t make a blind bit of difference. In this case, extraordinarily, ants almost have the same kind of inflexibility of earlier game AI with FSM-like inflexible transitions between their behaviours. They simply can’t access memories for the outward route whilst holding food.
Whilst having to do so much less, the simple flexibility of game AI appears more intelligent. With the benefit of spatial cells in humans, we’re unlikely to become so navigationally unstuck, but our experience of conditional, prompted memories isn’t so unlike the stranded ants.
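The split described above between exact path cost walked so far and a cheap rule-of-thumb estimate of the cost remaining is A*'s f = g + h. A minimal grid sketch (not taken from any shipped game):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid of 0 (free) / 1 (blocked) cells.
    f = g + h: g is the exact cost walked so far, h a cheap heuristic
    (Manhattan distance) estimating the cost left to the goal."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]   # (f, g, cell) min-heap
    best_g = {start: 0}
    while frontier:
        f, g, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return g                    # optimal cost with admissible h
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return None                         # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # 6
```

Game implementations run the same search over navmesh nodes rather than grid cells, and tune how heavily h is weighted against g, which is the 'splitting the difference' in the paragraph above.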
Perhaps the biggest spoiler of some semblance of individual agency in most games is the existence of some necessary coordinator/director/overlord AI systems. These exist behind the scenes whispering secrets agent-wide when ideally they could all be managing on their own reactively. It’s the illusionary theatre performance nature of video game AI. By far the most impressive trick in F.E.A.R. is how in spite of being completely blind to one another, a soldier committing to an action (e.g. flanking) has the ‘squad coordinator’ feed the dialogue to another soldier to suggest the first do the very action it’s already committed to! The coordinator goes over the heads of the individual agents to use them for a simple but effective illusion of communication. Horizon Zero Dawn has ‘the collective,’ which manages the distribution of the machine fauna in their herds. Managing a lot of agents as a well-designed, but loose collective just makes sense. What’s interesting is how these systems act in place of the senses of the agents. The director of Alien Isolation particularly comes to mind in how it drip feeds information including the location of the player to the Alien AI in place of a completely grounded agent. It’s like a Braitenberg vehicle receiving signals from an omnipotent system to enhance its compliance to expected behaviours. The behaviour is emerging from the ether in these situations and not the environment. How might deep learning approach these visitations from ‘God?’ Indirect communication in a collective isn’t entirely divorced from reality, however. Bee foragers assess the state of their hive by how long they have to wait to have their pollen unloaded by storer bees. It’s a gross inefficiency – they could just store it themselves. Without any conscious decisions being made, a force outside of themselves in the dynamics of their collective organisation is allowing them to communicate information by independent discovery.
The behaviour is intelligent so that the bees don’t have to be.
Behaviour is intelligent. Whether it's produced by small brains or big brains is in many ways inconsequential. Deciding the next step in video game AI might be a matter of control. There's a fascinating Quake 3 Arena story about a gamer leaving the neural-network-based bot AI to fight it out for four years, only to return to a ceasefire. Fascinating for several reasons. One, it's completely false. Two, people sufficiently believed, from their contact with AI as it stands, that it could be true. Three, it's an interesting but completely adverse game outcome that you could easily conceive bottom-up AI delivering. Why would you want it? But, and I can make this case passionately, in many ways video game AI of today is not inferior or less true to life than neural networks. It embodies essential truths of nature and intelligence: that nature tends towards solutions that simplify; that small brains, or indeed brainless vehicles, can see intelligent behaviour emerge from the situatedness of their bodies interacting in environments they resonate with.
Perhaps the real future is presentational. The Last of Us 2 is adopting elaborate systems that further any illusion of intelligence by giving inter-agent recognised names and personalities to its vehicle husks. Whether we ever stop virtually burning them with magnifying glasses or not, let’s hear it for the ants of our favourite pastime. Intelligently stupid as they are, they might be as real as it gets.
from EnterGamingXP https://entergamingxp.com/2020/01/like-animals-video-game-ai-is-stupidly-intelligent-%e2%80%a2-eurogamer-net/?utm_source=rss&utm_medium=rss&utm_campaign=like-animals-video-game-ai-is-stupidly-intelligent-%25e2%2580%25a2-eurogamer-net
vincentvelour · 5 years
What it takes to expand into the UK: Pete Doyle interview
10/2/2019
        By John Bostwick, Head of Content Management
The UK’s coming exit from the European Union has made headlines for years now, and for good reason. Even apart from the EU, the UK is a powerful economy — the fifth largest in the world — and businesses operating there want to know how Brexit will affect them and what actions they need to take to prepare.
While Vistra has experts and resources to help organisations do just that — such as our action assessment checklist — this particular interview is not about Brexit. We believe that whatever the ultimate terms of Brexit, the UK will remain a critical destination for multinationals due to its large market, robust financial services sector, well-educated workforce, strong legal system and other factors.
In this interview, Vistra’s Pete Doyle talks about his experiences helping companies expand into and operate in the UK. His lessons — learned over three decades — will remain relevant long after Brexit. He covers areas such as evolving corporate expansion strategies, the top risks of operating in the UK, and why complying with value-added tax requirements has an outsized importance for multinationals.
Pete is managing director of Vistra's Reading office in the UK. Early in his career, he trained as a chartered accountant with EY in their audit and business advisory group. In 1993, he cofounded Nortons, a firm of chartered accountants, which was acquired by Vistra in 2016.
  What types of clients do you typically work with in the UK?
A large part of my team's client base is US-HQ technology and life-sciences businesses. Start-up, VC-backed and recently IPO'd businesses are typical, although we also support elements of larger public companies. We also work with other international clients. A fair number of our tech clients are based in Israel, for example, especially those in IT security. We have a UK client base too, typically fast-growing owner-managed businesses, again with a bias towards technology but also engineering, consulting, retail and more. Interestingly, our largest current client is a division of an FTSE 250 PLC, a services business. We are helping them with their international expansion, and all the services we provide for them are outside of the UK.
What are some of the top reasons your clients expand into the UK?
International clients consistently tell us the UK is an important market in its own right, but it’s also seen as a stepping stone into other territories. The financial services sector in London in particular is a large market for tech and IT security, big data and so on. NHS [the UK’s National Health Service] is a draw for life-sciences businesses.
What markets do you typically see companies originating from?
The United States is still important for us, but through other Vistra offices we work with businesses based in Hong Kong, Singapore, Germany, Eastern Europe and elsewhere. We also have a good base of Israel-HQ clients.
Do you find that companies from certain countries more easily adapt to operating in the UK, either for regulatory or cultural reasons?
Interesting question. U.S. companies don’t always adapt as quickly as they think they’re going to. I think they often assume that operating in the UK is more or less the same as operating in the U.S., and they’re not prepared for the differences, which can be significant. HR and employment regulations are very different for example, as are our data privacy laws. That said, the UK is generally a straightforward market for a new foreign business, and our job is to help them navigate any complexities that do come up.
Do your clients tend to send expats or hire locals when expanding to the UK? What are some of the benefits and challenges of each option?
I would say most of our clients set up by hiring locals — or at least people already based in the UK — typically sales people who establish the local market. They generally hire folks with a track record in their sector, and the UK has a good talent pool for this, including individuals with experience working with U.S. tech companies in the UK and the rest of Europe.
  Occasionally, our clients do send expats to start off, but often there’s a specific reason for that. Sometimes detailed product knowledge is needed which is only available in-house, or perhaps a founder or another valued employee wants to relocate to the UK for personal and professional reasons, so the relocation is a win-win scenario.
  Obviously, there are immigration and tax challenges when you send an expat to establish your operations. Any organisation should know the full extent of those challenges and their related, often significant, costs before proceeding.
  That said, the success or failure of the choice will largely be determined by commercial factors. For example: Does the individual know the market? Keep in mind that what worked in the U.S. may not work in the UK or Europe. We often see clients that successfully use (say) a direct sales strategy in the U.S. But when operating in the UK and Europe, they may need to work with channel partners to be successful.
  How have perceptions of the UK as an international-expansion destination changed over the course of your career?
It’s hard to say, that’s probably a better question for our clients! I can say that when we started doing a lot of work with U.S.-based clients in the late 1990s and early 2000s, many of them saw the UK as a springboard to almost anywhere else in Europe and maybe even anywhere else in the world.
  They came to us to help them set up fairly large teams in the UK, and those folks then travelled to other markets initially. They set up a traditional UK subsidiary and hired a European managing director, along with sales, support and marketing teams. These folks sat in an expensive office which was often fairly autonomous from the U.S. HQ. That’s quite a big investment. Then they did the same in other key markets such as France, Germany and so on, and these groups may have been led by the UK hub.
  Now I would say U.S. clients set up far more nimble operations, often virtual teams with small numbers of employees set up in different locations supported by far less infrastructure. I would say that today smaller companies go international, or maybe the same businesses do so at an earlier stage in the lifecycle. Technology allows them to operate with small teams, folks working from home or from small virtual offices.
  Is the UK still the first target country for U.S. clients? Not always. Smaller clients now react to an opportunity to attract talent or follow a customer overseas without a great deal of strategic thinking, as there is a lot less heavy lifting involved, so they can set up anywhere when they see an opportunity.
  How do you see trends changing in the next five to 10 years?
You like asking me these difficult questions! Given no one really knows what will happen post-Brexit I think I am in good company in not wanting to comment even beyond a few months. But I would guess the businesses I work with will have access to disruptive technology in the near future. As a result, they will be able to do a lot more overseas administration themselves.
  However the old saying about rubbish in/rubbish out makes me think that the need for international support from specialists like Vistra could actually increase. Our clients will be able to use technology — including, say, AI — to do ever more complex things with less and less of their own infrastructure and perhaps far earlier in their lifecycle. Businesses will be global from day one. Those businesses will need our help — not least because the authorities looking after compliance, regulation and taxation will also have access to more and more data and will be able to follow our clients on their journeys more readily and challenge what they are doing.
  Have you found there’s one compliance area or consideration that businesses new to the UK often overlook during the expansion planning stages?
My clients are largely from outside the EU and usually do not think about VAT [value-added tax], and that can be a problem for them. Cross-border VAT depends on a lot of small details to work. Whether you’re dealing with goods or services or technology IP, you must have a very close look at the planned transactions.
  Often our clients don’t really understand their own supply chain or T&Cs [terms and conditions]. You must understand, for example: Who is the importer of goods, you or your distributor? Who are the principals and who are the agents in a digital revenue share agreement? What are you selling online, a B2C service or a B2B platform for others to sell their service on? I have seen cases where a client’s website says one thing and the real supply chain is different. A no-deal Brexit will make this even more interesting!
  Finally, some of our tech clients are doing things that are new and disruptive, and so their business activities do not fall into existing models that tax authorities understand or accept. This kind of scenario is difficult if it means the business has to decide for themselves whether a particular transaction needs to carry VAT or not and whether it should be UK VAT or VAT from another jurisdiction.
  Speaking of local experts, what are some of the other top challenges companies face when they expand to the UK?
I guess understanding that local regulations may be very different from home-country regulations, though that’s true when expanding into any new country. Complying with local regulations in the UK is as I mentioned pretty straightforward when compared to many other countries.
  Of course, those are all compliance challenges. There are other considerations, such as understanding local cultural norms in the workplace and appreciating commercial differences. For example, how do you adjust your home-country selling strategies to account for the realities of the UK market?
  If you had to give some general advice to a company considering expanding to the UK, what would you say?
When dealing with a prospect, I first ask them what they’re planning and what they want to achieve. Then I’ll address some subjects — such as complying with certain UK tax, employment and immigration regulations — that I’m pretty sure they’ll need to know about. Typically, this initial conversation needs to cover many commercial considerations as well, and then we home in on related risk areas.
  These kinds of discussions go wrong when you just answer the questions you’re asked. The smarter clients challenge us by asking, in effect, to tell them what they don’t know. That’s the right approach to take, and we’ll make it happen no matter what the situation. It’s our job to tell you what you need to know to succeed.
  What’s the most memorable thing someone has said to you about conducting business in the UK?
There are lots. I was asked once whether Paris was in London. It sounds funny, but if you put yourself in that client’s shoes it isn’t so strange. I can’t remember, but let’s say it was someone from a U.S. tech company whose market to date was hugely U.S.-focused, and that controller or CFO had never been to Europe. Why should they know where Paris is? The U.S. is big enough for anyone!
  Let's face it, for years I thought Washington, D.C. was near Seattle (which is in Washington State, about 2500 miles away!) — so no different. In fact, this is an important lesson for international work: Do not assume people know stuff, ever. Their whole perspective and starting point is likely to be very different from your own.
  In your career, have you noticed a single great truth about how to operate successfully in the UK or simply how to operate successfully in another country?
From my world, get payroll right and get expert help with anything like VAT, GST, sales taxes etc.
  Get payroll wrong and it is almost impossible to put right without a lot of pain for all. You’ll upset a team of employees — never a good idea — and payroll tax penalties can often be substantial.
VAT and indirect taxes look deceptively simple but can go horribly wrong on a small detail, especially with cross-border transactions and in more complex areas such as financial services or real estate. The taxes involved are a percentage of revenue rather than profit (and so get very large very quickly) and are transactional; they're happening every day. So they are hard to fix later, unlike, say, income taxes that can perhaps be put right with an annual filing. An accountant's answer for a single great truth: VAT! I don't get out much!
roboticplanetco · 6 years
Top 6 future innovations in technology and science that will change the universe
Technology and its evolution change the world through our day-to-day lives. Science and technology are connected with each other, and together they produce the latest technical gadgets and machinery.
We may forget everyday essentials, but we rarely forget our gadgets and their features; they genuinely make our modern lifestyle more advanced, productive, and secure.
Modern science is reshaping the whole world, and technology is advancing alongside it.
Computer science has a heavy influence on technology, and Artificial Intelligence and Machine Learning are its most important terms in the tech industry.
Nowadays Artificial Intelligence is technology's most preferred and widely used counterpart. Many gadgets and technical products are built on AI, and some on Machine Learning, because AI combines computing systems with ideas from natural intelligence.
In computer science, an "intelligent agent" is any device that perceives its environment and takes actions that maximize its chance of fulfilling its goals. Machine Learning and Artificial Intelligence are central parts of computer science, concerned with how computers improve their performance by learning from data; machine learning is a technique used to devise complex models and algorithms that lend themselves to prediction.
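The perceive-act loop behind that definition of an intelligent agent can be sketched in a few lines of Python. The thermostat world here is purely illustrative, not any specific product:

```python
class Thermostat:
    """A minimal intelligent agent: perceive the environment, then act to
    maximize the chance of reaching its goal temperature."""
    def __init__(self, goal):
        self.goal = goal

    def act(self, perceived_temp):
        if perceived_temp < self.goal:
            return "heat"
        if perceived_temp > self.goal:
            return "cool"
        return "idle"


def simulate(agent, temp, steps):
    """Tiny environment: each action nudges the temperature by one degree."""
    for _ in range(steps):
        action = agent.act(temp)
        temp += {"heat": 1, "cool": -1, "idle": 0}[action]
    return temp


agent = Thermostat(goal=21)
print(simulate(agent, temp=17, steps=10))  # 21
```

However simple, this is the whole pattern: sense, compare against a goal, act; everything from game bots to self-driving cars elaborates on this loop.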
The Innovation of Science:-
The innovation of science is like a driverless journey that advances at every moment, so it is time to embrace the scientific revolution.
In its original sense, "science" simply means knowledge, rather than a specific word for the pursuit of knowledge. Science is the kind of knowledge that people can connect with one another and share with the universe.
People practised this way of knowing naturally long before recorded history, and it led to the development of complex abstract thought, including techniques as practical as making toxic plants edible.
New scientific knowledge rarely produces a vast, sudden change in our understanding of the world.
The popular image of science constantly overturning itself may owe more to media overexposure than to how research actually proceeds; theories that demand a complete rethink of accepted concepts are rare exceptions.
Scientific knowledge is gained by the gradual combination of information from different experiments by many researchers across the different branches of science.
Theories vary in the extent to which they have been tested and verified, as well as in their acceptance within the scientific community.
The Innovation of Technology:-
A technological innovation system is a concept developed within the scientific study of how nature and technology change together.
It can be defined as a dynamic network of agents interacting in a particular corporate or economic area, under a specific institutional infrastructure, involved in the generation, diffusion, and utilization of technology.
Despite our fairly short presence on the planet, humans have worked their way into an era many now call the Anthropocene.
We are undoubtedly contributing to many of the global problems now facing our species. But our unequalled creativity is a double-edged blade.
Facial recognition is a flagship feature for the tech giants; technology companies build it into their best products and deliver capabilities people would never have predicted.
A facial recognition system is technology capable of detecting or verifying a person from the digital world, such as an image or a video frame.
While facial recognition began as a desktop computer application, modern systems have many components; the mainstream approach matches a detected face against facial images stored in a database.
Today almost every technical platform uses facial recognition, from mobile apps to PC software, and even robots can perform it. It is generally used for security purposes and is often contrasted with other biometrics such as fingerprint or iris identification.
It has also become a popular tool in commerce and marketing.
One benefit of facial recognition is automated time tracking: there is no need for a human monitor working 24 hours a day. Where round-the-clock, fast, accurate work is impractical for a person, a facial recognition system can deliver fast identification and accurate reports, including overtime tracking.
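At its core, the database-matching step such systems perform often reduces to comparing feature vectors ("embeddings") of faces. The sketch below illustrates that idea only; the four-dimensional vectors and names are made up, and real systems use embeddings with hundreds of dimensions produced by a neural network:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face embeddings: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy enrolled database of face embeddings.
database = {
    "alice": [0.9, 0.1, 0.3, 0.4],
    "bob":   [0.1, 0.8, 0.7, 0.2],
}

def identify(probe, threshold=0.95):
    """Return the closest enrolled identity, or 'unknown' below the threshold."""
    name, best = max(database.items(),
                     key=lambda kv: cosine_similarity(probe, kv[1]))
    return name if cosine_similarity(probe, best) >= threshold else "unknown"

print(identify([0.88, 0.12, 0.31, 0.38]))  # alice
```

The threshold is the security dial: raising it reduces false matches at the cost of rejecting more genuine ones.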
Future Technology:-
Upcoming technology will have an ever greater impact on our lifestyle, and that impact shows no sign of limit.
Technology is the biggest force enhancing the world today; a person can hardly imagine life without its tools. It shapes our day-to-day lifestyle, improves our ideas and capabilities, and keeps surfacing thinking we never expected.
Here are some technology areas that can be expected to produce innovations in the future.
1.  Self-Driving Cars will improve safety:-
The self-driving car is a perennial trend in technology. An autonomous car senses its surroundings and cruises without a human driver, merging a variety of technologies such as GPS, laser ranging, odometry, and computer vision.
Advanced control systems use these inputs to identify navigation paths, obstacles, and relevant signage.
The expected benefits of autonomous cars go beyond what humans can deliver: improved safety, expanded mobility, greater customer satisfaction, and reduced crime.
Moreover, autonomous cars are anticipated to improve traffic flow, enhance mobility for children and people with limited movement, and significantly reduce the need for parking space.
Against the many potential advantages of increased vehicle automation stand unresolved problems around safety, security, and the technology itself. Customers also have concerns about the safety of a driverless car and its control process, and about loss of privacy and security through hacking or terrorism.
The industry's other major issue is driving-related jobs in the road transport enterprise. At the same time, travel and tourism are expected to become less costly and less time-consuming, and many such issues will arise as autonomous products spread.
But the first computer-controlled automobiles to operate freely on the road will have to deal with many of these safety and security issues.
2. Artificial Intelligence will become more powerful than human beings:-
AI sits at the core of this engineering. Machines can react the same way as a human being only if they have plentiful information about the world.
Artificial Intelligence must have access to knowledge of many kinds of objects and products and, crucially, the relations between them, which is where engineering knowledge comes in.
But building common sense, reasoning, and problem-solving power into machines remains a hard task for AI.
On another side, Artificial Intelligence is a branch of computer science: the mathematical analysis of machine learning algorithms and their performance is a well-defined part of theoretical computer science.
Machine Learning is the core part of AI, covering the capacity to recognize patterns in streams of input without explicit supervision or prior knowledge.
Machine perception deals with the ability to use sensory inputs to infer the state of the world, while computer vision is the capacity to analyze visual inputs for problems such as facial recognition and object and motion recognition.
The robotics industry is another significant area related to Artificial Intelligence. Current and upcoming robots are organized around AI, and machine learning intelligence is changing how the technology evolves.
Scientists say robots will perform some work better and more accurately than humans, for example object manipulation and navigation, with sub-problems of localization, mapping, and motion planning.
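A minimal illustration of "recognizing patterns in a stream of input" is the classic perceptron update rule, shown below on a toy logical-AND pattern (purely didactic; no real-world dataset or library is implied):

```python
def train_perceptron(samples, epochs=10, lr=0.1):
    """Learn weights for a linearly separable stream of (features, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # 0 when correct; +/-1 when wrong
            w[0] += lr * err * x1       # nudge weights toward the right answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Toy pattern: label is 1 only when both inputs are 'on' (logical AND).
stream = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(stream)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in stream])
# [0, 0, 0, 1]
```

Nothing about AND was programmed in; the rule discovered it from repeated examples, which is the whole premise the section describes, scaled down to four data points.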
3.  3D Printing:-
Over the next 10 years, 3D printing is well placed to begin to revolutionize building processes and procedures worldwide.
3D printing covers various processes in which material is joined or hardened under computer control to create a three-dimensional object, for example by fusing liquid molecules or powder grains together.
It is used for rapid prototyping and additive manufacturing; items of almost any shape or geometry can be created from the digital data of a 3D model.
The technology of 3D printing has come a long way in recent years and is breaking into real-world applications. 3D printing is no longer hypothetical: it is practical in a wide range of cases, and that is great news for 3D printing originators and the rest of the real world alike.
3D printing promises to create parts and materials by simply printing the shape in a single, self-contained process.
Moreover, 3D printing can produce not only complex parts but organically combined mixtures of parts as a single finished product, and in many cases it is becoming capable of printing more intricately designed parts than could ever be made by conventional molding processes.
The main principle behind every 3D printing technology is this: if you can describe enough 2D cross-sectional layers, built on top of one another in the proper sequence, then you can create any 3D shape.
Hence, a 3D model is simply the sequential layering of 2D cross-sectional prototypes of the intended 3D design.
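That layer-stacking principle can be illustrated by slicing a simple shape into horizontal cross-sections, the way slicer software prepares a model for the printer. A sphere is used here purely for illustration:

```python
import math

def slice_sphere(radius, layer_height):
    """Return the circle radius of each horizontal cross-section, bottom to
    top - the 2D layers that, stacked in order, rebuild the 3D sphere."""
    layers = []
    z = -radius
    while z <= radius:
        # Circle radius at height z, from r^2 = R^2 - z^2.
        layers.append(math.sqrt(max(radius ** 2 - z ** 2, 0.0)))
        z += layer_height
    return layers

# A sphere of radius 2 printed in 1-unit layers.
print([round(r, 2) for r in slice_sphere(2, 1)])
# [0.0, 1.73, 2.0, 1.73, 0.0]
```

Shrinking `layer_height` yields more, thinner layers and a smoother print, which is exactly the resolution trade-off real slicers expose.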
4.  Cloud Computing:-
Cloud computing is an information technology paradigm that enables ubiquitous access to shared pools of configurable system resources and higher-level services that can be rapidly provisioned with minimal management effort, frequently over the internet.
Cloud computing relies on the sharing of resources, where economies of scale are available, much like a public utility.
Through the cloud we use software applications, artificial intelligence, data, and processing power over the internet: private e-mail through Gmail, sharing photos and videos on social media, and much more.
Moreover, this is just the beginning. In the modern era, the vast majority of people already keep their personal and professional documents on the internet.
Third-party cloud providers also let companies focus on their core businesses instead of spending resources on computer infrastructure and maintenance.
Cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents assert that it lets ventures get their applications running faster, with improved flexibility and less maintenance, and lets IT teams more swiftly adjust resources to meet fluctuating and unpredictable demand.
Amazon launched its cloud computing services in 2006, and the availability of high-capacity networks, low-cost computers and storage devices, together with the widespread adoption of hardware virtualization, service-oriented architecture, and autonomic and utility computing, has led to growth in the cloud computing platform.
5.  Nanotechnology:-
Nanotechnology is the manipulation of matter on an atomic, molecular, and supramolecular scale. The earliest description of nanotechnology referred to the specific goal of precisely manipulating atoms and molecules to fabricate macroscale products. Today the term more commonly follows the National Nanotechnology Initiative, which defines nanotechnology as the manipulation of matter with at least one dimension sized from 1 to 100 nanometers.
The plural "nanotechnologies", like "nanoscale technologies", refers to the broad range of research and applications whose common trait is size.
Because of that variety of potential applications (including industrial and military), governments have invested billions of dollars in nanotechnology research.
Nanotechnology spans a broad area of science, with many contributing fields: molecular engineering, surface science, chemistry, molecular biology, microfabrication, energy storage, semiconductor physics, and more.
According to scientists, future nanotechnology may be able to create many new materials and devices with a huge range of applications. Today we can already track our fitness with powerful gadgets, and there are even prototype electronic tattoos that can sense our vital signs; the next step is attaching or inserting tiny sensors inside our bodies.
Such sensors could capture information with less hassle for the patient and help doctors individualize treatment.
Nanosensors everywhere depend on newly invented nanomaterials and new production techniques for making smaller devices that are more capable and more energy efficient.
New generations of sensors with fine-grained features can be printed in quantity, with the flexibility of plastic, at low cost.
That opens the possibility of placing sensors at many points over critical infrastructure to continuously check that everything is running correctly.
Bridges, aircraft, and even nuclear power plants would benefit from this process.
Moreover, science and technology will improve our technical skill and experience, leading to a lifestyle that is more peaceful, impactful, and useful. Modern science and technology really do create things people never expected and could not have imagined on their own.
In this generation AI plays an impactful role for human beings. Artificial Intelligence is technology's great innovation, but there is a worry that human intelligence will decline day by day in response.
If Machine Learning one day runs the world, what becomes of human intelligence? If every assignment is done by robots and machine learning products, what is the value of human beings? On that day, people could find themselves useless in their own world.
6. Future of Robot:-
In this twenty-first century, technology has been changing around the world, moving from the laboratory and research institute into the home. We are now at a point of technological transformation: the robotic revolution.
But what is the robotic revolution, and what will it actually deliver?
A "robot" can be defined as a machine that can carry out a complex series of actions automatically. This definition largely matches established images of robots, such as those in science-fiction films.
But there is no need for a robot to be humanoid, to have limbs, or to walk or talk. We can use a much wider explanation of what a robot is, one where the boundaries between smart materials, artificial intelligence, biology, and embodiment blur.
The real question about robots is how they will affect the human race over the next twenty to forty years.
Modern robots may supervise and repair the natural environment, nanorobots may track and kill cancer, and robots may lead the way in colonization and keep us company against loneliness in old age.
Alternatively, a robot can be decomposed into electrical, mechanical, and computational domains; we might think of a robot as having three core elements: a body, a brain, and a stomach.
In this scenario the artificial organism is the pattern we are trying to build, and researchers draw on smart materials, artificial intelligence, synthetic biology, and adaptive control.
Robots can also be categorized by the domains they exploit: mechanical, electrical, optical, chemical, thermal, and so on.
Smart materials can add new capabilities to robotics, specifically for artificial organisms.
In other words, you might want a robotic device that can be implanted in a person but crumbles to nothing once it has done its job.
Using biodegradable, biocompatible polymers, smart materials can largely match the physical properties of biological tissue and serve as elements of soft robotic technologies.
Smart materials can be separated into three groups: pneumatic and hydraulic soft systems, smart actuator and sensor materials, and stiffness-changing elements.
Just as the influence of the internet was impossible to forecast, we cannot imagine where the future of robots will take us.
Magnificent virtual reality? Replacement bodies for humans now seem possible. As we walk toward the robotic revolution, we will look back at this decade as the one where robotics really took off and laid the foundation of our future world.
What do you think about Science and Technology Innovation, if you would like to get more such kind of updates Comments and Share my contents?
Main source: Roboticplanet
sporadicwinnersong · 6 years
AI, a boon or bane? That is the question
Visitors at the 2nd World Intelligence Congress, in Tianjin, China. Humans trusting AI systems will lead to their greater acceptance
Regardless of the cheers and fears that it brings, we are a long way from it being truly safe and acceptable
There is tremendous excitement in the air about artificial intelligence (AI) across the world today. Everyone seems to be talking about AI—from leaders of the industry, scientific thinkers, heads of state, to the grandmother next door. The subject has been widely welcomed as the salvation of humankind, and, at the same time, has invited criticism for bringing in the potential doom of humans.
Taking away jobs
According to a 2017 Gartner report, AI could lead to a staggering 1.8 million job losses, particularly in the manufacturing sector. In the long run, however, it will create more than 2 million jobs in other sectors, including health care and education, which will continue to be skill-oriented. This reaffirms the fact that the impact of AI on employment will vary, depending on the industry.
As is the case with any disruptive technology, AI will change the landscape of employment, along with the human resource and skill requirements across sectors. Only time will tell which jobs are likely to be replaced and what new skills will become valuable in the future. No one could have imagined the role of a BPO executive 25 years ago. The job market is always evolving and diversifying, generating new skill needs across industries that may not necessarily be automated.
Therefore, both the advantages of AI and the risks associated with it are a long way from being realised. We are several years, if not decades, away from achieving true human-level AI. There are many pressing and real problems that need to be addressed so that AI can truly be functional. Let us take a look at the challenges that the AI community needs to address before we can claim the arrival of an all-pervasive AI.
Adversarial attacks 
AI-enabled systems have achieved super-human performance in several domains, such as recognising voice commands, identifying objects in images, and even diagnosing medical conditions. In all of these settings, the intent often has been to ensure that AI succeeds. No one would want their home assistant to make mistakes when it is being addressed. Yet one can easily confuse it by speaking in a different accent. There have been instances of AI agents mistaking a dog for a ball due to the presence of some noise in the images. What is worrying about such instances is that the noise is almost imperceptible to humans, who have no difficulty in identifying the objects correctly.
This is not a phenomenon limited to the current state-of-the-art methods. This is an issue that the AI community has faced for more than a decade. With the potential for wide deployment of AI, addressing such adversarial attacks has taken on a sense of urgency. If we are going to use an AI-enabled authentication system, it is quite likely that someone could launch an attack on it. The academic community is making good progress towards addressing it.
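The mechanism behind such attacks can be illustrated with a minimal sketch, not taken from the article: a toy linear classifier attacked with the fast gradient sign method (FGSM). All weights and inputs here are made-up values, and the perturbation size is exaggerated for the toy model; the point is only that a small, structured nudge to the input can flip the prediction.

```python
import numpy as np

def predict(w, b, x):
    # Toy linear classifier: class 1 if w·x + b > 0, else class 0.
    return int(np.dot(w, x) + b > 0)

def fgsm_perturb(w, x, epsilon):
    # For a linear model, the loss gradient w.r.t. the input is
    # proportional to the weights, so FGSM simply steps each input
    # coordinate by epsilon against the sign of the weights.
    return x - epsilon * np.sign(w)

w = np.array([0.5, -0.3, 0.8])   # hypothetical learned weights
b = 0.1
x = np.array([1.0, 0.2, 0.5])    # a "clean" input, classified as 1

x_adv = fgsm_perturb(w, x, epsilon=0.6)

print(predict(w, b, x), predict(w, b, x_adv))  # → 1 0
```

Each coordinate of `x_adv` differs from `x` by at most 0.6, yet the decision flips; in high-dimensional image classifiers the same effect occurs with per-pixel changes far too small for a human to notice.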
Biases in AI
Another worrying aspect of an AI system is that it tends to reflect socio-cultural biases in the data that it is presented with. There have been reports of AI-enabled systems considering only white men as the typical demographic for CEOs of companies; or using the ethnicity of a person as an indicator of criminal intent. It is unfortunate that the bias in the data is a manifestation of the biases embedded in human behaviour. 
With the ever-increasing use of AI in decision-making processes in critical areas, it is crucial to put in place a mechanism for such biases to be avoided and for decisions to be fair. This would, however, mean asking the AI to be judged at an ethical standard higher than what humans are known to hold themselves to. 
One way of scientifically approaching this question is to ensure that all decisions are fair with respect to some protected attribute apparent in human behaviour, such as gender. This means that all things being equal, the gender does not play a role in the decision made using AI. The goal is to ensure this, even when there is a significant bias in the data.
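One simple, scientific way to probe the criterion described above is a demographic-parity check: do approval rates differ across values of the protected attribute? The sketch below is hypothetical and not from the article; the decision data and group labels are invented for illustration.

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """decisions: list of (protected_group, approved) pairs.
    Returns the gap between the highest and lowest per-group
    approval rates, plus the rates themselves."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit log of an AI system's decisions.
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

gap, rates = demographic_parity_gap(decisions)
print(rates, gap)  # group A approved 75%, group B 25% → gap 0.5
```

A gap this large (0.5) would be a red flag that the protected attribute is, directly or indirectly, playing a role in the decisions, even when the stated goal is that it should not.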
Explainability in AI
Another important feature that will impact the acceptability of AI is the ability of systems using AI to be able to explain outcomes. In a future where AI systems are going to take on an increasingly large proportion of the repetitive decision-making, people would not like to be told, for instance, that they are being denied a loan because a black box called the AI so decided. We would ideally like the AI to offer explanations that are meaningful to a layperson. In this case, for example, an explanation such as, ‘Since your annual income is below the required level, and you already have significant outstanding loans, we are unable to sanction you the loan’ would be more meaningful.
The current trend in AI is to use complex models where such clear and comprehensible explanations are not easy to obtain. While we have made some progress in explaining perception, i.e., ‘Why did the AI identify that as a dog?’, we are a long way from producing such explainable behaviour for other tasks.
The other side of being able to explain outcomes is being able to trust the outcome. Consistent explanations go a long way in engendering trust in AI systems that will then lead to greater acceptability. 
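The loan example above can be made concrete with a minimal rule-based explainer. This sketch is hypothetical, not from the article: the thresholds (`min_income`, `max_outstanding`) are invented, and real explainable-AI systems for complex models use far more sophisticated techniques; but it shows the shape of a decision paired with a layperson-readable reason.

```python
def explain_loan_decision(annual_income, outstanding_loans,
                          min_income=500_000, max_outstanding=200_000):
    """Return (approved, explanation). Each failed rule contributes
    a plain-English reason to the explanation."""
    reasons = []
    if annual_income < min_income:
        reasons.append("your annual income is below the required level")
    if outstanding_loans > max_outstanding:
        reasons.append("you already have significant outstanding loans")
    if reasons:
        return False, ("Since " + " and ".join(reasons)
                       + ", we are unable to sanction you the loan.")
    return True, "Your loan has been approved."

ok, msg = explain_loan_decision(annual_income=300_000,
                                outstanding_loans=250_000)
print(ok)
print(msg)
```

Here the applicant fails both rules, so the system returns `False` along with exactly the kind of two-clause explanation the article describes, rather than an unexplained rejection from a black box.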
AI as a menace
Is AI a menace to humankind? AI is a tool, a technology, which is likely to be used by humans for whatever purposes they want. Is nuclear technology a threat to humankind? Yes, without a shred of doubt. It is, however, also beneficial if used responsibly. Likewise, AI is a disruptive technology that should be handled carefully. We have a long way to go before we can ensure truly safe and acceptable AI. It is possible that people will rush to deploy AI systems to get a first-mover advantage, and that can result in catastrophic failures. But the notion of a fully autonomous AI system evolving to take over the world belongs in the realm of science fiction, as of today.
#MohnishAhluwaliaNotes
via Blogger https://ift.tt/2up37Yp