#ray solomonoff
rhidvvan · 2 years
Artificial Intelligence: A Guide for Thinking Humans
This write-up centers on the book "Artificial Intelligence: A Guide for Thinking Humans" by the American scientist Professor Melanie Mitchell, the Davis Professor of Complexity at the Santa Fe Institute in New Mexico. Her major work has been in the areas of analogical reasoning, complex systems, genetic algorithms, cellular automata and visual recognition.
She received her PhD in 1990 from the University of Michigan under Douglas Hofstadter and John Holland, for which she developed the Copycat cognitive architecture. She is the author of "Analogy-Making as Perception", essentially a book about Copycat. She has also critiqued Stephen Wolfram's "A New Kind of Science" and showed that genetic algorithms could find better solutions to the majority problem for one-dimensional cellular automata. She is the author of "An Introduction to Genetic Algorithms", a widely known introductory book published by MIT Press in 1996, and of "Complexity: A Guided Tour" (Oxford University Press, 2009), which won the 2010 Phi Beta Kappa Science Book Award, as well as "Artificial Intelligence: A Guide for Thinking Humans". https://en.wikipedia.org/wiki/Melanie_Mitchell
History of Artificial Intelligence:
Going back in time to the 1950s, when the perceptron was developed (1958 being the year it was made public, press release and all), it was one of many attempts to automate intelligence by drawing inspiration from the brain. It was developed by a psychologist named Frank Rosenblatt, who tried to simulate, in a very idealised way, how neurons work and how a simple network of neurons might go about recognising some perceptual input. Although the perceptron initially seemed promising, it was quickly proved that perceptrons could not be trained to recognise many classes of patterns. This caused the field of neural network research to stagnate for many years, before it was recognised that a feedforward neural network with two or more layers (also called a multilayer perceptron) had greater processing power than a perceptron with one layer (also called a single-layer perceptron).
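As a sketch of the idea (not Rosenblatt's original implementation, which was partly electromechanical), a single-layer perceptron can learn a linearly separable function such as AND with the simple error-correction rule, while no setting of its weights can represent XOR, the classic example of a pattern class it cannot be trained to recognise:

```python
def predict(w, b, x):
    """Threshold unit: fire (1) if the weighted sum of the inputs exceeds 0."""
    return 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Rosenblatt's error-correction rule: nudge weights toward each mistake."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            err = target - predict(w, b, x)
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# AND is linearly separable, so a single-layer perceptron learns it.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

# XOR is not linearly separable: no weights classify all four points,
# which is exactly the limitation that stalled the field.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
```

A multilayer perceptron with a hidden layer escapes this limit, which is why the field revived once training such networks became practical.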
The famous conference among artificial intelligence pioneers, held at Dartmouth College in 1956, gathered 11 attendees: Marvin Minsky, Julian Bigelow, D. M. MacKay, Ray Solomonoff, John Holland, John McCarthy, Claude Shannon, Allen Newell, Herbert Simon, Oliver Selfridge and Nathaniel Rochester. Their goal was to clarify and develop a thinking machine, and to make progress in areas like computer vision, natural language understanding, solving mathematical problems, driving cars and most things humans do.
Definition of Artificial intelligence:
AI is a branch of computer science that involves many computational methods: ways of getting machines to do things that we consider to be intelligent. However, pinning this down is hard because the term "intelligence" doesn't have a fixed definition; it keeps changing over time as people keep evolving. That being said, can an AI system behave like a human in all circumstances, as a general-purpose agent rather than for one specified task? Can such a system use common sense?
In specific areas such as speech recognition, AI has been successful, but on closer inspection the systems only perform that specific task; they can't do anything else, and they don't in any sense understand the text they transcribe.
From this point of view, in terms of general AI, I would say we are still far from achieving the goal of machines thinking like humans.
Branches/Evolution of AI:
The perceptron was an early effort at machine learning, but as things evolved people came up with the idea that machines should not just focus on learning; instead, humans (experts in various fields) should try to program in the knowledge and rules that programs would use to operate. This brought about:
Expert systems, which gained popularity in the 1970s and 1980s. Programmers would interview experts, try to extract their knowledge and rules, and then program these into computers. The approach wasn't as successful as imagined, because much of the knowledge experts use is hard to extract from them: they don't apply most of it consciously.
Then, from the 1980s into the 1990s, came the approach of statistical learning, which unites machine learning with inference from data. It turned out to be more successful than the expert-system approach and is still in use to this day.
Deep learning is one of the most effective tools out there; you can think of the perceptron as what neural networks originally evolved from. However, these neural networks perform very narrowly defined tasks, and when there are minor changes in the data the system produces false results, leading us back to AI lacking intelligence in terms of generalisation.
Solutions for AI in terms of understanding/intelligence
Analogy can be used to help AI systems develop an understanding of what they are doing and how they should be doing it. The ability to see abstract similarities is fundamental to being intelligent and understanding the real world.
There is also the idea of speculation, where we can predict what is likely to happen, either consciously or unconsciously, because of similar experiences; we learn from what has or hasn't happened to us.
AI in terms of different approaches
A more recent breakthrough in AI is in the field of protein folding and drug design, where researchers were able to use an AI system to look at a protein's sequence of amino acids and predict how it would fold up in three dimensions. Looking at AI from this point of view, one can see that if the system had worked only with a human-driven approach rather than trying things differently, such achievements would have been impossible. So there are different approaches to AI, each correct for different applications of AI.
michaelgogins · 7 years
Naturalism's Fundamental Equivocation
Before his death Ray Solomonoff wrote a good summary of algorithmic probability theory, his invention (along with Kolmogorov and Chaitin and others), which provides basic assumptions and methods for current research in machine learning and artificial intelligence. Solomonoff clearly understood a number of issues that confuse students in this field.
This article is quite useful because, in addition to providing a clear exposition of basic principles, it exposes a fundamental equivocation that is common in this field and that undercuts some of its essential claims.
Dear reader, please understand that here I am taking Solomonoff's algorithmic probability theory and theory of universal inference as adequate stand-ins for what I call "naturalism" and "mechanism," i.e. the philosophy that Nature is essentially a machine and that human beings and their minds and spirits are, therefore, also essentially machines or parts of a machine. Among other things this philosophy is able to junk metaphysics in favor of science, which is adequate to study algorithmic probability theory, quantum information theory, universal inference, and the like without any further assistance or interference from philosophy.
Universal inference will not work if Nature is not of finite complexity, of course. If Nature is of infinite complexity, then universal inference will not converge and thus cannot actually exist. The question whether Nature is of finite or infinite complexity is a scientific question, and should not be assumed, although most scientists do in fact assume it. This is a risky business and depends on things like the reality or not of chaotic inflation, the correct interpretation of quantum mechanics, and so on. Note that this is not the same as whether Nature is incomputable (Solomonoff assumes, I believe correctly, that Nature is indeed incomputable; but that does not harm his program, for as he notes, it is actually essential to the function of universal inference). Yet that is not what I wish to discuss here.
Solomonoff posits that universal inference depends upon what he calls "subjectivity," by which he means an objectively unmotivated choice of a reference machine for universal inference. He compares this with the learning, based upon subjective experience, performed by human infants. The infant has a set of innate or "pre-programmed" references and expectations, which the infant then modifies in the light of its own subjective experience.
The equivocation which I expose here is as follows. “Objectively unmotivated” means “random,” otherwise there is a discernible cause or objective motivation. But “random” in algorithmic information theory means “of irreducible (or even infinite) computational complexity.”
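To pin down that second sense of "random" (a sketch in the standard notation of algorithmic information theory, not necessarily Solomonoff's own): the universal prior assigns each string x the total weight of the programs that produce it on a chosen reference machine U, and a string counts as random when it is incompressible:

\[
M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}, \qquad x \text{ is algorithmically random iff } K(x) \ge |x| - c .
\]

The invariance theorem says that switching reference machines changes K(x) only by an additive constant, which is why the choice of U itself has no objective motivation within the theory.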
You can’t have your cake and eat it, too. You can’t have finite complexity to enable universal inference, while at the same time you have infinite complexity to enable objectively unmotivated choice.
But perhaps you can have finite complexity to enable universal inference, while at the same you have irreducible yet finite complexity to enable objectively unmotivated choice. Logically, that is possible.
So that then is the bottom line of naturalism or mechanism: we are machines who can never understand our own limits. That is the opposite of what I would call faith: we are not machines because our limits do not actually exist.
brianlichtig · 7 years
15 data and analytics trends that will dominate 2017
Along with social, mobile and cloud, analytics and associated data technologies have earned a place as one of the core disruptors of the digital age. 2016 saw big data technologies increasingly leveraged to power business intelligence. Here's what 2017 holds in store for the data and analytics space.
John Schroeder, executive chairman and founder of MapR Technologies, predicts the following six trends will dominate data and analytics in 2017:
Artificial intelligence (AI) is back in vogue.
In the 1960s, Ray Solomonoff laid the foundations of a mathematical theory of AI, introducing universal Bayesian methods for inductive inference and prediction. In 1980 the First National Conference of the American Association for Artificial Intelligence (AAAI) was held at Stanford and marked the application of theories in software. AI is now back in mainstream discussions and the umbrella buzzword for machine intelligence, machine learning, neural networks and cognitive computing, Schroeder says. Why is AI a rejuvenated trend? Schroeder points to the three Vs often used to define big data: Velocity, Variety and Volume.
Platforms that can process the three Vs with modern and traditional processing models that scale horizontally provide 10-20X cost efficiency over traditional platforms, he says. Google has documented how simple algorithms executed frequently against large datasets yield better results than other approaches using smaller sets. Schroeder says we'll see the highest value from applying AI to high volume repetitive tasks where consistency is more effective than gaining human intuitive oversight at the expense of human error and cost.
Big data for governance or competitive advantage. In 2017, the governance vs. data-value tug of war will be front and center, Schroeder says. Enterprises have a wealth of information about their customers and partners. Leading organizations will manage their data between regulated and non-regulated use cases. Regulated use cases require data governance, data quality and data lineage, so a regulatory body can report and track data through all transformations back to the originating source. Schroeder says this is mandatory and necessary, but limiting for non-regulatory use cases like customer 360 or offer serving, where higher cardinality, real-time data and a mix of structured and unstructured data yield more effective results.
Companies focus on business-driven applications to keep data lakes from becoming swamps. In 2017 organizations will shift from the "build it and they will come" data lake approach to a business-driven data approach, Schroeder says. Today's world requires analytics and operational capabilities to address customers, process claims and interface to devices in real time at an individual level. For example, any ecommerce site must provide individualized recommendations and price checks in real time. Healthcare organizations must process valid claims and block fraudulent claims by combining analytics with operational systems. Media companies are now personalizing content served through set-top boxes. Auto manufacturers and ride-sharing companies are interoperating at scale with cars and their drivers. Delivering these use cases requires an agile platform that can provide both analytical and operational processing, to increase value from additional use cases that span from back-office analytics to front-office operations. In 2017, Schroeder says, organizations will push aggressively beyond an "asking questions" approach and architect to drive initial and long-term business value.
Data agility separates winners and losers. Software development has become agile, with DevOps providing continuous delivery, Schroeder says. In 2017, processing and analytic models will evolve to provide a similar level of agility as organizations realize that data agility, the ability to understand data in context and take business action, is the source of competitive advantage, not simply having a large data lake. The emergence of agile processing models will enable the same instance of data to support batch analytics, interactive analytics, global messaging, database and file-based models, he says. More agile analytic models are also enabled when a single instance of data can support a broader set of tools. The end result is an agile development and application platform that supports the broadest range of processing and analytic models.
Blockchain transforms select financial service applications. In 2017, select transformational use cases will emerge in financial services, with broad implications for the way data is stored and transactions are processed, Schroeder says. Blockchain provides a global distributed ledger that changes the way data is stored and transactions are processed. The blockchain runs on computers distributed worldwide, where the chains can be viewed by anyone. Transactions are stored in blocks, where each block refers to the preceding block; blocks are timestamped, storing the data in a form that cannot be altered. Tampering with the blockchain is practically infeasible, since the whole world has a view of the entire chain. Blockchain provides obvious efficiency for consumers. For example, customers won't have to wait for that SWIFT transaction or worry about the impact of a central datacenter leak. For enterprises, blockchain presents a cost savings and an opportunity for competitive advantage, Schroeder says.
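The hash-chaining Schroeder describes can be sketched in a few lines of Python. This is a toy illustration of why edits are detectable, not a real blockchain (it omits distributed consensus, proof-of-work and peer-to-peer replication entirely):

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash the block's contents (everything except its own stored hash)."""
    payload = {k: block[k] for k in ("data", "timestamp", "prev_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(data, prev_hash):
    """Each block stores its data, a timestamp and the hash of the previous block."""
    block = {"data": data, "timestamp": time.time(), "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def is_valid(chain):
    """Recompute every hash and check the back-links; any edit breaks the chain."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("alice pays bob 5", chain[-1]["hash"]))
chain.append(make_block("bob pays carol 2", chain[-1]["hash"]))
```

Because each block's hash covers the previous block's hash, altering any historical entry invalidates every later block, which is what makes tampering evident to anyone holding a copy of the chain.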
Machine learning maximizes microservices impact. This year we will see activity increase for the integration of machine learning and microservices, Schroeder says. Previously, microservices deployments have been focused on lightweight services, and those that do incorporate machine learning have typically been limited to "fast data" integrations applied to narrow bands of streaming data. In 2017, we'll see development shift to stateful applications that leverage big data, and the incorporation of machine learning approaches that use large amounts of historical data to better understand the context of newly arriving streaming data.
Hadoop distribution vendor Hortonworks predicts:
from CIO http://www.cio.com/article/3166060/analytics/15-data-and-analytics-trends-that-will-dominate-2017.html#tk.rss_all
bicorner · 7 years
6 Trends to Expect for Big Data In 2017
6 Trends to Expect for #BigData In 2017
The focus on big data in 2017 will be on the value of that data, according to John Schroeder, executive chairman and founder of MapR Technologies, Inc. Schroeder offers his predictions on the 6 trends in big data we can expect.
Artificial Intelligence is Back in Vogue “In the 1960s, Ray Solomonoff laid the foundations of a mathematical theory of artificial intelligence, introducing universal…