We develop an experimentally testable theoretical framework to answer this question. By forging a novel conceptual connection between neural measurement and the theory of random projections, we derive scaling laws for how many neurons we must record to accurately recover state-space dynamics, given the complexity of the behavioral or cognitive task and the smoothness of the neural dynamics. Moreover, we verify these scaling laws in the motor cortical dynamics of monkeys performing a reaching task.

Along the way, we derive new upper bounds on the number of random projections required to preserve the geometry of smooth random manifolds to a given level of accuracy. Our methods combine probability theory with Riemannian geometry to improve upon previously described upper bounds by two orders of magnitude.

In conversation with John Markoff, New York Times Technology Reporter

This program is generously supported by Accenture.

Over the coming decades, artificial intelligence will profoundly impact the way we live, work, wage war, play, seek a mate, educate our young and care for our elderly. It is likely to greatly increase our aggregate wealth, but it will also upend our labor markets, reshuffle our social order, and strain our private and public institutions. Eventually it may alter how we see our place in the universe, as machines pursue goals independent of their creators and outperform us in domains previously believed to be the sole dominion of humans. Jerry Kaplan is widely known as an artificial intelligence expert, serial entrepreneur, technical innovator, educator, bestselling author and futurist. He co-founded four Silicon Valley startups, two of which became publicly traded companies, and teaches at Stanford University.

Join Kaplan for an illuminating conversation about the future of artificial intelligence and how much humans should entrust to machines.

Lecture blurb:

The vast amounts of data, in many different forms, becoming available to politicians, policy makers, technologists, and scientists of every hue present tantalising opportunities for making advances never before considered feasible.

Yet with these apparent opportunities has come an increase in the complexity of the mathematics required to exploit this data. These sophisticated mathematical representations are much more challenging to analyse and increasingly computationally expensive to evaluate. This is a particularly acute problem for many tasks of interest, such as making predictions, since these require the extensive use of numerical solvers for linear algebra, optimization, integration, or differential equations. These methods tend to be slow, owing to the complexity of the models, and can lead to solutions with high levels of uncertainty.

This talk will introduce our contributions to an emerging area of research at the nexus of applied mathematics, statistical science, and computer science, called “probabilistic numerics”. The aim is to treat numerical problems from a statistical viewpoint, providing numerical methods whose error can be quantified and controlled in a probabilistic manner. This philosophy will be illustrated on problems ranging from predictive policing via crime modelling to computer vision, where probabilistic numerical methods provide a rich and essential quantification of the uncertainty associated with such models and their computation.
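As a minimal illustration of this philosophy (a generic sketch, not a method from the talk): even plain Monte Carlo integration can be framed statistically, so the numerical answer comes with a quantified, probabilistic error bar rather than a bare point estimate. The integrand and sample size below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(42)

# Numerical task: estimate the integral of f over [0, 1].
f = lambda x: np.exp(-x ** 2)

# Statistical framing: the integral is the mean of f(U) for U ~ Uniform(0, 1),
# so we estimate it from samples and attach a CLT-based uncertainty.
n = 10_000
samples = f(rng.uniform(0.0, 1.0, n))

estimate = samples.mean()
std_err = samples.std(ddof=1) / np.sqrt(n)

print(f"integral ≈ {estimate:.4f} ± {1.96 * std_err:.4f} (95% interval)")
```

Probabilistic numerics pushes this idea much further — placing priors over the unknown integrand or solution itself — but the payoff is the same in kind: the numerical error is itself an object of inference.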

Bio

After graduating from the University of Glasgow, Mark Girolami spent the first ten years of his career with IBM as an engineer. He then undertook, on a part-time basis, a PhD in Statistical Signal Processing while working at a Scottish technical college, and went on rapidly to hold senior professorial positions at the University of Glasgow and University College London.

He is an EPSRC Established Career Research Fellow (2012–2017) and was previously an EPSRC Advanced Research Fellow (2007–2012). He is the Director of the EPSRC-funded Research Network on Computational Statistics and Machine Learning. In 2011 he was elected to the Fellowship of the Royal Society of Edinburgh and awarded a Royal Society Wolfson Research Merit Award. He has been nominated by the Institute of Mathematical Statistics to deliver a Medallion Lecture at the Joint Statistical Meetings in 2017. He is currently one of the founding Executive Directors of the Alan Turing Institute for Data Science.

His research, and that of his group, covers the development of advanced novel statistical methodology driven by applications in the life, clinical, physical, chemical, engineering, and ecological sciences. He also works closely with industry and holds several patents arising from his work on, for example, activity profiling in telecommunications networks and statistical techniques for the machine-based identification of counterfeit currency, now an established technology used in Automated Teller Machines. At present he works as a consultant for the Global Forecasting Team at Amazon in Seattle.

The Alan Turing Institute is the UK’s National Institute for Data Science.

The Institute’s mission is to: undertake data science research at the intersection of computer science, mathematics, statistics and systems engineering; provide technically informed advice to policy makers on the wider implications of algorithms; enable researchers from industry and academia to work together to undertake research with practical applications; and act as a magnet for leaders in academia and industry from around the world to engage with the UK in data science and its applications.

The Institute is headquartered at The British Library, at the heart of London’s knowledge quarter, and brings together leaders in advanced mathematics and computing science from the five founding universities and other partners. Its work is expected to encompass a wide range of scientific disciplines and be relevant to a large number of business sectors.

For more information, please visit: https://turing.ac.uk

Moderator: Benjamin Levy, Co-Founder, BootstrapLabs

Panelists:

Dr. Long Phan, CEO & CTO, TopFlight Technologies

Luis Dussan, Founder & CEO, Aeye

Jeff Hawkins, Co-Founder, Numenta

Numenta Workshop Oct 2014 Redwood City CA

Q&A:

Attorney Brian D. Wassom, whose groundbreaking book Augmented Reality Law, Privacy, and Ethics was recently published by Elsevier, surveys these topics in this 45-minute panel.

AWE 2015 featured over 200 companies leading the charge in augmented and virtual reality, wearable tech, and the internet of things. Nearly 3,000 tech professionals engaged in 20+ workshops, 170+ AWE-inspiring talks, and 200+ interactive demos, including a VR experience powered by UploadVR. AWE is about giving superpowers to the people and making them better at anything they do in work and life.

Dave Griesbach (Security Expert Google), Jerri Lynn Hogg (Professor, Fielding University), Dave Lorenzini (Founder, CEO, EFX).

See more at http://AugmentedWorldExpo.com

Sid Kouider

Visiting Professor of Psychology, NYUAD

Sign up to our mailing list to stay informed of upcoming NYU Abu Dhabi Institute events: http://nyuad.nyu.edu/en/news-events/a…

To view our past events and videos, click here: http://nyuad.nyu.edu/en/news-events/a…

Follow NYU Abu Dhabi Institute on social media:

Facebook: https://www.facebook.com/pages/NYU-Ab…

Twitter: https://twitter.com/NYUADInstitute

Instagram: http://instagram.com/nyuadinstitute/

Follow NYU Abu Dhabi on social media:

Visit our website: http://nyuad.nyu.edu/en/

Facebook: https://www.facebook.com/NYUAD

Twitter: https://twitter.com/NYUAbuDhabi

Instagram: http://instagram.com/nyuabudhabi

Evelina is a machine learning researcher working in bioinformatics and statistical genomics. She is developing mathematical models which integrate different types of genomic data to distinguish cancer subtypes.

She studied computational statistics and machine learning at University College London and is currently finishing her PhD at the University of Cambridge.

Evelina has used many different languages to implement machine learning algorithms, including MATLAB, R, and Python. In the end, F# is her favourite, and she uses it frequently for data manipulation and exploratory analysis.

She writes a blog on F# in data science at http://www.evelinag.com.

Understanding cancer behaviour with F#

Data science is emerging as a hot topic across many areas in both industry and academia. In my research, I’m using machine learning methods to build mathematical models of cancer cell behaviours. But using today’s data science tools is hard: we waste a lot of time figuring out what format different CSV files use, or what the structure of JSON or XML files is. Often we need to switch between Python, MATLAB, R, and other tools to call functions that are missing elsewhere. And why do so many programming languages used in data science lack tools that are standard in modern software engineering?

In this talk I’ll look at data science tools in F# and how they simplify the life of a modern scientist who relies heavily on data analytics. F# provides a unique way of integrating external data sources and tools into a single environment. This means that you can seamlessly access not only data, but also R statistical and visualization packages, all from a single environment. Compile-time static checking and rich interactive tooling give you many of the standard tools known from software engineering, while keeping the explorative nature of simple scripting languages.

Using examples from my own research in bioinformatics, I’ll show how to use F# for data analysis with type providers and other tools from the F# ecosystem.

In machine learning and computer vision, M-Theory is a learning framework inspired by feed-forward processing in the ventral stream of the visual cortex, originally developed for the recognition and classification of objects in visual scenes. M-Theory was later applied to other areas, such as speech recognition. On certain image recognition tasks, algorithms based on a specific instantiation of M-Theory, HMAX, achieved human-level performance.[1]

The core principle of M-Theory is extracting representations that are invariant to various transformations of images (translation, scale, 2D and 3D rotation, and others). In contrast with other approaches that use invariant representations, in M-Theory they are not hardcoded into the algorithms but learned. M-Theory also shares some principles with compressed sensing. The theory proposes a multilayered hierarchical learning architecture, similar to that of the visual cortex.
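The invariance idea can be illustrated with a toy sketch (a simplified analogy, not the HMAX implementation): compute a template's response at every element of a transformation group — here, 1-D circular shifts — and pool with a max, so the resulting signature is unchanged when the input is transformed. All sizes and names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def signature(x, template):
    # Template response at every circular shift, max-pooled over shifts.
    responses = [np.dot(np.roll(x, s), template) for s in range(len(x))]
    return max(responses)

x = rng.standard_normal(32)         # an input "image" (1-D for simplicity)
template = rng.standard_normal(32)  # a (notionally learned) template
shifted = np.roll(x, 7)             # a translated copy of the input

# Shifting the input only permutes the per-shift responses,
# so the max-pooled signature is identical for both inputs.
print(np.isclose(signature(x, template), signature(shifted, template)))  # True
```

Real instantiations such as HMAX stack alternating template-matching and pooling layers over 2-D transformations, but each layer relies on this same pooling-over-a-group mechanism.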

Source: Wikipedia
