AITopics is the Internet's largest collection of information about the research, the people, and the applications of Artificial Intelligence. Our mission is to educate and inspire through a wide variety of curated and organized resources gathered from news, conferences and publications.
AITopics 3.0 is brought to you by The Association for the Advancement of Artificial Intelligence (AAAI) and is powered by AI technology from i2k Connect. The new engine combines machine learning with subject matter expert knowledge to automatically tag documents with unique, accurate and consistent metadata. Building on the familiar online shopping experience, you can discover and analyze just the information you need.
When you first arrive, you will see the latest information. As you add search terms and filters, the matching documents are updated.
When you use this site on a mobile device, tap the search icon at the top of the screen to show or hide the search box, and tap the menu icon to show or hide the left-hand-side menus.
Click the help icon on any page for help, or display the complete user manual.
Search results are shown in the right column on large displays and cover the screen on small displays.
The most recent items are displayed first. Click "Sort results by" to sort by Relevance or Title.
Click on "Show as Treemap" for a graphical display of the distribution of items. Click on any box to see more detail.
Information displayed for each item includes:
Search for items that contain all of the words you enter, or use this advanced search syntax.
Exact phrase: "This That"
AND: This AND That (or This That)
OR: This OR That
NOT: This NOT That
pagetext:environmental (string contained in the item contents)
filepath:JPT (string contained in the URL)
store:"health news" (e.g., name of news feed)
views:"Technology|Information Technology|Artificial Intelligence" (a topic in one of the selected taxonomy lenses)
Wildcards: title:Coal* (find items with words in the title that start with "Coal")
Proximity: "crude train"~3 (find items where "crude" and "train" occur within 3 words of each other; e.g., "train carrying crude")
The i2k Connect Platform classifies items into topics arranged in hierarchical taxonomies called Views.
The brain has always been considered the main inspiration for the field of artificial intelligence (AI). For many AI researchers, the ultimate goal of AI is to emulate the capabilities of the brain. That sounds like a nice statement, but it is an incredibly daunting task, considering that neuroscientists are still struggling to understand the cognitive mechanisms that power the magic of our brains. Despite the challenges, we are increasingly seeing AI research and algorithms inspired by specific cognitive mechanisms in the human brain, and they have been producing incredibly promising results. Recently, the DeepMind team published a paper about neuroscience-inspired AI that summarizes the circle of influence between AI and neuroscience research.
Natural learners must compute an estimate of future outcomes that follow from a stimulus in continuous time. Critically, the learner cannot in general know a priori the relevant time scale over which meaningful relationships will be observed. Widely used reinforcement learning algorithms discretize continuous time and use the Bellman equation to estimate exponentially-discounted future reward. However, exponential discounting introduces a time scale to the computation of value. Scaling is a serious problem in continuous time: efficient learning with scaled algorithms requires prior knowledge of the relevant scale. That is, with scaled algorithms one must know at least part of the solution to a problem prior to attempting a solution. We present a computational mechanism, developed based on work in psychology and neuroscience, for computing a scale-invariant timeline of future events. This mechanism efficiently computes a model for future time on a logarithmically-compressed scale, and can be used to generate a scale-invariant power-law-discounted estimate of expected future reward. Moreover, the representation of future time retains information about what will happen when, enabling flexible decision making based on future events. The entire timeline can be constructed in a single parallel operation.
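The scale problem this abstract describes can be seen in a small sketch: exponential discounting ties the value computation to a characteristic time scale, while power-law discounting does not. This is an illustrative toy, not the paper's mechanism; the gamma and alpha constants are assumptions.

```python
def exponential_discount(t, gamma=0.95):
    # Exponential discounting fixes a time scale: the weight falls by
    # the same multiplicative factor every step, so distant rewards
    # vanish after roughly 1 / (1 - gamma) steps.
    return gamma ** t

def power_law_discount(t, alpha=1.0):
    # Power-law discounting is scale-free: stretching time only
    # rescales the weights by a constant factor.
    return (1.0 + t) ** (-alpha)

# Scale-invariance check (alpha = 1): doubling the delay always halves
# the power-law weight, no matter where on the timeline you start.
print(power_law_discount(10) / power_law_discount(21))  # 2.0
print(power_law_discount(40) / power_law_discount(81))  # 2.0

# The exponential ratio over the same doubling keeps shrinking with t,
# which is exactly the dependence on a prior time scale.
print(exponential_discount(21) / exponential_discount(10))  # gamma**11
print(exponential_discount(81) / exponential_discount(40))  # gamma**41
```

The two printed power-law ratios are identical; the two exponential ratios are not, which is why a learner using exponential discounting must choose its scale in advance.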
We present a novel deep neural network architecture for representing robot experiences in an episodic-like memory which facilitates encoding, recalling, and predicting action experiences. Our proposed unsupervised deep episodic memory model 1) encodes observed actions in a latent vector space and, based on this latent encoding, 2) infers most similar episodes previously experienced, 3) reconstructs original episodes, and 4) predicts future frames in an end-to-end fashion. Results show that conceptually similar actions are mapped into the same region of the latent vector space. Based on these results, we introduce an action matching and retrieval mechanism, benchmark its performance on two large-scale action datasets, 20BN-something-something and ActivityNet and evaluate its generalization capability in a real-world scenario on a humanoid robot.
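The "infers most similar episodes" step above amounts to nearest-neighbor retrieval in the latent vector space. A minimal sketch, assuming cosine similarity as the distance measure and hand-made latent vectors (the episode names and vectors are invented for illustration; the paper's encoder is a deep network):

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve_most_similar(query_vec, memory):
    # memory: list of (episode_id, latent_vector) pairs.
    # Returns the stored episode whose latent encoding is closest
    # to the query in cosine similarity.
    return max(memory, key=lambda item: cosine_similarity(query_vec, item[1]))

memory = [
    ("pouring", [0.9, 0.1, 0.0]),
    ("stacking", [0.0, 0.8, 0.2]),
    ("wiping", [0.1, 0.0, 0.9]),
]
best_id, _ = retrieve_most_similar([0.85, 0.2, 0.05], memory)
print(best_id)  # pouring
```

Because conceptually similar actions map to the same latent region, this retrieval returns a semantically related past episode rather than a pixel-level match.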
We used to think that planning for the future was a skill most children have by the age of four, but now it seems that we don't develop the kind of memory needed to do this until we're older. Episodic memory lets us reflect on our past, and imagine ourselves in the future. To find out when children develop this, Amanda Seed at the University of St Andrews in the UK and her colleagues devised a test for 212 children between the ages of three and seven. Each child was taught how to use a box that released a desirable sticker when the correct token was placed in it. An examiner showed them two boxes of different colours and told them that one would remain on a table while they left the room, and the other would be put away.
In recent years a number of large-scale triple-oriented knowledge graphs have been generated and various models have been proposed to perform learning in those graphs. Most knowledge graphs are static and reflect the world in its current state. In reality, of course, the state of the world is changing: a healthy person becomes diagnosed with a disease and a new president is inaugurated. In this paper, we extend models for static knowledge graphs to temporal knowledge graphs. This enables us to store episodic data and to generalize to new facts (inductive learning). We generalize leading learning models for static knowledge graphs (i.e., Tucker, RESCAL, HolE, ComplEx, DistMult) to temporal knowledge graphs. In particular, we introduce a new tensor model, ConT, with superior generalization performance. The performances of all proposed models are analyzed on two different datasets: the Global Database of Events, Language, and Tone (GDELT) and the database for Integrated Conflict Early Warning System (ICEWS). We argue that temporal knowledge graph embeddings might be models also for cognitive episodic memory (facts we remember and can recollect) and that a semantic memory (current facts we know) can be generated from episodic memory by a marginalization operation. We validate this episodic-to-semantic projection hypothesis with the ICEWS dataset.
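The move from static to temporal scoring that the abstract describes can be sketched with DistMult, one of the static models the paper generalizes. The four-way temporal extension and the marginalization below are illustrative assumptions in the spirit of the paper, not ConT's exact parameterization:

```python
def distmult_score(subj, rel, obj):
    # Static DistMult: trilinear product of subject, relation, and
    # object embeddings. High score = plausible (s, p, o) triple.
    return sum(s * r * o for s, r, o in zip(subj, rel, obj))

def temporal_score(subj, rel, obj, time):
    # Assumed temporal extension: a per-timestep embedding enters as a
    # fourth factor, making the score depend on when the fact held.
    return sum(s * r * o * t for s, r, o, t in zip(subj, rel, obj, time))

def marginalized_score(subj, rel, obj, times):
    # Episodic-to-semantic projection: averaging (marginalizing) the
    # episodic, time-indexed scores yields a static, time-free score.
    return sum(temporal_score(subj, rel, obj, t) for t in times) / len(times)

s, p, o = [1.0, 2.0], [1.0, 1.0], [1.0, 1.0]
print(distmult_score(s, p, o))                                   # 3.0
print(temporal_score(s, p, o, [1.0, 0.0]))                       # 1.0
print(marginalized_score(s, p, o, [[1.0, 0.0], [0.0, 1.0]]))     # 1.5
```

The last call mirrors the paper's hypothesis: a semantic (time-free) memory can be generated from episodic (time-indexed) memory by marginalizing out the time dimension.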
Chiang, Maurice, Gomez, Marcus, Gruver, Nate, Hindy, Yousef, Lam, Michelle, Lu, Peter, Sanchez, Sophia, Smith, Michael, Wang, Lucy, Wong, Catherine
This document provides an overview of the material covered in a course taught at Stanford in the spring quarter of 2018. The course draws upon insight from cognitive and systems neuroscience to implement hybrid connectionist and symbolic reasoning systems that leverage and extend the state of the art in machine learning by integrating human and machine intelligence. As a concrete example we focus on digital assistants that learn from continuous dialog with an expert software engineer while providing initial value as powerful analytical, computational and mathematical savants. Over time these savants learn cognitive strategies (domain-relevant problem solving skills) and develop intuitions (heuristics and the experience necessary for applying them) by learning from their expert associates. By doing so, these savants elevate their innate analytical skills, allowing them to partner on an equal footing as versatile collaborators: they serve as cognitive extensions and digital prostheses, amplifying and emulating their human partner's conceptually flexible thinking patterns and enabling improved access to and control over powerful computing resources.
Episodic memory is a psychology term which refers to the ability to recall specific events from the past. We suggest one advantage of this particular type of memory is the ability to easily assign credit to a specific state when remembered information is found to be useful. Inspired by this idea, and the increasing popularity of external memory mechanisms to handle long-term dependencies in deep learning systems, we propose a novel algorithm which uses a reservoir sampling procedure to maintain an external memory consisting of a fixed number of past states. The algorithm allows a deep reinforcement learning agent to learn online to preferentially remember those states which are found to be useful to recall later on. Critically this method allows for efficient online computation of gradient estimates with respect to the write process of the external memory. Thus unlike most prior mechanisms for external memory it is feasible to use in an online reinforcement learning setting.
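The building block of the method above is reservoir sampling, which keeps a fixed-size memory over an unbounded stream of states. This sketch shows the plain uniform variant; the paper's contribution is to learn the write probabilities rather than keep them uniform:

```python
import random

def reservoir_update(memory, item, n_seen, capacity, rng=random):
    """Maintain a fixed-size sample of a stream in place.

    n_seen is the number of items observed so far, including `item`.
    With uniform sampling, every observed item ends up in memory with
    probability capacity / n_seen, regardless of arrival order.
    """
    if len(memory) < capacity:
        memory.append(item)
    else:
        # Replace a random slot with probability capacity / n_seen.
        j = rng.randrange(n_seen)
        if j < capacity:
            memory[j] = item
    return memory

memory = []
for i, state in enumerate(range(1000), start=1):
    reservoir_update(memory, state, i, capacity=10)
print(len(memory))  # 10
```

The memory never exceeds its capacity no matter how long the stream runs, which is what makes the approach feasible for online reinforcement learning.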
Humans excel at continually acquiring and fine-tuning knowledge over sustained time spans. This ability, typically referred to as lifelong learning, is crucial for artificial agents interacting in real-world, dynamic environments where i) the number of tasks to be learned is not pre-defined, ii) training samples become progressively available over time, and iii) annotated samples may be very sparse. In this paper, we propose a dual-memory self-organizing system that learns spatiotemporal representations from videos. The architecture draws inspiration from the interplay of the hippocampal and neocortical systems in the mammalian brain argued to mediate the complementary tasks of quickly integrating specific experiences, i.e., episodic memory (EM), and slowly learning generalities from episodic events, i.e., semantic memory (SM). The complementary memories are modeled as recurrent self-organizing neural networks: The EM quickly adapts to incoming novel sensory observations via competitive Hebbian Learning, whereas the SM progressively learns compact representations by using task-relevant signals to regulate intrinsic levels of neurogenesis and neuroplasticity. For the consolidation of knowledge, trajectories of neural reactivations are periodically replayed to both networks. We analyze and evaluate the performance of our approach with the CORe50 benchmark dataset for continuous object recognition from videos. We show that the proposed approach significantly outperforms current (supervised) methods of lifelong learning in three different incremental learning scenarios, and that due to the unsupervised nature of neural network self-organization, our approach can be used in scenarios where sample annotations are sparse.
Meta-learning agents excel at rapidly learning new tasks from open-ended task distributions; yet, they forget what they learn about each task as soon as the next begins. When tasks reoccur, as they do in natural environments, meta-learning agents must explore again instead of immediately exploiting previously discovered solutions. We propose a formalism for generating open-ended yet repetitious environments, then develop a meta-learning architecture for solving these environments. This architecture melds the standard LSTM working memory with a differentiable neural episodic memory. We explore the capabilities of agents with this episodic LSTM in five meta-learning environments with reoccurring tasks, ranging from bandits to navigation and stochastic sequential decision problems.
Reinforcement learning (RL) algorithms have made huge progress in recent years by leveraging the power of deep neural networks (DNN). Despite the success, deep RL algorithms are known to be sample inefficient, often requiring many rounds of interaction with the environments to obtain satisfactory performance. Recently, episodic memory based RL has attracted attention due to its ability to latch on good actions quickly. In this paper, we present a simple yet effective biologically inspired RL algorithm called Episodic Memory Deep Q-Networks (EMDQN), which leverages episodic memory to supervise an agent during training. Experiments show that our proposed method can lead to better sample efficiency and is more likely to find good policies. It only requires 1/5 of the interactions of DQN to achieve many state-of-the-art performances on Atari games, significantly outperforming regular DQN and other episodic memory based RL algorithms.
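The way episodic memory lets an agent "latch on" to good actions can be sketched as follows: the one-step TD target is clipped from below by the best return ever recorded for that state-action pair. This is a simplified illustration of the idea, not EMDQN's exact training loss (the paper combines the two signals in its objective):

```python
def episodic_target(reward, next_value, episodic_return, gamma=0.99):
    # Standard one-step TD target from bootstrapped value estimates.
    td_target = reward + gamma * next_value
    if episodic_return is None:
        # State-action pair never seen before: fall back to TD alone.
        return td_target
    # Episodic memory supervises learning: never train the value below
    # the best return already achieved from this state-action pair.
    return max(td_target, episodic_return)

print(episodic_target(1.0, 0.0, 5.0))    # 5.0  (memory dominates)
print(episodic_target(1.0, 10.0, 5.0))   # 10.9 (TD target dominates)
```

When the bootstrapped estimate is still pessimistic early in training, the remembered return propagates good outcomes immediately, which is one intuition for the sample-efficiency gains the abstract reports.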