The ultimate goal of A.I. research is to build machines that exceed the flexibility and dynamism of the human brain. Currently, the predominant approach in A.I. is to use unlimited data to solve narrowly defined problems. To progress towards human-like intelligence, A.I. benchmarks will need to be extended to focus more on data efficiency, flexibility of reasoning, and transfer of knowledge between tasks -- the constraints on a solution to a problem are as important as the problem itself. In this talk, I will describe the challenges and successes in making these ideas operational. At Vicarious, we use the language of probabilistic graphical models as our representational framework. Compared to neural networks, graphical models have several advantages, such as the ability to incorporate prior knowledge, to answer arbitrary probabilistic queries, and to deal with uncertainty. However, one downside of probabilistic graphical models is that inference can be intractable. By incorporating several insights originally discovered in neuroscience, we were able to create probabilistic models on which accurate inference can be performed using message passing algorithms similar to the computations in a neural network. This allowed us to crack text-based CAPTCHAs with high data efficiency and to beat a text parsing benchmark with 300-fold greater data efficiency than deep learning. Recently, we also showed progress in general game playing, demonstrating vastly superior zero-shot generalization compared to deep reinforcement learning. I'll describe the opportunities in robotics that we are currently exploring and conclude with a description of the challenging problems that remain to be solved.
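To make the message-passing idea concrete, here is a minimal sketch of sum-product message passing on a toy chain-structured graphical model with three binary variables. This is purely illustrative and assumed for this note -- it is not Vicarious's actual model -- but it shows how local messages yield exact marginals on a tree, verified against brute-force enumeration.

```python
import numpy as np

# Toy chain MRF x0 - x1 - x2 with binary variables.
# phi[i]: unary potential of x_i; psi[i]: pairwise potential between x_i, x_{i+1}.
rng = np.random.default_rng(0)
phi = [rng.random(2) + 0.1 for _ in range(3)]
psi = [rng.random((2, 2)) + 0.1 for _ in range(2)]

# Forward messages: m_f[i] is the message arriving at x_i from the left.
m_f = [np.ones(2) for _ in range(3)]
for i in range(1, 3):
    m_f[i] = (phi[i - 1] * m_f[i - 1]) @ psi[i - 1]

# Backward messages: m_b[i] is the message arriving at x_i from the right.
m_b = [np.ones(2) for _ in range(3)]
for j in range(1, 0 - 1, -1):  # j = 1, then j = 0
    m_b[j] = psi[j] @ (phi[j + 1] * m_b[j + 1])

# Marginal of each variable: local potential times incoming messages, normalized.
marginals = []
for i in range(3):
    p = phi[i] * m_f[i] * m_b[i]
    marginals.append(p / p.sum())

# Brute-force check: enumerate all 8 joint configurations.
joint = np.zeros((2, 2, 2))
for a in range(2):
    for b in range(2):
        for c in range(2):
            joint[a, b, c] = (phi[0][a] * phi[1][b] * phi[2][c]
                              * psi[0][a, b] * psi[1][b, c])
joint /= joint.sum()
assert np.allclose(marginals[0], joint.sum(axis=(1, 2)))
assert np.allclose(marginals[1], joint.sum(axis=(0, 2)))
assert np.allclose(marginals[2], joint.sum(axis=(0, 1)))
print("message-passing marginals match brute force")
```

On loopy graphs the same message updates are only approximate, which is one reason inference in general graphical models is hard; the neuroscience-inspired models mentioned above are structured so that message passing remains accurate.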