Hey, I'm Marcus.

I study machine learning and the brain.

Current research

Computation in "deep learning" neural networks superficially resembles computation in the brain. Can we use low-level details about computation in the brain to improve deep networks? In particular, what happens when deep networks use sparse connectivity, sparse activation, and quantized weights and activations? Can we achieve significant improvements in efficiency, enabling today’s hardware to run larger networks and thus become more capable? Does sparsity itself provide benefits other than efficiency? Do new classes of hardware become the new optimum with this type of network? If we co-evolve the networks and the hardware, where do we end up?

This is my main project at Numenta. (Fun fact: We live-stream most of our research meetings.)
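To make the ingredients above concrete, here is a minimal, hypothetical PyTorch sketch of the three ideas mentioned: sparse connectivity (a fixed binary mask on the weights), sparse activation (k-winners-take-all), and quantized activations (crude fake-quantization). The names, layer sizes, and density/k/bit values are illustrative assumptions of mine, not Numenta's actual code or parameters.

```python
# Illustrative sketch only: sparse weights, sparse (k-winner) activations,
# and fake-quantized outputs. Not Numenta's implementation.
import torch
import torch.nn as nn


class SparseLinear(nn.Module):
    """Linear layer whose weights are masked by a fixed random sparse pattern."""

    def __init__(self, in_features, out_features, weight_density=0.1):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Fixed binary mask: only `weight_density` of the weights can be nonzero.
        mask = (torch.rand(out_features, in_features) < weight_density).float()
        self.register_buffer("mask", mask)

    def forward(self, x):
        return nn.functional.linear(
            x, self.linear.weight * self.mask, self.linear.bias
        )


def k_winners(x, k):
    """Sparse activation: keep the k largest units per sample, zero the rest."""
    topk = torch.topk(x, k, dim=-1)
    out = torch.zeros_like(x)
    return out.scatter(-1, topk.indices, topk.values)


def fake_quantize(x, bits=8):
    """Crude uniform fake-quantization of activations (illustrative only)."""
    scale = x.abs().max().clamp(min=1e-8) / (2 ** (bits - 1) - 1)
    return torch.round(x / scale) * scale


# Tiny forward pass: sparse weights -> sparse activations -> quantized output.
layer = SparseLinear(128, 64, weight_density=0.1)
x = torch.randn(4, 128)
y = fake_quantize(k_winners(layer(x), k=8))
```

The efficiency questions above come from exactly these properties: with a fixed mask and only k active units per layer, most multiply-accumulates are skipped in principle, and low-bit weights and activations shrink memory traffic, which is where the hardware co-design questions start.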

Previous research

Efficient and flexible representation of higher-dimensional cognitive variables with grid cells
Mirko Klukas, Marcus Lewis, Ila Fiete
PLOS Computational Biology (2020)
A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex
Jeff Hawkins, Marcus Lewis, Mirko Klukas, Scott Purdy, Subutai Ahmad
Frontiers in Neural Circuits (2019)
Locations in the Neocortex: A Theory of Sensorimotor Object Recognition Using Cortical Grid Cells
Marcus Lewis, Scott Purdy, Subutai Ahmad, Jeff Hawkins
Frontiers in Neural Circuits (2019)

Other projects, big and small

Using Grid Cells for Coordinate Transforms
Marcus Lewis
Poster, Grid Cell Meeting 2018, UCL, London, England
Grid cells: Visualizing the CAN model
A weekend in April 2017
See HTM run: Stacks of time series
Written while living in hostels. February 2016
A visual running environment for HTM
Collaboration with Felix Andrews. November 2015

More

All posts | Twitter | GitHub | LinkedIn