Mar 25, 2019
Andrew Trask is a Ph.D. student at the University of Oxford, where his research concerns anonymizing data, and the author of Grokking Deep Learning. He discusses deep learning and machine learning and their possible benefits to society, along with privacy-preserving techniques that would further benefit the field. Finally, Trask discusses his work with OpenMined, an open-source community that uses these techniques to overcome barriers to adoption.
Top Three Takeaways:
[0:00] Ladan introduces Andrew Trask, who will discuss deep learning. He is currently pursuing his Ph.D. at the University of Oxford, focusing on anonymizing data.
[1:10] Ladan mentions how Trask wrote his book Grokking Deep Learning. The book seeks to teach the fundamentals of deep learning; the term “grokking” comes from the idea of an innate understanding.
[2:25] The book fills the void for an intuitive guide to the subject of deep learning.
[3:20] Trask was not a Ph.D. student when he started writing the book; he found an implementation of a deep neural network and removed as much unnecessary information as possible.
[5:55] Machine Learning is a set of algorithms that allow a system to learn, while Deep Learning is a subset of Machine Learning techniques that are inspired by the human brain.
[6:30] A Deep Learning parametric algorithm constructs a hierarchical view of the world: one part recognizes lines and edges, and later parts take this information to form shapes, textures, and shadows.
[8:20] Machine Learning includes Deep Learning and other learning techniques as subsets.
[10:00] Trask disagrees with those who claim that all forms of Machine Learning and Deep Learning are artificial intelligence; Machine and Deep Learning focus on finding patterns.
[11:50] Sample complexity relates to how many data points an algorithm needs to learn a pattern.
[12:40] Research in Deep Learning and Machine Learning aims to bring down sample complexity; more data is always better but cannot always be obtained.
[14:00] There is much more unlabeled data in any field than labeled data; large amounts of labeled data are preferable.
[16:00] Hospitals lack safe ways to share information that would be useful for the development of algorithms.
[16:30] Research in this field concerns sharing private data and statistical insights in a secure way.
[17:40] An example of useful Deep Learning would be to find trends associated with aging in the brain that could lead to the reversal of its effects.
[19:00] Federated learning is a new tool that, instead of collecting data from different providers, sends the statistical model into an organization and reveals only the results needed.
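The idea in this segment can be sketched in a few lines. This is a minimal toy simulation, not any particular library's API: three hypothetical "hospitals" each hold their own records, the model (a single weight) is sent to each site, and only averaged gradients ever leave.

```python
import random

def local_gradient(w, data):
    """Gradient of mean squared error for the model y = w*x on one
    site's private data: d/dw mean((w*x - y)^2) = mean(2*(w*x - y)*x)."""
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

def federated_step(w, sites, lr=0.01):
    """One round of federated averaging: each site computes a gradient
    on its own records; only the gradients leave, never the raw data."""
    grads = [local_gradient(w, site) for site in sites]
    return w - lr * sum(grads) / len(grads)

# Three hypothetical hospitals, each privately holding y = 2*x plus noise.
random.seed(0)
sites = [[(x, 2 * x + random.gauss(0, 0.1)) for x in range(1, 6)]
         for _ in range(3)]

w = 0.0
for _ in range(200):
    w = federated_step(w, sites)
print(round(w, 2))  # converges near the true slope of 2
```

Real systems (e.g., what Google deploys on phones) add encryption, secure aggregation, and millions of parameters, but the data-stays-home structure is the same.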
[21:30] Differential Privacy is a set of formal techniques for proving that a statistic leaving an organization reveals no private data.
[22:40] Patterns that are not unique to someone should not be considered private information.
[24:00] For example, a pattern of brain cells firing in response to certain information that generalizes across people would not be considered personal information.
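A standard way to make this formal is the Laplace mechanism, which the episode's definition of differential privacy points at: a counting query changes by at most 1 when any one person is added or removed, so adding Laplace noise with scale 1/epsilon makes the released count epsilon-differentially private. The data below is a hypothetical illustration, not from the episode.

```python
import math
import random

def dp_count(records, predicate, epsilon):
    """Release a count under epsilon-differential privacy.
    A count has sensitivity 1, so Laplace noise with scale 1/epsilon
    suffices. Noise is drawn by inverting the Laplace CDF."""
    true_count = sum(1 for r in records if predicate(r))
    u = random.random() - 0.5
    noise = -(1 / epsilon) * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

random.seed(1)
ages = [34, 29, 41, 57, 62, 38, 45]  # hypothetical patient ages
noisy = dp_count(ages, lambda a: a > 40, epsilon=1.0)
print(round(noisy, 1))  # close to the true count of 4, but randomized
```

The noisy answer is useful in aggregate, yet no single patient's presence can be confidently inferred from it, which matches the episode's point that only patterns unique to an individual are private.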
[25:15] Secure Multiparty Computation addresses how the statistical model used for Machine Learning is itself put at risk when it is sent out, by splitting it into shares so that no single party can read it.
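The simplest building block of secure multiparty computation is additive secret sharing, sketched below (this is an illustrative toy, not OpenMined's actual protocol): a value is split into random-looking shares that sum to it modulo a public number, and shares can be added party-by-party without anyone seeing the underlying values.

```python
import random

Q = 2**31 - 1  # public modulus; every share is a number mod Q

def share(secret, n_parties=3):
    """Split a value into n shares that sum to it mod Q.
    Any subset of fewer than n shares reveals nothing about the secret."""
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    return sum(shares) % Q

# Two secret values (think: model weights) summed share-by-share,
# so no party ever sees either original value.
a, b = share(1234), share(4321)
summed = [(x + y) % Q for x, y in zip(a, b)]
print(reconstruct(summed))  # → 5555
```

Since, as the next point notes, models and data sets are just large collections of numbers, the same trick extends to entire models: each weight is shared out, and training proceeds on shares.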
[27:50] AI models and data sets are just large collections of numbers.
[28:30] To learn more, study a deep learning framework; fast.ai is a very helpful website for this.
[29:00] OpenMined is an open-source community that builds privacy-preserving technologies to lower the barriers to adoption.