Papers We Love #7: CNNs & Francisco Varela
Papers We Love is back with two talks!
The first talk will be about Group Equivariant Convolutional and Capsule Networks by Tolga Birdal.
The second talk will be about the Mathematical Work of Francisco Varela by Johannes Drever.
19:00 doors open, food & drinks provided by INNOQ
19:30 Tolga's talk + questions
20:05 break, more food & drinks
20:20 Johannes' talk + questions
21:00 everybody out
Group Equivariant Convolutional and Capsule Networks
Convolutional neural networks (CNNs) have demonstrated tremendous capabilities, paving the way for the AI revolution. Part of this is due to their translational invariance, or in more general terminology, equivariance. While a typical CNN handles shifts of the input image at no additional effort, dealing with rotational changes is a nuisance. This is because CNNs, as well as much other state-of-the-art learning machinery, are not capable of h…
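The translation equivariance mentioned above can be seen in a few lines: shifting the input of a convolution shifts its output by the same amount. A minimal numpy sketch (function and variable names are invented for the demo):

```python
import numpy as np

def conv1d(signal, kernel):
    # 'valid' cross-correlation, as used in CNN layers
    n, k = len(signal), len(kernel)
    return np.array([signal[i:i + k] @ kernel for i in range(n - k + 1)])

signal = np.array([0., 1., 3., 2., 0., 0., 0.])
kernel = np.array([1., -1.])

out = conv1d(signal, kernel)
shifted = np.roll(signal, 1)            # translate the input by one step
out_shifted = conv1d(shifted, kernel)

# Away from the boundary, the response to the shifted input equals the
# shifted response -- exactly what equivariance means. No such identity
# holds for rotations, which is the motivation for group-equivariant CNNs.
print(np.allclose(out_shifted[1:], out[:-1]))
```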
Coffee and Papers: Talking about trees
This is going to be a meetup in a cafe: grab some food, pick a favourite beverage and enjoy free, informal discussion about papers.
Rules: pick a paper (or several) from the following, leave a note saying you'll be concentrating on it, and have a rough idea of what's going on in the paper.
Subject: everything about trees and modern ways of writing (and reading) things to (and from) different types of disks.
LSM Trees: http://db.cs.berkeley.edu/cs286/papers/lsm-acta1996.pdf
Cache Oblivious B-Trees: http://erikdemaine.org/papers/FOCS2000b/paper.pdf
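To warm up for the discussion, here is a toy LSM-tree sketch (all names invented for illustration; a real implementation adds compaction, write-ahead logging and on-disk runs). Writes go to an in-memory "memtable"; when it fills up, it is flushed as an immutable sorted run, mimicking the sequential-write pattern the LSM paper advocates:

```python
import bisect

class ToyLSM:
    def __init__(self, memtable_limit=2):
        self.memtable = {}
        self.runs = []                 # sorted (key, value) lists, newest last
        self.limit = memtable_limit

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.limit:
            # flush: turn the memtable into an immutable sorted run
            self.runs.append(sorted(self.memtable.items()))
            self.memtable = {}

    def get(self, key):
        if key in self.memtable:       # freshest data first
            return self.memtable[key]
        for run in reversed(self.runs):  # then newest run wins
            keys = [k for k, _ in run]
            i = bisect.bisect_left(keys, key)
            if i < len(keys) and keys[i] == key:
                return run[i][1]
        return None

db = ToyLSM()
db.put("a", 1); db.put("b", 2); db.put("a", 3)
print(db.get("a"))   # 3: the later write shadows the flushed one
```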
If anyone has suggestions on where to eat, please share them. Otherwise we'll pick a cafe/restaurant in Maxvorstadt or Schwabing or nearby.…
An Introduction to Statistical Relational Learning (SRL)
Artificial Intelligence is booming, and how! The current trend is to use Deep Learning tools across a multitude of domains. However, there are alternative Machine Learning approaches which can perform robust and accurate reasoning and learning on complex relational data. In this talk, I will speak about one such approach, Markov Logic Networks (MLNs), in Statistical Relational Learning (SRL). MLN is a powerful framework combining statistical reasoning (i.e. Markov Random Fields) with logical reasoning (First-Order Logic). I will highlight an example of how it can be used on text to do Natural Language Understanding (NLU) and extract affordances in the Robotics domain. This use case shows how large-scale inference on text can be leveraged to give knowledge to artificial assistants. This introductory talk will cover the basics and acquaint you with the state-of-the-art tools. Watch this space for more!
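To get a feel for the MRF-plus-logic combination before the talk, here is a toy, hand-rolled illustration (not real MLN tooling such as Alchemy; the rule and weight are made up): one weighted rule "Smokes(x) ⇒ Cancer(x)" over two ground atoms, where the probability of a world is proportional to exp(weight × number of satisfied groundings).

```python
import itertools, math

w = 1.5                       # rule weight: a soft, violable constraint
atoms = ["smokes", "cancer"]

def rule_satisfied(world):
    # Smokes => Cancer holds unless smokes is true and cancer is false
    return not (world["smokes"] and not world["cancer"])

# Enumerate all possible worlds and weight them exponentially
worlds = [dict(zip(atoms, vals))
          for vals in itertools.product([False, True], repeat=2)]
weights = [math.exp(w * rule_satisfied(wld)) for wld in worlds]
Z = sum(weights)              # partition function, as in a Markov Random Field

for wld, wt in zip(worlds, weights):
    print(wld, round(wt / Z, 3))   # violating worlds get low, nonzero mass
```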
JetBrains will be hosting this …
Interested in Probabilistic Programming?
Come over & join our next meetup! We're going to talk about Probabilistic Programming and Bayesian Methods.
If you'd like to get prepared and know more, you can read the "Bayesian Methods for Hackers" book: https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
We'll host two talks.
One is by Alex Petrov, he's going to talk about Bayesian Inference and how to make machines biased:
Humans have many biases. Let's call them priors. They also have their areas of expertise. Let's call them hypothesis spaces. We often reach our conclusions swiftly and without much effort, and call it intuition. How do we make machines reason intuitively …
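The prior-as-bias idea above fits in a few lines of plain Python (a hand-rolled sketch; the coin scenario and numbers are invented): a "bias" is a prior over hypotheses, and intuition is a fast posterior update via Bayes' rule.

```python
# Prior: we strongly suspect the coin is fair (our "bias")
prior = {"fair": 0.9, "trick": 0.1}
likelihood_heads = {"fair": 0.5, "trick": 0.9}

def update(prior, likelihood):
    # Bayes' rule: posterior is proportional to prior x likelihood
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

posterior = prior
for _ in range(3):            # observe three heads in a row
    posterior = update(posterior, likelihood_heads)

# The conclusion is shaped by the prior: the machine still leans "fair",
# but each head shifts belief toward "trick" -- a biased, intuitive update.
print(posterior)
```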
Neural networks and machine learning
The next meetup will be about neural networks and machine learning. Neural networks have enjoyed a successful revival under the label "deep learning" in recent years. In this meetup we'll try to get an overview of recent developments. As of now, we have a presentation of a classical paper which highlights how the analysis of the visual cortex leads to new learning algorithms ("sparse coding"). It would be great if we could find further presentations on Deep Learning, Computer Vision, NLP, Recurrent Neural Networks and Large Scale Machine Learning. For a comprehensive reading list see .
* Sparse Coding with an Overcomplete Basis Set: A Strategy Employed by V1? (Johannes Drever) 
* Introduction to RBMs. (Markus Liedl)
* Introduction to Deep Learning theory: what has changed in recent years and why it is attracting so much attention right now (Sergii Khomenko)
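As a numerical teaser for the sparse coding talk, here is a rough sketch (with invented parameters: a fixed random overcomplete dictionary and plain ISTA iterations; Olshausen & Field additionally learn the dictionary, which this toy skips). A signal built from a single atom is represented with a code dominated by that atom:

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.normal(size=(8, 16))            # 16 atoms for 8-dim signals: overcomplete
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
x = 3.0 * D[:, 2]                       # signal built from atom 2 only

def ista(x, D, lam=0.1, steps=500):
    # Iterative shrinkage-thresholding for min 0.5*||x - D a||^2 + lam*||a||_1
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2       # step size from the Lipschitz constant
    for _ in range(steps):
        g = a - (D.T @ (D @ a - x)) / L               # gradient step
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return a

a = ista(x, D)
print(np.argmax(np.abs(a)))             # index of the dominant recovered atom
```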
Probabilistic Data Structures
The next Meetup will be about probabilistic data structures: what they are and what they are used for. It is intended more as an overview of the different data structures than as a dive into sophisticated use cases.
The following article provides a good overview of the topic: 
Wikipedia is also a good starting point 
Some (definitely not all) interesting papers are:
• Bloom filter: Space/Time Trade-offs in Hash Coding with Allowable Errors 
• Skip list: Skip Lists: A Probabilistic Alternative to Balanced Trees 
• Count–min sketch: An Improved Data Stream Summary: The Count-Min Sketch and its Applications 
• Hidden Markov Models: , , 
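As a taste of the talks, here is a minimal Bloom filter sketch (the parameters m and k are chosen arbitrarily for the demo): k hash functions set k bits per key, so membership tests can yield false positives but never false negatives.

```python
import hashlib

class BloomFilter:
    def __init__(self, m=64, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        # Derive k independent-ish hash positions from salted SHA-256
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        return all((self.bits >> pos) & 1 for pos in self._positions(item))

bf = BloomFilter()
bf.add("raft")
print(bf.might_contain("raft"))    # True: no false negatives
print(bf.might_contain("paxos"))   # usually False, but false positives can occur
```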
19:00 - 19:30 Socializing
19:30 - 20:30 Talks
Bloom Filters - Stefan Seelmann
Count-Min Sketch - Alex Petrov
Hidden Markov Models - Juan Miguel Cejula
20:30 - 20:45 Break
20:45 - 21:30 Discussion
For the rest of the even…
Bootstrap Meetup: Consensus Algorithms: Paxos and Raft
Let's meet and talk about Consensus Algorithm Papers:
In Search Of An Understandable Consensus Algorithm: https://ramcloud.stanford.edu/raft.pdf
The Part-Time Parliament: http://research.microsoft.com/en-us/um/people/lamport/pubs/lamport-paxos.pdf
Paxos Made Simple: http://pdos.csail.mit.edu/6.824/papers/paxos-simple.pdf
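To anchor the discussion, here is a heavily simplified, single-process sketch of single-decree Paxos (no failures, networking, or multiple proposers racing; all names invented). Acceptors promise not to honour ballots below the highest one seen, and a proposer needs a majority quorum in both phases; a later ballot must adopt any value already accepted:

```python
class Acceptor:
    def __init__(self):
        self.promised = -1         # highest ballot promised
        self.accepted = None       # (ballot, value) or None

    def prepare(self, ballot):
        if ballot > self.promised:
            self.promised = ballot
            return ("promise", self.accepted)
        return ("nack", None)

    def accept(self, ballot, value):
        if ballot >= self.promised:
            self.promised = ballot
            self.accepted = (ballot, value)
            return "accepted"
        return "nack"

def propose(acceptors, ballot, value):
    # Phase 1: gather promises from a majority
    replies = [a.prepare(ballot) for a in acceptors]
    promises = [r for r in replies if r[0] == "promise"]
    if len(promises) <= len(acceptors) // 2:
        return None
    # Must adopt the value of the highest-ballot accepted proposal, if any
    prior = [p[1] for p in promises if p[1] is not None]
    if prior:
        value = max(prior)[1]
    # Phase 2: ask the quorum to accept
    acks = [a.accept(ballot, value) for a in acceptors].count("accepted")
    return value if acks > len(acceptors) // 2 else None

acceptors = [Acceptor() for _ in range(3)]
print(propose(acceptors, 1, "raft-rules"))  # chosen value
print(propose(acceptors, 2, "other"))       # a later ballot must keep it
```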
Who organises it?
It's community-organised. If you'd like to co-organise, moderate or help in any other way, ping Alex and he'll add you to the admins.
How does it work?
We all read papers offline, then we meet together and start up a discussion, hacking, trying things out, bringing up examples from industry…