Want to learn the latest way to tech super hard? Or are you itching to know where all those awesome computer science ideas from the '60s led? Come join the international phenomenon that is Papers We Love! We meet every month to share and discuss papers that have been influential in our own lives and to learn what has driven others. It's a ton of fun and a great way to meet other paper-minded folks.
The Denver Chapter meets every fourth Thursday of the month at Code Talent.
Papers We Love has a Code of Conduct. Please contact one of the Meetup's organizers if anyone is not following it. Be good to each other and to the PWL community!

Chapter Details
Sign-up: Please RSVP for meetings via Meetup.com
Contact: harry AT thoughtfulsoftware DOT com
Organizers: Harry Brumleve (do you want to help? :-))

Sponsors
Code Talent - They give us a great place to present, pizza, and beer!
HyprLoco - They give us beer, too!
Join us on March 22nd when Thomas discusses the principles of Data Tidying, a small, but important, component of data cleaning that can streamline the process of working with messy data.
All those hip and trendy machine learning algorithms may be cool, but they suffer from GIGO: Garbage In, Garbage Out. Data scientists spend a lot of time cleaning their data so it is ready for analysis.
Just like preparing your guest bathroom before company comes to visit, you're going to have to get your hands dirty. You also don't want to spend any more time cleaning than necessary.
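As a small, hedged illustration of what tidying can look like (assuming pandas; the table and column names here are made up for the example), one common step is reshaping a "wide" table so that each row holds a single observation:

```python
import pandas as pd

# A "messy" wide table: one column per year (hypothetical data).
messy = pd.DataFrame({
    "country": ["US", "FR"],
    "2022": [10, 20],
    "2023": [30, 40],
})

# Tidy it: melt into one row per (country, year) observation,
# so every variable is a column and every observation is a row.
tidy = messy.melt(id_vars="country", var_name="year", value_name="count")
print(tidy)
```

After the melt, downstream grouping, filtering, and plotting all work on the same uniform shape instead of special-casing each year column.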
Hear Tracy Allison Altman (@UglyResearch, https://twitter.com/UglyResearch) talk about a landmark behavioral economics paper, Tversky and Kahneman's Judgment under Uncertainty: Heuristics and Biases (https://people.hss.caltech.edu/~camerer/Ec101/JudgementUncertainty.pdf). Tracy will share how this paper has influenced her work, and what it tells us about bias in algorithms and AI. The authors' insights challenged conventional decision-making, and inspired new approaches to cognitive science, choice architecture, social policy, and software design.…
You don't have to read the paper beforehand ... just show up and take it all in! :-)
Come see Aysylu Greenberg (@aysylu22) talk about one of her favorite papers:
The paper describes an interesting approach to data replication that allows finer control over both the probability of data loss and the amount of data lost when it does occur. In addition, we'll discuss a technique for moving randomization from runtime to initialization while keeping the same benefits. After discussing the paper's contributions, we'll turn to the pragmatic aspects of this approach.
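As a minimal sketch of the "randomize at initialization, not at runtime" idea (this is an illustration, not the paper's actual algorithm; all names and parameters here are made up), a system can precompute a small pool of candidate replica sets once, then place each chunk deterministically into one of them:

```python
import random

def build_replica_sets(nodes, replication_factor, num_sets, seed=0):
    """Randomize once at initialization: precompute a limited pool of
    candidate replica sets instead of sampling fresh nodes per chunk.
    Fewer distinct sets means fewer node combinations whose joint
    failure can lose a chunk."""
    rng = random.Random(seed)
    return [rng.sample(nodes, replication_factor) for _ in range(num_sets)]

def place_chunk(chunk_id, replica_sets):
    """At runtime, placement is deterministic: map the chunk id onto
    one of the precomputed sets. No random choices are made here."""
    return replica_sets[hash(chunk_id) % len(replica_sets)]

nodes = [f"node{i}" for i in range(9)]
sets = build_replica_sets(nodes, replication_factor=3, num_sets=4)
replicas = place_chunk("chunk-42", sets)
print(replicas)  # three nodes drawn from one precomputed set
```

The knob here is `num_sets`: a small pool concentrates risk into fewer node combinations (rare but larger losses), while a large pool approaches fully random placement (more frequent but smaller losses), which is the kind of trade-off the talk's description points at.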
Dinner will be provided (sponsors welcome to help out!) and afterwards we'll set out to one(?) of the breweries around the corner.…