DAT8 Course Repository

Course materials for General Assembly's Data Science course in Washington, DC (8/18/15 - 10/29/15).

Instructor: Kevin Markham (Data School blog, email newsletter, YouTube channel)


Tuesday | Thursday
--- | ---
8/18: Introduction to Data Science | 8/20: Command Line, Version Control
8/25: Data Reading and Cleaning | 8/27: Exploratory Data Analysis
9/1: Visualization | 9/3: Machine Learning
9/8: Getting Data | 9/10: K-Nearest Neighbors
9/15: Basic Model Evaluation | 9/17: Linear Regression
9/22: First Project Presentation | 9/24: Logistic Regression
9/29: Advanced Model Evaluation | 10/1: Naive Bayes and Text Data
10/6: Natural Language Processing | 10/8: Kaggle Competition
10/13: Decision Trees | 10/15: Ensembling
10/20: Advanced scikit-learn, Clustering | 10/22: Regularization, Regex
10/27: Course Review | 10/29: Final Project Presentation

Python Resources

Course project

Comparison of machine learning models

Comparison of model evaluation procedures and metrics

Advice for getting better at data science

Additional resources


Class 1: Introduction to Data Science

Homework:

  • Work through GA's friendly command line tutorial using Terminal (Linux/Mac) or Git Bash (Windows).
  • Read through this command line reference, and complete the pre-class exercise at the bottom. (There's nothing you need to submit once you're done.)
  • Watch videos 1 through 8 (21 minutes) of Introduction to Git and GitHub, or read sections 1.1 through 2.2 of Pro Git.
  • If your laptop has any setup issues, please work with us to resolve them by Thursday. If your laptop has not yet been checked, you should come early on Thursday, or just walk through the setup checklist yourself (and let us know you have done so).

Resources:


Class 2: Command Line and Version Control

  • Slack tour
  • Review the command line pre-class exercise (code)
  • Git and GitHub (slides)
  • Intermediate command line

Homework:

Git and Markdown Resources:

  • Pro Git is an excellent book for learning Git. Read the first two chapters to gain a deeper understanding of version control and basic commands.
  • If you want to practice a lot of Git (and learn many more commands), Git Immersion looks promising.
  • If you want to understand how to contribute on GitHub, you first have to understand forks and pull requests.
  • GitRef is my favorite reference guide for Git commands, and Git quick reference for beginners is a shorter guide with commands grouped by workflow.
  • Cracking the Code to GitHub's Growth explains why GitHub is so popular among developers.
  • Markdown Cheatsheet provides a thorough set of Markdown examples with concise explanations. GitHub's Mastering Markdown is a simpler and more attractive guide, but is less comprehensive.

Command Line Resources:

  • If you want to go much deeper into the command line, Data Science at the Command Line is a great book. The companion website provides installation instructions for a "data science toolbox" (a virtual machine with many more command line tools), as well as a long reference guide to popular command line tools.
  • If you want to do more at the command line with CSV files, try out csvkit, which can be installed via pip.

Class 3: Data Reading and Cleaning

  • Git and GitHub assorted tips (slides)
  • Review command line homework (solution)
  • Python:
    • Spyder interface
    • Looping exercise
    • Lesson on file reading with airline safety data (code, data, article)
    • Data cleaning exercise
    • Walkthrough of Python homework with Chipotle data (code, data, article)

Homework:

  • Complete the Python homework assignment with the Chipotle data, add a commented Python script to your GitHub repo, and submit a link using the homework submission form. You have until Tuesday (9/1) to complete this assignment. (Note: Pandas, which is covered in class 4, should not be used for this assignment.)
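A minimal starter sketch for this assignment, using only the standard library; it is not the official solution, and the column layout (order_id, quantity, item_name, choice_description, item_price) and file path are assumptions based on the course's chipotle.tsv:

```python
import csv

# Read the TSV into a list of lists (assumed tab-delimited with a header row).
with open('data/chipotle.tsv') as f:
    file_nested_list = [row for row in csv.reader(f, delimiter='\t')]

# Separate the header row from the data rows.
header, data = file_nested_list[0], file_nested_list[1:]

# Example calculation: average price per order (item_price looks like '$2.39 ').
num_orders = len(set(row[0] for row in data))
total_price = sum(float(row[4].replace('$', '')) for row in data)
print(round(total_price / num_orders, 2))
```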

Resources:


Class 4: Exploratory Data Analysis

Homework:

Resources:


Class 5: Visualization

Homework:

  • Your project question write-up is due on Thursday.
  • Complete the Pandas homework assignment with the IMDb data. You have until Tuesday (9/8) to complete this assignment.
  • If you're not using Anaconda, install the Jupyter Notebook (formerly known as the IPython Notebook) using pip. (The Jupyter or IPython Notebook is included with Anaconda.)

Pandas Resources:

  • To learn more Pandas, read this three-part tutorial, or review these two excellent (but extremely long) notebooks on Pandas: introduction and data wrangling.
  • If you want to go really deep into Pandas (and NumPy), read the book Python for Data Analysis, written by the creator of Pandas.
  • This notebook demonstrates the different types of joins in Pandas, for when you need to figure out how to merge two DataFrames.
  • This is a nice, short tutorial on pivot tables in Pandas. (Both joins and pivot tables are sketched in the short example after this list.)
  • For working with geospatial data in Python, GeoPandas looks promising. This tutorial uses GeoPandas (and scikit-learn) to build a "linguistic street map" of Singapore.
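As a companion to the join and pivot-table links above, here is a minimal sketch using two made-up DataFrames (not course data):

```python
import pandas as pd

# Two tiny made-up DataFrames to illustrate joins.
orders = pd.DataFrame({'order_id': [1, 2, 3], 'customer_id': [10, 10, 20]})
customers = pd.DataFrame({'customer_id': [10, 20], 'city': ['DC', 'NYC']})

# Inner join on the shared key; how='left', 'right', or 'outer' also work.
merged = pd.merge(orders, customers, on='customer_id', how='inner')

# Pivot table: count orders per city.
print(merged.pivot_table(index='city', values='order_id', aggfunc='count'))
```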

Visualization Resources:


Class 6: Machine Learning

  • Part 2 of Visualization with Pandas and Matplotlib (notebook)
  • Brief introduction to the Jupyter/IPython Notebook
  • "Human learning" exercise:
  • Introduction to machine learning (slides)

Homework:

  • Optional: Complete the bonus exercise listed in the human learning notebook. It will take the place of any one homework you miss, past or future! This is due on Tuesday (9/8).
  • If you're not using Anaconda, install requests and Beautiful Soup 4 using pip. (Both of these packages are included with Anaconda.)
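A quick way to confirm both packages are importable is a tiny scraping sketch like the one below; the URL is arbitrary, and any simple HTML page will do:

```python
import requests
from bs4 import BeautifulSoup

# Fetch a page and parse its HTML (the URL here is just an illustration).
r = requests.get('https://www.example.com')
soup = BeautifulSoup(r.text, 'html.parser')

# Print the page title and the text of every link on the page.
print(soup.title.text)
print([a.text for a in soup.find_all('a')])
```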

Machine Learning Resources:

IPython Notebook Resources:


Class 7: Getting Data

Homework:

  • Optional: Complete the homework exercise listed in the web scraping code. It will take the place of any one homework you miss, past or future! This is due on Tuesday (9/15).
  • Optional: If you're not using Anaconda, install Seaborn using pip. If you're using Anaconda, install Seaborn by running conda install seaborn at the command line. (Note that some students in past courses have had problems with Anaconda after installing Seaborn.)
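Once Seaborn is installed, a minimal smoke test using one of its bundled example datasets (not course data) might look like this:

```python
import seaborn as sns
import matplotlib.pyplot as plt

# Load a small example dataset bundled with Seaborn.
tips = sns.load_dataset('tips')

# Scatter plot with a fitted regression line.
sns.lmplot(x='total_bill', y='tip', data=tips)
plt.show()
```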

API Resources:

  • This Python script to query the U.S. Census API was created by a former DAT student. It's a bit more complicated than the example we used in class, it's very well commented, and it may provide a useful framework for writing your own code to query APIs. (A bare-bones version of the same request-and-parse pattern is sketched after this list.)
  • Mashape and Apigee allow you to explore tons of different APIs. Alternatively, a Python API wrapper is available for many popular APIs.
  • The Data Science Toolkit is a collection of location-based and text-related APIs.
  • API Integration in Python provides a very readable introduction to REST APIs.
  • Microsoft's Face Detection API, which powers How-Old.net, is a great example of how a machine learning API can be leveraged to produce a compelling web application.
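For reference, the core request-and-parse pattern behind all of these APIs is small; the sketch below uses GitHub's public repository endpoint purely as a freely accessible example:

```python
import requests

# Query a public JSON API (GitHub's repository endpoint, as an example).
r = requests.get('https://api.github.com/repos/justmarkham/DAT8')
r.raise_for_status()   # fail loudly on HTTP errors
data = r.json()        # parse the JSON response into a dictionary

print(data['full_name'], data['stargazers_count'])
```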

Web Scraping Resources:


Class 8: K-Nearest Neighbors

Homework:

KNN Resources:

Seaborn Resources:


Class 9: Basic Model Evaluation

Homework:

Model Evaluation Resources:

Reproducibility Resources:


Class 10: Linear Regression

Homework:

  • Your first project presentation is on Tuesday (9/22)! Please submit a link to your project repository (with slides, code, data, and visualizations) by 6pm on Tuesday.
  • Complete the homework assignment with the Yelp data. This is due on Thursday (9/24).
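For orientation, here is a minimal scikit-learn linear regression sketch; the DataFrame and column names are placeholders, not the actual Yelp homework columns:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Placeholder data: swap in the Yelp DataFrame and real column names.
df = pd.DataFrame({'feature_1': [1, 2, 3, 4], 'feature_2': [0, 1, 0, 1],
                   'response': [2.0, 3.1, 3.9, 5.2]})

X = df[['feature_1', 'feature_2']]   # feature matrix
y = df['response']                   # response vector

model = LinearRegression()
model.fit(X, y)
print(model.intercept_, model.coef_)   # fitted intercept and coefficients
```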

Linear Regression Resources:

Other Resources:


Class 11: First Project Presentation

  • Project presentations!

Homework:


Class 12: Logistic Regression

Homework:

Logistic Regression Resources:

  • To go deeper into logistic regression, read the first three sections of Chapter 4 of An Introduction to Statistical Learning, or watch the first three videos (30 minutes) from that chapter.
  • For a math-ier explanation of logistic regression, watch the first seven videos (71 minutes) from week 3 of Andrew Ng's machine learning course, or read the related lecture notes compiled by a student.
  • For more on interpreting logistic regression coefficients, read this excellent guide by UCLA's IDRE and these lecture notes from the University of New Mexico.
  • The scikit-learn documentation has a nice explanation of what it means for a predicted probability to be calibrated.
  • Supervised learning superstitions cheat sheet is a very nice comparison of four classifiers we cover in the course (logistic regression, decision trees, KNN, Naive Bayes) and one classifier we do not cover (Support Vector Machines).
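Tying a few of these resources together, here is a minimal sketch of fitting a logistic regression in scikit-learn, checking a predicted probability, and converting a coefficient to an odds ratio (toy data, not course data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy binary-classification data with a single feature.
X = np.array([[1], [2], [3], [4], [5], [6]])
y = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression()
model.fit(X, y)

# Predicted probability of class 1 for a new observation.
print(model.predict_proba([[3.5]])[:, 1])

# exp(coefficient) gives the odds ratio per unit increase in the feature.
print(np.exp(model.coef_))
```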

Confusion Matrix Resources:


Class 13: Advanced Model Evaluation

Homework:

ROC Resources:

Cross-Validation Resources:

Other Resources:


Class 14: Naive Bayes and Text Data

Homework:

  • Complete another homework assignment with the Yelp data. This is due on Tuesday (10/6).
  • Confirm that you have TextBlob installed by running import textblob from within your preferred Python environment. If it's not installed, run pip install textblob at the command line (not from within Python).
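Once TextBlob imports successfully, a quick smoke test might look like this (the review text is made up):

```python
from textblob import TextBlob

# Quick smoke test: score the sentiment of a made-up review.
# (Some TextBlob features also need: python -m textblob.download_corpora)
review = TextBlob('The burritos were excellent but the service was slow.')
print(review.sentiment.polarity)   # ranges from -1 (negative) to 1 (positive)
```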

Resources:

  • Sebastian Raschka's article on Naive Bayes and Text Classification covers the conceptual material from today's class in much more detail.
  • For more on conditional probability, read these slides, or read section 2.2 of the OpenIntro Statistics textbook (15 pages).
  • For an intuitive explanation of Naive Bayes classification, read this post on airport security.
  • For more details on Naive Bayes classification, Wikipedia has two excellent articles (Naive Bayes classifier and Naive Bayes spam filtering), and Cross Validated has a good Q&A.
  • When applying Naive Bayes classification to a dataset with continuous features, it is better to use GaussianNB rather than MultinomialNB. This notebook compares their performance on such a dataset. Wikipedia has a short description of Gaussian Naive Bayes, as well as an excellent example of its usage. (A minimal MultinomialNB text-classification sketch follows this list.)
  • These slides from the University of Maryland provide more mathematical details on both logistic regression and Naive Bayes, and also explain how Naive Bayes is actually a "special case" of logistic regression.
  • Andrew Ng has a paper comparing the performance of logistic regression and Naive Bayes across a variety of datasets.
  • If you enjoyed Paul Graham's article, you can read his follow-up article on how he improved his spam filter and this related paper about state-of-the-art spam filtering in 2004.
  • Yelp has found that Naive Bayes is more effective than Mechanical Turks at categorizing businesses.
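For reference, the vectorize-then-classify pattern for text can be sketched in a few lines of scikit-learn (tiny made-up corpus, not the Yelp data):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny made-up corpus with binary labels (1 = positive, 0 = negative).
docs = ['loved the food', 'great service', 'terrible food', 'awful wait']
labels = [1, 1, 0, 0]

# Convert the text to a document-term matrix of token counts.
vect = CountVectorizer()
X = vect.fit_transform(docs)

# Fit Multinomial Naive Bayes and classify a new document.
nb = MultinomialNB()
nb.fit(X, labels)
print(nb.predict(vect.transform(['great food'])))
```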

Class 15: Natural Language Processing

  • Yelp review text homework due (solution)
  • Natural language processing (notebook)
  • Introduction to our Kaggle competition
    • Create a Kaggle account, join the competition using the invitation link, download the sample submission, and then submit the sample submission (which will require SMS account verification).

Homework:

  • Your draft paper is due on Thursday (10/8)! Please submit a link to your project repository (with paper, code, data, and visualizations) before class.
  • Watch Kaggle: How it Works (4 minutes) for a brief overview of the Kaggle platform.
  • Download the competition files, move them to the DAT8/data directory, and make sure you can open the CSV files using Pandas. If you have any problems opening the files, you probably need to turn off real-time virus scanning (especially Microsoft Security Essentials).
  • Optional: Come up with some theories about which features might be relevant to predicting the response, and then explore the data to see if those theories appear to be true.
  • Optional: Watch my project presentation video (16 minutes) for a tour of the end-to-end machine learning process for a Kaggle competition, including feature engineering. (Or, just read through the slides.)

NLP Resources:


Class 16: Kaggle Competition

Homework:

  • You will be assigned to review the project drafts of two of your peers. You have until Tuesday 10/20 to provide them with feedback, according to the peer review guidelines.
  • Read A Visual Introduction to Machine Learning for a brief overview of decision trees.
  • Download and install Graphviz, which will allow you to visualize decision trees in scikit-learn. (A minimal export sketch follows this list.)
    • Windows users should also add Graphviz to your path: Go to Control Panel, System, Advanced System Settings, Environment Variables. Under system variables, edit "Path" to include the path to the "bin" folder, such as: C:\Program Files (x86)\Graphviz2.38\bin
  • Optional: Keep working on our Kaggle competition! You can make up to 5 submissions per day, and the competition doesn't close until 6:30pm ET on Tuesday 10/27 (class 21).
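Once Graphviz is installed and on your path, the export step looks roughly like this; the built-in iris dataset is used purely as an example:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_graphviz

# Fit a small tree on the built-in iris dataset (example only).
iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=1)
tree.fit(iris.data, iris.target)

# Write a Graphviz DOT file, then render it at the command line with:
#   dot -Tpng tree.dot -o tree.png
export_graphviz(tree, out_file='tree.dot', feature_names=iris.feature_names)
```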

Resources:


Class 17: Decision Trees

Homework:

Resources:

  • scikit-learn's documentation on decision trees includes a nice overview of trees as well as tips for proper usage.
  • For a more thorough introduction to decision trees, read section 4.3 (23 pages) of Introduction to Data Mining. (Chapter 4 is available as a free download.)
  • If you want to go deep into the different decision tree algorithms, this slide deck contains A Brief History of Classification and Regression Trees.
  • The Science of Singing Along contains a neat regression tree (page 136) for predicting the percentage of an audience at a music venue that will sing along to a pop song.
  • Decision trees are common in the medical field for differential diagnosis, such as this classification tree for identifying psychosis.

Class 18: Ensembling

Resources:


Class 19: Advanced scikit-learn and Clustering

Homework:

scikit-learn Resources:

Clustering Resources:


Class 20: Regularization and Regular Expressions

Homework:

  • Your final project is due next week!
  • Optional: Make your final submissions to our Kaggle competition! It closes at 6:30pm ET on Tuesday 10/27.
  • Optional: Read this classic paper, which may help you to connect many of the topics we have studied throughout the course: A Few Useful Things to Know about Machine Learning.
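Since the lesson links for this class are not reproduced here, a minimal sketch contrasting L1 (Lasso) and L2 (Ridge) regularization on toy data may be useful:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

# Toy regression data: 3 features, only the first is truly informative.
rng = np.random.RandomState(1)
X = rng.randn(100, 3)
y = 2 * X[:, 0] + rng.randn(100) * 0.1

# L1 regularization tends to zero out irrelevant coefficients entirely;
# L2 regularization shrinks all coefficients toward zero instead.
print(Lasso(alpha=0.1).fit(X, y).coef_)
print(Ridge(alpha=1.0).fit(X, y).coef_)
```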

Regularization Resources:

Regular Expressions Resources:
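
And as a quick regex refresher, the core re-module pattern (made-up example string):

```python
import re

# Extract all dollar amounts from a string using a raw-string pattern.
text = 'Burrito $8.50, chips $2.15, drink $1.09'
prices = re.findall(r'\$(\d+\.\d{2})', text)
print(prices)   # ['8.50', '2.15', '1.09']
```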


Class 21: Course Review and Final Project Presentation

Resources:


Class 22: Final Project Presentation


Additional Resources

Tidy Data

Databases and SQL

Recommendation Systems
