Summary and Setup

Welcome


This is a hands-on introduction to the first steps in deep learning, intended for researchers who are familiar with (non-deep) machine learning.

The use of deep learning has seen a sharp increase in popularity and applicability over the last decade. While deep learning can be a useful tool for researchers from a wide range of domains, taking the first steps in the world of deep learning can be somewhat intimidating. This introduction covers the basics of deep learning in a practical and hands-on manner, so that upon completion, you will be able to train your first neural network and understand what next steps to take to improve the model.

We start by explaining the basic concepts of neural networks, and then go through the different steps of a deep learning workflow. You will learn how to prepare data for deep learning, how to implement a basic deep learning model in Python with Keras, how to monitor and troubleshoot the training process, and how to implement different layer types such as convolutional layers.

Checklist

Prerequisites

Learners are expected to have the following knowledge:

  • Basic Python programming skills and familiarity with the Pandas package.
  • Basic knowledge of machine learning, including the following concepts: data cleaning, train/test split, types of problems (regression, classification), overfitting & underfitting, and metrics (accuracy, recall, etc.).

Introduction to artificial neural networks in Python

The Introduction to artificial neural networks in Python lesson takes a different angle to introducing deep learning, focusing on computer vision with an application to medical images.

Introduction to machine learning in Python with scikit-learn

The Introduction to machine learning in Python with scikit-learn lesson introduces practical machine learning using Python. It is a good lesson to follow in preparation for this one, since basic knowledge of machine learning and Python programming skills are required here.

Introduction to text analysis and natural language processing (NLP) in Python

The Introduction to text analysis and natural language processing in Python lesson provides a practical introduction to working with unstructured text data, such as survey responses, clinical notes, academic papers, or historical documents. It covers key natural language processing (NLP) techniques including preprocessing, tokenization, feature extraction (e.g., TF-IDF, word2vec, and BERT), and basic topic modeling. The skills taught in this lesson offer a strong foundation for more advanced topics such as knowledge extraction, working with large text corpora, and building applications that involve large language models (LLMs).

Trustworthy AI: Validity, fairness, explainability, and uncertainty assessments

The Trustworthy AI lesson introduces tools and practices for building and evaluating machine learning models that are fair, transparent, and reliable across multiple data types, including tabular data, text, and images. Learners explore model evaluation, fairness audits, explainability methods (such as linear probes and GradCAM), and strategies for handling uncertainty and detecting out-of-distribution (OOD) data. It is especially relevant for researchers working with NLP, computer vision, or structured data who are interested in integrating ethical and reproducible ML practices into their workflows—including those working with large language models (LLMs) or planning to release models for public or collaborative use.

Intro to AWS SageMaker for predictive ML/AI

The Intro to AWS SageMaker for predictive ML/AI lesson focuses on training and tuning neural networks (and other ML models) using Amazon SageMaker, and is a natural next step for learners who’ve outgrown local setups. If your deep learning models are becoming too large or slow to run on a laptop, SageMaker provides scalable infrastructure with access to GPUs and support for parallelized hyperparameter tuning. Participants learn to use SageMaker notebooks to manage data via S3, launch training jobs, monitor compute usage, and keep experiments cost-effective. While the examples center on small to mid-sized models, the workflow is directly applicable to scaling up deep learning and LLM-related experiments in research.

Software Setup


Discussion

Installing Python

Python is a popular language for scientific computing, and a frequent choice for machine learning as well. To install Python, follow the Beginner’s Guide or head straight to the download page.

Please set up your Python environment at least a day in advance of the workshop. If you encounter problems with the installation procedure, ask your workshop organizers for assistance via e-mail so you are ready to go as soon as the workshop begins.

Installing the required packages


Pip is the package management system built into Python. It should be available on your system once you have installed Python successfully.

Open a terminal (Mac/Linux) or Command Prompt (Windows) and run the following commands.

  1. Create a virtual environment called dl_workshop:

     Mac/Linux:
     python3 -m venv dl_workshop

     Windows:
     py -m venv dl_workshop

  2. Activate the newly created virtual environment:

     Mac/Linux:
     source dl_workshop/bin/activate

     Windows:
     dl_workshop\Scripts\activate

Remember that you need to activate your environment every time you restart your terminal!

  3. Install the required packages:

     Mac/Linux:
     python3 -m pip install jupyter seaborn scikit-learn pandas tensorflow pydot

     Windows:
     py -m pip install jupyter seaborn scikit-learn pandas tensorflow pydot

Note for macOS users: the tensorflow-metal package accelerates the training of machine learning models with TensorFlow on a recent Mac with an Apple Silicon chip (M1/M2/M3). However, its installation is currently broken in the most recent version (as of January 2025); see the developer forum.

Note: TensorFlow also makes Keras available as a module.
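As a quick sanity check, the sketch below imports Keras through the tensorflow namespace; it checks for TensorFlow first with the standard library's importlib, so it also runs (with a message) in an environment where TensorFlow is absent. This is an illustrative snippet, not part of the required setup:

```python
# Keras is bundled with TensorFlow, so no separate "keras" install is needed.
# importlib.util.find_spec reports whether a package is importable.
import importlib.util

if importlib.util.find_spec("tensorflow") is not None:
    from tensorflow import keras  # Keras lives inside the tensorflow package
    print("Keras version:", keras.__version__)
else:
    print("TensorFlow is not installed in this environment")
```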

An optional challenge in episode 2 requires installation of Graphviz and instructions for doing that can be found by following this link.

Starting Jupyter Lab


We will teach using Python in Jupyter Lab, a programming environment that runs in a web browser. Jupyter Lab is compatible with Firefox, Chrome, Safari and Chromium-based browsers. Note that Internet Explorer and Edge are not supported. See the Jupyter Lab documentation for an up-to-date list of supported browsers.

To start Jupyter Lab, open a terminal (Mac/Linux) or Command Prompt (Windows), make sure that you activated the virtual environment you created for this course, and type the command:

jupyter lab

Check your setup


To check whether all packages installed correctly, start a Jupyter notebook in Jupyter Lab as explained above. Run the following lines of code:

PYTHON

import sklearn
print('sklearn version: ', sklearn.__version__)

import seaborn
print('seaborn version: ', seaborn.__version__)

import pandas
print('pandas version: ', pandas.__version__)

import tensorflow
print('Tensorflow version: ', tensorflow.__version__)

This should output the versions of all required packages without giving errors. Most versions will work fine with this lesson, but:

  • For Keras and TensorFlow, the minimum version is 2.12.0
  • For sklearn, the minimum version is 1.2.2
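If you prefer to check these minimums programmatically, a small helper like the hypothetical meets_minimum below compares dotted version strings numerically. This is a minimal sketch that assumes plain numeric versions (a string like "2.12.0rc1" would need the more robust parser in the packaging library):

```python
def meets_minimum(installed: str, minimum: str) -> bool:
    """Return True if a dotted version string meets a minimum,
    comparing up to three numeric components (major, minor, patch)."""
    as_tuple = lambda v: tuple(int(part) for part in v.split(".")[:3])
    return as_tuple(installed) >= as_tuple(minimum)

# Example checks against the minimums above:
print(meets_minimum("2.16.1", "2.12.0"))  # True: new enough TensorFlow
print(meets_minimum("1.1.3", "1.2.2"))    # False: sklearn too old
```

The tuple comparison is what makes this correct: "2.9.0" is numerically older than "2.12.0", even though a plain string comparison would say otherwise.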

Fallback option: cloud environment


If a local installation does not work for you, it is also possible to run this lesson in Binder Hub. This gives you an environment with all the required software and data to run this lesson. Note that nothing you save will be stored, so please copy any files you want to keep. If you are the first person to launch it in the last few days, it can take several minutes to start up; the next person who loads it should find it loads in under a minute. Instructors who intend to use this option should start it themselves shortly before the workshop begins.

Alternatively, you can use Google Colab. If you open a Jupyter notebook there, the required packages are already pre-installed. Note that Google Colab uses Jupyter Notebook instead of Jupyter Lab.

Downloading the required datasets


Download the weather prediction dataset CSV and the Dollar Street dataset (4 files in total).