CAS about artificial intelligence made at BFH in 2020 and 2021.


Resources

Interactive course script

Script as notebook

Data

Machine learning data repository

IRIS

EPFL

Videos

Courses video recording

Tutorial

Colab

Linear Regression in Python with Cost function and Gradient descent

Deep Learning (CAS machine intelligence, 2019)

Bluelife AI

Book

  1. Machine Learning, Tom Mitchell, McGraw Hill, 1997: http://www.cs.cmu.edu/~tom/mlbook.html
  2. Deep Learning, Ian Goodfellow: https://www.deeplearningbook.org/contents/mlp.html
  3. Stanford cheat sheets about AI: https://stanford.edu/~shervine/teaching/
  4. Deep Reinforcement Learning, Miguel Morales: https://drive.google.com/file/d/1VWO0Ji-iK5Z7iqe8XwgOivZLC_SCXIL8/view?usp=sharing

Glossary

Reinforcement learning glossary

Graphical glossary

  1. Sample: a single row of data (an instance, an observation, an input vector, or a feature vector).
  2. Batch size: the number of samples to work through before updating the internal model parameters.
  3. Epoch: the number of times the learning algorithm works through the entire training dataset.
  4. Gradient descent: an optimization algorithm used to find the values of the parameters (coefficients) of a function f that minimize a cost function (see the sketch below).
  5. Deep learning algorithms

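A minimal NumPy sketch tying these terms together (toy data and hyperparameters are invented for illustration):

import numpy as np

# Toy dataset: 1000 samples (rows), 3 features each, linear target plus noise
X = np.random.randn(1000, 3)
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * np.random.randn(1000)

w = np.zeros(3)      # model parameters
lr = 0.01            # learning rate
batch_size = 100     # samples per parameter update
epochs = 5           # full passes over the training set

for epoch in range(epochs):
    idx = np.random.permutation(len(X))        # reshuffle every epoch
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]  # one batch of samples
        err = X[batch] @ w - y[batch]
        grad = 2 * X[batch].T @ err / len(batch)  # gradient of the MSE
        w -= lr * grad                            # one gradient descent step

print(w)  # should approach [2.0, -1.0, 0.5]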

Tricks

Update Python packages

pip list --outdated
pip3 list --outdated --format=freeze | grep -v '^\-e' | cut -d = -f 1 | xargs -n1 pip3 install -U

2020-10-20 1. Introduction to AI basic techniques: gradient descent, partial derivatives, matrix algebra, AI frameworks

Theory

General AI principle

Y: A label is the thing we're predicting

X: A feature is an input variable (can be a list)

x: An example is a particular instance of data (a vector of values for a feature)

A labeled example includes both feature(s) and the label.

An unlabeled example contains features but not the label.

Once we've trained our model with labeled examples, we use that model to predict the label on unlabeled examples.

General AI function: Y = wX + b

Notes

Doc
Neuron General AI function
General AI function Chain rule
m derivative MSE / loss
Activation function

Home work

Compute a gradient descent for a complex function and determine m and b iteratively: Colab
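For Y = mX + b with an MSE loss, the chain rule gives dMSE/dm = -2 * mean(x * (y - (m*x + b))) and dMSE/db = -2 * mean(y - (m*x + b)). A sketch of the iteration on synthetic data (the graded version lives in the linked Colab):

import numpy as np

# Synthetic data around y = 3x + 2
x = np.linspace(0, 10, 50)
y = 3 * x + 2 + np.random.randn(50)

m, b, lr = 0.0, 0.0, 0.01
for step in range(2000):
    pred = m * x + b
    dm = -2 * np.mean(x * (y - pred))  # partial derivative w.r.t. m
    db = -2 * np.mean(y - pred)        # partial derivative w.r.t. b
    m -= lr * dm
    b -= lr * db

print(m, b)  # should approach 3 and 2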

2020-10-26 2. TensorFlow and PyTorch frameworks

Theory

Data loading

Loading data in Colab

TensorFlow introduction

TensorFlow quickstart for beginners

Predict fuel efficiency with a TensorFlow regression. Dataset: MPG

PyTorch introduction

PyTorch quickstart for beginners

Notes

Doc
Neocortex
Activation function
AI neural network: encoder

Homework

PyTorch two-layer NN

  • Change the values N, D_in, H, D
  • Add a new layer/activation function with hidden size H (see the sketch after this list)
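A minimal sketch of such a network (dimension names follow the classic PyTorch tutorial; the middle Linear layer is the requested extra layer):

import torch

N, D_in, H, D_out = 64, 1000, 100, 10   # batch, input, hidden, output sizes
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, H),       # added layer with hidden size H
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)
loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

for step in range(500):
    loss = loss_fn(model(x), y)   # forward pass
    optimizer.zero_grad()
    loss.backward()               # backward pass
    optimizer.step()              # parameter update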

2020-11-03 3. Fundamental neural networks: MLP and autoencoder

Theory

PyTorch gradients - trainer notebook / exercise notebook

PyTorch linear regressions - trainer notebook / exercise notebook

PyTorch NN - trainer notebook / exercise notebook

PyTorch datasets management - trainer notebook / exercise notebook

TensorFlow quickstart for experts - trainer notebook / exercise notebook

TensorFlow autoencoder - trainer notebook / exercise notebook

Homework

Migrate the Iris exercise to TensorFlow: https://colab.research.google.com/drive/1gPMNk24EuvBKun5oCfV_mGrCtUA2rmoy?usp=sharing
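One possible skeleton for the migration (a sketch assuming the standard 4-feature, 3-class Iris data from scikit-learn, not the graded solution):

import tensorflow as tf
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)   # 150 samples, 4 features, 3 classes

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),   # one unit per class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",  # integer labels
              metrics=["accuracy"])
model.fit(X, y, epochs=50, validation_split=0.2)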

2020-11-10 4. Anomaly detection with autoencoders

Theory

TensorFlow stacked MLP autoencoder - trainer notebook / exercise notebook

Notes

Notes as PDF

Homework

Original autoencoder for ECG validation

A Gentle Introduction to Anomaly Detection with Autoencoders

The goal of the exercise is to use another dataset: anomaly detection for credit card transactions with an autoencoder
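The core recipe, as a sketch on stand-in data (the real homework uses the credit card dataset): train the autoencoder on normal samples only, then flag samples whose reconstruction error exceeds a threshold derived from the training errors.

import numpy as np
import tensorflow as tf

# Stand-in for the normal (non-fraud) training samples, 29 features each
x_normal = np.random.randn(1000, 29).astype("float32")

autoencoder = tf.keras.Sequential([
    tf.keras.layers.Dense(14, activation="relu", input_shape=(29,)),  # encoder
    tf.keras.layers.Dense(29),                                        # decoder
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_normal, x_normal, epochs=10, verbose=0)  # reconstruct inputs

# Threshold from the training reconstruction error (mean + 1 std is one choice)
err = np.mean((autoencoder.predict(x_normal) - x_normal) ** 2, axis=1)
threshold = err.mean() + err.std()

def is_anomaly(x):
    e = np.mean((autoencoder.predict(x) - x) ** 2, axis=1)
    return e > threshold   # True where reconstruction is unusually poor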

2020-11-17 5. Variational Autoencoder

Theory

Variational autoencoder - trainer notebook / exercise notebook / video

Dense function in TensorFlow

Exercise: use the electrocardiogram data with the previous variational autoencoder

Kullback-Leibler divergence / video

6 Different Ways of Implementing VAE with TensorFlow 2 and TensorFlow Probability
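The two VAE-specific pieces, sketched in TensorFlow (function names are illustrative): the reparameterization trick, which keeps sampling differentiable, and the closed-form KL divergence between N(mu, sigma^2) and N(0, 1).

import tensorflow as tf

def reparameterize(mu, log_var):
    # z = mu + sigma * eps with eps ~ N(0, 1); gradients flow through mu, sigma
    eps = tf.random.normal(shape=tf.shape(mu))
    return mu + tf.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    # KL( N(mu, sigma^2) || N(0, 1) ), summed over the latent dimensions
    return -0.5 * tf.reduce_sum(1 + log_var - tf.square(mu) - tf.exp(log_var),
                                axis=1)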

Notes

Notes as PDF

Homework

Homework description

  1. Based on the provided notebook, train a VAE with the ECG data.

  2. Train the VAE.

  3. Anomaly detection with the VAE, comparing three metrics - accuracy, precision and recall - against the vanilla autoencoder from exercise 4.

Robust Variational Autoencoder trainer notebook / my homework notebook

Very good homework

2020-11-24 6. CNN

Theory

Convolution is a series of scalar products.
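A sketch making that literal: every output element of a (valid) 2D convolution is the scalar product of the kernel with one patch of the input (strictly a cross-correlation, as is the deep learning convention).

import numpy as np

def conv2d_valid(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * kernel)   # one scalar product per output
    return out

conv2d_valid(np.eye(5), np.array([[1.0, -1.0], [1.0, -1.0]]))  # tiny edge filter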

Convolution

Notebook for understanding convolutions

Tensorboard / My tensorboard

CNN basic

CNN advanced

Softmax: computes a probability for each element of a vector
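As a one-function sketch (subtracting the maximum is the usual numerical-stability trick):

import numpy as np

def softmax(v):
    e = np.exp(v - np.max(v))   # shift for numerical stability
    return e / e.sum()          # probabilities that sum to 1

softmax(np.array([2.0, 1.0, 0.1]))  # -> approx. [0.659, 0.242, 0.099]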

CIFAR10 explanation

Notes

Notes as PDF

Homework

CNN homework #1

CNN homework with preprocessing -> finally cancelled

CNN Advanced homework #2 - With CIFAR 10

Conv Variational Autoencoder homework - With CIFAR 10 -> finally cancelled

Robust Conv Variational Autoencoder homework - with MNIST - Trainer notebook / Homework #3

2020-12-01 7. CNN - Transfer Learning

Theory

VGG16 – Convolutional Network for Classification and Detection

Notes

Notes as PDF

Homework

07 TF2.0 Transfer Learning - Special - Trainer notebook / My notebook

07 TF2.0 Transfer Learning with Data Augmentation - Classic - Trainer notebook / My notebook

07 TF2.0 Transfer Learning - Special CIFAR / Trainer notebook

2020-12-08 8. GAN

Theory

Generative adversarial network

A Friendly Introduction to Generative Adversarial Networks (GANs) - Video

A guide to convolution arithmetic for deep learning

Conv2D explanation

Notes

Notes as PDF

Homework

Dense GAN - My notebook

DCGAN - CIFAR - My notebook

2020-12-15 9. RNN

Theory

Cheatsheet recurrent neural networks

RNN for calligraphy / Source code

RNN Video - LSTM Video

LSTM illustrated

RNN with PyTorch - Trainer notebook

LSTM with PyTorch - Trainer notebook

Stock return - Trainer notebook

Notes

Notes as PDF

Homework

RNN with PyTorch - Homework

Stock prediction by Boris Banushev

2021-01-05 10. RL fundamentals

Theory

Reinforcement Learning algorithms — an intuitive overview

Lilian Weng blog

OpenAI Gym - CartPole

Setup

We will work with Anaconda and PyCharm.

A new Python environment, casaai2020, has been created with Anaconda.

Install gym and pygame:

source activate casaai2020
pip install gym
pip install pygame
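A quick sanity check that the environment runs (this uses the gym API of the course era; newer gym releases changed the reset/step signatures):

import gym

env = gym.make("CartPole-v1")
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    action = env.action_space.sample()           # random policy
    obs, reward, done, info = env.step(action)   # old 4-tuple step API
    total_reward += reward
print(total_reward)
env.close()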

Categories of machine learning

Categories Machine Learning

Use cases of machine learning

Categories of reinforcement learning

Categories Reinforcement Learning

Notes

Notes as PDF

Homework

Take the trainer notebook and make it work in PyCharm -> export it to .py. To make it work with PyCharm, several packages had to be installed with pip: opencv, opencv-python, torchvision, cmake and atari-py

My notebook

Tip: launch TensorBoard from the PyCharm console:

tensorboard --logdir=runs

2021-01-12 11. RL Cross-Entropy

Theory

GYM environments - Trainer notebook / My notebook

Cross-entropy example - CartPole agent and mountain car agent

GYM environments

Tip: list the library versions of the Python environment:

source activate casaai2020
pip list

Q-Learning intro

Wikipedia for Q-Learning

Carnegie Mellon course

Q-Learning video

Gamma value

Policy Gradient with gym-MiniGrid

Notes

Notes as PDF

Homework

Moodle description / mini-grid code

2021-01-19 12. RL Value iteration

Theory

Keras RL2 / trainer notebook

Comments about the value iteration example (see the sketch after this list):

  • Observations: the possible agent positions in the 4x4 grid: 16 possibilities.
  • Actions: the possible actions taken by the agent: up, down, left and right.
  • Rewards: the possible rewards, depending on the current state, the next state and the action: 16 * 16 * 4.
  • Transitions: the possible paths, depending on the current state, the next state and the action: 16 * 16 * 4.
  • Values: the Q-values, depending on the state and the action: 16 * 4.
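A hedged value-iteration sketch over exactly these quantities, using FrozenLake's 4x4 grid (gamma = 0.9 is an assumed discount; env.P exposes the transition and reward tables in the gym version used in class):

import gym
import numpy as np

env = gym.make("FrozenLake-v0")   # 16 states (the 4x4 grid), 4 actions
gamma = 0.9
n_s, n_a = env.observation_space.n, env.action_space.n

V = np.zeros(n_s)
for sweep in range(200):
    for s in range(n_s):
        # env.P[s][a] lists (probability, next_state, reward, done) tuples
        V[s] = max(
            sum(p * (r + gamma * V[s2]) for p, s2, r, _ in env.P[s][a])
            for a in range(n_a)
        )

# Greedy policy read off the converged state values
policy = [
    max(range(n_a),
        key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r, _ in env.P[s][a]))
    for s in range(n_s)
]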

GridWorld env

Notes

Notes as PDF

Notes as PDF

Homework

RL2 notebook

Finance processing with AI

PyTorch uncertainty estimation - trainer notebook / my notebook / video

2021-01-26 13. Tabular Q-Learning

Theory

Tabular Q-Learning source code
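The update at the heart of tabular Q-learning, sketched with assumed hyperparameters: move Q(s, a) a step alpha toward r + gamma * max_a' Q(s', a').

import collections
import random

alpha, gamma, epsilon = 0.1, 0.99, 0.1
Q = collections.defaultdict(float)   # Q[(state, action)] -> estimated value

def update(s, a, r, s_next, actions):
    # Temporal-difference update toward the bootstrapped target
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

def act(s, actions):
    # Epsilon-greedy: explore with probability epsilon, otherwise exploit
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])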

Holt-Winters forecasting

Theta model

Notes

Notes as PDF

Homework

Avocado exercise - trainer notebook / my notebook

2021-02-02 14. Deep Q Learning

Theory

Advanced forecasting with LSTM and verification of results using multi-step forecasting (homework) - trainer notebook / my notebook

DQN Video lesson

DQN Cart pole
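The piece that distinguishes DQN from the tabular version, sketched in PyTorch (tensor names are illustrative): the TD target is computed from a separate, periodically synced target network over batches sampled from a replay buffer.

import torch
import torch.nn.functional as F

def dqn_loss(net, target_net, batch, gamma=0.99):
    states, actions, rewards, next_states, dones = batch   # replay-buffer sample
    # Q-values of the actions actually taken (actions must be int64)
    q = net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrapped target from the frozen target network
        q_next = target_net(next_states).max(dim=1).values
        target = rewards + gamma * q_next * (1 - dones.float())
    return F.mse_loss(q, target)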

Notes

Notes as PDF

Homework

Exercise - my notebook

Exercise - my notebook with price only

Exercise - my notebook with cryptocurrencies

Exercise - my second notebook with cryptocurrencies

2021-02-09 15. Reinforce

Theory

Reinforce example with lunar lander

An Intuitive Explanation of Policy Gradient

RL book

Claude Shannon entropy computation
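Shannon entropy as a one-function sketch (log base 2 gives bits; the natural log is the common choice for the policy-entropy bonus):

import numpy as np

def entropy(p, base=2):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                                  # treat 0 * log(0) as 0
    return -(p * np.log(p)).sum() / np.log(base)

entropy([0.5, 0.5])   # -> 1.0 bit
entropy([0.25] * 4)   # -> 2.0 bits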

Frank Rosenblatt

Notes

Notes as PDF

Homework

2021-02-16 16. A2C

Theory

Reinforcement learning cheat sheet

Cheat sheet 1

Cheat sheet 2

Reinforcement learning book

Notes

Notes as PDF

Homework

Sequence modeling with attention - trainer notebook with ECG data - my notebook

Dynamic Content Personalization Using LinUCB - trainer notebook

2021-02-23 17. Twin Delayed Deep Deterministic Policy Gradient - TD3

Theory

Mastering Continuous Robotic Control with TD3 | Twin Delayed Deep Deterministic Policy Gradients video

Reinforcement Learning: An Introduction (Stanford)

Colab cheetah / other implementation

Graphical representation of TD3

Use cases of TD3 (see the sketch after this list):

  • machine control
  • trading forecaster
  • sensor control/management (example of insulin: measure = state / inject insulin = action)
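Two of the TD3 tricks condensed into a hedged sketch (network and tensor names are illustrative; the third trick, delayed actor updates, simply means updating the actor less often than the critics): twin critics combined with a min, and target policy smoothing via clipped noise.

import torch

def td3_target(critic1_t, critic2_t, actor_t, rewards, next_states, dones,
               gamma=0.99, noise_std=0.2, noise_clip=0.5, max_action=1.0):
    with torch.no_grad():
        a_next = actor_t(next_states)
        # Target policy smoothing: clipped Gaussian noise on the target action
        noise = (torch.randn_like(a_next) * noise_std).clamp(-noise_clip,
                                                             noise_clip)
        a_next = (a_next + noise).clamp(-max_action, max_action)
        # Twin critics: taking the minimum curbs Q-value overestimation
        q_next = torch.min(critic1_t(next_states, a_next),
                           critic2_t(next_states, a_next))
        return rewards + gamma * (1 - dones.float()) * q_next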

Notes

Notes as PDF

Homework

2021-03-02 18. Evolution strategy

Theory

Evolution strategies

Blog about evolution strategies

Colab Evolution Strategies Supervised / my notebook

Colab Evolution Strategies Half Cheetah
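A bare-bones evolution strategies sketch (toy objective; population size and noise scale are arbitrary choices): perturb the parameters with Gaussian noise, score each perturbation, and step in the score-weighted average direction of the noise.

import numpy as np

def fitness(w):
    return -np.sum((w - 3.0) ** 2)   # toy objective, maximal at w = [3, ..., 3]

w = np.zeros(5)
sigma, lr, pop = 0.1, 0.02, 50
for gen in range(300):
    noise = np.random.randn(pop, 5)
    scores = np.array([fitness(w + sigma * n) for n in noise])
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)  # normalize
    w += lr / (pop * sigma) * noise.T @ scores   # estimated gradient step

print(w)  # should approach [3, 3, 3, 3, 3]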

Notes

Notes as PDF

Homework

Bipedal walker

Regression with evolution strategies - my notebook

Regression with evolution strategies and PyTorch - solution

2021-03-09 19. RL with Trend Following Strategy

Theory

Policy gradient algorithms

Facebook AI prophet

Neural prophet

Trend Following Strategy (SMA) - trainer notebook
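The SMA signal itself is simple; a sketch with pandas on stand-in prices (window lengths are arbitrary): go long when the short moving average is above the long one.

import numpy as np
import pandas as pd

# Stand-in price series; the notebooks use real market data
prices = pd.Series(np.cumsum(np.random.randn(500)) + 100.0)

sma_short = prices.rolling(20).mean()
sma_long = prices.rolling(50).mean()

position = (sma_short > sma_long).astype(int)               # 1 = long, 0 = flat
strategy_returns = position.shift(1) * prices.pct_change()  # enter on next bar
print(strategy_returns.sum())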

RL with TFS - trainer notebook

Q-Learning Algo Trader - trainer notebook

Notes

Notes as PDF

Homework

Trend Following Strategy (SMA) - my notebook

Trend Following Strategies revisited, with env and agent - my notebook

Q-Learning trader - my notebook

DQN trader - my notebook

2021-03-16 20. Q-Learning Lab

Theory

Deep reinforcement learning book

Grokking Deep Reinforcement Learning

Notes

Notes as PDF

Homework

Exam 2020

2021-03-23 21. Exam preparation I

2 exercises: 1 deep learning / 1 reinforcement learning

Introduction to gradients and automatic differentiation - my notebook

Basic training loops

Basic training loops - multidimensional

LSTM & GRU in diagrams

Notes

Notes as PDF

2021-03-30 22. Exam preparation II - reinforcement learning

  • The Markov property: the probability of the next state, given the current state and current action, is the same as if you were given the entire history of interactions (states and actions).
  • | means "given"
  • The transition function is defined as the probability of transitioning to state s' at time step t, given that action a was selected in state s at the previous time step t-1. Since these are probabilities, the sum over all possible next states must be 1. That holds for all states s in the set of states S, and all actions a in the set of actions available in state s.
  • The reward function can be defined as a function that takes in a state-action pair: it is the expectation of the reward at time step t, given the state-action pair at the previous time step. It can also be defined as a function that takes a full transition tuple s, a, s', again as an expectation, but now conditioned on that tuple. The reward at time step t comes from the set of all rewards R, which is a subset of the real numbers. (See the formulas after this list.)
  • MDP: S, A, T, R, S_theta, gamma, horizon
  • POMDP: S, A, T, R, S_theta, gamma, horizon, observations, epsilon
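The same definitions in notation (reconstructed from the notes above; the symbols follow the Grokking Deep Reinforcement Learning conventions used in class):

P(S_{t+1} \mid S_t, A_t) = P(S_{t+1} \mid S_t, A_t, S_{t-1}, A_{t-1}, \ldots, S_0, A_0)   % Markov property

T(s, a, s') = P(S_t = s' \mid S_{t-1} = s, A_{t-1} = a), \qquad \sum_{s' \in S} T(s, a, s') = 1

R(s, a) = \mathbb{E}[R_t \mid S_{t-1} = s, A_{t-1} = a], \qquad R(s, a, s') = \mathbb{E}[R_t \mid S_{t-1} = s, A_{t-1} = a, S_t = s']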

Notes

Notes as PDF

Full notes as PDF

2021-04-06 23. Exam

Original exam

My exam 2020-2021

DQN tutorial
