
Teeny-Tiny Castle: educational tools for AI Ethics and Safety research


Logo: a cartoon blue robot in a pink and purple castle, under a banner reading "Teeny-Tiny Castle".

AI Ethics and Safety are (relatively) new fields, and their tools (and how to handle them) are still unfamiliar to most of the development community. To address this problem, we created the Teeny-Tiny Castle, an open-source repository containing "Educational tools for AI Ethics and Safety Research". Here, developers can find many examples of how to deal with various problems raised in the literature (e.g., algorithmic discrimination, model opacity, etc.).

Our repository has several examples of how to work ethically and safely with AI, focusing mainly on issues related to Accountability & Sustainability, Interpretability, Robustness/Adversarial, and Fairness. These topics are explored through examples involving some of the most common contemporary AI applications (e.g., Computer Vision, Natural Language Processing, Classification & Forecasting, etc.). If you are new to the field, the Teeny-Tiny Castle also includes an introductory course on ML.

To run the notebooks, open them in your Google Drive as Colab Notebooks, or follow our Python and VS Code installation tutorial to run them on your own workstation. All requirements are specified in the requirements.txt file, and all notebooks were written using Python 3.9.13.
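If you go the local route, a typical setup looks like the sketch below. It assumes you are at the repository root (where requirements.txt lives) on a Unix-like shell; adjust the activation command for Windows.

```shell
# Create and activate an isolated environment, then install the pinned deps.
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

From there, open any tutorial notebook with your editor of choice (e.g., VS Code's notebook support, as covered in the workstation tutorial).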

Note: For more tools and metrics on the subject, we recommend the OECD Tools & Metrics catalog. This catalog also promotes (much more extensively) tools and metrics that help AI actors build and deploy trustworthy AI systems.

AI Ethics ⚖️

In pursuing responsible and ethical AI development, staying informed about the principles, risks, regulations, and challenges associated with artificial intelligence is essential. Explore the following resources to deepen your understanding of AI ethics.

Supporting resources URL
Learn about AI principles in the WAIE dashboard LINK
Learn about the risks related to AI models LINK
Get informed about AI regulation in Brazil LINK
Learn about the problems related to facial recognition technologies LINK
Learn about the EPS methodology for ethical and safe AI development LINK

Machine Learning Introduction Course 📈

Whether you're a beginner or looking to refresh your skills, this course covers a range of essential topics in machine learning. From setting up your own workstation with Visual Studio Code to deploying a forecasting model as an API with FastAPI, each tutorial provides hands-on experience and practical knowledge.

Tutorial GitHub Colab
Build your own workstation with Visual Studio Code LINK 👈
Introduction to Python LINK Open In Colab
Basic Pandas, Scikit-learn, and Numpy tutorial LINK Open In Colab
Gradient Descent from scratch LINK Open In Colab
Linear Regression with gradient descent from scratch LINK Open In Colab
Multi-Layer Perceptron with NumPy LINK Open In Colab
Feed-Forward Neural Network from scratch with NumPy LINK Open In Colab
Introduction to Keras and TensorFlow using the Fashion-MNIST dataset LINK Open In Colab
Introduction to PyTorch using the Digit-MNIST dataset LINK Open In Colab
Hyperparameter optimization with KerasTuner LINK Open In Colab
Dataset processing with TFDS LINK Open In Colab
Experimentation tracking with TensorBoard LINK Open In Colab
Introduction to recommendation systems LINK Open In Colab
Introduction to time series forecasting and XGBoost LINK Open In Colab
Text classification with Transformers LINK Open In Colab
Sequence-to-sequence modeling with RNNs and Transformers LINK Open In Colab
Text-generation with the GPT architecture LINK Open In Colab
Introduction to Reinforcement Learning LINK Open In Colab
Creating ML apps with Gradio LINK Open In Colab
Deploying a forecasting model as an API with FastAPI LINK Open In Colab
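As a taste of the gradient-descent tutorials above, here is a minimal, self-contained sketch of fitting a line with gradient descent in NumPy. The data, learning rate, and iteration count are made up for illustration, not taken from the notebooks.

```python
import numpy as np

# Toy data: y = 3x + 2 plus a little noise.
rng = np.random.default_rng(42)
x = rng.uniform(-1, 1, size=100)
y = 3 * x + 2 + rng.normal(scale=0.1, size=100)

# Gradient descent on mean-squared error for parameters (w, b).
w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    y_hat = w * x + b
    grad_w = 2 * np.mean((y_hat - y) * x)   # dMSE/dw
    grad_b = 2 * np.mean(y_hat - y)         # dMSE/db
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # should land close to 3 and 2
```

The from-scratch notebooks build on exactly this loop, swapping the linear model for perceptrons and full feed-forward networks.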

Accountability and Sustainability ♻️

Learn how to generate model cards for transparent model reporting, explore the environmental impact of your models with CO2 emission reports using CodeCarbon, and navigate the accuracy versus sustainability dilemma.

Tutorial GitHub Colab
Accountability through Model Reporting LINK Open In Colab
Tracking carbon emissions and power consumption with CodeCarbon LINK Open In Colab
Architectural choices in computer vision and their impact on energy consumption LINK Open In Colab
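To illustrate the model-reporting idea, here is a minimal, hypothetical model-card renderer in plain Python. The field names and card contents are illustrative only; they do not reflect the repository's actual template.

```python
# Render a few standard reporting fields into a Markdown model card.
def render_model_card(card: dict) -> str:
    lines = [f"# Model Card: {card['name']}", ""]
    for section in ("intended_use", "training_data", "metrics", "limitations"):
        lines.append(f"## {section.replace('_', ' ').title()}")
        lines.append(card.get(section, "Not documented."))
        lines.append("")
    return "\n".join(lines)

card = {
    "name": "demo-classifier",
    "intended_use": "Educational demonstrations only.",
    "metrics": "Accuracy: 0.92 on a held-out split.",
}
print(render_model_card(card))
```

Note how undocumented sections are flagged explicitly rather than silently dropped; surfacing reporting gaps is half the point of a model card.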

Interpretability with CV 🖼️

Understanding and interpreting the decisions made by machine learning models is essential for building trust and making informed decisions. In this course, we explore various techniques for interpretability in computer vision. From introducing convolutional neural networks with CIFAR-10 to exploring feature visualization, maximum activation manipulation, saliency mapping, and using LIME for interpretation, each tutorial provides insights into the inner workings of CV models.

Tutorial GitHub Colab
Creating computer vision models for image classification LINK Open In Colab
Activation Maximization in CNNs LINK Open In Colab
Introduction to saliency mapping with CNNs LINK Open In Colab
Applying LIME to CNNs LINK Open In Colab
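As a taste of the saliency-mapping tutorial, here is a dependency-free sketch of vanilla gradient saliency on a tiny hand-built ReLU network. The weights and input are random placeholders, not a trained model.

```python
import numpy as np

# Vanilla saliency: |d(score)/d(input)|. Large entries mark the input
# "pixels" the class score is most sensitive to.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 16))   # hidden x input weights
W2 = rng.normal(size=(8,))      # output weights for one class score
x = rng.normal(size=(16,))      # a flattened 4x4 "image"

h_pre = W1 @ x
h = np.maximum(h_pre, 0.0)      # ReLU
score = W2 @ h                  # scalar class score

# Backward pass by hand: d(score)/dx = W1^T @ (W2 * relu'(h_pre))
grad_x = W1.T @ (W2 * (h_pre > 0))
saliency = np.abs(grad_x).reshape(4, 4)
print(saliency)
```

The notebooks compute the same gradient with autodiff (TensorFlow/PyTorch) on real CNNs, where the resulting map can be overlaid on the input image.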

Interpretability with NLP 📚

Unravel the complexities of natural language processing models and gain insights into their decision-making processes. From sentiment analysis and LIME explanations for LSTMs to integrated gradients, BERT interpretation, Word2Vec models, and embedding exploration, each tutorial provides a deep dive into NLP interpretability.

Tutorial GitHub Colab
Creating language models for text-classification LINK Open In Colab
Applying LIME explanations to shallow language models LINK Open In Colab
Applying integrated gradients to Language Models LINK Open In Colab
Explaining DistilBERT with integrated gradients LINK Open In Colab
Training and Exploring Word2Vec models LINK Open In Colab
Exploring Language Model's Embeddings LINK Open In Colab
Text mining on text datasets LINK Open In Colab
Dissecting a GPT model LINK Open In Colab
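The occlusion idea behind LIME-style text explanations can be sketched without any libraries: score each word by how much the model's output drops when that word is removed. The keyword "classifier" below is a toy stand-in, not a trained language model.

```python
# A toy positive-sentiment scorer: fraction of known positive words.
POSITIVE_WORDS = {"great", "love", "excellent"}

def classify(text: str) -> float:
    words = text.lower().split()
    return sum(w in POSITIVE_WORDS for w in words) / max(len(words), 1)

def word_importances(text: str) -> dict:
    words = text.split()
    base = classify(text)
    scores = {}
    for i, w in enumerate(words):
        reduced = " ".join(words[:i] + words[i + 1:])
        scores[w] = base - classify(reduced)  # positive = word supports score
    return scores

scores = word_importances("I love this great movie")
print(scores)
```

Real LIME fits a local linear surrogate over many random perturbations instead of single-word deletions, but the perturb-and-measure intuition is the same.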

Interpretability with Tabular Classifiers 📊

Gain a deeper understanding of classification and prediction models with tabular data through interpretability techniques. Explore how to apply explanation techniques to tabular classifiers, uncovering insights into their decision-making processes.

Tutorial GitHub Colab
Applying model-agnostic explanations to classifiers with dalex LINK Open In Colab
Exploring models trained on the COMPAS Recidivism Racial Bias dataset LINK Open In Colab
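One model-agnostic technique covered here, permutation importance, can be sketched in a few lines of NumPy: shuffle one feature and measure how much a fitted model's accuracy drops. The "model" below is a hand-written threshold rule so the example stays dependency-free.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)        # only feature 0 matters

def predict(X):
    return (X[:, 0] > 0).astype(int)  # a stand-in for a fitted classifier

def accuracy(X, y):
    return float(np.mean(predict(X) == y))

base = accuracy(X, y)                 # 1.0 by construction
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature-target link
    importances.append(base - accuracy(Xp, y))

print([round(v, 2) for v in importances])  # feature 0 large, others 0
```

Libraries like dalex wrap this same loop with resampling and plotting, which is what the tutorial above walks through.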

Machine Learning Fairness ⚖️

Advancing the discourse on machine learning fairness, the following tutorials delve into diverse facets of this crucial domain. From applying fairness metrics on datasets like Credit Card and Adult Census to enforcing fairness using tools like AIF360, these tutorials guide you through the intricate landscape of addressing biases in machine learning models.

Tutorial GitHub Colab
Applying fairness metrics on the Credit Card Dataset LINK Open In Colab
Applying fairness metrics on the Adult Census Dataset LINK Open In Colab
Enforcing fairness with AIF360 LINK Open In Colab
Applying the principle of Ceteris paribus LINK Open In Colab
Applying fairness metrics on the CelebA dataset LINK Open In Colab
Investigating biases on text generation models LINK Open In Colab
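Two of the basic group-fairness metrics used in these tutorials can be computed by hand: the statistical parity difference and the disparate impact ratio. The group predictions below are invented for illustration.

```python
# 1 = favorable outcome (e.g., loan approved), one list per protected group.
def selection_rate(preds):
    return sum(preds) / len(preds)

preds_group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 6/8 approved
preds_group_b = [1, 0, 0, 1, 0, 0, 0, 1]   # 3/8 approved

rate_a = selection_rate(preds_group_a)
rate_b = selection_rate(preds_group_b)
parity_diff = rate_b - rate_a    # 0 means parity between groups
impact_ratio = rate_b / rate_a   # the "80% rule" flags ratios below 0.8

print(parity_diff, impact_ratio)  # -0.375 0.5: group B is disadvantaged
```

Toolkits like AIF360 compute these (and many more) metrics directly from labeled datasets and also offer mitigation algorithms, as the enforcement tutorial shows.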

Adversarial Machine Learning 🐱‍💻

Within these tutorials, we navigate the intricate landscape of adversarial attacks, their nuances, and how to thwart them. Explore the dark arts of exploiting pickle serialization, create adversarial examples with SecML and TextAttack, and apply the fast gradient sign method against convolutional neural networks.

Tutorial GitHub Colab
Exploiting pickle serialization LINK Open In Colab
Creating adversarial examples with SecML LINK Open In Colab
Applying the fast gradient sign method against CNNs LINK Open In Colab
Creating adversarial examples with TextAttack LINK Open In Colab
Extraction attacks via model cloning LINK Open In Colab
Demonstrating poisoning attacks LINK Open In Colab
Adversarial training for computer vision models LINK Open In Colab
Adversarial training for language models LINK Open In Colab
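The fast gradient sign method (FGSM) can be sketched with an analytic gradient on a hand-rolled logistic model: perturb the input by epsilon times the sign of the loss gradient with respect to the input. The weights, input, and epsilon below are made up; the tutorials attack trained CNNs instead.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([2.0, -1.0, 0.5])      # toy model weights
b = 0.0
x = np.array([0.5, -0.5, 1.0])      # a toy input with true label y = 1
y = 1.0

p = sigmoid(w @ x + b)              # model confidence for class 1
# For cross-entropy loss, d(loss)/dx = (p - y) * w
grad_x = (p - y) * w
x_adv = x + 0.5 * np.sign(grad_x)   # FGSM step with epsilon = 0.5

p_adv = sigmoid(w @ x_adv + b)
print(p > 0.5, p_adv < p)           # → True True: the attack lowers confidence
```

Adversarial training, covered in the last two tutorials, folds such perturbed examples back into the training set so the model learns to resist them.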

Cite as 🤗

@misc{teenytinycastle,
  doi = {10.5281/zenodo.7112065},
  url = {https://github.com/Nkluge-correa/TeenyTinyCastle},
  author = {Nicholas Kluge Corr{\^e}a},
  title = {Teeny-Tiny Castle},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub repository}
}

@article{correa2024eps,
  doi = {10.1007/s43681-024-00469-8},
  url = {https://link.springer.com/article/10.1007/s43681-024-00469-8},
  author = {Corr{\^e}a, Nicholas Kluge and Santos, James William and Galv{\~a}o, Camila and Pasetti, Marcelo and Schiavon, Dieine and Naqvi, Faizah and Hossain, Robayet and De Oliveira, Nythamar},
  title = {Crossing the principle–practice gap in AI ethics with ethical problem-solving},
  year = {2024},
  publisher = {Springer},
  journal = {AI and Ethics}
}

Funding

The creation of this repository was funded by RAIES (Rede de Inteligência Artificial Ética e Segura). RAIES is a project supported by FAPERGS (Fundação de Amparo à Pesquisa do Estado do Rio Grande do Sul) and CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico).

License

Teeny-Tiny Castle is licensed under the Apache License, Version 2.0. See the LICENSE file for more details.