This repository has a collection of utilities for Glue Crawlers. These utilities come in the form of AWS CloudFormation templates or AWS CDK applications.
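For orientation, a minimal boto3 sketch of the kind of step such utilities automate — defining and registering a Glue Crawler. The crawler, role, database, and bucket names below are hypothetical placeholders, not resources from the repository:

```python
import boto3

glue = boto3.client("glue")

# Hypothetical names, for illustration only.
glue.create_crawler(
    Name="example-crawler",
    Role="arn:aws:iam::123456789012:role/ExampleGlueCrawlerRole",
    DatabaseName="example_catalog_db",
    Targets={"S3Targets": [{"Path": "s3://example-bucket/raw/"}]},
    # Optional: run on a schedule instead of on demand.
    # Schedule="cron(0 2 * * ? *)",
)
```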
This repository contains source code for the AWS Database Blog Post Reduce data archiving costs for compliance by automating RDS snapshot exports to Amazon S3
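The automation described in that blog post centers on the RDS snapshot export API. A minimal boto3 sketch of that call, with placeholder identifiers and ARNs rather than the post's actual resources:

```python
import boto3

rds = boto3.client("rds")

# Placeholder identifiers and ARNs, for illustration only.
rds.start_export_task(
    ExportTaskIdentifier="export-mydb-2021-12-21",
    SourceArn="arn:aws:rds:us-east-1:123456789012:snapshot:mydb-snapshot",
    S3BucketName="example-archive-bucket",
    IamRoleArn="arn:aws:iam::123456789012:role/ExampleRdsExportRole",
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/00000000-0000-0000-0000-000000000000",
)
```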
ETL data pipeline built with AWS services.
Terraform configuration that creates several AWS services, uploads data in S3 and starts the Glue Crawler and Glue Job.
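As a rough illustration of the post-provisioning step such a setup performs — upload data, then kick off cataloging and the ETL job — here is a boto3 sketch with placeholder names (the repository itself drives this from Terraform rather than Python):

```python
import boto3

s3 = boto3.client("s3")
glue = boto3.client("glue")

# Upload a sample file, then start the crawler and the Glue job.
s3.upload_file("data/sample.csv", "example-bucket", "raw/sample.csv")
glue.start_crawler(Name="example-crawler")
run = glue.start_job_run(JobName="example-etl-job")
print("Started Glue job run:", run["JobRunId"])
```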
Automation framework to catalog AWS data sources using Glue
Deployment of AWS Athena, a Glue Database, a Glue Crawler, and S3 buckets through the AWS GUI console.
Analyzes and prepares the Trending YouTube Video Statistics dataset from Kaggle for further use.
Unveiling job market trends with Scrapy and AWS
Creating an audit table for a DynamoDB table using CloudTrail, Kinesis Data Streams, Lambda, S3, Glue, Athena, and CloudFormation.
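A minimal sketch of the Lambda piece of such a pipeline: decoding Kinesis records that carry DynamoDB change events and landing them in S3 for the crawler to catalog. The bucket name and key layout are assumptions, not taken from the repository:

```python
import base64
import json
import os

import boto3

s3 = boto3.client("s3")
AUDIT_BUCKET = os.environ.get("AUDIT_BUCKET", "example-audit-bucket")  # placeholder


def handler(event, context):
    """Decode Kinesis records carrying DynamoDB change events and write them to S3."""
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        change = json.loads(payload)
        key = f"audit/{record['kinesis']['sequenceNumber']}.json"
        s3.put_object(Bucket=AUDIT_BUCKET, Key=key, Body=json.dumps(change))
```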
An end-to-end data pipeline built with AWS S3, Glue, Glue Crawler, Athena, and Tableau visualization.
An ETL (Extract, Transform, Load) pipeline built on AWS using the Spotify API.
Analyzed a multi-category e-commerce store using big data techniques on a Kaggle dataset, with the help of AWS EC2, AWS S3, PySpark, AWS Glue ETL, AWS Athena, AWS CloudFormation, AWS Lambda, and Power BI.
Deployment of AWS Athena, a Glue Database, a Glue Crawler, and S3 buckets through a CloudFormation stack on the AWS console.
An end-to-end solution for managing and analyzing YouTube video data from Kaggle, leveraging AWS services and visualized through QuickSight and Tableau.
Implementing data pipeline using AWS services for airlines data
Working with Glue Data Catalog and Running the Glue Crawler On Demand
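A common on-demand pattern with boto3: start the crawler, poll until it returns to the READY state, then inspect what it registered in the Data Catalog. The crawler and database names are placeholders:

```python
import time

import boto3

glue = boto3.client("glue")
CRAWLER = "example-crawler"  # placeholder name

glue.start_crawler(Name=CRAWLER)

# Poll until the crawler finishes and returns to READY.
while True:
    state = glue.get_crawler(Name=CRAWLER)["Crawler"]["State"]
    if state == "READY":
        break
    time.sleep(30)

# List the tables the crawler registered in the Data Catalog.
tables = glue.get_tables(DatabaseName="example_catalog_db")["TableList"]
print([t["Name"] for t in tables])
```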
Smart City Realtime Data Engineering Project
The project aims to establish a robust data pipeline for tracking and analyzing sales performance using various AWS services: creating a DynamoDB database, implementing Change Data Capture (CDC), streaming changes through Kinesis, and finally storing and querying the data in Amazon Athena.
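For the final querying step, a minimal boto3 sketch of running an Athena query and polling for the result; the database, table, and output location are assumed placeholders, not the project's actual names:

```python
import time

import boto3

athena = boto3.client("athena")

# Placeholder database, table, and output location.
query = athena.start_query_execution(
    QueryString="SELECT order_status, COUNT(*) FROM sales_cdc GROUP BY order_status",
    QueryExecutionContext={"Database": "example_sales_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)

qid = query["QueryExecutionId"]
while True:
    status = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if status in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if status == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    print(rows)
```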
A pipeline within AWS that captures schema changes in S3 files and propagates them to a database.
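One way to detect such schema changes after a crawler run is to compare the Data Catalog's current columns against a previously stored snapshot. A boto3 sketch under that assumption, with hypothetical database, table, and column names:

```python
import boto3

glue = boto3.client("glue")


def current_columns(database, table):
    """Return the (name, type) pairs the Data Catalog currently holds for a table."""
    table_def = glue.get_table(DatabaseName=database, Name=table)["Table"]
    return [(c["Name"], c["Type"]) for c in table_def["StorageDescriptor"]["Columns"]]


# Compare the latest catalog schema against a previously stored snapshot.
previous = [("id", "bigint"), ("name", "string")]  # e.g. loaded from the target DB
latest = current_columns("example_catalog_db", "example_table")
added = [c for c in latest if c not in previous]
removed = [c for c in previous if c not in latest]
print("Added columns:", added, "Removed columns:", removed)
```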
This project aims to collect, catalog, govern, process, and visualize data.