A simple seckill (flash sale) scenario implemented with the Sanic web framework, in a microservices style.
This project simulates a simple seckill scenario, which consists of three microservices as follows:
- Product Service
  - any user can query products and product details
  - any user can add/delete/edit product instances
- Activity Service
  - any user can query activities and activity details
  - any user can add/delete/edit activity instances
  - any user can participate in a seckill activity to place an order with a unique user id
- Order Service
  - users can query their orders, identified by a unique user id
  - users can view/delete order details
Write a web API service to simulate users' participation in a seckill activity:
- Provide a web API, based on HTTP or TCP, through which users can place an order for a specific activity
- A user may send more than one request for the same activity; the server side should ensure that only one order is placed, and only when conditions are met (i.e. there is inventory left) - a sketch follows this list
- Order data should remain persistent even when the service or machine is restarted
- Each activity relates to exactly one product; a user can only place an order when the activity's product inventory > 0
- The total quantity of products across all orders should equal the product's inventory
- The service should be able to handle a large number of concurrent requests from multiple users (e.g. 100 TPS)
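One way to satisfy the single-order and inventory constraints is a conditional UPDATE plus a unique (activity_id, user_id) index, all inside one transaction. The sketch below assumes an aiomysql connection and illustrative table/column names; it is not the project's actual schema or code.

```python
# Sketch: place an order atomically; table and column names are illustrative.
async def place_order(conn, activity_id, user_id):
    await conn.begin()
    try:
        async with conn.cursor() as cur:
            # Decrement inventory only while stock remains; 0 affected rows means sold out.
            await cur.execute(
                "UPDATE activity SET inventory = inventory - 1 "
                "WHERE id = %s AND inventory > 0",
                (activity_id,),
            )
            if cur.rowcount == 0:
                await conn.rollback()
                return None  # sold out
            # A unique (activity_id, user_id) index limits repeated requests to one order.
            await cur.execute(
                "INSERT INTO orders (activity_id, user_id) VALUES (%s, %s)",
                (activity_id, user_id),
            )
            order_id = cur.lastrowid
        await conn.commit()
        return order_id
    except Exception:
        await conn.rollback()
        raise
```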
- Using Sanic, an async Python 3.6 web server/framework ("Build fast. Run fast.")
- Using aiomysql as the database driver, to execute SQL statements asynchronously
- Using aiohttp as the client to issue async HTTP requests when interacting with other microservices
- Using peewee as the ORM, only for modeling and data model migrations
- Using sanic-opentracing as the distributed tracing implementation
- Using sanic-openapi to auto-generate Swagger API documentation
Create your local .env file; you can use .env_template as a starting point.
DOCKER_DIR=~/your/project/path/sanic_seckill/seckill/deployment
MYSQL_ROOT_PASSWORD=sanicroot
Build and start all services with the following commands using docker-compose. You can adjust the port mapping or other settings by editing the docker-compose.yml file.
docker-compose build
docker-compose up
By default, docker-compose binds service ports to the local host machine. Change any port mapping as you like by editing the docker-compose.yml file.
- Consul UI: http://localhost:8501
- Jaeger UI: http://localhost:8502
- Product Service: http://localhost:8503
- Activity Service: http://localhost:8504
- Order Service: http://localhost:8505
If everything works as expected, you can see all three microservices registered with passing health checks in the Consul web GUI.
By default, TRACE_ALL is set to true for the activity service (configured in docker-compose.yml), so when you make requests to the activity service you can view all request traces in the Jaeger web GUI.
- Create DB connection pool
- Create a client connection session to interact with other services
- Create a jaeger.tracer to implement distributed tracing over requests
- Implement request middleware to add specific HTTP headers and handle CORS requests
- Add an envelope to responses, to unify the response data format (see the sketch below)
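A minimal sketch of what those startup steps can look like with Sanic listeners and middleware; the attribute names (app.db_pool, app.http_session), hosts and credentials are assumptions, not the project's actual code.

```python
import aiohttp
import aiomysql
from sanic import Sanic

app = Sanic("activity_service")

@app.listener("before_server_start")
async def setup(app, loop):
    # DB connection pool and a shared aiohttp client session for calling other services.
    app.db_pool = await aiomysql.create_pool(
        host="db", user="root", password="sanicroot", db="seckill", loop=loop)
    app.http_session = aiohttp.ClientSession()

@app.middleware("request")
async def add_headers(request):
    # Place to attach tracing headers or short-circuit CORS preflight requests.
    pass

@app.middleware("response")
async def envelope(request, response):
    # Uniform response handling, e.g. CORS headers; the envelope itself wraps the
    # payload as {"code": ..., "message": ..., "data": ...}.
    response.headers["Access-Control-Allow-Origin"] = "*"
```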
Intercept exceptions and respond in a uniform format.
Create a ServiceWatcher task for service discovery and health checking; all discovered services are maintained in the app.services list.
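Roughly, such a watcher could poll Consul's health API on a fixed interval; the service names, Consul address, polling interval and the shape of app.services below are assumptions, not the project's actual implementation.

```python
import asyncio
import aiohttp

async def service_watcher(app, consul_url="http://consul:8500", interval=5):
    # Periodically refresh app.services with healthy instances reported by Consul.
    async with aiohttp.ClientSession() as session:
        while True:
            services = {}
            for name in ("product", "activity", "order"):
                url = f"{consul_url}/v1/health/service/{name}"
                async with session.get(url, params={"passing": "true"}) as resp:
                    services[name] = await resp.json()
            app.services = services
            await asyncio.sleep(interval)

# Typically started once the server is up:
# app.add_task(service_watcher(app))
```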
Using peewee as the ORM backend, only for data model design and data migration; using aiomysql for async SQL operations.
- All DB connection settings are configured via environment variables
- If you are using docker-compose, you don't have to create DB tables manually
- Otherwise you need to run python migrations.py to migrate the DB tables (see the sketch below)
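For illustration, the model-and-migrate flow with peewee looks roughly like this (the database settings and the Product fields are assumptions; the real settings come from the environment variables above):

```python
from peewee import CharField, IntegerField, Model, MySQLDatabase

# In the project these settings come from environment variables.
db = MySQLDatabase("seckill", host="localhost", user="root", password="sanicroot")

class Product(Model):
    name = CharField()
    inventory = IntegerField(default=0)

    class Meta:
        database = db

if __name__ == "__main__":
    # Roughly what `python migrations.py` does: create tables from the model definitions.
    db.connect()
    db.create_tables([Product])
    db.close()
```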
Using aiomysql as the database connector; all SQL-related operations are encapsulated by DBConnection, which executes raw SQL asynchronously.
- acquire() returns a non-transactional SQL connection, used for queries and optimized for efficiency
- transaction() returns a transactional connection; all delete/insert related SQL should use a transaction when needed
- If trace is set to true, all DB operations will be traced
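A usage sketch of those two paths; the exact DBConnection interface is assumed here, so treat the cursor-style context managers and SQL as illustrative only.

```python
# Illustrative only: assumes acquire()/transaction() can be used as async context
# managers that yield a cursor-like object.
async def get_products(db):
    # acquire(): plain, non-transactional connection, efficient for read-only queries.
    async with db.acquire() as cur:
        await cur.execute("SELECT id, name, inventory FROM product")
        return await cur.fetchall()

async def create_order(db, activity_id, user_id):
    # transaction(): commits on success, rolls back on error; use for insert/delete SQL.
    async with db.transaction() as cur:
        await cur.execute(
            "INSERT INTO orders (activity_id, user_id) VALUES (%s, %s)",
            (activity_id, user_id),
        )
```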
Using the client from the aiohttp package, encapsulated to provide common utilities for accessing other microservices asynchronously.
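A sketch of such a helper; the class name, method and base URLs are assumptions.

```python
import aiohttp

class ServiceClient:
    """Small wrapper over a shared aiohttp.ClientSession for calling sibling services."""

    def __init__(self, session: aiohttp.ClientSession, base_url: str):
        self.session = session
        self.base_url = base_url

    async def get_json(self, path, **params):
        # e.g. await product_client.get_json("/products/1")
        async with self.session.get(self.base_url + path, params=params) as resp:
            resp.raise_for_status()
            return await resp.json()
```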
Using the Python logging module with logging.yml as the configuration file; JsonFormatter transforms log records into JSON format.
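As an illustration of what the formatter does (a simplified sketch, not the project's exact JsonFormatter):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    # Render each log record as one JSON object per line.
    def format(self, record):
        return json.dumps({
            "time": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logging.getLogger().addHandler(handler)
```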
- OpenTracing builds on the ideas of Dapper and Zipkin, giving us a standard for distributed tracing systems
- OpenTracing traces every request within your service and every related service, which plays a critical part in analyzing microservice performance
- The project implements the OpenTracing standard and uses Jaeger as the tracer
- Tracing behaviors (DB, Client) can be configured via environment variables
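Setting up the Jaeger tracer usually looks roughly like this with jaeger_client (the sampler settings and agent host are assumptions; the project wires this up through sanic-opentracing):

```python
from jaeger_client import Config

def init_tracer(service_name):
    # Sample every request and report spans to the Jaeger agent from docker-compose.
    config = Config(
        config={
            "sampler": {"type": "const", "param": 1},
            "local_agent": {"reporting_host": "jaeger"},
            "logging": True,
        },
        service_name=service_name,
    )
    return config.initialize_tracer()
```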
Using app.error_handler = CustomHander()
to handle exceptions.
- code: status code, 0 for success, other values for exceptions
- message: error message
- status_code: standard HTTP status code
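A sketch of an error handler producing that envelope (the class name comes from the text above; the code/status_code attributes on exceptions are assumptions):

```python
from sanic.handlers import ErrorHandler
from sanic.response import json

class CustomHander(ErrorHandler):
    # Wrap any unhandled exception in the uniform response envelope described above.
    def default(self, request, exception):
        return json(
            {
                "code": getattr(exception, "code", 1),
                "message": str(exception),
            },
            status=getattr(exception, "status_code", 500),
        )

# app.error_handler = CustomHander()
```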