My thoughts about life, startups, and weight lifting.

Author: Jay Paulynice

Dynamic Scheduling Under The Hood

As promised, in this week’s edition of Cloud Musings, I thought I would do a deep dive, with code, into dynamic scheduling and explain how we solve this challenge. Don’t forget to subscribe here on Substack or LinkedIn. Thanks for reading!

Last week, I wrote about this at a very high level. Here is a demo of how it works. I’ve also started to open source some of our code base so people can understand how our platform works.

We have a pretty solid architecture:

  • Python, Flask API
  • MongoDB database
  • Celery, Redis task queue
  • React, Typescript Frontend
  • Docker on AWS ECS and Digital Ocean

The UI (the Schedule Deployment modal):

To schedule a deployment, a user specifies a cloud infrastructure, the date, time, and an optional dependency. We could imagine deploying the API before a change in the UI, or making a database change before deploying the API. When a user specifies a dependency, the dependency runs 15 minutes before the actual deployment.
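
For illustration, here is a minimal sketch of how the dependency’s run time can be derived; this helper is hypothetical, not our exact code:

import datetime

def dependency_run_at(run_at: datetime.datetime) -> datetime.datetime:
    # a dependency is scheduled 15 minutes before its parent deployment
    return run_at - datetime.timedelta(minutes=15)

# e.g. a parent at 2023-02-10 03:00 puts the dependency at 2023-02-10 02:45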

For the frontend, I’m using React, TypeScript with a tool called umijs and Ant Design from Ant Financial:

The scheduled deployment request function just calls the backend API, passing the user inputs along with the access token for security and to identify the user.

import { request } from 'umi';

export async function schedule(data: API.ScheduledDataType) {
  return request('/v1/scheduled_deployments', {
    method: 'POST',
    data,
    headers: {
      Authorization: 'Bearer oatfin_access_token',
    },
  });
}

The user specifies a date (year, month, day) and time (hour, minute). We use the moment-timezone npm package to guess the timezone:

  const handleSubmit = async (values: API.DateTimeDependency) => {
    const data: API.ScheduledDataType = {
      app: current?.id,
      year: values.date.year(),
      month: values.date.month() + 1, // moment months are 0-indexed
      day: values.date.date(),
      hour: values.time.hour(),
      minute: values.time.minute(),
      dependency: values.dependency,
      timezone: moment.tz.guess(), // moment-timezone guesses the browser timezone
    };

    try {
      const res = await schedule(data);
    } catch (error) {
      message.error('Error scheduling deployment.');
    }
  };

Inside the React functional component, we define the handler for the modal’s onFinish, where we capture the user inputs.

export const SchedComponent: FC<BasicListProps> = (props) => {
  const getModalContent = () => {
    return (
      <Form onFinish={handleFinish}>
        <Form.Item name="name" label="Application">
          <Input disabled value={current?.name} />
        </Form.Item>
        <Form.Item name="date" label="Date">
          <DatePicker disabledDate={(currentDate) => disabled(currentDate)} />
        </Form.Item>
        <Form.Item name="time" label="Time">
          <TimePicker use12Hours format="h:mm A" showNow={false} />
        </Form.Item>
        <Form.Item name="dependency" label="Dependency">
          <Select>
            {/* option keys and values elided in the original */}
            <Select.Option key={} value={}>
          </Select>
        </Form.Item>
      </Form>
    );
  };

  return (
    <Modal
      title="Schedule Deployment"
      bodyStyle={{ padding: '28px 0 0' }}

The Python, Flask API:

First we capture the parameters that the UI sends, then call the service to create the actual schedule. If the user specifies a dependency, we also create a scheduled entry in MongoDB and Redis for the dependency.

@api.route('/scheduled_deployments', methods=['POST'])
def schedule_deployment():
    user_id = get_jwt_identity()['user_id']
    req_data = flask.request.get_json()

    app_id = req_data.get('app')
    dependency = req_data.get('dependency')
    year = req_data.get('year')
    month = req_data.get('month')
    day = req_data.get('day')
    hour = req_data.get('hour')
    minute = req_data.get('minute')
    timezone = req_data.get('timezone')

    sd = ScheduledDeploymentService().create_schedule(...)

    args = [...]
    SchedulerService().create_scheduled_entry(sd, args=args, app=app)

    if sd.dependency is not None:
        dep = sd.dependency
        args = [...]
        SchedulerService().create_scheduled_entry(dep, args=args, app=app)

    return flask.jsonify(
        sd.json()
    ), 200

The create_schedule method in ScheduledDeploymentService creates an entry in MongoDB for the parent deployment and any dependency the user specified.

def create_schedule(self, app_id, dep, user_id, year, month, day, hour, minute, tz):
    # if the user specified a dependency, create its MongoDB entry first
    if dep:
        sched = ScheduledDeployment(...)

    return ScheduledDeployment(...)

The MongoDB document:

from mongoengine import Document, IntField, ReferenceField, StringField

class ScheduledDeployment(Document):
    year = IntField()
    month = IntField()
    day = IntField()
    hour = IntField()
    minute = IntField()
    original_timezone = StringField()
    entry_key = StringField()
    dependency = ReferenceField('self', required=False)

    def json(self):
        return {
            'year': self['year'],
            'month': self['month'],
            'day': self['day'],
            'hour': self['hour'],
            'minute': self['minute'],
            'original_timezone': self['original_timezone'],
            'entry_key': self['entry_key']
        }

The SchedulerService first translates the user’s date and time from their timezone to UTC, then creates an entry in Redis. We’re also using crontab from celery to create the actual schedule. The challenge here is that crontab only lets us specify month_of_year, day_of_month, hour, and minute; we can’t specify a year. We handle this by deleting the entry from Redis once the scheduled deployment is successful.

import datetime

import pytz
from redbeat import RedBeatSchedulerEntry


class SchedulerService(object):
    def create_scheduled_entry(self, sd, args, app):
        scheduled_date = self.to_utc(...)
        entry = RedBeatSchedulerEntry(...)
        entry.save()  # persists the schedule entry in Redis

    def to_utc(self, timezone, year, month, day, hour, minute):
        tz = pytz.timezone(timezone)
        user_dtz = tz.localize(datetime.datetime(year, month, day, hour, minute))
        return tz.normalize(user_dtz).astimezone(pytz.utc)

Finally, the task queue looks like this:

from celery import Celery

app = Celery(__name__)
app.conf.broker_url = app_config.REDIS_BROKER
app.conf.result_backend = app_config.REDIS_BACKEND
app.conf.redbeat_redis_url = app_config.REDIS_BACKEND


@app.task
def deploy(user_id, app_id, key, secret, region, deployment_id=None, scheduled_id=None):
    ...
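
Because crontab has no year field, a completed schedule would otherwise fire again a year later. Here is a sketch of how the deploy task can clean up after itself once it succeeds; the entry_key handling is illustrative, not our exact code:

from redbeat import RedBeatSchedulerEntry

@app.task
def deploy(user_id, app_id, key, secret, region, deployment_id=None, scheduled_id=None):
    ...  # run the actual deployment
    if scheduled_id:
        sd = ScheduledDeployment.objects.get(id=scheduled_id)
        # delete the Redis entry so the schedule never fires again
        RedBeatSchedulerEntry.from_key(sd.entry_key, app=app).delete()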


Thanks for reading!


User Growth and Analytics

In this edition of Cloud Musings, I thought I would dive into our user analytics from Google Analytics over the last couple of years. Some people find this kind of data useful, but I think it’s still too early for us to have meaningful user data.

As we get more data, we will want to double down on the channels that bring the most bang for the buck. We are not there yet. In fact, we have done no marketing or any kind of advertising. Ideally, we’ll be doing marketing where developers hang out, like GitHub, GitLab, etc.

For the analytics, the blue bars represent 2021 and the red bars represent 2022. I’m doing this for both the website and the app. I also made a quick video showing this data from Google Analytics here.


The website property:

As we can see, from 2021 to 2022 we saw massive user growth. In particular, we went from 482 new users in 2021 to 1,905 new users in 2022, almost 4x. The number of sessions and page views grew accordingly.

Diving into the different acquisition channels, a lot of the traffic came from direct hits. My guess is that a lot of people prefer to type the address in the browser rather than click on a link. I personally do that, especially when Google search shows an ad: I don’t want to click on the ad, so I type the address in the browser instead. Also, I think Google Analytics counts email as direct hits, and we did a lot of cold email outreach.

As for social channels, I only use Twitter and LinkedIn. It’s not a lot of hits, but it’s still significant to see that people are coming from Twitter and LinkedIn.

The app property:

For the app, I’m using Google Analytics v4, and I’ve found it less useful than Google Analytics v3. There isn’t as much detail to dive into. Here we see the user growth.

Here you can see the different user acquisition channels. Again, most users come from direct hits, with some organic search and referrals, for both 2021 and 2022. That’s very much what I expected.

Thanks for reading!


Dynamic Scheduling in Python

Looking forward to diving into next week’s Cloud Musings newsletter on dynamic scheduling. This week, I will talk about user acquisition and growth. If you like this kind of content, subscribe to our newsletter:

Dynamic scheduling is one of the biggest challenges developers face. For example: I want to deploy the Oatfin API on February 10, 2023 at 3:00 AM. Dynamic scheduling is hard because there is little support for it out of the box. On top of that, as every developer knows, timezones are very hard to deal with.

Static scheduling, on the other hand, is very simple; every programming language provides some kind of support for writing cron jobs. An example of a cron job: import data from a vendor every day at 8:00 PM.
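
With celery, for example, that static version is a few lines in the beat schedule. A quick sketch; the task path here is hypothetical:

from celery import Celery
from celery.schedules import crontab

app = Celery(__name__, broker='redis://localhost:6379/0')

app.conf.beat_schedule = {
    'import-vendor-data': {
        'task': 'tasks.import_vendor_data',  # hypothetical task path
        'schedule': crontab(hour=20, minute=0),  # every day at 8:00 PM
    },
}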

Here is a very high-level view of how we deal with this problem:

For the frontend:
1. To keep it simple, a user specifies year, month, day, hour, and minute from their own point of view.
2. We use the moment-timezone npm package to guess a user’s timezone from the browser.

For the backend:
1. We run 3 docker containers: celery, scheduler, and api. Celery is the base image, and the other 2 containers extend the base image and override the CMD directive in the Dockerfile.
2. We use MongoDB to store the exact schedule in the database with the user’s timezone.
3. We use celery as a task queue and celery-redbeat as the scheduler. Celery-redbeat provides a RedBeatSchedulerEntry object to insert schedules in Redis. When we insert a schedule in Redis, we translate the user’s schedule to UTC date and time.
4. Once the task is complete, we mark it complete in MongoDB, which removes it from the list of scheduled deployments.
5. When a user cancels a task, we delete its entry from Redis and it won’t run (see the sketch after this list).
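
Putting steps 3 and 5 together, here is a minimal sketch of creating and cancelling a schedule entry with celery-redbeat; the task path and key handling are illustrative, not our exact code:

from celery.schedules import crontab
from redbeat import RedBeatSchedulerEntry

def create_entry(app, name, utc_dt, args):
    # crontab has no year field: month, day, hour, and minute only
    schedule = crontab(
        month_of_year=utc_dt.month,
        day_of_month=utc_dt.day,
        hour=utc_dt.hour,
        minute=utc_dt.minute,
    )
    entry = RedBeatSchedulerEntry(
        name,            # unique name, e.g. the MongoDB id
        'tasks.deploy',  # illustrative task path
        schedule,
        args=args,
        app=app,
    )
    entry.save()  # writes the schedule to Redis
    return entry.key

def cancel_entry(app, key):
    # deleting the Redis entry means the task will never fire
    RedBeatSchedulerEntry.from_key(key, app=app).delete()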

Containers and Such

As promised, this newsletter will sometimes be very technical to target the technical audience. Don’t forget to subscribe for more updates. It’s 36 subscribers strong on LinkedIn. Thank you for reading!

It took me the better part of the weekend to get the Celery, Redis task queue and scheduler to work, but it’s working now! Happy to talk about some of the challenges! This assumes familiarity with AWS Elastic Container Service, message brokers, and task queues.

We have a pretty solid architecture:

  • Python, Flask API
  • MongoDB database
  • Celery, Redis task queue
  • React, Typescript frontend
  • Docker on AWS ECS and Digital Ocean

What is Celery?

Celery is a task queue with a focus on real-time processing, while also supporting task scheduling. It is a simple, flexible, and reliable distributed system for processing large amounts of messages. It works with Redis, RabbitMQ, and other message brokers.
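
For reference, a minimal Celery setup looks like this (the broker URL is illustrative):

from celery import Celery

app = Celery(__name__, broker='redis://localhost:6379/0')

@app.task
def add(x, y):
    return x + y

# add.delay(2, 3) enqueues the task; a running worker picks it up and computes 5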

Some of the challenges I came across:

First, it took me a while to connect to Redis (ElastiCache) on AWS. You have to add inbound rules to both security groups so the API can communicate with Redis over a specific port like 6379, but it didn’t work for me. I ended up using Redis Cloud because it is a lot simpler than AWS ElastiCache. Another option would be to run Redis on a Digital Ocean droplet or an AWS EC2 instance, but I would have to expose the IP and port to the outside world.

The next challenge was how to get the Celery container to run on AWS Elastic Container Service. There are a couple ways to make it work:

  1. Multiple containers within the same task definition:

{
  "taskDefinitionArn": "task-definition-1",
  "containerDefinitions": [
    { "name": "container-1", "image": "docker-img-1" },
    { "name": "container-2", "image": "docker-img-2" }
  ]
}

  2. Multiple task definitions:

{
  "taskDefinitionArn": "task-definition-1",
  "containerDefinitions": [
    { "name": "container-1", "image": "docker-img-1" }
  ]
}

{
  "taskDefinitionArn": "task-definition-2",
  "containerDefinitions": [
    { "name": "container-2", "image": "docker-img-2" }
  ]
}

But this was not necessary because the Celery container doesn’t have to scale like the API. ECS also requires a health check path, but there isn’t one for the Celery container, which meant that starting a separate cluster was out of the question.

The solution was to create a multi-container deployment: a base image for the Celery task queue and a main container image for the API that builds on top of the base one. The API image simply overrides the CMD directive in the docker file.

Here is what this looks like:

  1. Base Celery Container – Dockerfile.celery
FROM python:3

RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

ADD ./requirements.txt /usr/src/app/requirements.txt

RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt

ADD . /usr/src/app
CMD celery -A apis.deploy.tasks.tasks worker -l info
  2. Flask API Container builds on the base image – Dockerfile.staging

FROM <base-celery-image>
CMD ["sh", "-c", "gunicorn manage:app -w 5 -b"]

I installed docker on a Digital Ocean droplet and ran the Celery containers on it, one setup for staging and another for production. It works as long as Celery can connect to Redis. In fact, I ran the Celery container locally and it worked. That’s how I figured it out. We could spin up an EC2 instance and run docker on it, but it was cheaper to go with a Digital Ocean droplet.

Building and running the containers is trivial from there on. First, log in to a Digital Ocean droplet with Docker installed, then:

Log in to the Docker registry. I’m using GitLab’s Docker registry.

docker login -u USERNAME -p PASSWORD

Build the docker image

docker build -f Dockerfile.celery \
  --cache-from <image> \
  -t <image> .

Push the docker image

docker push <image>

Finally run the container on the Digital Ocean droplet.

docker run <image>

Once everything works correctly, we get some nice logs showing that Celery is running. Here I’m using host.docker.internal to connect to my local Redis. I didn’t want to show the Redis Cloud URL, because anyone with the connection string can intercept your messages.

Celery connected to Redis

Thanks for reading!


Oatfin Beginnings

Thanks again for subscribing to Cloud Musings! Last I checked, it was 38 subscribers strong. If you haven’t subscribed, subscribe to get automatic updates when I publish new editions. I will try to make it interesting sometimes!

This week I got the book Start With Why by Simon Sinek from Darrel, one of our investors at Visible Hands. Darrel and Daniel were some of our first believers. I started reading the book and thought I would take a step back to talk about why I’m working on Oatfin.

Start With Why by Simon Sinek

The short answer is that I left Akamai in 2016 and wanted to focus on startups. I realized that I was not a corporate person. I come from a family of entrepreneurs. My parents are entrepreneurs and my grandparents were also entrepreneurs.

First, I started working on a fintech/blockchain solution, and it was a pain dealing with infrastructure. Back in 2016, blockchain was also blacklisted by every major cloud and payment platform like Stripe. I took a break to work at a few startups to learn.

Now for the long answer. I’ve been a software engineer for 15 years. In my experience working at GE, Putnam Investments, Akamai, and many early-stage startups, cloud infrastructure was a major challenge. The process is not only manual, but tedious and frustrating at times. If you’ve ever used the AWS or Google Cloud user interface, then you know this pain well!

For example, I was working at a fintech startup and one of my roles was to automate our cloud infrastructure. Sometimes it took days to deploy a simple app.

I then worked at a healthcare startup, and it was a lot of the same frustration. We moved from servers to serverless and then back to servers. Other challenges we faced were cloud spend, testing, security, compliance, and observability into the serverless apps. I left after 6 months to work on Oatfin because I believe the process should be simpler.

Some problems with the cloud currently:

  • Painfully slow development cycle.
  • Manual, tedious, time consuming and frustrating.
  • Days to weeks to build a secure cloud infrastructure.
  • Vendor lock-in means high cost.
  • Requires expert knowledge, new staff and skills.

There are some solutions like infrastructure as code (IaC), but I don’t believe developers want to write more code to maintain the code they already have. I’ve written a fair amount of infrastructure as code. Some problems include:

  • Hard-to-maintain manual scripts across multiple environments and clouds.
  • Learning new frameworks and languages like Terraform and Pulumi.
  • Doesn’t remove the complexity of infrastructure.
  • Security issues with cloud credentials and secrets in code.

With Oatfin, our mission is to improve developer productivity and the development experience. We make it simple and easy to manage any application on any cloud.

Where the name comes from: the “oat” piece is because I love oatmeal, and the “fin” is for fintech. Since I already had the domain name and the Twitter and LinkedIn handles, it stuck. I also wanted a name I could build a Google search presence for, as opposed to something that would be confusing.

There are 3 big tenets in our application: infrastructure automation, security, and compliance. Our focus is cloud native, Docker, and Kubernetes.

Why cloud native?

Containerization provides many benefits, like portability across clouds and operating systems, as well as easier scaling. There is no doubt that more enterprise companies will take advantage of cloud native deployments as they continue to adopt the public cloud.

Currently, the app lets customers define their containers from any public or private container registry. We automate the infrastructure so they can choose the type of infrastructure they want to create. Since we have the containers, we can also scan them for vulnerabilities and compliance issues.

There are many features that make us stand out:

  • Cloning an infrastructure to debug production issues.
  • Scheduling a deployment and specifying dependencies that need to be deployed first.
  • Zero trust security.
  • Scanning containers for vulnerabilities.
  • Compliance automation.
  • Team collaboration.

Our target customers are enterprise companies. As a startup, deploying cloud native applications is very simple: you are most likely deploying a single API with a single container. But at an enterprise company, things get complicated very quickly, with very little visibility. For example, you might have an API running on AWS, a database running on premise, and some other pieces running on Google Cloud. Managing these hybrid and multi-cloud environments is very challenging.

The Oatfin architecture is a good example. We have a Python and Flask API that talks to MongoDB with a Celery task queue. The Celery task queue uses Redis as a backend and message broker. The API is deployed on AWS using Elastic Container Service (ECS), the database is deployed on MongoDB Cloud, which is on Google Cloud, and we have Redis running on Redis Cloud. Finally our frontend is running on DigitalOcean along with the Celery task queue.

With that said, we’re raising our seed round and I would love to connect with investors who are excited about the cloud and developer tools space.

Thanks for reading!


Productive 2023 Continued

I’m hoping to write weekly on Sunday about the features we ship, fundraising, customer wins, programs, etc. I find that committing to write about the week forces me to get stuff done worth writing about!

It will be very technical sometimes to target the more technical audience.

It’s been 2 years since I incorporated Oatfin, but I feel we’re now really making progress. Over the last couple of years, we went through a few programs that prepared us. In 2021, we went through the Google for Startups Founders Academy. That was a 6-month program and culminated in Oatfin getting funded through the Google for Startups Black Founders Fund.

In 2022, we went through Lightship Capital’s Bootcamp, then we did another program with Google called Black+ TechAmplify. In late 2022, we also started a year-long development program with Accenture where we have the chance to do paid pilots with their clients. In the last quarter of 2022, we got accepted to Visible Hands’ accelerator program and got our pre-seed funding.

We kicked off our seed fundraising in October 2022. We have a number of investors who are interested, but we need a lead investor who is excited about the space. We are actively looking for one and would love introductions to investors who are excited about the cloud and developer tools space.

Last week we shipped more features:

  1. Chain deployments. For example, service A depends on service B. Scheduling service A for deployment automatically runs service B 15 minutes before. Some use cases: making a database change before deploying an API or deploying the API before deploying the UI.
  2. Team management: invite team members to collaborate on Oatfin. Our business model is very simple: we charge $249/user per month for the SaaS model billed yearly and $2,999/user per year for the on-premise solution.

This week, I’m finishing up the deployment scheduling and tackling compliance automation, but the heavy lifting is targeted for February. Since we have the cloud infrastructure, compliance automation is the next logical step. Compliance is a major problem for a lot of enterprise companies as well as startups.

I’m also looking forward to 2 great programs this week, both starting January 25, that will hopefully help us get to the next level:

  1. AWS CTO Fellowship is a 5-week program for seed-stage startups. It is a growing community of over 3,000 early-stage, venture-backed CTOs, designed to provide early-stage CTOs with technical resources, guidance, and community. The program consists of short weekly sessions with CTOs from top late-stage startups, covering a different theme each week.
  2. Bolster Ready to Raise is the first Bolster for Startups program in 2023. They are partnering with Jenny Fielding of The Fund to help founders work through a tight fundraising process.

Thanks for reading!

REST Clone Operation

Super excited for what we have coming in 2023!

With cloud native, one of the major challenges is observability and being able to debug production issues quickly. But you can’t really debug your app while users are on it in real time.

It’s nice to have a temporary environment that is as close to production as possible, but spinning up and tearing down a cloud infrastructure quickly is a major pain for developers.

One of the cool features we just shipped at Oatfin is the ability to clone an infrastructure or environment one-to-one to make it easy to reproduce and troubleshoot production problems.

Oatfin UI showing the clone operation

In solving this problem, it would have been nice to have a COPY or CLONE operation in REST. It’s something I think is fundamental to almost every API.

For now I’m doing:

POST /resource?source=id

If the “source” query parameter is present, it’s a clone operation; otherwise it’s a create operation. The operation is not idempotent, meaning it will create a new copy each time the API is called.

Here is what this looks like. For the frontend, I’m using React, TypeScript with a tool called umijs and Ant Design from Ant Financial:

Create request

import { request } from 'umi';

export async function createApp(params) {
  return request('/v1/apps', {
    method: 'POST',
    data: params,
    headers: {
      Authorization: 'Bearer ACCESS_TOKEN',
    },
  });
}

Clone request

import { request } from 'umi';

export async function cloneApp(params, id) {
  return request(`/v1/apps?source=${id}`, {
    method: 'POST',
    data: params,
    headers: {
      Authorization: 'Bearer ACCESS_TOKEN',
    },
  });
}

Calling these functions inside the React component:

export const AppComponent: FC<Props> = (props) => {
    const handleCreate = async (values) => {
        const res = await createApp(values);
    };

    const handleClone = async (values) => {
        const res = await cloneApp(values, current?.id);
    };

    return (

The Python & Flask code skeleton:

@api.route('/apps', methods=['POST'])
def create_clone_app():
    req_data = flask.request.get_json()
    clone_source = flask.request.args.get('source')

    if not clone_source:
        app = AppsService().create(...)
    else:
        app = AppsService().clone(...)

    return flask.jsonify(
        ...
    ), 200

I’d be interested to see how others deal with the CLONE or COPY in REST.

New Year Goals

Lots of great things happened in 2022, but it was no doubt one of the most brutal years given the recession, mass layoffs, and investor pullback. I learned a lot about fundraising, business, and technology.

Where I lacked the most was building relationships with investors. Further complicating things was the Covid pandemic: building a company during a raging global pandemic was challenging.

Some of my focus areas for 2023:

Build relationships

  • Customers
  • Strategic Partners
  • Employees/Co-founders
  • Investors

Expand my knowledge about Venture Capital/Financing

  • Grants like the NSF SBIR/STTR program
  • SAFEs
  • Angel Groups
  • Accelerators
  • Convertible Notes
  • Crowdfunding

Expand my knowledge about business

  • Startup business valuation
  • Intellectual property
  • Financial Accounting
  • Pitch deck/Business plan
  • Selling and marketing to enterprise businesses

Expand my knowledge about Product Management and Technology

  • Product Management tools (JIRA, Confluence, etc.)
  • Cloud certifications (AWS, Google Cloud, Azure)
  • Learn a new programming language like Rust or Go
  • Developer conferences

23andMe Ancestry

Today, I got the complete results of my 23andMe Ancestry DNA test. I was born in Haiti but moved to the US as a kid. I knew my ancestors mostly came from West Africa, but the exact place is something I always wanted to know. On my mom’s side, I knew I also had some European heritage, but not much more information.

The most interesting part is my ancestry timeline. The timeline shows the ancestors who were of 100% single heritage and when they were likely born. It seems about right: Haiti got its independence in 1804, which places some of my ancestors, European slave owners from Spain, Portugal, and Italy, as well as Indigenous Americans and Southern East Africans, in the times of slavery.

It’s also very interesting that I have a 100% Nigerian ancestor who was born as recently as the 1900s, which leaves a lot for research. To my knowledge, none of my grandparents is Nigerian. It could be a great-grandparent that I don’t have much information about.

Some surprises, but I was not really that surprised. In college, I met a lot of friends from Africa who instantly recognized me as a fellow African, but I was somewhat annoyed because I had never set foot in Africa. Growing up, telling Haitians that they are Africans was seen as somewhat derogatory, a form of self-hatred and ignorance. That changed for me because I was exposed to a lot of Africans in college and grad school.

I got along well and felt a connection with Nigerians, Ghanaians, Liberians, Senegalese, Tanzanians, Rwandans, etc., because I saw myself in them. In fact, some of my best friends are from this part of the world.

The results:

95% Sub-Saharan African

  • 46.3% Nigerian
  • 23% Ghanaian, Liberian, Sierra Leonean
  • 11.1% Angolan, Congolese
  • 7% Broadly West African
  • 6.3% Senegambian (Senegal/Gambia), Guinean
  • 1.3% South East Africa

4% European

  • 2.4% Spanish, Portuguese
  • 0.8% Italian
  • 1% Broad European

1% Trace/Unassigned ancestry

Sub-Saharan Africa

Congo and Southern East Africa

Some of my European ancestors are from Spain, Portugal, and Italy

The 23andMe data set is very small (12 million customers compared to 8 billion people), so it’s interesting to see that I have 1,380 distant relatives (2nd, 3rd, 4th, and 5th cousins) throughout the world, of which 477 have added a location on the map.

Sadly, not many distant relatives show up in Africa and the Far East yet.

Empty Offices

I was talking to a friend last week, and one of the topics that came up was empty offices: the massive skyscrapers that were built to anticipate the growth of corporate workers.

But with the lingering effects of the pandemic, these beautiful massive buildings we all call “downtown” are going to be largely abandoned. I think the story will end just like it did for manufacturing companies in the 80s and 90s, which left largely abandoned factories and towns.

Currently, few buildings in Boston are 100% occupied. In fact, the average vacancy rate around Boston is around 25%. I think this trend will continue as we see more and more layoffs.

In the last 20 years, every company has had to become a tech company, except of course certain industries like healthcare, education, and government. With more advances in generative AI and machine learning, we are seeing a rapid decline in creative work like digital marketing, advertising, and of course newspapers. Even fields we thought would not be impacted, like software engineering, are being impacted now.
