This week's Cloud Musings dives into dynamic scheduling; next week, I will talk about User Acquisition and Growth. If you like this kind of content, subscribe to our newsletter: https://lnkd.in/e3Xj4qhG
Dynamic scheduling is one of the biggest challenges developers face. For example: I want to deploy the Oatfin API on February 10, 2023 at 3:00 AM. Dynamic scheduling is hard because there is little support for it out of the box. On top of that, as every developer knows, timezones are notoriously hard to deal with.
Static scheduling, on the other hand, is simple: every programming language provides some kind of support for writing cron jobs. An example of a cron job: I want to import data from a vendor every day at 8:00 PM.
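As a sketch of how simple the static case is, that daily import can be expressed as a single Celery beat entry (the broker URL and task name below are placeholders):

from celery import Celery
from celery.schedules import crontab

app = Celery('tasks', broker='redis://localhost:6379/0')  # placeholder broker URL

# Static schedule: run the vendor import every day at 8:00 PM
app.conf.beat_schedule = {
    'import-vendor-data': {
        'task': 'tasks.import_vendor_data',  # placeholder task name
        'schedule': crontab(hour=20, minute=0),
    },
}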
Here is a very high-level view of how we deal with this problem:
For the frontend:
1. To keep it simple, a user specifies the year, month, day, hour, and minute from their own point of view.
2. We use the moment-timezone npm package to guess the user's timezone from the browser.
For the backend:
1. We run 3 Docker containers: celery, scheduler, and api. Celery is the base image; the other 2 containers extend it and override the CMD directive in the Dockerfile.
2. We store the exact schedule in MongoDB along with the user's timezone.
3. We use Celery as the task queue and celery-redbeat as the scheduler. Celery-redbeat provides a RedBeatSchedulerEntry object for inserting schedules into Redis. When we insert a schedule into Redis, we translate the user's schedule to a UTC date and time.
4. Once the task is complete, we mark it complete in MongoDB, which removes it from the list of scheduled deployments.
5. When a user cancels a task, we delete its entry from Redis and it won't run.
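Here is a minimal sketch of steps 3 and 5, assuming an already-configured Celery app; the import path of the deploy task is hypothetical:

from datetime import datetime
from zoneinfo import ZoneInfo

from celery.schedules import crontab
from redbeat import RedBeatSchedulerEntry

from app import celery_app  # assumption: your configured Celery app


def schedule_deployment(name, year, month, day, hour, minute, tz_name):
    # Interpret the user's wall-clock time in their own timezone...
    local_dt = datetime(year, month, day, hour, minute, tzinfo=ZoneInfo(tz_name))
    # ...and translate it to UTC before inserting the schedule into Redis
    utc_dt = local_dt.astimezone(ZoneInfo('UTC'))
    entry = RedBeatSchedulerEntry(
        name,
        'apis.deploy.tasks.tasks.deploy',  # hypothetical task path
        crontab(month_of_year=utc_dt.month, day_of_month=utc_dt.day,
                hour=utc_dt.hour, minute=utc_dt.minute),
        app=celery_app,
    )
    entry.save()
    return entry.key  # keep the key so the schedule can be cancelled later


def cancel_deployment(key):
    # Deleting the entry from Redis means the task will never run
    RedBeatSchedulerEntry.from_key(key, app=celery_app).delete()

Note that a crontab entry like this would recur yearly if left in place, which is why removing completed schedules (step 4) matters.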
As promised, this newsletter will sometimes get very technical for the technical audience. Don't forget to subscribe for more updates. It's 36 subscribers strong on LinkedIn. Thank you for reading!
It took me the better part of the weekend to get the Celery/Redis task queue and scheduler working, but it's working now! Happy to talk about some of the challenges. This assumes familiarity with AWS Elastic Container Service, message brokers, and task queues.
We have a pretty solid architecture:
Python, Flask API
MongoDB database
Celery, Redis task queue
React, Typescript frontend
Docker on AWS ECS and DigitalOcean
What is Celery?
Celery is a task queue with a focus on real-time processing, while also supporting task scheduling. It is a simple, flexible, and reliable distributed system for processing large amounts of messages. It works with Redis, RabbitMQ, and other message brokers.
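A minimal sketch of the model (the broker URL and task body are placeholders):

from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379/0')  # placeholder broker URL

@app.task
def deploy(app_id):
    # Long-running work happens in a worker process, not in the API request
    ...

# The caller enqueues the task and returns immediately
deploy.delay('some-app-id')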
Some of the challenges I came across:
First, it took me a while to connect to Redis (ElastiCache) on AWS. You have to add inbound rules to both security groups so the API can communicate with Redis over a specific port like 6379, but it didn't work for me. I ended up using Redis Cloud because it is a lot simpler than AWS ElastiCache. Another solution would be to run Redis on a DigitalOcean droplet or AWS EC2 instance, but I would have to expose the IP and port to the outside world.
The next challenge was getting the Celery container to run on AWS Elastic Container Service. There are a couple of ways to make it work:
Running multiple containers within the same task definition, or running Celery as its own service in a separate cluster.
Neither was a good fit. Packing Celery into the API's task definition was unnecessary because the Celery container doesn't have to scale like the API. And ECS requires a health check path for a service, but the Celery worker doesn't expose one, which meant that starting a separate cluster was out of the question.
The solution was to create a multi-container deployment: a base image for the Celery task queue and a main container image for the API that builds on top of it. The API image simply overrides the CMD directive in the Dockerfile.
Here is what this looks like:
Base Celery Container – Dockerfile.celery
FROM python:3
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Copy requirements first so dependency installs are cached between builds
ADD ./requirements.txt /usr/src/app/requirements.txt
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt
ADD . /usr/src/app
ENV FLASK_ENVIRONMENT=stg
# Start the Celery worker; the API image overrides this CMD
CMD celery -A apis.deploy.tasks.tasks worker -l info
Flask API Container builds on the base image – Dockerfile.staging
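The override itself is tiny. A minimal sketch, assuming the base image is published as registry.gitlab.com/oatfin/automata:celery (the tag used below); 'wsgi:app' is a hypothetical Gunicorn entrypoint for the Flask app:

FROM registry.gitlab.com/oatfin/automata:celery
# Dependencies, source, and env are all inherited from the Celery base image;
# only the startup command changes to run the Flask API instead of the worker
CMD gunicorn --bind 0.0.0.0:5000 wsgi:app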
I installed Docker on a DigitalOcean droplet and ran the Celery containers on it, one setup for staging and another for production. It works as long as Celery can connect to Redis; in fact, I first ran the Celery container locally and it worked, which is how I figured this out. We could spin up an EC2 instance and run Docker on it, but a DigitalOcean droplet was cheaper.
Building and running the containers is trivial from there. First, log in to a DigitalOcean droplet with Docker installed, then:
Log in to the Docker registry (I'm using GitLab's container registry): docker login registry.gitlab.com
Finally, run the container on the droplet:
docker run registry.gitlab.com/oatfin/automata:celery
Once everything works correctly, we get some nice logs showing that Celery is running. Here I'm using host.docker.internal to connect to my local Redis; I didn't want to show the Redis Cloud connection string because anyone who has it can connect to the broker and read your messages.
Thanks again for subscribing to Cloud Musings! Last I checked, it was 38 subscribers strong. If you haven’t subscribed, subscribe to get automatic updates when I publish new editions. I will try to make it interesting sometimes!
This week I got the book Start With Why by Simon Sinek from Darrel, one of our investors at Visible Hands. Darrel and Daniel were some of our first believers. I started reading the book and thought I would take a step back to talk about why I'm working on Oatfin.
The short answer is that I left Akamai in 2016 because I wanted to focus on startups; I realized I was not a corporate person. I come from a family of entrepreneurs: my parents are entrepreneurs, and my grandparents were too.
First, I worked on a fintech/blockchain product, and dealing with infrastructure was a pain. Back in 2016, blockchain was also blacklisted by every major cloud and by payment platforms like Stripe. I took a break to work at a few startups and learn.
Now for the long answer. I've been a software engineer for 15 years. In my experience working at GE, Putnam Investments, Akamai, and many early-stage startups, cloud infrastructure was a major challenge. The process is not only manual but tedious and frustrating at times. If you've ever used the AWS or Google Cloud console, you know this pain well!
For example, at one fintech startup, one of my roles was to automate our cloud infrastructure. Sometimes it took days to deploy a simple app.
At another startup, in healthcare, it was much the same frustration. We moved from servers to serverless and then back to servers. We also struggled with cloud spend, testing, security, compliance, and observability into the serverless apps. I left after 6 months to work on Oatfin because I believe the process should be simpler.
Some problems with the cloud currently:
Painfully slow development cycle.
Manual, tedious, time-consuming, and frustrating.
Days to weeks to build a secure cloud infrastructure.
Vendor lock-in means high cost.
Requires expert knowledge, new staff and skills.
There are some solutions like infrastructure as code (IaC), but I don’t believe developers want to write more code to maintain the code they already have. I’ve written a fair amount of infrastructure as code. Some problems include:
Hard-to-maintain manual scripts across multiple environments and clouds.
Learning new frameworks and languages like Terraform and Pulumi.
Doesn’t remove the complexity of infrastructure.
Security issues with cloud credentials and secrets in code.
With Oatfin, our mission is to improve developer productivity and the development experience. We make it simple to manage any application on any cloud.
Where the name comes from: the "oat" is because I love oatmeal, and the "fin" is for fintech. Since I already had the domain name and the Twitter and LinkedIn handles, the name stuck. I also wanted something I could build a distinct Google search presence for, rather than something confusing.
There are 3 big tenets in our application: infrastructure automation, security, and compliance. Our focus is cloud native, Docker, and Kubernetes.
Why cloud native?
Containerization provides many benefits: portability across clouds and operating systems, and easier scaling. There is no doubt that more enterprise companies will take advantage of cloud native deployments as they continue to adopt the public cloud.
Currently, the app lets customers define their containers from any public or private container registry. We automate the infrastructure so they can choose the type of infrastructure they want to create. Since we have the containers, we can also scan them for vulnerabilities and compliance issues.
There are many features that make us stand out:
Cloning an infrastructure to debug production issues.
Scheduling a deployment and specifying dependencies that need to be deployed first.
Zero trust security.
Scanning containers for vulnerabilities.
Compliance automation.
Team collaboration.
Our target customers are enterprise companies. For a startup, deploying cloud native applications is simple: you are most likely deploying a single API in a single container. But at an enterprise, things get complicated very quickly, with very little visibility. For example, you might have an API running on AWS, a database running on premises, and other pieces running on Google Cloud. Managing these hybrid and multi-cloud environments is very challenging.
The Oatfin architecture is a good example. We have a Python/Flask API that talks to MongoDB, with a Celery task queue that uses Redis as both result backend and message broker. The API is deployed on AWS Elastic Container Service (ECS), the database is on MongoDB Cloud (which runs on Google Cloud), and Redis runs on Redis Cloud. Finally, our frontend runs on DigitalOcean along with the Celery task queue.
With that said, we’re raising our seed round and I would love to connect with investors who are excited about the cloud and developer tools space.
I'm hoping to write weekly on Sundays about the features we ship, fundraising, customer wins, programs, etc. I find that forcing myself to write about the week forces me to get stuff done worth writing about!
It will sometimes be very technical, for the more technical audience.
It's been 2 years since I incorporated Oatfin, but I feel we're now really making progress. Over the last couple of years, we went through a few programs that prepared us. In 2021, we went through the Google for Startups Founders Academy. That was a 6-month program and culminated in Oatfin getting funded through the Google for Startups Black Founders Fund.
In 2022, we went through Lightship Capital's Bootcamp, then another Google program called Black+ TechAmplify. In late 2022, we also started a year-long development program with Accenture, where we have the chance to do paid pilots with their clients. In the last quarter of 2022, we were accepted to Visible Hands' accelerator program and got our pre-seed funding.
We kicked off our seed fundraising in October 2022. We have a number of investors who are interested, but we still need a lead investor who is excited about the space. We are actively looking for one and would love introductions to investors who are excited about the cloud and developer tools space.
Last week we shipped more features:
Chain deployments. For example, service A depends on service B: scheduling service A for deployment automatically runs service B 15 minutes before. Some use cases: making a database change before deploying an API, or deploying the API before deploying the UI. (See the sketch after this list.)
Team management: invite team members to collaborate on Oatfin. Our business model is simple: $249/user per month for the SaaS model, billed yearly, and $2,999/user per year for the on-premise solution.
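A minimal sketch of the chaining rule from the first item above, with a stub standing in for the real scheduler:

from datetime import datetime, timedelta

def schedule(service: str, at: datetime) -> None:
    # Stub standing in for the real scheduler (e.g., the RedBeat entry earlier)
    print(f'{service} scheduled for {at.isoformat()}')

def schedule_chain(service_a: str, service_b: str, deploy_at: datetime) -> None:
    # Service A depends on service B, so B runs 15 minutes before A
    schedule(service_b, deploy_at - timedelta(minutes=15))
    schedule(service_a, deploy_at)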
This week, I’m finishing up the deployment scheduling and tackling compliance automation, but the heavy lifting is targeted for February. Since we have the cloud infrastructure, compliance automation is the next logical step. Compliance is a major problem for a lot of enterprise companies as well as startups.
I'm also looking forward to 2 great programs this week, both starting January 25, that will hopefully help us get to the next level:
The AWS CTO Fellowship is a 5-week program for seed-stage startups. It is a growing community of over 3,000 early-stage, venture-backed CTOs, designed to provide them with technical resources, guidance, and community. The program consists of short weekly sessions with CTOs from top late-stage startups, covering a different theme each week.
Bolster Ready to Raise is the first Bolster for Startups program in 2023. They are partnering with Jenny Fielding of The Fund to help founders work through a tight fundraising process.
With cloud native, one of the major challenges is observability: being able to debug production issues quickly. But you can't really debug your app while users are hitting it in real time.
It’s nice to have a temporary environment that is as close to production as possible, but spinning up and tearing down a cloud infrastructure quickly is a major pain for developers.
One of the cool features we just shipped at Oatfin is the ability to clone an infrastructure or environment one-to-one to make it easy to reproduce and troubleshoot production problems.
In solving this problem, it would have been nice to have a COPY or CLONE operation in REST, something I think is fundamental to almost every API.
For now I’m doing:
POST /resource?source=id
If the "source" query parameter is present, it's a clone operation; otherwise it's a create operation. The operation is not idempotent, meaning it will create a new copy each time the API is called.
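A minimal sketch of that branch in Flask; ResourceService is a hypothetical stand-in for the real persistence layer:

import flask

api = flask.Blueprint('resource', __name__)

@api.route('/resource', methods=['POST'])
def create_resource():
    source_id = flask.request.args.get('source')
    if source_id:
        # Clone: copy an existing resource one-to-one
        resource = ResourceService().clone(source_id)  # hypothetical service
    else:
        # Create: build a new resource from the request body
        resource = ResourceService().create(flask.request.get_json())
    # Not idempotent: every call produces a new resource
    return flask.jsonify(resource.json()), 201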
Here is what this looks like. For the frontend, I'm using React and TypeScript with a tool called UmiJS, plus Ant Design from Ant Financial.
Lots of great things happened in 2022, but it was no doubt one of the most brutal years, given the recession, mass layoffs, and investor pullback. I learned a lot about fundraising, business, and technology.
Where I fell short was in building relationships with investors. Further complicating things was the Covid pandemic: building a company during a raging global pandemic was challenging.
Some of my focus areas for 2023:
Build relationships
  Customers
  Strategic Partners
  Employees/Co-founders
  Investors
Expand my knowledge about Venture Capital/Financing
  Grants like the NSF SBIR/STTR program
  SAFEs
  Angel Groups
  Accelerators
  Convertible Notes
  Crowdfunding
Expand my knowledge about business
  Startup business valuation
  Intellectual property
  Financial Accounting
  Pitch deck/Business plan
  Selling and marketing to enterprise businesses
Expand my knowledge about Product Management and Technology
Today, I got the complete results of my 23andMe Ancestry DNA test. I was born in Haiti but moved to the US as a kid. I knew my ancestors mostly came from West Africa, but the exact places are something I always wanted to know. On my mom's side, I knew I also had some European heritage, but not much more information.
The most interesting part is my ancestry timeline. The timeline shows the ancestors who were of 100% single-region heritage and when they were likely born. It seems plausible: Haiti won its independence in 1804, which would place some of my ancestors, European slave owners from Spain, Portugal, and Italy, as well as Indigenous Americans and Southeastern Africans, in the times of slavery.
It's also very interesting that I have a 100% Nigerian ancestor born as recently as the 1900s, which leaves a lot to research. To my knowledge, none of my grandparents is Nigerian. It could be a great-grandparent that I don't have much information about.
There were some surprises, but nothing too shocking. In college, I met a lot of friends from Africa who instantly recognized me as a fellow African, which somewhat annoyed me because I had never set foot in Africa. Growing up, to tell Haitians that they are Africans was somewhat derogatory; it's a form of self-hatred and ignorance. That changed for me because I was exposed to a lot of Africans in college and grad school.
I got along well and felt a connection with Nigerians, Ghanaians, Liberians, Senegalese, Tanzanians, Rwandans, etc., because I saw myself in them. In fact, some of my best friends are from these countries.
The results:
95% Sub-Saharan African
  46.3% Nigerian
  23% Ghanaian, Liberian, Sierra Leonean
  11.1% Angolan, Congolese
  7% Broadly West African
  6.3% Senegambian (Senegal/Gambia), Guinean
  1.3% Southern East African
4% European
  2.4% Spanish, Portuguese
  0.8% Italian
  1% Broadly European
1% Trace/Unassigned ancestry
  Sub-Saharan Africa: Congo and Southern East Africa
Some of my European ancestors are from Spain, Portugal, and Italy.
The 23andMe data set is very small (about 12 million customers out of 8 billion people), so it's interesting that I have 1,380 distant relatives (2nd, 3rd, 4th, and 5th cousins) throughout the world, of which 477 have added a location on the map.
Sadly, not many distant relatives show up in Africa or the Far East yet.
I was talking to a friend last week, and one of the topics that came up was empty offices: the massive skyscrapers that were built in anticipation of a growing corporate workforce.
But with the lingering effects of the pandemic, these beautiful massive buildings we all call "downtown" are going to be largely abandoned. I think the story will end much like manufacturing did in the '80s and '90s, leaving behind abandoned factories and towns.
Currently, few buildings in Boston are 100% occupied; in fact, the average vacancy rate in the Boston area is around 25%. I think this trend will continue as we see more and more layoffs.
In the last 20 years, nearly every company has had to become a tech company, except of course certain industries like healthcare, education, and government. With advances in generative AI and machine learning, we are seeing a rapid decline in creative work like digital marketing, advertising, and of course newspapers. Even fields we thought would not be impacted, like software engineering, are being impacted.
We are super excited to announce that Oatfin will be part of the Google for Startups Founders Academy! At Oatfin, our mission is to enable software engineers to deliver cloud applications faster through self-service automation. We do for software engineers what self-checkout does for retail stores: we enable them to be more agile and remove the dependency on platform teams to deliver cloud applications.
The Google for Startups Founders Academy begins on March 3rd and consists of hands-on workshops across a range of topics, including customer acquisition, hiring, fundraising, and tech enablement. We are one of 50 high-potential startups selected by the Google for Startups team for the first nationwide cohort. Last year, the program was piloted in Atlanta with 45 Georgia-based companies.
We are excited about how Google's technologies and ecosystem of tools will help us further our mission. We plan to leverage Google Cloud to further enable developers to deliver cloud applications faster.
While working on Oatfin, one use case we just finished implementing is letting users sign up and log in to the app with GitHub. Users can now easily log in with Google, GitHub, and GitLab.
Our default login is also passwordless, meaning a user can log in with just an email address; we send the login link directly to their email. This adds a layer of security because the user has to have access to the email account to log in, and it also validates that the user is a real person. Another problem passwordless login solves is syncing users across different sign-in providers: we can guarantee that a user who signed in with Google is the same user from GitHub or GitLab, because everything keys off the same email address.
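As an illustration only (not necessarily how Oatfin implements it), a magic-link flow can be sketched with a signed, expiring token; the secret key, URL, and mailer below are placeholders:

from itsdangerous import URLSafeTimedSerializer

serializer = URLSafeTimedSerializer('CHANGE_ME')  # placeholder secret key

def send_email(to, subject, body):
    # Placeholder mailer
    ...

def send_login_link(email):
    # Sign the email address into a tamper-proof token
    token = serializer.dumps(email)
    link = 'https://example.com/login?token={}'.format(token)  # placeholder URL
    send_email(to=email, subject='Your login link', body=link)

def verify_login_token(token):
    # Raises SignatureExpired/BadSignature if the token is older than
    # 10 minutes or has been tampered with
    return serializer.loads(token, max_age=600)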
Our app uses React with TypeScript on the frontend and Python/Flask on the backend. Here is what it looks like.
Part one is to create an OAuth app in GitHub, as explained here.
Part two is setting up the React/TypeScript component. When unauthenticated users visit the home page, they are redirected to the login page; that should be your application's default behavior already.
When users click your 'Login with Github' button, they are first sent to GitHub's login page with the scopes you want and your client_id, like this:
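https://github.com/login/oauth/authorize?scope=user:email&client_id=YOUR_CLIENT_ID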
After users log in on GitHub's website, GitHub redirects back to your callback page, which in our case is still the login page. The callback URL now includes a parameter: ?code=some_code_text.
Your goal now is to take the code returned from GitHub and pass it to the Python/Flask app. My login component looks like this:
const Login: React.FC<{}> = () => {
  // ...
  useEffect(() => {
    // See if a code was returned; GitHub returns an error if the user denies the request
    const newUrl = window.location.href;
    const hasCode = newUrl.includes('?code=');
    if (hasCode) {
      // Get the code value out of the URL
      const url = newUrl.split('?code=')[1].split('#/login');
      const data = {
        code: url[0],
      };
      // Send the code to the backend
      submitGithub(data as LoginParamsType);
    }
  }, [submitGithub]);

  const onClick = async () => {
    window.location.href =
      'https://github.com/login/oauth/authorize?scope=user:email&client_id=YOUR_CLIENT_ID';
  };

  return (
    <Button onClick={onClick}>
      Continue with Github
    </Button>
  );
};
Here we take the code and call the API, which returns an access token to store in localStorage.
const submitGithub = async (values: LoginParamsType) => {
  try {
    const res = await accountLogin({ ...values });
    if (res !== undefined && res.access_token !== undefined) {
      // Persist the token so subsequent API calls can authenticate
      window.localStorage.setItem('oatfin_access_token', res.access_token);
    }
  } catch (error) {
    message.error('Unable to login with Github.');
  }
};
Part three is the Python/Flask login API. We make 2 calls to GitHub: first to exchange the code from the frontend for an access token, then to use the access token to get the user details.
import flask
import requests

# 'api' is our Flask blueprint; UserService and UnauthenticatedError are ours
@api.route('/login', methods=['POST'])
def login():
    req_data = flask.request.get_json()
    code = req_data.get('code')
    if code:
        data = {
            'client_id': app_config.GITHUB_CLIENT_ID,
            'client_secret': app_config.GITHUB_CLIENT_SECRET,
            'code': code,
        }
        # Exchange the 'code' for an access token
        res = requests.post(
            url='https://github.com/login/oauth/access_token',
            data=data,
            headers={'Accept': 'application/json'},
        )
        if res.status_code != 200:
            raise UnauthenticatedError()
        res_json = res.json()
        access_token = res_json['access_token']
        # Get the user details using the access token
        res = requests.get(
            url='https://api.github.com/user',
            headers={
                'Accept': 'application/json',
                'Authorization': 'token {}'.format(access_token),
            },
        )
        if res.status_code != 200:
            raise UnauthenticatedError()
        res_json = res.json()
        # GitHub may return no public name; guard against None and single-word names
        names = (res_json.get('name') or '').split()
        first_name = names[0] if names else ''
        last_name = names[-1] if len(names) > 1 else ''
        login = res_json['login'] or res_json['email']
        avatar = res_json['avatar_url']
        # Create the user
        user = UserService().create(...)
        access_token = create_access_token(identity=user.json())
        return flask.jsonify(
            access_token=access_token
        ), 200