My thoughts about life, startups, and weight lifting.

Author: Jay Paulynice

Oatfin Architecture

At Oatfin, we’ve been hard at work crafting a robust architecture that powers our web applications to handle millions of concurrent users and deployments seamlessly. In this post, we’re thrilled to unveil the architecture behind the scenes, showcasing the technologies and strategies that drive our platform’s success.

Architecture Overview:

Let’s dive into the heart of our architecture. At Oatfin, we’ve crafted a solid foundation comprising key components:

  1. React, Typescript UI deployed on an EC2 instance: Our user interface is built using React and Typescript, providing a seamless and dynamic user experience. Deployed on Amazon EC2 instances, our UI delivers high performance and responsiveness to users worldwide.
  2. Python, Flask API using Docker and deployed using AWS Elastic Container Service (ECS): Powering our backend, we rely on Python’s robustness and the elegance of Flask to develop our APIs. Leveraging Docker containers, we ensure consistency and scalability in our deployments. AWS Elastic Container Service (ECS) manages our containerized applications, offering effortless scalability and reliability.
  3. MongoDB database using MongoDB Atlas: Data is the lifeline of any application, and at Oatfin, we entrust MongoDB to store and manage our data efficiently. With MongoDB Atlas, we enjoy the benefits of a fully managed database service, including automated backups, security features, and seamless scalability.
  4. Elasticsearch for indexing and searching documents: Searching and indexing documents is a breeze with Elasticsearch. We utilize its powerful search capabilities to enhance the discoverability of content within our applications, delivering fast and relevant results to our users.
  5. Celery, Redis task queue, and scheduler running inside Docker and deployed on an EC2 instance: Task queuing and scheduling are essential for handling background processes and asynchronous tasks. With Celery and Redis, we orchestrate our tasks efficiently, ensuring optimal performance and resource utilization. Docker containers simplify the deployment and management of Celery and Redis components, while EC2 instances offer the flexibility and scalability we need.
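
To make this concrete, here’s a minimal docker-compose sketch that approximates the stack for local development. The service names, images, and environment variables are illustrative assumptions; as described above, production actually runs on EC2, ECS, MongoDB Atlas, and friends.

# docker-compose.yml: an illustrative local approximation of the stack.
# Names, images, and variables are assumptions, not our production setup.
version: "3.8"
services:
  api:
    build: .                              # Flask API image
    ports:
      - "5000:5000"
    environment:
      MONGO_URI: mongodb://mongo:27017/oatfin
      REDIS_URL: redis://redis:6379/0
    depends_on: [mongo, redis, elasticsearch]
  worker:
    build: .                              # same image, Celery worker command
    command: celery -A tasks worker -l info
    depends_on: [redis, mongo]
  mongo:
    image: mongo:6
  redis:
    image: redis:7
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.3
    environment:
      discovery.type: single-node
      xpack.security.enabled: "false"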

Scalability and Deployment:

One of the hallmarks of our architecture is its scalability. By leveraging cloud-native technologies and best practices, we’ve built a platform that can effortlessly scale to meet growing demands. Whether it’s handling spikes in user traffic or deploying updates seamlessly, our architecture is designed to adapt and grow with ease.

Our deployment process is streamlined and efficient. We deploy our React UI on EC2 instances, containerize our Flask API using Docker, and leverage AWS ECS for orchestration and scaling. MongoDB Atlas and Elasticsearch further simplify our deployment, offering managed services that eliminate infrastructure overhead.

Challenges and Future Improvements: Of course, building and scaling a complex architecture like ours comes with its share of challenges. From optimizing performance to ensuring security and reliability, we continuously strive to overcome obstacles and improve our platform.

Looking ahead, we’re excited about the future possibilities. We’re exploring new technologies, optimizing our architecture for even greater performance, and refining our processes to deliver an exceptional user experience.

At Oatfin, we’re proud of the architecture we’ve built, and we’re excited to share it with you. Our commitment to scalability, reliability, and innovation drives us forward as we continue to push the boundaries of what’s possible. Thank you for joining us on this journey, and we look forward to shaping the future of technology together.

Oatfin Cloud Cost Intelligence Beta: 6 Months In!

It’s been an exciting 6 months here at Oatfin. Since we launched Oatfin Cloud Cost Intelligence in beta, we’ve received invaluable feedback, and I wanted to share our progress.

Feedback that Drives Innovation:

First and foremost, a huge thank you to everyone who jumped into the beta testing phase with us. We’ve been diligently listening, learning, and refining Oatfin Cloud Cost Intelligence based on your insights. Your input is shaping the future of our platform, and we couldn’t be more grateful.

What We’ve Learned:

Our beta users have highlighted specific pain points and challenges they face in managing AWS costs. Unpredictable costs, hidden fees, and the lack of visibility are consistent themes. Your experiences are the driving force behind our mission to simplify and optimize cloud cost management.

Refining the Intelligence:

Since the beta launch, we’ve been hard at work fine-tuning the AI models, enhancing data collection processes, and optimizing our approach to better predict and manage cloud costs. The journey to solving the cloud cost puzzle is ongoing, and the feedback is the compass guiding us in the right direction.

Exciting Features on the Horizon:

Based on your suggestions and needs, we’re gearing up to roll out some exciting features in the coming weeks. Stay tuned for updates on enhanced cost visibility, personalized recommendations, and even more ways to take control of your AWS spending.

How You Can Help:

Your continued participation in the beta testing phase is crucial. Please keep those insights coming! Whether it’s a small observation or a big “Aha!” moment, every bit helps us refine and improve Oatfin Cloud Cost Intelligence.

Join the Conversation:

We’ve set up a dedicated space for beta users to share thoughts, ask questions, and connect with each other on Slack. Your experiences and expertise contribute to a vibrant and collaborative Oatfin community.

Thank you for being part of this journey with us. Together, we’re reshaping the way cloud costs are managed.

Stay tuned for more updates, and as always, feel free to reach out if you have any questions or thoughts to share.

Best,

Jay

Oatfin Year-End Review (2023)

Thank you for subscribing to Cloud Musings! If you haven’t subscribed yet, please consider subscribing to receive automatic updates when I publish new editions.

As we approach the close of another transformative year, we are delighted to reflect on Oatfin’s remarkable journey in 2023. It’s been a year filled with growth, challenges, and unwavering commitment to our mission.

Highlights of 2023:

  • Product Evolution: Based on the invaluable feedback we received from our users, we launched Oatfin Cloud Cost Intelligence in beta, designed to simplify cloud cost management on AWS. We also made many improvements and added new features to our Oatfin Cloud product.
  • Cloud Musings Newsletter: In 2023, we launched the Cloud Musings newsletter on LinkedIn and Substack. We’ve seen steady subscriber growth and an expanding Oatfin community.
  • Financial Milestones: We’re proud to report a 2.3x growth in revenue this year, underscoring Oatfin’s resilience and market relevance. Our continued focus on cloud infrastructure has yielded promising results, setting the stage for sustainable growth.
  • Customer Success: We’ve received positive feedback from our valued customers, reaffirming the impact and value of our solutions in their operations.

Navigating Challenges and Embracing Opportunities:

While the year presented its share of challenges, particularly the fundraising environment, our dedication and adaptability turned obstacles into opportunities. Through strategic planning and collaborative efforts, we’ve overcome hurdles, gaining invaluable insights and experience along the way.

Looking Ahead: A Glimpse into 2024:

  • Fundraising: We’re on track to conclude our seed fundraising, paving the way for accelerated growth and innovation.
  • Innovation: We’re gearing up to expand our product suite, integrating cutting-edge generative AI solutions.
  • Market Expansion: Through strategic partnerships and initiatives, we aim to fortify our market footprint.
  • Excellence in R&D: Our commitment to innovation remains unwavering, with increased investments in research & development and customer experience.
  • Sustainability & Community: Rooted in our core values, we’re committed to fostering sustainability, responsibility, and active community engagement.

In Conclusion:

As we conclude 2023, we extend our heartfelt gratitude to our loyal customers, partners, and stakeholders. Your unwavering support, collaboration, and trust have been instrumental in Oatfin’s success, and we look forward to achieving new milestones together in the coming year.

Wishing you a joyful holiday season and a prosperous New Year!

Warm regards,

Jay

Gratitude in the Startup Journey

As I reflect on the journey of building Oatfin, I can’t help but feel an immense sense of gratitude. Here are some things I’m thankful for as an early-stage startup founder:

1. Supportive Network: Grateful for the mentors, advisors, friends, and family who’ve been with us from the beginning. The guidance, encouragement, and connections have been invaluable.

2. Learning Opportunities: Every challenge is a chance to learn and grow. Gratitude for the experiences, both good and tough, that shape us and make us more resilient entrepreneurs.

3. Early Adopters and Customers: Huge thanks to our early adopters and customers! The feedback so far has been gold, and we’re committed to delivering value that exceeds expectations.

4. Resilience and Perseverance: Building a startup tests us in ways we never imagined. Grateful for the resilience and perseverance that keeps us moving forward, even when the road is tough.

5. Access to Resources: From funding to workspace and technology, having the right resources is a blessing. Thank you to everyone who has contributed to our journey in any way.

6. Market Validation: Excited to see positive responses from the market. It’s a sign that our vision resonates, and there’s a demand for what we’re building. This validation fuels our determination to keep pushing boundaries.

Here’s to the journey ahead, filled with challenges, triumphs, and continued growth!

Jay

Oatfin Cloud Cost Intelligence

Thank you for subscribing to Cloud Musings! If you haven’t subscribed yet, please consider subscribing on LinkedIn and Substack to receive automatic updates when I publish new editions.

It’s been a while since our last regular update, but we’re excited to announce that we’re getting back into our monthly cadence. As we continue our seed fundraising journey, we’d greatly appreciate introductions to more investors and enterprise partners who share our enthusiasm for what we’re building at Oatfin.

Over the past few months, we’ve been hard at work planning the next version of our product and crafting a roadmap. Since the launch of Oatfin Cloud, we’ve received invaluable feedback from our users. One topic that continually surfaces as a challenge for our customers is cloud cost management. While AWS offers numerous benefits, managing costs can often be a significant hurdle. For example, at one startup I worked at, we spent north of $30,000 per month without a clear understanding of why.

Introducing Oatfin Cloud Cost Intelligence (Beta)

This week, we launched Oatfin Cloud Cost Intelligence in beta, which uses AI to simplify cloud cost management on AWS.


Here are some of the key challenges we’re addressing:

  1. Unpredictable Costs: One of the most significant issues with AWS is the unpredictability of costs. Usage can vary from month to month, and it’s challenging to estimate how much services will cost.
  2. Hidden Costs: AWS has hidden costs, such as data transfer fees, storage costs, and charges for additional services or features. These can add up quickly and catch organizations by surprise.
  3. Lack of Cost Visibility: Many organizations struggle with a lack of visibility into their AWS spending. Without proper monitoring and reporting tools, it’s challenging to identify cost-saving opportunities.
  4. Complex Pricing Models: AWS offers complex pricing models with various pricing tiers, discounts, and options. Understanding these models can be challenging, making it easy to overspend.
  5. Lack of Cost Accountability: Without proper cost allocation and accountability mechanisms, different teams or departments within an organization may overspend without realizing it.

Solving the cloud cost problem using AI is indeed complex, but we firmly believe that AI can help optimize cloud costs by predicting usage, recommending resource allocation, and identifying cost-saving opportunities.

Our Approach in a Nutshell:

  1. Data Collection: We collect data on AWS cloud usage, such as historical billing data, resource usage metrics, and other pertinent information.
  2. Data Preprocessing: We clean and preprocess the data to make it suitable for AI modeling, removing irrelevant data and normalizing values.
  3. Feature Engineering: We create meaningful features from the data, such as CPU usage, memory usage, or user behavior, which can be used for modeling.
  4. Model Selection: We choose an appropriate AI model for the problem, with options like time series forecasting, regression, and reinforcement learning models.
  5. Model Training: We train the selected AI model on the preprocessed data.
  6. Cost Prediction: Using the trained model, we predict future cloud costs.
  7. Recommendations: Based on the model predictions, we generate recommendations to optimize cloud costs.
  8. Monitoring and Continuous Learning: We continually monitor the cloud environment, update the model with new data, and adapt recommendations as the environment changes.
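
To make steps 4 through 6 concrete, here is a deliberately tiny sketch of the prediction step: fitting a linear trend to historical daily costs and extrapolating it forward. This is only meant to illustrate the shape of the problem; our production models are more involved, and the numbers below are made up.

import numpy as np

def forecast_costs(daily_costs, horizon=30):
    # Fit a simple linear trend to historical daily costs and
    # extrapolate it `horizon` days into the future.
    t = np.arange(len(daily_costs))
    slope, intercept = np.polyfit(t, daily_costs, deg=1)
    future_t = np.arange(len(daily_costs), len(daily_costs) + horizon)
    return slope * future_t + intercept

# Illustrative daily totals in dollars, not real customer data.
history = [980.0, 1010.5, 1025.0, 1040.2, 1065.9]
if forecast_costs(history).sum() > 35_000:
    print("Projected spend exceeds budget; review recommendations.")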

We welcome any feedback you may have and invite you to participate as beta testers.

Thank you for reading and being a part of our journey at Oatfin!

Jay

Dynamic Scheduling Under The Hood

As promised, in this week’s edition of Cloud Musings, I thought I would do a deep dive, with code, into dynamic scheduling and explain how we solve this challenge. Don’t forget to subscribe here on Substack or LinkedIn. Thanks for reading!

Last week, I wrote about this at a very high level. Here is a demo of how it works. I’ve also started to open-source some of our code base so people can understand how our platform works.

We have a pretty solid architecture:

  • Python, Flask API
  • MongoDB database
  • Celery, Redis task queue
  • React, Typescript Frontend
  • Docker on AWS ECS and Digital Ocean

The UI:

Schedule Deployment

To schedule a deployment, a user specifies a cloud infrastructure, a date, a time, and a dependency. The dependency is optional, but we can imagine deploying the API before a change in the UI, or a database change before deploying the API. When a user specifies a dependency, it runs 15 minutes before the actual deployment.

For the frontend, I’m using React with TypeScript, along with a tool called UmiJS and Ant Design from Ant Financial:

The scheduled-deployment request function simply calls the backend API, passing the user’s inputs along with an access token that secures the call and identifies the user.

import { request } from 'umi';

export async function schedule(data: API.ScheduledDataType) {
  return request('/v1/scheduled_deployments', {
    method: 'POST',
    data,
    headers: {
      Authorization: 'Bearer oatfin_access_token'
    },
  });
}

The user specifies a date (year, month, day) and a time (hour, minute). We use the moment-timezone npm package to guess the timezone: moment.tz.guess()

  const handleSubmit = async (values: API.DateTimeDependency) => {
    const data: API.ScheduledDataType = {
      app: current?.id,
      year: values.date.year(),
      month: values.date.month() + 1,
      day: values.date.date(),

      hour: values.time.hour(),
      minute: values.time.minute(),
      timezone: moment.tz.guess(),
      dependency: values.dependency,
    };

    try {
      ...
      const res = await schedule(data);
      ...
    } catch (error) {
      message.error('Error scheduling deployment.');
    }
  };

Inside the React functional component, the form’s onFinish handler captures the user’s inputs when the modal form is submitted.

export const SchedComponent: FC<BasicListProps> = (props) => {
  ...
  const getModalContent = () => {
    return (
      // onFinish wires the form to the handleSubmit function above.
      <Form onFinish={handleSubmit}>
        <Form.Item name="name" label="Application">
          <Input disabled value={current?.name} />
        </Form.Item>
        <Form.Item name="date" label="Date">
          <DatePicker
            disabledDate={(currentDate) => disabled(currentDate)}
          />
        </Form.Item>
        <Form.Item name="time" label="Time">
          <TimePicker use12Hours format="h:mm A" showNow={false} />
        </Form.Item>
        <Form.Item name="dependency" label="Dependency">
          <Select>
            <Select.Option key={app.id} value={app.id}>
              {app.name}
            </Select.Option>
          </Select>
        </Form.Item>
      </Form>
    );
  };

  return (
    <Modal
      title="Schedule Deployment"
      width={640}
      bodyStyle={{ padding: '28px 0 0' }}
      destroyOnClose
      visible={visible}
      onCancel={onCancel}
    >
      {getModalContent()}
    </Modal>
  );
};

The Python, Flask API:

First we capture the parameters that the UI sends, then call the service to create the actual schedule. If the user specifies a dependency, we also create a scheduled entry in MongoDB and Redis for the dependency.

...
@api.route('/scheduled_deployments', methods=['POST'])
@jwt_required()
def schedule_deployment():
    user_id = get_jwt_identity()['user_id']
    req_data = flask.request.get_json()

    app_id = req_data.get('app')
    dependency = req_data.get('dependency')
    year = req_data.get('year')
    month = req_data.get('month')
    day = req_data.get('day')
    hour = req_data.get('hour')
    minute = req_data.get('minute')
    timezone = req_data.get('timezone')

    sd = ScheduledDeploymentService().create_schedule(...)

    args = [...]
    SchedulerService().create_scheduled_entry(sd, args=args, app=app)

    if sd.dependency is not None:
        dep = sd.dependency
        args = [...]
        SchedulerService().create_scheduled_entry(dep, args=args, app=app)

    return flask.jsonify(
        result=sd.json(),
    ), 200
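
For reference, a request to this endpoint looks roughly like the following; the host and payload values are illustrative:

curl -X POST https://api.example.com/v1/scheduled_deployments \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"app": "app-id", "year": 2023, "month": 2, "day": 10, "hour": 3, "minute": 0, "timezone": "America/New_York"}'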

The create_schedule method in ScheduledDeploymentService creates an entry in MongoDB for the parent deployment and any dependency the user specified.

def create_schedule(app_id, dependency, user_id, year, month, day, hour, minute, tz):
    # Abridged: app, user, dependency_app, and dependency_sched (the
    # dependency's run time, 15 minutes earlier) are resolved from
    # app_id, user_id, and dependency before this point.
    sched = None
    if dependency:
        sched = ScheduledDeployment(
            app=dependency_app,
            team=user.team,
            year=dependency_sched.year,
            month=dependency_sched.month,
            day=dependency_sched.day,
            hour=dependency_sched.hour,
            minute=dependency_sched.minute,
            original_timezone=tz
        ).save()

    return ScheduledDeployment(
        app=app,
        dependency=sched,
        team=user.team,
        year=year,
        month=month,
        day=day,
        hour=hour,
        minute=minute,
        original_timezone=tz
    ).save()
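
One detail the snippet elides is how dependency_sched is computed. A minimal sketch, assuming the dependency simply runs 15 minutes before the parent deployment as described earlier:

import datetime

def dependency_run_time(year, month, day, hour, minute, offset_minutes=15):
    # Hypothetical helper: the dependency fires a fixed offset
    # before the parent deployment's scheduled time.
    parent = datetime.datetime(year, month, day, hour, minute)
    dep = parent - datetime.timedelta(minutes=offset_minutes)
    return dep.year, dep.month, dep.day, dep.hour, dep.minute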

The MongoDB document:

from mongoengine import Document, IntField, ReferenceField, StringField

class ScheduledDeployment(Document):
    year = IntField()
    month = IntField()
    day = IntField()
    hour = IntField()
    minute = IntField()
    original_timezone = StringField()
    entry_key = StringField()
    dependency = ReferenceField('self', required=False)

    def json(self):
        return {
            'year': self['year'],
            'month': self['month'],
            'day': self['day'],
            'hour': self['hour'],
            'minute': self['minute'],
            'original_timezone': self['original_timezone'],
            'entry_key': self['entry_key']
        }

The SchedulerService first translates the user’s date and time from their timezone to UTC, then creates an entry in Redis. We’re also using crontab from Celery to create the actual schedule. The challenge here is that we can only specify month_of_year, day_of_month, hour, and minute; we can’t specify a year. We handle this by deleting the entry from Redis once the scheduled deployment is successful.

import datetime

import pytz
from celery.schedules import crontab
from redbeat import RedBeatSchedulerEntry


class SchedulerService(object):
    def create_scheduled_entry(self, sd, args, app):
        scheduled_date = self.to_utc(...)
        entry = RedBeatSchedulerEntry(
            name=str(sd.id),
            schedule=crontab(
                month_of_year=scheduled_date.month,
                day_of_month=scheduled_date.day,
                hour=scheduled_date.hour,
                minute=scheduled_date.minute,
            ),
            task='tasks.deploy',
            args=args,
            app=app
        )
        entry.save()
        sd.update(set__entry_key=entry.key)

    def to_utc(self, timezone, year, month, day, hour, minute):
        tz = pytz.timezone(timezone)
        user_dtz = tz.localize(datetime.datetime(...))
        return tz.normalize(user_dtz).astimezone(pytz.utc)
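
The cleanup step mentioned above, deleting the Redis entry once a scheduled deployment succeeds, might look something like this (a sketch; the actual call site is elided):

from redbeat import RedBeatSchedulerEntry

def remove_entry(sd, app):
    # Once the deployment has run, drop its RedBeat entry so the
    # crontab (which has no year field) can't fire again next year.
    entry = RedBeatSchedulerEntry.from_key(sd.entry_key, app=app)
    entry.delete()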

Finally the task queue looks like this:

from celery import Celery
from mongoengine import connect

app = Celery(__name__)
app.conf.broker_url = app_config.REDIS_BROKER
app.conf.result_backend = app_config.REDIS_BACKEND
app.conf.redbeat_redis_url = app_config.REDIS_BACKEND

app.conf.update()


@app.task(name='tasks.deploy')
def deploy(user_id, app_id, key, secret, region, deployment_id=None, scheduled_id=None):
    connect(
        db=app_config.DB_NAME,
        username=app_config.DB_USERNAME,
        password=app_config.DB_PASSWORD,
        host=app_config.DB_HOST,
        port=app_config.DB_PORT,
        authentication_source=app_config.DB_AUTH_SOURCE
    )

    ECSDeploymentService().deploy(
        user_id=user_id,
        app_id=app_id,
        oatfin_key=key,
        oatfin_secret=secret,
        oatfin_region=region,
        deployment_id=deployment_id,
        scheduled_id=scheduled_id
    )

Thanks for reading!

Jay

User Growth and Analytics

In this edition of Cloud Musings, I thought I would dive into user analytics from Google Analytics over the last couple of years. Some people find this useful, but I think it’s still too early for us to have meaningful user data.

As we get more data, we’ll want to double down on the channels that bring the most bang for the buck. We’re not there yet; in fact, we’ve done no marketing or advertising of any kind. Ideally, we’ll market where developers hang out, like GitHub, GitLab, etc.

For the analytics, the blue bars represent 2021 and the red bars represent 2022. I’m doing this for both the website and the app. I also made a quick video to show this data from Google Analytics here.

Thanks for reading Cloud Musings! Subscribe for free to receive new posts and support my work.

Oatfin.com property:

As we can see, from 2021 to 2022 we saw massive user growth. In particular, we went from 482 new users in 2021 to 1,905 new users in 2022, roughly 4x. The number of sessions and page views grew accordingly.

Diving into the different acquisition channels, a lot of the traffic came from direct hits. My guess is that many people prefer to type an address into the browser rather than click a link. I do that myself, especially when a Google search shows an ad: I don’t want to click the ad, so I type the address into the browser instead. Google Analytics also counts email as direct hits, and we did a lot of cold email outreach.

As for social channels, I only use Twitter and LinkedIn. It’s not a lot of hits, but it’s still meaningful to see that people are coming from Twitter and LinkedIn.

cloud.oatfin.com property:

For the app, I’m using Google Analytics v4, which I’ve found less useful than v3; there isn’t as much detail to dive into. Here we see the user growth.

Here you can see the different user acquisition channels. Again, most users come from direct hits, with a few from organic search and referrals in both 2021 and 2022. That’s very much what I expected.

Thanks for reading!

Jay

Dynamic Scheduling in Python

I’m looking forward to diving into dynamic scheduling in next week’s Cloud Musings newsletter. This week, I will talk about user acquisition and growth. If you like this kind of content, subscribe to our newsletter: https://lnkd.in/e3Xj4qhG

Dynamic scheduling is one of the biggest challenges developers face. For example: I want to deploy the Oatfin API on February 10, 2023 at 3:00 AM. Dynamic scheduling is hard because there is little out-of-the-box support for it. And, as every developer knows, timezones are very hard to deal with.

Static scheduling, on the other hand, is very simple: every programming language provides some kind of support for writing cron jobs. An example of a cron job: I want to import data from a vendor every day at 8:00 PM.
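
For instance, the vendor-import example above is a one-line schedule with Celery beat (a generic sketch, not our code; the task name is hypothetical):

from celery import Celery
from celery.schedules import crontab

app = Celery(__name__)
app.conf.beat_schedule = {
    'import-vendor-data': {
        'task': 'tasks.import_vendor_data',      # hypothetical task name
        'schedule': crontab(hour=20, minute=0),  # every day at 8:00 PM
    },
}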

Here is a very high-level overview of how we deal with this problem:

For the frontend:
1. To keep it simple, a user specifies year, month, day, hour, and minute from their own point of view.
2. We use the moment-timezone npm package to guess a user’s timezone from the browser.

Backend:
1. We run 3 Docker containers: celery, scheduler, and api. Celery is the base image, and the other 2 containers extend the base image and override the CMD directive in the Dockerfile.
2. We use MongoDB to store the exact schedule in the database with the user’s timezone.
3. We use celery as a task queue and celery-redbeat as the scheduler. Celery-redbeat provides a RedBeatSchedulerEntry object to insert schedules in Redis. When we insert a schedule in Redis, we translate the user’s schedule to UTC date and time.
4. Once the task is complete, we mark it complete in MongoDB, which removes it from the list of scheduled deployments.
5. When a user cancels a task, we delete this entry from Redis and it won’t run.

Containers and Such

As promised, this newsletter will sometimes get very technical, targeting a technical audience. Don’t forget to subscribe for more updates. It’s 36 subscribers strong on LinkedIn. Thank you for reading!

It took me the better part of the weekend to get the Celery, Redis task queue and scheduler to work, but it’s working now! Happy to talk about some of the challenges! This assumes familiarity with AWS Elastic Container Service, message brokers, and task queues.

We have a pretty solid architecture:

  • Python, Flask API
  • MongoDB database
  • Celery, Redis task queue
  • React, Typescript frontend
  • Docker on AWS ECS and Digital Ocean

What is Celery?

Celery is a task queue with a focus on real-time processing, while also supporting task scheduling. It is a simple, flexible, and reliable distributed system for processing large amounts of messages. It works with Redis, RabbitMQ, and other message brokers.

Some of the challenges I came across:

First, it took me a while to connect to Redis (ElastiCache) on AWS. You have to add inbound rules to both security groups so the API can communicate with Redis over a specific port like 6379, but even that didn’t work for me. I ended up using Redis Cloud because it is a lot simpler than AWS ElastiCache. Another solution would be to run Redis on a Digital Ocean droplet or an AWS EC2 instance, but I would have to expose the IP and port to the outside world.
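
For anyone trying the security-group route anyway, the inbound rule looks something like this with the AWS CLI (the group IDs are placeholders):

aws ec2 authorize-security-group-ingress \
  --group-id sg-redis-placeholder \
  --protocol tcp \
  --port 6379 \
  --source-group sg-api-placeholder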

The next challenge was how to get the Celery container to run on AWS Elastic Container Service. There are a couple of ways to make it work:

  1. Multiple containers within the same task definition:

{
  "taskDefinitionArn": "task-definition-1",
  "containerDefinitions": [
    { "name": "container-1", "image": "docker-image-1", ... },
    { "name": "container-2", "image": "docker-image-2", ... }
  ],
  ...
}

  2. Multiple task definitions:

[
  {
    "taskDefinitionArn": "task-definition-1",
    "containerDefinitions": [
      { "name": "container-1", "image": "docker-image-1", ... }
    ]
  },
  {
    "taskDefinitionArn": "task-definition-2",
    "containerDefinitions": [
      { "name": "container-2", "image": "docker-image-2", ... }
    ]
  }
]

But neither was necessary, because the Celery container doesn’t have to scale like the API. ECS also requires a health check path, and there isn’t one for the Celery container, which meant that starting a separate cluster was out of the question.

The solution was to create a multi-container deployment: a base image for the Celery task queue, and a main container image for the API that builds on top of the base one. The API image simply overrides the CMD directive in the Dockerfile.

Here is what this looks like:

  1. Base Celery Container – Dockerfile.celery
FROM python:3

RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

ADD ./requirements.txt /usr/src/app/requirements.txt

RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt

ADD . /usr/src/app
ENV FLASK_ENVIRONMENT=stg
CMD celery -A apis.deploy.tasks.tasks worker -l info
  2. Flask API Container builds on the base image – Dockerfile.staging
FROM registry.gitlab.com/oatfin/automata:celery
CMD ["sh", "-c", "gunicorn manage:app -w 5 -b 0.0.0.0:5000"]

I installed Docker on a Digital Ocean droplet and ran the Celery containers on it: one setup for staging and another for production. It works as long as Celery can connect to Redis. In fact, I first ran the Celery container locally and it worked; that’s how I figured it out. We could spin up an EC2 instance and run Docker on it, but a Digital Ocean droplet was cheaper.

From there, building and running the containers is trivial. First, log in to a Digital Ocean droplet with Docker installed, then:

Log in to the Docker registry. I’m using GitLab’s registry:

docker login -u USERNAME -p PASSWORD registry.gitlab.com

Build the Docker image:

docker build -f Dockerfile.celery \
  --cache-from registry.gitlab.com/oatfin/automata:celery \
  -t registry.gitlab.com/oatfin/automata:celery .

Push the Docker image:

docker push registry.gitlab.com/oatfin/automata:celery

Finally, run the container on the Digital Ocean droplet:

docker run registry.gitlab.com/oatfin/automata:celery
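
In practice, you’d likely pass the broker and backend URLs in as environment variables rather than baking them into the image. A hedged example, assuming the app reads these settings from the environment (the variable names mirror the app_config settings in the dynamic-scheduling deep dive):

docker run \
  -e REDIS_BROKER=redis://:PASSWORD@HOST:PORT/0 \
  -e REDIS_BACKEND=redis://:PASSWORD@HOST:PORT/1 \
  registry.gitlab.com/oatfin/automata:celery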

Once everything works, we get some nice logs showing that Celery is running. Here I’m using host.docker.internal to connect to my local Redis; I didn’t want to show the Redis Cloud URL because anyone with it can intercept your messages.

Celery connected to Redis

Thanks for reading!

Jay

Oatfin Beginnings

Thanks again for subscribing to Cloud Musings! Last I checked, we were 38 subscribers strong. If you haven’t subscribed, subscribe to get automatic updates when I publish new editions. I’ll try to keep it interesting!

This week I got the book Start With Why by Simon Sinek from Darrel, one of our investors at Visible Hands. Darrel, along with Daniel, was among our first believers. I started reading the book and thought I would take a step back to talk about why I’m working on Oatfin.

Start With Why by Simon Sinek

The short answer is that I left Akamai in 2016 and wanted to focus on startups. I realized that I was not a corporate person. I come from a family of entrepreneurs: my parents are entrepreneurs, and my grandparents were too.

First, I started working on a fintech/blockchain solution, and dealing with infrastructure was a pain. Back in 2016, blockchain was also blacklisted by every major cloud provider and payment platform like Stripe. I took a break to work at a few startups and learn.

Now for the long answer: I’ve been a software engineer for 15 years. In my experience working at GE, Putnam Investments, Akamai, and many early-stage startups, cloud infrastructure was a major challenge. The process is not only manual, but tedious and frustrating at times. If you’ve ever used the AWS or Google Cloud user interface, then you know this pain well!

For example, I was working at a fintech startup and one of my roles was to automate our cloud infrastructure. Sometimes it took days to deploy a simple app.

I then worked at a healthcare startup, and it was a lot of the same frustrations. We moved from servers to serverless and then back to servers. Other challenges we faced were cloud spend, testing, security, compliance, and observability into the serverless apps. I left after 6 months to work on Oatfin because I believe the process should be simpler.

Some problems with the cloud currently:

  • Painfully slow development cycle.
  • Manual, tedious, time consuming and frustrating.
  • Days to weeks to build a secure cloud infrastructure.
  • Vendor lock-in means high cost.
  • Requires expert knowledge, new staff and skills.

There are some solutions like infrastructure as code (IaC), but I don’t believe developers want to write more code to maintain the code they already have. I’ve written a fair amount of infrastructure as code. Some problems include:

  • Manual scripts are hard to maintain across multiple environments and clouds.
  • It requires learning new frameworks and languages like Terraform and Pulumi.
  • It doesn’t remove the complexity of the infrastructure.
  • Cloud credentials and secrets in code create security issues.

With Oatfin, our mission is to improve developer productivity and the development experience. We make it simple and easy to manage any application on any cloud.

Where the name comes from: the “oat” piece is because I love oatmeal, and the “fin” is for fintech. Since I already had the domain name and the Twitter and LinkedIn handles, it all stuck. I also wanted a name I could build a Google search presence around, as opposed to something confusing.

There are 3 big tenets in our application: infrastructure automation, security, and compliance. Our focus is cloud native, Docker, and Kubernetes.

Why cloud native?

Containerization provides many benefits: portability across clouds and operating systems, as well as easier scaling. There is no doubt that more enterprise companies will take advantage of cloud-native deployments as they continue to adopt the public cloud.

Currently, the app lets customers define their containers from any public or private container registry. We automate the infrastructure so they can choose the type of infrastructure they want to create. Since we have the containers, we can also scan them for vulnerabilities and compliance issues.
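
As a rough illustration of the vulnerability-scanning piece, an open-source scanner like Trivy does this kind of check from the command line (our in-product scanning may differ):

trivy image --severity HIGH,CRITICAL registry.gitlab.com/oatfin/automata:celery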

There are many features that make us stand out:

  • Cloning an infrastructure to debug production issues.
  • Scheduling a deployment and specifying dependencies that need to be deployed first.
  • Zero trust security.
  • Scanning containers for vulnerabilities.
  • Compliance automation.
  • Team collaboration.

Our target customers are enterprise companies. As a startup, deploying cloud-native applications is simple: you are most likely deploying a single API with a single container. But at an enterprise, things get complicated very quickly, with very little visibility. For example, you might have an API running on AWS, a database running on premises, and some other pieces running on Google Cloud. Managing these hybrid and multi-cloud environments is very challenging.

The Oatfin architecture is a good example. We have a Python and Flask API that talks to MongoDB with a Celery task queue. The Celery task queue uses Redis as a backend and message broker. The API is deployed on AWS using Elastic Container Service (ECS), the database is deployed on MongoDB Cloud, which is on Google Cloud, and we have Redis running on Redis Cloud. Finally our frontend is running on DigitalOcean along with the Celery task queue.

With that said, we’re raising our seed round and I would love to connect with investors who are excited about the cloud and developer tools space.

Thanks for reading!

Jay
