Developing a medical device is challenging.
Developing a medical device that integrates various platforms optimized to run in the palm of your hand is a huge challenge that brings great satisfaction!
Here at Binah.ai we combine the precision and efficiency of medical-grade devices, built on top of personal mobile devices, with the firepower of cloud platform services.
One can use a mobile device (smartphone, tablet, laptop, etc.) to easily and immediately extract vital signs such as Heart Rate, Respiration Rate, Oxygen Saturation, Heart Rate Variability and, soon, even Blood Pressure.
To deliver on these goals, we are building and deploying complex, scalable, container-based services utilizing both local and AWS-based computing resources.
As a Senior DevOps Engineer at Binah.ai, you will work closely with the Research, Development, Platform, Mobile and QA software groups to build and automate the deployment and monitoring of scalable, high-availability systems that are critical to the success of our business and to the future of AI-powered vital-signs monitoring at massive scale. You will have a significant impact throughout the company, and your passion will be quickly felt and recognized.
Responsibilities
- Influence the development of solutions that impact strategic projects/program goals and business results
- Be a key player in building our customer-facing platform
- Constantly evaluate continuous integration and continuous deployment solutions as the industry evolves, and develop standardized best practices
- Maintain domain knowledge expertise in public cloud architectures and best practices
- Resolve highly complex problems using a significant application of technical knowledge, conceptualization, reasoning and interpretation
Required Skills or Experience
- 5 or more years of industry experience deploying highly available, rapidly scalable AWS-based computing services
- Experience in implementing and managing AWS EKS/ECS
- Experience with Docker container deployments, Docker images, Docker registry, etc.
- Experience managing multiple AWS environments and the broader AWS service ecosystem (RDS, S3, EC2, IAM, Route53, VPC, ES, SSM, CloudTrail, etc.)
- Experience authoring automated deployments using Terraform
- Experience with automated provisioning/orchestration of AWS resources (e.g. via CloudFormation or other frameworks)
- Experience with cloud-native ecosystem tools and technology stacks, such as container security, static code vulnerability scanning, service proxies, container networking, and container storage
- Strong experience with Continuous Integration / Continuous Delivery (CI/CD) pipelines and processes, as well as tools such as Jenkins and Drone.io
- Experience implementing, managing and optimizing observability tools and platforms for system monitoring, logging, tracing and metrics (e.g. Prometheus, Grafana, Kibana, Sentry, Logz.io, New Relic, Jaeger)
- Experience with containers (Docker, ECS, EKS) and Serverless (Lambda) architectures
- Working experience with different database engines (PostgreSQL, AWS Aurora, Elasticsearch, etc.)
- Deep system-level familiarity with Linux operating systems and scripting in Bash. Python knowledge is a plus
- First and foremost: a team player
- Excellent written and verbal communication skills
- Ability to manage conflicts effectively and to be productive in a dynamic environment
- Self-starter, self-managed
- Deep understanding of high-performance/distributed computing concepts, software development lifecycle, databases and query performance monitoring/tuning