Apervita, Inc. is the trusted collaboration platform for value-based healthcare. We empower payers, providers, and other stakeholders to measure clinical and financial performance more efficiently and effectively, improve clinical quality, and administer value-based contracts. By providing an independent, secure, trusted platform for shared analyses, Apervita uniquely allows stakeholders to gain mutual, continuous clinical and financial insights and integrate those insights into multiple systems and workflows simultaneously and at scale.
Serving more than one in five hospitals in the United States and several nationally recognized health plans, Apervita performs more than 1 billion value-based computations for our clients every year. Our customers are committed to building a better healthcare system, and we’re helping them make it happen. Close collaboration with them means our products and professional services have deep roots in the real world.
As we grow and strengthen our position, we need help solving hard problems. As a team, we seek to be bold in our ideas, passionate in pursuit of our mission, and humble in our interactions with customers and partners. Every day starts and ends with our “why”: to create real human impact through technology.
About the role
The Data Engineer is responsible for deploying and managing ETL/ELT pipelines, jobs, and orchestration frameworks, and for ensuring data quality. The charter of the data engineering team is to turn our customers’ data into insights as fast as possible while meeting customer SLAs. The data engineer can expect to work closely with business analysts, data scientists, analytics experts, and developers to build ETL/ELT pipelines and data models. The data engineer will also have the opportunity to inform the design, implementation, and best practices for this system, including the deployment of modern tools.
-- Build cloud-based data warehousing environments, data processing pipelines, and data models that support a variety of business needs
-- Support a variety of data processing pipelines, integrate new data sources into our data warehouse, and create jobs to load, transform, and QA vital datasets
-- Work with analysts and developers in the product development process to ensure that newly designed data models meet analytics requirements and follow best practices
-- Share your expertise on scalable data processing with analysts and data scientists to further our goal of being a truly data-driven organization
Requirements
-- 5+ years of experience as a data engineer using data warehousing technologies like Amazon Redshift, RDS, S3, Athena, EMR, and Hadoop/Hive/Spark
-- Proficient in SQL and experienced with one or more relational databases like MySQL, Oracle, Postgres, or similar
-- 5+ years of experience with ETL and job scheduling or orchestration using tools like Airflow, Luigi, Oozie, or similar
-- 5+ years of experience programming in Python and familiarity with AWS and Git
-- Excellent communication skills and ability to work on a growing team
Bonus points if you have
-- Experience with web-scale data or working with healthcare data in a HIPAA-compliant environment
-- Experience with healthcare payer data and the Optum Impact Intelligence tool
-- Experience with data modeling, data visualization, and/or BI tools like Looker, Metabase, or Tableau
-- Experience with A/B testing
Benefits
-- Stock options
-- Unlimited sick and sanity days
-- Commuter benefits
-- Medical, Dental, Vision
-- 401K with matching
-- Unlimited snacks in office