Senior Data Engineer

Position: Senior Data Engineer

Location: San Francisco, CA or 100% Remote (U.S.-based only)

A Little Bit About StyleSeat:

As a Senior Data Engineer at StyleSeat, you will have a rare opportunity to join a startup empowering small business owners across the country to be more successful doing what they love. Our mission is to help people look and feel their best. We are on the path to achieving this mission by being the go-to marketplace for consumers to discover, book, and pay for beauty and grooming services (hair stylists, colorists, nail artists, estheticians, barbers, etc.). We are also the premier solution for all independent professionals in the industry to run and grow their business. We have powered over 120 million appointments booked and $10B in revenue for small businesses, and we are on the path to much more.

In Your New Role:

As a Senior Data Engineer, you will join an impactful, multi-functional team of data scientists, analysts, data engineers, and backend engineers dedicated to creating a data-driven culture, where everyone is active in defining the product and the development process. As a result, you will know where your initiative and drive can best make a difference and be recognized. You'll know the internal and external customers with whom we are working, and the needs of each one. The Senior Data Engineer will draw on their experience to create appropriate solutions and tools for complex data engineering problems. StyleSeat is a rapidly scaling company, making this the best environment to take on ownership and to learn how to grow a company.

Our engineering team consists of developers from a wide array of backgrounds. We are a tight-knit, friendly group of engineers dedicated to learning from and teaching each other. Team members regularly contribute to and optimize our engineering best practices and processes. We want to make software engineering fun, easy, and fulfilling, so we've come up with a set of values that we apply to our software every day: Flexible, Consistent, Predictable, Efficient, and Pragmatic.

What you’ll be doing:

  • Work on StyleSeat's data and infrastructure projects

  • Apply experience and demonstrated interest in Big Data technologies

  • Develop and extend Big Data processing pipelines for data sources containing structured and unstructured data

  • Monitor and optimize key infrastructure components such as databases, EC2 clusters, containers, and other aspects of the stack

  • Help promote best practices for Big Data development at StyleSeat

  • Act as a bridge between the Data Engineering team and the wider Engineering organization

  • Work closely with our senior data analysts

  • Work with the Data Science team on crossover initiatives

  • Work in an Agile manner with business users, data analysts, and data scientists to understand and discover the potential business value of new and existing data sets, and help productize those discoveries

  • Analyze requirements and architecture specifications to create detailed designs

  • Research areas of interest to the team and help facilitate solutions

About you:

  • You’ve worked with big data architecture at a larger startup while it scaled, and you can bring that experience to help us scale

  • You have a can-do attitude and you see your cross-functional work as equally important as the work within your immediate team

  • You’re not afraid to challenge the status quo and suggest alternative architectures, and you actively encourage others to do the same

  • While you own everything you do, you also keep the bigger picture in mind

What you should bring to the table:

  • 7+ years as a Backend Software Engineer or Data Engineer

    • 2+ years of experience with AWS data infrastructure, including RDS, Redshift, and S3

    • 2+ years building data pipelines in a high-ingestion environment with varied forms of data infrastructure technologies

  • Designing, developing, and owning ETL pipelines that deliver data with measurable quality under pre-defined SLAs

  • Proficiency with Python, SQL, and other scripting languages

  • Using SQL daily to scale and optimize schemas and performance-tune ETL pipelines

  • Identifying and resolving pipeline issues, and discovering opportunities for improvement in complex designs or coding schemes

  • Monitoring existing metrics, analyzing data, and partnering with other internal teams to solve difficult problems, creating a better customer experience

Some pluses:

  • Experience with data streaming technologies, e.g. Spark, Storm, Flink

  • Experience with message queue systems, e.g. Kafka, Kinesis

  • Experience with any of the following message/file formats: Parquet, Avro, Protocol Buffers

  • Experience with Redis, Cassandra, MongoDB, or similar NoSQL databases

  • Experience with containerization and infrastructure tooling, e.g. Docker, Kubernetes, Terraform

  • Gathering requirements, scoping, architecting, developing, building, releasing, and maintaining data-oriented projects for different parts of the organization, while taking into account performance, stability, and error-free operation

  • Architecting scalable and reliable data solutions to move data across systems from multiple products in near real time