AVP, Cloud Engineer


Moody’s (NYSE: MCO) is a globally integrated risk assessment firm that empowers organizations to make better decisions. Our data, analytical solutions and insights help decision-makers identify opportunities and manage the risks of doing business with others. We believe that greater transparency, more informed decisions, and fair access to information open the door to shared progress. With over 11,000 employees in more than 40 countries, Moody’s combines international presence with local expertise and over a century of experience in financial markets.

Moody’s Shared Services are the front-line professionals, including Finance, Technology, Legal, Compliance, and Human Resources, who operationally support our business units. Exceptional Shared Services teams are vital to the international success of our business.

Moody’s ESG Solutions Group (MESGS) is a business unit of Moody’s Corporation serving the growing global demand for ESG and climate insights. The group leverages Moody’s data and expertise across ESG, climate risk, and sustainable finance, and aligns with Moody’s Investors Service (MIS) and Moody’s Analytics (MA) to deliver a comprehensive, integrated suite of ESG and climate risk solutions including ESG scores, analytics, Sustainability Ratings and Sustainable Finance Reviewer/certifier services. It houses V.E and 427, both affiliates of Moody’s.

The AVP, Cloud Engineer is a key contributor to the design and implementation of data engineering pipelines that ingest big data from internal and external sources. He or she will ensure the overall execution of an enterprise-grade operational data lake built on a serverless technology stack, drawing on experience with development and product releases in a fast-paced, hyper-growth environment. The ideal candidate is hands-on and eager to make an immediate impact upon joining the team.

• Hands-on experience developing systems that leverage multiple AWS services, including CloudFormation, Lambda (Python), API Gateway, S3, DynamoDB or another NoSQL store, Relational Database Service (RDS), Glue (Catalog and ETL), Athena, and QuickSight, for data management and analysis
• Experience working in product delivery teams (SaaS, high-tech)
• Working knowledge of integration patterns and CI/CD pipelines
• Robust debugging skills and the ability to enforce strict code-quality standards for Python (or similar) libraries
• Ability to work in agile environments in a global setting
• Proficiency in Python and Java for serverless architectures on AWS (Lambda)
• Proficiency in Spark and AWS Glue for big data processing
• Expert-level knowledge of SQL, Python, R, or a similar language used for data engineering
• Experience with automation and event-driven implementations
• 5+ years of core experience delivering enterprise-grade big data cloud programs as an individual contributor
• Clear, precise communication with leadership and stakeholders
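To illustrate the event-driven serverless pattern the requirements above describe (a Python Lambda reacting to S3 activity), here is a minimal sketch. The bucket name, object keys, and handler shape are illustrative assumptions, not any actual Moody’s implementation, and a real pipeline would hand the parsed records to a downstream step such as a Glue job or a DynamoDB write.

```python
import json
import urllib.parse


def lambda_handler(event, context):
    """Entry point for an S3-triggered AWS Lambda (illustrative sketch).

    Extracts the bucket and object key from each S3 event record so a
    downstream ingestion step could pick the object up.
    """
    ingested = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        bucket = s3["bucket"]["name"]
        # S3 event notification keys are URL-encoded (spaces arrive as '+')
        key = urllib.parse.unquote_plus(s3["object"]["key"])
        ingested.append({"bucket": bucket, "key": key})
    return {"statusCode": 200, "body": json.dumps({"ingested": ingested})}


# Local smoke test with a hand-built S3 event payload (no AWS account needed)
if __name__ == "__main__":
    sample_event = {
        "Records": [
            {"s3": {"bucket": {"name": "esg-raw-data"},
                    "object": {"key": "vendor+feeds/scores.csv"}}}
        ]
    }
    print(lambda_handler(sample_event, None))
```

Because the handler is pure Python, its parsing logic can be unit-tested locally before the function is packaged and deployed, for example via CloudFormation.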

• BS or higher in Computer Science or a related technical field required
• Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets in the financial services industry
• Strong analytical and problem-solving skills. High technical fluency in Cloud Data Lake, Data Warehouse, Python, and SQL
• Excellent communication and interpersonal skills to work effectively in a highly matrixed environment with vendors, clients, peers, and Technology management. Must be able to clearly communicate complex technical and business concepts to team members
• Ability to direct onshore and offshore vendor resources
• Strong at designing, developing, testing, and implementing solutions in the cloud
• Ability to thrive amid ambiguity. Impeccable attention to detail and a strong ability to convert complex data into insights and action plans
• Experience with Agile frameworks. Flexibility to adapt to the changing priorities of key stakeholders at the leadership level
• Organized, detail-oriented, QA-focused