Lead Infrastructure Engineer – AI, ML, DATA
The primary purpose of this role is to lead the development of technology solutions that require integrating multiple viewpoints, including business, application, data, and infrastructure, across the enterprise. The role is responsible for creating and maintaining documentation of current-state solution designs and for collaborating with Engineers to develop solution designs, ensuring adherence to standards and alignment with the desired target state. This role works as an integral part of project teams and with business users to develop the conceptual, logical, and physical designs required for business solution delivery, and provides critical decision support, guidance, and recommendations on project and program design deliverables.
• Influences the platform standards and roadmap through program and project design deliverables while executing cross-team collaboration
• Guides application and infrastructure development teams in the design and build of complex solutions in alignment with engineering standards, procedures, metrics, and policies to ensure consistency and adherence to Lowe’s processes.
• Facilitates the transition to high-level design and supports the project lifecycle with input from executive leadership where needed.
• Educates others on current Design Standards, Guidelines, and Patterns driving the re-use of these artifacts across programs and projects to drive efficiency.
• Drives the adoption of new technologies by researching innovative technical trends and developments.
• Studies core algorithms in deep learning (including various neural network architectures and applications), computer vision (object detection and recognition), and related fields such as Natural Language Processing
• Engages in research and analysis of cutting-edge algorithms in the field of artificial intelligence, explores innovative applications of artificial intelligence in data design and usage patterns, and works with the product development team to optimize algorithm practicality and gain competitive advantages in practical applications
• Develops the core technologies and roadmap of artificial intelligence or big data applications, including algorithms, based on market demand
• Improves core technology competitiveness through innovation
• Builds these core technologies hands-on
• Uses these core technologies to support product development
• Participate in the design and implementation of sophisticated software systems in Java, Scala, and Python
• Participate in software design and code reviews. Reviews include other Software Engineers and are held to ensure a high level of software quality and to share knowledge with team members.
• Participate in, and adhere to, professional software engineering practices using such tools and methodologies as Agile Software Development, Test-Driven Development, Continuous Integration, Source Code Management (Git), and GitHub
• Participate in the planning, creation, and execution of automated test cases and load/performance testing
• Address production issues in a timely manner, including root cause analysis, working with the manager and team members to resolve the problem (e.g., ITSD and ITSM)
• Create processes and tools to automate the training of machine learning systems
• Maintain a high level of proficiency with Computer Science/Software Engineering knowledge and contribute to the technical skills growth of other team members
• Work well independently and as part of a team
• Participate in a periodic 24/7 on-call rotation to help resolve critical production issues and keep the system running
• Proficiency with Version Control and unit testing
• Ability to analyze and create algorithmic models
• Familiarity with rule-based NLP techniques, information extraction, and productionizing models
• Build and maintain a variety of deep learning models
• Develop front-end tools to help understand and present the results of these models
• Be prepared to explain and defend the results of these models
• Develop and maintain web scrapers in Python
• Bachelor’s Degree in Computer Science, CIS, Engineering, or related field
• 8-12 years’ experience in IT, including 5+ years of experience in IT architecture; additional equivalent work experience may be substituted in lieu of a degree
• 4+ years’ experience leading technical teams with or without direct reports
• 2+ years’ experience working in a large matrixed organization
• 4+ years’ experience architecting, designing, and implementing enterprise-scale, high volume, high availability systems
• 2 years’ experience with modeling techniques (e.g., BPM, UML, ER)
• 2+ years’ experience working with Enterprise Architecture frameworks, such as TOGAF or Zachman
• Master’s Degree in Computer Science, CIS, Engineering, or related field
• 2+ years’ experience in an IT role requiring interaction with senior leadership
• 4+ years’ experience with commercial off-the-shelf package integration
• At least 5 years’ working experience in the research and development of artificial intelligence or big data applications, with a solid technical foundation and proven performance record
• Proficiency in one or more programming languages, including but not limited to Java, C/C++, Python, Shell, R, Scala, MATLAB, and Lua; familiarity with common machine learning and computer vision open-source libraries
• A solid foundation in mathematics and machine learning; familiarity with traditional machine learning algorithms such as decision trees, clustering, time series, classification trees, boosting, SVMs, random forests, collaborative filtering, regression, Bayesian networks, stochastic processes, and Markov chains
• Experience with neural network theory, AI, machine learning, speech recognition, image recognition, or natural language processing.
• 2+ years of professional experience building and operating scalable distributed systems across the full software lifecycle, including design, implementation, testing, operations, and maintenance
• Experience working with modern tools for big data processing and scalable machine learning (e.g., AWS, GCP, Kafka, Apache Spark, Delta Lake, SQL, NoSQL, etcd, ZooKeeper, Cassandra, Kubernetes)
• Experience with foundational machine learning models and concepts (regression, random forests, boosting, HMMs, CRFs, MRFs, deep learning) is a plus
• Experience with common machine learning libraries and tools (scikit-learn, MXNet, TensorFlow, XGBoost) is a plus
• At least 6 months’ experience working with Amazon Lex, IPSoft Amelia, SparkCognition, Expert System, Microsoft Cognitive Services, IBM Watson, OpenAI, Numenta, DeepMind, CognitiveScale, CustomerMatrix, Pega, Salesforce Einstein, Google Cloud Platform/TensorFlow, Amazon Web Services/SageMaker, Azure, and/or common open-source scripting languages
• Experience running services in, and migrating services to, the public cloud
• Hands-on experience with infrastructure-as-code, Docker, containerization, microservices, orchestration tools, distributed systems, etc.
• Implement large-scale data ecosystems, including data management, governance, and the integration of structured and unstructured data, to generate insights leveraging on-premises and cloud-based platforms