Job Description
Conducts design and development to build and optimize AI software.
Designs, develops, and optimizes AI frameworks (e.g., OpenVINO) and contributes to external frameworks (e.g., TensorFlow, PyTorch).
Implements various distributed algorithms such as model/data parallel frameworks, parameter servers, dataflow based asynchronous data communication in machine learning, and/or deep learning frameworks.
Transforms computational graph representation of neural network model, and develops machine learning and/or deep learning primitives in mathematical libraries.
Profiles distributed deep learning models to identify performance bottlenecks and proposes solutions across individual component teams.
Optimizes code for various computing hardware backends and collaborates with machine learning and/or deep learning researchers, drawing on experience with machine learning and/or deep learning frameworks.
Qualifications
You must possess the below minimum qualifications to be initially considered for this position.
Preferred qualifications are in addition to the minimum requirements and are considered a plus factor in identifying top candidates.
Minimum Qualifications:
Master's degree or PhD in Computer Science or a related technical field
1+ years of experience in C++/Python, either through coursework or prior work experience.
Preferred Qualifications:
Research or publications or coursework related to Deep Learning
Previous internship experience in the field of AI
Experience with Deep Learning frameworks such as PyTorch
Understanding of Deep Learning algorithms
Experience developing or optimizing Deep Learning models, especially low-precision models
Experience with MLPerf benchmarks
Inside this Business Group
The Machine Learning Performance (MLP) division is at the leading edge of the AI revolution at Intel, covering the full stack from applied ML to ML / DL and data analytics frameworks, to Intel oneAPI AI libraries, and CPU/GPU HW/SW co-design for AI acceleration.
It is an organization with a strong technical atmosphere, a spirit of innovation and friendly teamwork, and engineers with diverse backgrounds.
The Deep Learning Frameworks and Libraries (DLFL) department is responsible for optimizing leading DL frameworks on Intel platforms.
We also develop the popular oneAPI Deep Neural Network Library (oneDNN) and the new oneDNN Graph library.
Our goal is to lead in Deep Learning performance for both the CPU and GPU.
We work closely with other Intel business units and industrial partners.
Other Locations
US, OR, Hillsboro; US, WA, Seattle; US, AZ, Phoenix
Posting Statement
All qualified applicants will receive consideration for employment without regard to race, color, religion, religious creed, sex, national origin, ancestry, age, physical or mental disability, medical condition, genetic information, military and veteran status, marital status, pregnancy, gender, gender expression, gender identity, sexual orientation, or any other characteristic protected by local law, regulation, or ordinance.
Benefits
We offer a total compensation package that ranks among the best in the industry.
It consists of competitive pay, stock, and bonuses, as well as benefit programs that include health, retirement, and vacation.
Find more information about all of our Amazing Benefits here.
Annual Salary Range for jobs which could be performed in US, Washington, California: $105,930.00-$158,890.00
*Salary range dependent on a number of factors including location and experience
Working Model
This role will be eligible for our hybrid work model which allows employees to split their time between working on-site at their assigned Intel site and off-site.
In certain circumstances the work model may change to accommodate business needs.
JobType
Hybrid