
AI Frameworks Graduate Intern

Posted: Sunday, April 28, 2024 11:06 AM

Job Description

The AI organization at Intel is dedicated to research and development for the future of AI, enabling machine intelligence at unprecedented scale on desktops, mobile computers, and large clusters alike.
We are seeking a motivated and passionate AI engineering intern who will be responsible for maintaining the infrastructure and operations of deep-learning frameworks, debugging issues, and contributing to timely releases.
The intern designs, develops, and optimizes Intel AI framework components (e.g., OpenVINO) and contributes to external frameworks (e.g., TensorFlow, PyTorch).
The role involves performance benchmarking and validation of popular models in scale-up and scale-out environments.
There will be gradual, occasional involvement in preparing starter scripts and BKMs (best known methods) for newer and upcoming models, or for models running under new constraints or hardware.
This is a great opportunity for a budding engineer to be part of a passionate team and work with seasoned AI leaders who push the boundaries of state-of-the-art operations on deep-learning frameworks and their use cases every day.
Qualifications

The requirements listed would be obtained through a combination of industry-relevant job experience, internship experience, and/or schoolwork/classes/research.
Minimum qualifications are required to be initially considered for this position.
Preferred qualifications are in addition to the minimum requirements and are considered a plus factor in identifying top candidates.
Minimum Qualifications:
Candidates must currently be pursuing a Master's degree in Computer Science, Machine Learning, Computer Engineering, Data Science, or a related field.
They should be in excellent academic standing and due to graduate within 6-8 months.
3+ months of experience in:
- Python
- Knowledge of Linux system architecture and operations
- Mathematical foundations for deep learning and neural networks, statistics, and data science basics
- Familiarity with TensorFlow or PyTorch (preferably both)
- Good scripting/automation skills

Preferred Qualifications:
- Familiarity with distributed computing and large computing clusters
- Familiarity with Docker, Kubernetes, or other containerization
- Experience working in DevOps or MLOps
- Familiarity with Natural Language Processing and Large Language Models
- Strong communication and collaboration skills

Inside this Business Group

The Machine Learning Performance (MLP) division is at the leading edge of the AI revolution at Intel, covering the full stack from applied ML, to ML/DL and data analytics frameworks, to Intel oneAPI AI libraries, and CPU/GPU HW/SW co-design for AI acceleration.
It is an organization with a strong technical atmosphere, a spirit of innovation, friendly teamwork, and engineers with diverse backgrounds.
The Deep Learning Frameworks and Libraries (DLFL) department is responsible for optimizing leading DL frameworks on Intel platforms.
We also develop the popular oneAPI Deep Neural Network Library (oneDNN) and the new oneDNN Graph library.
Our goal is to lead in Deep Learning performance for both the CPU and GPU.
We work closely with other Intel business units and industrial partners.
Other Locations
US, OR, Hillsboro; US, AZ, Phoenix

Posting Statement
All qualified applicants will receive consideration for employment without regard to race, color, religion, religious creed, sex, national origin, ancestry, age, physical or mental disability, medical condition, genetic information, military and veteran status, marital status, pregnancy, gender, gender expression, gender identity, sexual orientation, or any other characteristic protected by local law, regulation, or ordinance.
Benefits

We offer a total compensation package that ranks among the best in the industry.
It consists of competitive pay, stock, and bonuses, as well as benefit programs that include health, retirement, and vacation.
Find more information about all of our Amazing Benefits here.
Annual Salary Range for jobs which could be performed in US, California: $63,000.00-$166,000.00
*Salary range dependent on a number of factors including location and experience

Working Model
This role will be eligible for our hybrid work model, which allows employees to split their time between working on-site at their assigned Intel site and off-site.
In certain circumstances the work model may change to accommodate business needs.
Job Type: Hybrid

• Phone: NA

• Location: Phoenix, AZ

• Post ID: 9003741208

