#hiring *Data Engineer (Chicago, IL - Hybrid)*, Chicago, *United States*, full-time #jobs #jobseekers #careers #Chicagojobs #Illinoisjobs #ITCommunications
*Apply*: https://lnkd.in/ep-pK3xm
There are over 7 billion people on this planet. And by 2050, there will be 2 billion more, many moving into urban centers at an unprecedented rate. Making sure there is enough food, fiber, and infrastructure for our rapidly growing world is what we're all about at John Deere. And it's why we're investing in our people and our technology like never before! Here the world's brightest minds are tackling the world's biggest challenges. If you believe one person can make the world a better place, we'll put you to work. RIGHT NOW.

John Deere is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to, among other things, race, religion, color, national origin, sex, age, sexual orientation, gender identity or expression, status as a protected veteran, or status as a qualified individual with disability.

*Primary Location*: United States (US) - Illinois - Chicago
*Function*: Technology (CA)
*Title*: Data Engineer (Chicago, IL - Hybrid) - 102983
*Onsite/Remote*: Partial Remote Position

*Your Responsibilities*
As a Data Engineer for the John Deere Financial (JDF) group, located in Chicago, IL, you will join a team building a data platform for advanced analytics and model building, enabling efficient customer services in areas such as automatic loan approvals. Our team partners with product managers and data practitioners to design, scale, and deliver full-stack data solutions.
In this role you will:
- Develop data models and data pipelines, and build REST APIs to provide data to downstream systems
- Debug and fix application issues, perform root cause analysis, and help with proactive/preventive maintenance
- Follow a test-driven development approach
- Help reduce technical debt and follow clean code principles
- Follow security-by-design principles to define the security framework around data pipelines and APIs
- Work as an Agile team member and participate in backlog refinement, sprint planning, sprint review, and sprint retrospective

VISA sponsorship is NOT available for this position.

*What Skills You Need*
- Working knowledge of ETL, data modeling, data warehousing, and working with large-scale datasets
- Working knowledge of API development (specifically REST APIs using Java and Spring Boot)
- Working knowledge of data engineering and the relevant tools and technologies (e.g., Apache Spark, Databricks, Python, SQL databases, NoSQL databases, data lake concepts)
- Working knowledge of AWS services such as Lambda, RDS, ECS, API Gateway, S3, etc.

*What Makes You Stand Out*
- Passionate, creative, and eager to learn new, complex technical areas
- Accountable, curious, and collaborative, with an intense focus on product quality
- Skilled in interpersonal communication