Cognizant (NASDAQ: CTSH) is the fastest-growing global IT services and business process outsourcing solutions provider, headquartered in Teaneck, N.J. Cognizant’s single-minded passion is to dedicate our global technology and innovation know-how, our industry expertise and worldwide resources to working together with clients to make their businesses stronger.
Today, with over 50 delivery centers worldwide and more than 100,000 employees as of December 31, 2010, Cognizant is a member of the NASDAQ-100, the S&P 500, the Forbes Global 2000 and the Fortune 1000, and is ranked among the top-performing and fastest-growing companies in the world.
Cognizant’s Artificial Intelligence & Analytics (AIA) practice is one of the world’s biggest BI practices (20,000+ colleagues) and is well known for its client focus and ability to execute. We have a broad range of experience across functions such as Customer Analytics, Visualization, Big Data, Data Warehouse design, Data Modelling and others. Although we are a large organization, we value proximity: our employees work in small communities where everybody knows each other.
As a Data Engineer you will be part of our exciting European team. We offer an international, results-driven (but at the same time fun) environment in which to grow, learn and build your own professional network.
You will be part of the AIA team, which provides a full range of data and analytic services to clients across multiple sectors. You will work alongside other experts to deliver BI solutions to our clients, many of whom are well-known brands.
We are looking for candidates focused on the big data analytics space. In this role, you will function as a senior big data engineer who masters multiple roles, such as solution designer, developer and maintenance engineer; in other words, a full-stack DevOps engineer.
You should be passionate about tackling complex problems, especially around tuning, scaling and multi-tenancy. You should be a good team player who knows how to work in an agile DevOps setup.
You should be able to translate business requirements into data products that cater to concrete use cases. You should also help business users understand and realize their need for data-driven analytics.
The ideal candidate should have:
Minimum of 5 years of experience with Big Data technologies.
Experience with Hadoop ecosystem technologies like HDFS, Hive, Spark, Kafka, Airflow.
Strong development and automation skills; must be very comfortable reading and writing Scala and Python code.
Experience with data manipulation languages and technologies such as Spark SQL, Hive HQL, Perl and shell scripts.
Good understanding of Hadoop ecosystem component architecture: HDFS, YARN, Kafka, Spark and other Hadoop cluster components.
Sound knowledge of security protocols and systems such as Kerberos, LDAP and Active Directory (AD) servers.
Experience with big data use cases around pipelines, data lakes, reporting, and other data-driven functionality.
A consultative approach to identifying client issues and a commitment to delivering high-quality solutions.
Strong problem-solving and analytical skills, with a focus on permanent, maintainable and reusable solutions.
Experience with multi-tenant data lake implementation.
Experience with performance tuning, scaling and reusability aspects of complex applications.
Hands-on experience developing BI reports.
Experience leading teams with an agile way of working.
Experience handling trouble tickets, identifying root causes and implementing permanent fixes.
Qualifications and Certifications
Fluency in English; Swedish is an added advantage.
Bachelor's or Master's degree in Computer Science or a similar technical field.