Big Data Engineer #099416

  • Competitive
  • Zurich, Switzerland
  • Permanent, Full time
  • Credit Suisse AG
  • 17 Aug 17

We Offer

  • An interesting position as a Big Data architect, joining a growing, high-visibility cross-Bank team that is developing and deploying solutions to some of the company's most challenging analytic and big data problems
  • As a member of this team, you will work with clients and data spanning Credit Suisse's global organization to solve critical challenges by applying emerging technologies
  • You will use distributed file systems and storage technologies (HDFS, HBase, Accumulo, Hive)
  • You will be responsible for large-scale distributed data analytic platforms and compute environments (Spark, MapReduce)
  • The opportunity to work with tools for semantic reasoning and ontological data normalization (RDF, SPARQL, Tamr)
  • A hands-on engineering role responsible for supporting client engagements in Big Data architecture and planning
  • You will have a solid platform from which to drive the architecture and design decisions needed to achieve cost-effective, high-performance results
  • The opportunity to provide technical guidance to a team of Big Data engineers who are building the platform and innovating in core areas such as real-time analytics and large-scale data processing


You Offer

  • You have a formal background and proven experience in engineering, mathematics, and computer science, particularly within the financial services sector
  • You work well within a multidisciplinary, global team, are a self-starter with a strong curiosity for extracting knowledge from data, and are able to elicit technical requirements from a non-technical audience
  • Good presentation skills and the ability to communicate deep technical findings to a business audience are important for this role
  • You have good programming and scripting skills (Python, Java, C/C++, Scala, Bash, Korn Shell)
  • Experience with DevOps tools (Chef, Docker, Puppet, Bamboo, Jenkins) is required
  • Experience with distributed computing environments such as Hadoop, Spark or High Performance Computing (HPC) systems and an understanding of parallel application performance and reliability would be an advantage
  • You demonstrate client and solution leadership, using strong communication skills to recommend actionable, data-driven insights
  • Experience designing highly scalable, reliable, and performant pipelines to consume, integrate, and analyze large volumes of complex data using a variety of proprietary and open-source platforms and tools is beneficial
  • You conduct feasibility analyses and produce functional and design specifications for proposed new features
  • Experience with data concepts (ETL, near- and real-time streaming, data structures, metadata, and workflow management) would be a big strength
  • A good command-line understanding of Unix/Linux and Windows, including system administration and shell scripting, is highly beneficial
  • You can solve complex data and technology problems
  • You collaborate with team members, business partners and data SMEs to elicit, translate, and prescribe requirements and you cultivate sustained innovation to deliver extraordinary products to customers
  • Database management and business intelligence skills, including SQL, data modeling or schema management, plus experience with NoSQL databases such as MongoDB, Cassandra, or associated technologies, are a big plus
  • You are able to construct end-to-end data analytic workflows including data capture, cleaning, normalization, exploration, modeling, and visualization, preferably using the tools described above
  • You provide L3 support and support new products being engineered by the team, while transitioning L1/L2 product support to the BAU support team
Ms M. Eve would be delighted to receive your application.
Please apply via our career portal.