Application Production Expert

  • 24 September
  • Singapore
  • Permanent contract (CDI)
CA CIB Singapore
Our Singapore center, Information Systems Asia Pacific (ISAP), is one of the Bank's three main IT hubs for its worldwide business, with over 800 IT staff covering Production and Application Development activities.
We work daily with international branches in 33 countries, supporting their IT solutions and envisioning and developing the Bank's future information systems.
Within ISAP, the IT Production department (ITS) is in charge of IT Operations (RUN) for CACIB's IT infrastructure in Paris, the main IT hub, and to a lesser extent for other locations such as London and Singapore.
He/She will be an expert resource for all Big Data services in Credit Agricole Corporate and Investment Bank. As such, he/she will cover all production support activities within the Big Data team in Singapore.
He/She is accountable for the overall health and stability of the technical solutions within his/her scope.
He/She will work effectively with technical peers such as architects, fellow experts and project teams on technology roadmaps and projects.
He/She will work with the Service Manager to maintain control over the scope of technical activities, develop best practices and build knowledge across all aspects of support.
The Big Data team operates the RUN activities and delivers projects for its infrastructure scope covering all Big Data associated technologies.
The team works on European time zones, and this role will follow Paris working hours.
The operational support covers the following technical scope:
Hadoop Stack (Cloudera Stack)
Kafka (Confluent)
Elasticsearch (ELK Stack)
·         Build and maintain Hadoop stack infrastructure
·         Install Hadoop updates, patches, and version upgrades as required
·         Cluster maintenance and deployments, including creation and removal of nodes
·         Performing HDFS backups and restores
·         User management from a Hadoop perspective, including Linux user setup, Kerberos configuration and access to other Hadoop stack components
·         HDFS Support and maintenance
·         File system management and monitoring
·         Manage and review Hadoop log files
·         Monitor Hadoop cluster connectivity and security
·         Kafka Administration and Operations
·         Ensure cluster and MapReduce routines are tuned for optimal performance
·         Proper configuration and screening of cluster jobs
·         Capacity Planning
·         Working knowledge of the Elasticsearch stack
·         Working transversally with other teams to guarantee high data quality and availability
·         Being the link between developers and build/architecture teams
·         Document best practices and maintain the knowledge base

·         Minimum 10 years in a system administration role, performing system monitoring, storage capacity management, performance tuning and system infrastructure development
·         Minimum 2-3 years of experience deploying and administering large Hadoop clusters
·         Experience in the Financial and banking industry.

·         Must hold a bachelor's or engineering degree