Data Engineer


Lloyds Banking Group, Personalisation Lab

Location: Bristol-based, with flexibility to work from home 3 days per week and spend 2 days in the office.

Salary & Benefits: £61,911 to £70,000 base salary, plus annual personal bonus, 15% employer pension contribution (when you put in 6%), 4% flexible cash pot, private medical insurance, 30 days holiday plus bank holidays.

We offer flexible working hours and agile working practices to give you a good work-life balance, and we are a very family-friendly company.

Who are Lloyds Banking Group?

We’re the UK’s biggest Retail, Digital and Mobile bank with over 30 million customers and a big responsibility to help Britain Prosper. We’re in the middle of a £3bn investment into our People, Platforms and Data strategy to build the Bank of the Future and grow sustainably.

About the platform:

Personalised Experiences and Communications (PEC) is a business platform within Consumer Relationships that plays a critical role in supporting the Group's strategic priority to grow by protecting and deepening customer relationships. We enable and deliver personalised customer communications and experiences across all channels, media and business areas, supporting the Customer Relationship Growth strategy to deepen relationships with our customers across both the Retail and Commercial bank. This includes the data, analytics and technology to unlock the value of differentiating our branded channel experiences, propositions, pricing and communications, as well as achieving our paper-free sustainability ambitions.

About the role:

A great opportunity has arisen in the Personalisation Lab within the Personalised Experiences and Communications Platform to join a cross-functional product engineering team as a Data Engineer. As a Data Engineer, you will be responsible for delivering the highest-quality data capability, drawing on your engineering expertise while remaining open-minded to the opportunities the cloud provides.

What you’ll do for us:

  • Build reusable data pipelines at scale, work with structured and unstructured data, engineer features for machine learning, and curate data to provide real-time, contextualised insights that power our customers' journeys.

  • Use industry-leading toolsets, as well as evaluating exciting new technologies, to design and build scalable real-time data applications.

  • With skills spanning the full data lifecycle and experience using a mix of modern and traditional data platforms (e.g. Hadoop, Kafka, GCP, Azure, Teradata, SQL Server), you'll build capabilities with horizon-expanding exposure to a host of wider technologies and careers in data.

  • Help adopt engineering best practices such as Test-Driven Development, code reviews and Continuous Integration/Continuous Delivery for data pipelines.

  • Mentor other engineers to deliver high-quality, data-led solutions for our Bank's customers.

  • Be a team player who can build relationships and work productively with other teams across a variety of domains.

Key Objectives for the Platform:

  • Simplify and modernise our technology estate to enable internal flexibility and increase responsiveness to evolving customer needs

  • Improve customer experience and support the Bank’s sustainability ambitions by delivering compelling, reliable, safe paperless communications

  • Enable two-way, seamless, multi-channel customer experiences that are differentiated, data-led and personalised – in line with the broader Group Conversational Banking and Channel strategies

  • Deliver engaging, empathetic communications that are tailored to individual customer needs and provided via their preferred channel

Technical Skills and Experience:

  • Best-practice coding/scripting experience developed in a commercial/industry setting (Python, Java, SQL — e.g. Teradata, PostgreSQL).

  • Working experience with operational data stores, data warehouses, big data technologies and data lakes.

  • Experience building data solutions with relational and non-relational databases (e.g. SQL Server, Oracle, Teradata), including relational and dimensional data structures.

  • Good knowledge of containers (e.g. Docker, Kubernetes) and experience with cloud platforms such as GCP, Azure or AWS.

  • Strong experience working with Kafka technologies.

  • Computer science fundamentals: a clear understanding of data structures, algorithms, software design, design patterns and core programming concepts.

Together we make it possible!

We want you to experience the freedom and autonomy to realise your potential. Share your ideas and make them happen and feel supported and listened to by a team that celebrates individuality and independent thought, encourages different perspectives, and embraces every background.

We’ll ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process and to perform crucial job functions.

Join us and be part of an inclusive, values-led culture that celebrates diversity.
