Introduction: We require an ambitious individual who can work under their own direction towards agreed targets and goals, manage change, and work under pressure. Candidates should be curious learners, as demonstrated by up-to-date technical knowledge, and good team players familiar with Agile methodologies and principles and/or experienced in working on an Agile team. We seek an applicant who will thrive in an open, dynamic, flexible, fun, spirited, collaborative environment; an individual who desires creative freedom and the opportunity to work in a high-performing team.
Your Roles and Responsibilities:
- Work closely with business stakeholders to understand their goals and determine how data can help achieve them
- Design data modelling processes, create algorithms and predictive models to extract the data the business needs, then help analyze the data and share insights with peers
- Mine and analyze enterprise databases to simplify and improve product development, marketing techniques, and business processes
- Create custom data models and algorithms
- Use predictive models to improve customer experience, ad targeting, revenue generation, and more
- Develop the organization’s A/B testing framework and test model quality
- Coordinate with various technical/functional teams to implement models and monitor results
- Develop processes, techniques, and tools to analyze and monitor model performance while ensuring data accuracy
- Work with state-of-the-art cloud technologies from providers such as the client's public cloud, Red Hat, AWS, and others
- Be part of open, transparent Agile teams that strive for continuous learning and contribute towards continuous improvement
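The A/B testing responsibility above can be illustrated with a minimal sketch in Python (one of the role's primary skills). The function and the sample conversion figures are hypothetical, shown only to indicate the kind of statistical testing involved:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing conversion rates of variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: variant B converts 150/2400 vs. A's 120/2400
z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=150, n_b=2400)
```

In practice a production A/B framework would also handle randomization, sample-size planning, and multiple-comparison corrections; this sketch covers only the significance test itself.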
Primary Skills: Spark, Scala, Kubernetes, Python, SQL, Airflow
- A natural inclination toward solving complex problems
- Knowledge of or experience with programming languages used for statistical analysis, including Scala, Python, SQL, etc., to process data and gain insights from it
- Knowledge of using and developing data architectures
- Knowledge of Machine Learning techniques, including decision tree learning, clustering, artificial neural networks, etc., and their pros and cons, is preferred
- Knowledge of and applied experience with advanced statistical techniques and concepts, including regression, distribution properties, statistical testing, etc.
- Good communication skills to promote cross-team collaboration
- Impulse to learn and master new technologies
- Experience with or knowledge of ETL tools, preferably DataStage
- Experience with major cloud and big-data services, including S3, Spark, Redshift, etc.
- Experience with or knowledge of distributed data and computing tools, including MapReduce, MySQL, Hadoop, Spark, Hive, etc.
- Ability to use data visualization tools to showcase data for stakeholders using D3, ggplot, Periscope, and more
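Among the statistical techniques listed above, regression is the most foundational. A minimal sketch in Python, fitting a one-feature ordinary-least-squares line with the closed-form solution (the function name and toy data are illustrative, not part of any required toolset):

```python
def fit_simple_regression(xs, ys):
    """Ordinary least squares for y = a + b*x with a single feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope is covariance(x, y) divided by variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    b = cov / var
    a = mean_y - b * mean_x
    return a, b

# Toy data lying exactly on y = 1 + 2x
a, b = fit_simple_regression([1, 2, 3, 4], [3, 5, 7, 9])
```

Real predictive-modeling work would typically use library implementations (e.g. Spark MLlib or scikit-learn) that handle multiple features, regularization, and diagnostics; the point here is only the underlying estimator.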