PRAHS

  • Big Data Developer

    Job Locations US-- | US-AZ-Phoenix
    Posted Date 1 day ago(10/19/2018 9:25 AM)
    ID
    2018-48004
  • Overview

    Symphony Health Solutions, a wholly-owned subsidiary of PRA Health Sciences, is a leading provider of high-value data, analytics, technology solutions and actionable insights for healthcare and life sciences manufacturers, payers and providers. The company helps clients drive revenue growth and commercial effectiveness, while adapting to the transformation of the healthcare ecosystem, by integrating a broad set of patient, prescriber, payer and clinical data together with primary and secondary health research, analytics and consulting. Symphony delivers a comprehensive perspective on the real dynamics that drive business in the healthcare and life sciences markets.

     

    At Symphony Health Solutions we pride ourselves on being collaborative and customer-centric. We do this by focusing on five key values:

    • Love the Customer by consistently exceeding expectations
    • Focus on Brilliant Execution by being detailed and accountable
    • Leapfrog the Competition by differentiating ourselves and providing innovative solutions
    • Build a High-Performance Culture that does not accept mediocrity
    • Make It Fun by connecting as one passionate, high-energy team

    Connect with Symphony!  Collaborate with Customers!  Differentiate Yourself!

     

    The Big Data Developer will work closely with Production, Development, and Operational areas to resolve technical issues, perform root cause analysis, develop continuous engineering improvements and foster stakeholder relationships. The successful candidate will be a multi-tasking self-starter with a keen ability to prioritize and lead efforts to solve issues effectively. 

    Responsibilities

    • Develop big data ETL jobs that ingest, integrate, and export healthcare data.
    • Author sound, readable shell scripts and SQL code.
    • Document, debug, test, refactor, and prepare code for deployment.
    • Be willing to learn new programming languages.
    • Develop innovative solutions that are accessible to and reusable by others.
    • Be available for escalation as a technical subject matter expert.
    • Adhere to and enforce company service level agreements, particularly customer-facing ones.
    • Diagnose internal or customer-reported issues to determine their validity, and undertake analysis and actions for resolution.
    • Communicate effectively with both business and technical stakeholders within the project team.
    • Foster relationships with key stakeholders: understand their requirements, recommend and implement technical solutions, and provide guidance as needed.
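    The core of the responsibilities above is ETL: ingest a feed, integrate and clean it, export it downstream. A minimal stdlib-only sketch of that pattern is shown below; the field names (prescriber_id, claim_amount) and the normalization rules are hypothetical stand-ins for real healthcare feeds, not Symphony's actual pipeline:

    ```python
    import csv
    import io

    def run_etl(raw_csv: str) -> str:
        """Tiny ingest-transform-export pipeline: parse a CSV feed,
        normalize its fields, and export the cleaned rows as CSV."""
        # Ingest: parse the raw feed into dictionaries.
        rows = list(csv.DictReader(io.StringIO(raw_csv)))
        # Transform: normalize prescriber IDs, format amounts,
        # and drop rows with no claim amount.
        cleaned = [
            {"prescriber_id": r["prescriber_id"].strip().upper(),
             "claim_amount": f'{float(r["claim_amount"]):.2f}'}
            for r in rows
            if r["claim_amount"]
        ]
        # Export: serialize back to CSV for the downstream consumer.
        out = io.StringIO()
        writer = csv.DictWriter(out, fieldnames=["prescriber_id", "claim_amount"])
        writer.writeheader()
        writer.writerows(cleaned)
        return out.getvalue()

    feed = "prescriber_id,claim_amount\n ab123 ,19.5\nxy789,\n"
    print(run_etl(feed))
    ```

    In production this same shape would typically be expressed as a Spark job reading from HDFS rather than an in-memory string, but the ingest/transform/export stages are the same.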

    Qualifications

    REQUIRED EDUCATION & EXPERIENCE:

    • Bachelor's Degree or higher with a minimum of 7 years of experience working with Big Data-related technologies; a Master's Degree with a minimum of 5 years of experience is preferred.
    • Must have experience with ETL, Linux, Oracle, Hadoop eco-systems, Spark Framework, and Python.
    • Required tech stack: Hadoop MapReduce, Python, Spark, Scala, Spark SQL, and columnar databases.
    • Proven working experience designing, building, and implementing solutions using Spark.
    • Experience with PySpark and Spark SQL.
    • Experience with Python and shell scripting for automating Hadoop-related tasks.
    • Experience debugging issues, resolving performance problems, and coordinating with external teams (as needed) until issues are resolved.
    • Hands-on experience setting up and configuring Hadoop ecosystem components such as MapReduce, HDFS, Hive, Sqoop, and Pig.
    • Proven experience with columnar databases, Parquet and ORC formats, and Dataset/DataFrame concepts.
    • Experience with performance tuning and concepts such as bucketing, sorting, and partitioning.
    • Ability to integrate UDFs with Spark jobs.
    • Experience with Hadoop (Cloudera distribution).
    • Sound written and oral communication skills.
    • Flexibility to work outside standard business hours. 
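    One of the tuning concepts in the list above, bucketing, works by hashing a chosen column so that rows with the same key always land in the same bucket, letting joins on that column proceed bucket-by-bucket without a full shuffle. The sketch below illustrates the idea with CRC32 for a deterministic result; Spark itself uses a Murmur3 hash, so this is a simplified illustration, not Spark's exact bucket assignment:

    ```python
    import zlib

    def bucket_for(key: str, num_buckets: int) -> int:
        """Assign a row's key to one of num_buckets buckets.
        Spark uses Murmur3; CRC32 is a stand-in to show the idea."""
        return zlib.crc32(key.encode("utf-8")) % num_buckets

    # Rows with the same key always hash to the same bucket, so two
    # tables bucketed the same way can be joined bucket-by-bucket.
    keys = ["NDC-0001", "NDC-0002", "NDC-0001", "NDC-0003"]
    buckets = [bucket_for(k, 4) for k in keys]
    print(buckets)
    ```

    Partitioning is the coarser-grained cousin of the same idea: instead of hashing, rows are routed to directories by the literal value of a column (e.g. one directory per month), which lets queries skip partitions entirely.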

    PREFERRED: 

    • Experience building ETL/ELT pipelines.
    • Familiarity with Kafka.
    • Healthcare data knowledge.

