Manage teams in the identification of business requirements, functional design, process design (including scenario design and flow mapping), prototyping, testing, training, and definition of support procedures.
Formulate planning, budgeting, forecasting, and reporting strategies.
Manage full life-cycle implementations.
Develop statements of work and/or client proposals.
Identify business opportunities to increase usability and profitability of information architecture.
Provide program leadership, governance, and change enablement.
Develop and manage vendor relationships.
Lead workshops for client education.
Manage resources and budget on client projects.
Support and drive the team by providing oversight.
6+ years of relevant technology architecture consulting or industry experience, including information delivery, analytics, and business intelligence based on data from a hybrid of the Hadoop Distributed File System (HDFS), non-relational (NoSQL) stores such as MongoDB and Cassandra, and relational data warehouses.
3+ years of hands-on experience with data lake implementations, core modernization, and data ingestion.
3+ years of hands-on experience with big data technologies such as MapReduce, Pig, Hive, HBase, Sqoop, Spark, Flume, YARN, Kafka, and Storm.
Experience working with commercial Hadoop distributions (Hortonworks, Cloudera, Pivotal HD, MapR).
1+ years of hands-on experience designing and implementing real-time and batch data ingestion for video, voice, weblog, sensor, machine, and social media data into Hadoop ecosystems and HDFS clusters.
1+ years of hands-on experience with data integration products such as Informatica PowerCenter Big Data Edition (BDE), IBM BigInsights, and Talend.
Bachelor's degree or equivalent professional experience.
Willingness to travel to client sites weekly, up to 80-100% (Monday through Thursday/Friday).
Experience designing and implementing reporting and visualization for structured and unstructured data sets.
Experience designing and implementing scalable, distributed systems leveraging cloud computing technologies such as AWS EC2, AWS Elastic MapReduce (EMR), and Microsoft Azure.
Experience designing and developing data cleansing routines utilizing typical data quality functions involving standardization, transformation, rationalization, linking, and matching.
Knowledge of standards, processes, and technology related to data, master data, and metadata.
Experience working with multi-terabyte data sets.
Experience with data integration in traditional and Hadoop environments.
Ability to work independently and manage small engagements or parts of large engagements.
Strong oral and written communication skills, including presentation skills (MS Visio, MS PowerPoint).
Strong problem solving and troubleshooting skills with the ability to exercise mature judgment.
An advanced degree in the area of specialization is preferred.