
Hadoop and Java Expert for Deployment

About Us:

We are a web-hosted platform that performs fully automated machine learning with state-of-the-art performance and can be deployed either in the cloud or on-premises. Once training datasets are connected to the platform, no human action is needed to get models into production and generate new predictions from incoming data. We fully automate regression, classification, multi-class classification, segmentation, and time-series forecasting tasks.

When a user launches a use case, it starts a complex workflow composed of multiple tasks: data acquisition, dataset analysis, feature engineering, hyperparameter optimization, modeling, blending, and so on.

Our stack is built primarily on Docker. Users connect to a web interface, launch machine learning use cases, and watch the entire data science workflow run automatically. We use Kubernetes for cluster deployment in the cloud, and a single VM with Docker for on-premises deployment.

We take advantage of Kubernetes clusters to deploy these containers.

Our solution is composed of two types of services:

- Long-running services, deployed as Docker containers

- Short-lived tasks, run as jobs scheduled by Docker/Kubernetes

Long-running services:

- website: our web interface, where users interact with our platform either directly or automatically through the APIs (Node.js)

- server_engine: the service in charge of starting the automated machine learning jobs using the Docker/Kubernetes client API (Python)

Objective:

There is high demand for on-premises deployment on Hadoop clusters (Hortonworks). The objective is to develop the service that launches jobs through YARN.

Users access our platform through the website or the API. When they launch use cases, a microservice (called server_engine, written in Python) launches the different tasks, either through the Docker daemon to launch containers or through the Kubernetes API to launch pods. Users never access the infrastructure directly; this would be different in a Hadoop cluster.

The objectives are the following:

  • website (should run on an edge node in a Docker container) should authenticate users via Kerberos and propagate their identity to server_engine
  • server_engine (should also run on an edge node in a Docker container) should be able to launch YARN containers on behalf of the user who launches the use case
  • The containers launched through YARN will run on data nodes; since the data nodes do not have Docker, these tasks cannot run in Docker containers and must run directly on behalf of the user
  • This gives admins the ability to monitor resource usage on the cluster per user and per queue
  • The YARN containers should access HDFS resources on behalf of the user as well
  • ML tasks can take some time; they must not fail right after the user logs out
  • Kerberos should be used to launch YARN containers in the name of the user who launches the use case

If impersonation is used, the impersonating service account should only have the right to impersonate a limited set of users (e.g. members of a specific group) with limited access.
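As a rough illustration of the impersonation flow described above: Hadoop's proxy-user mechanism lets a Kerberos-authenticated service account act on behalf of an end user by adding a `doAs` query parameter to YARN ResourceManager REST calls (subject to `hadoop.proxyuser.*` rules in core-site.xml). The sketch below only builds the request URL and a minimal submission body; the hostname, port, and queue name are hypothetical, and a real submission would additionally need SPNEGO authentication.

```python
# Hypothetical sketch of how server_engine could prepare a YARN app
# submission on behalf of an end user via the ResourceManager REST API.
from urllib.parse import urlencode

def submission_url(rm_host: str, end_user: str) -> str:
    """Build the app-submission URL that impersonates `end_user` via doAs."""
    query = urlencode({"doAs": end_user})
    return f"https://{rm_host}:8090/ws/v1/cluster/apps?{query}"

def app_payload(app_name: str, command: str, queue: str,
                memory_mb: int = 2048, vcores: int = 1) -> dict:
    """Minimal application body; YARN runs the command as the doAs user."""
    return {
        "application-name": app_name,
        "application-type": "YARN",
        "queue": queue,  # lets admins meter resource usage per user per queue
        "am-container-spec": {"commands": {"command": command}},
        "resource": {"memory": memory_mb, "vCores": vcores},
    }
```

Because the tasks run as the impersonated user, HDFS access and queue accounting fall out of YARN's existing mechanisms rather than anything custom.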

Requirements:

  • Hadoop cluster (Hortonworks)
  • Creating Kerberized YARN applications
  • Kerberos (delegation tokens, proxy users, etc.)
  • Java development
  • Python and Node.js

The objective is to help the team finish as soon as possible (end of August / beginning of September).

In your proposal, please tell us about your experience with Hadoop clusters and the other requirements mentioned above, along with any other relevant experience. Also describe how you would approach this project.

 

Hi-Tech
Software
Apache Hadoop

$70/hr - $150/hr

Starts Aug 08, 2018

3 Proposals Status: CLOSED

Client: p************

Posted: Aug 01, 2018

Create Data Pipeline in Python for Big Data Music Tech Startup

The Problem: 

We currently have some proprietary data that has been collected over a few years that we would like to supplement with other data. We need to build a robust data pipeline to feed relational tables, which will then be used to create additional features for our models. This data would include our own proprietary data currently in .csv/.xlsx format, as well as pulling from online sources like Facebook, Twitter, Instagram, Soundcloud, YouTube, Spotify, Pandora, Hypebeast and other brand marketing sources/eCommerce websites. The goal is to figure out how to contextualize the current audience, hype, size, relevance, and potential future of bands along with the same for brands. We will also need to keep track of the value of both bands as well as brands over time. 

Future phases will include working with our data scientists to improve current models using this data, but this phase is focused primarily on creating the pipeline and making it updatable on a daily basis. We are here to help you to understand exactly what data we want from each data source, and to help guide the creative process on other possible sources that could help. 

Deliverables: This first phase consists of three basic steps:

1) Create scripts to create the pipeline for each data source using their API (no scraping will be needed, we know that breaks pretty much all the time)

2) Create the relational database to hold all of the data for each source, with unique keys to connect all tables (each source will be its own table)

3) Connect all of the current data to the same database (csv's and xlsx files, small data)

No GUI is necessary, but all the code must be executable and we prefer that it is written in Python. The databases can be whatever databases you are most comfortable building and whichever databases fit this size of data. We want this pipeline collecting data at the frequency of once a day. We will have a budget for hosting the data as it grows, although we would like a small enough sample of the data to know what we are looking at first. 

We have some usernames/passwords that can be used for the APIs of many of the data sources we are asking for data from, and we will always be available for communication to ensure that everything needed is provided. This is in no way a siloed project; we want to be involved as much as we can so that this pipeline gets built properly.
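The three deliverable steps above boil down to per-source tables linked by shared keys, loaded idempotently once a day. A minimal sketch, assuming a hypothetical schema (an `artists` table plus one snapshot table per source) and SQLite only for illustration; any relational database would do:

```python
# Hypothetical schema sketch: one table per source, keyed so that a daily
# scheduler can safely re-run the load without creating duplicates.
import sqlite3

def init_db(conn):
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS artists (
            artist_id TEXT PRIMARY KEY,
            name TEXT);
        CREATE TABLE IF NOT EXISTS spotify_stats (
            artist_id TEXT,
            snapshot_date TEXT,
            followers INTEGER,
            PRIMARY KEY (artist_id, snapshot_date),
            FOREIGN KEY (artist_id) REFERENCES artists(artist_id));
    """)

def upsert_snapshot(conn, artist_id, snapshot_date, followers):
    # Idempotent daily load: re-running the job for the same day updates
    # the existing row instead of inserting a duplicate.
    conn.execute(
        "INSERT INTO spotify_stats VALUES (?, ?, ?) "
        "ON CONFLICT(artist_id, snapshot_date) "
        "DO UPDATE SET followers = excluded.followers",
        (artist_id, snapshot_date, followers))
```

The composite `(artist_id, snapshot_date)` key is what makes "collect once a day" and "track value over time" compatible: each day adds one row per artist per source, and retries overwrite rather than duplicate.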

Domain expertise in the following areas would be a plus, but is not mandatory.

  • brand marketing
  • recommendation engines
  • data visualization
  • audience measurement dashboards
  • consumer analytics
  • statistics
  • music measurement
  • social media

In your proposal please tell us more about:

  • Your experience building data pipelines and databases or any other relevant experience
  • Any specific experience with audio/video platforms like Spotify, Next Big Sound, SoundCloud, Pandora, etc., or experience in the above-mentioned domains
  • How you would approach this development exercise

After the interview, and once you have a better understanding of our current status, we would like the proposal to include:

  • Proposed milestones
  • Estimated hours and budget
Consumer Goods and Retail
Hi-Tech
Media and Advertising

$75/hr - $150/hr

Starts Aug 06, 2018

20 Proposals Status: IN PROGRESS

Net 60

Client: C******** ****

Posted: Jul 28, 2018

Identify Most Valuable Blockchain-Powered Distributed Applications (dApps) for Consumer Industries and Provide IT Architecture Design and Specifications

TASKS

1.   Identify the qualitative and quantitative criteria and gather data to make an educated selection of top-10 most valuable distributed applications (dApps) for the consumer markets (retail / consumer products and services / logistics / etc). Define problems at hand and how you expect distributed applications to solve them. 

2.   Provide IT architecture design and detailed specifications for developers for each of top-10 selected distributed applications (dApps). Propose staged development processes with early PoC (proof-of-concept) releases.

COMPANY OVERVIEW

We are building the next generation high-performance scalable blockchain platform with security protocols and smart contracts designed with the express purpose to meet an immense business scope with focus on the consumer markets (retail, consumer goods and services, logistics, etc). The enterprise-grade distributed ledger cloud platform will help to increase business velocity, create new revenue streams, and reduce cost and risk by securely extending enterprise SaaS and on-premises applications to drive tamper-resistant transactions on a trusted business network.

Our blockchain platform will support public and private blockchains and be able to customize different blockchains for different applications. We will constantly provide common modules on the underlying infrastructure for different kinds of distributed scenarios (direct-to-consumer marketplace, product authenticity and provenance tracking, supply chain and inventory management, customer loyalty and rewards, trade promo management, trade finance, etc). Based on specific scenario requirements, we will continue to develop new common modules.

We value the expansion of the ecosystem which operates across chains, systems, industries and applications. With a range of protocols and modules, data and information will be connected to support various business scenarios. Our goal is to build the underlying blockchain infrastructure to bridge the real world and the distributed digital world. With this, companies from different industries will be able to develop applications for a range of scenarios and collaborate with other entities on the platform. We have laid the foundation of a thriving ecosystem, having partnered with dozens of dynamic disruptors and established leaders in the consumer and blockchain industries.

We initially aimed to build a single distributed application (distributed app or dApp): a direct-to-consumer marketplace. In the process of refining the go-to-market strategy, the team interacted with 100+ C-level executives from the largest companies in the consumer industries (retailers, CPG, logistics providers, etc.) and verified overwhelming interest in both the direct-to-consumer marketplace and other promising distributed applications (product authenticity and provenance tracking, supply chain and inventory management, customer loyalty and rewards, trade promo management, trade finance, warranties, etc.).

Our existing whitepaper is devoted to the direct-to-consumer marketplace application. Our ambition has evolved from developing a single dApp (the direct-to-consumer marketplace) to building an enterprise-grade blockchain platform for building dApps (3rd-party or in-house developed) for the consumer markets.

PROBLEMS TO SOLVE

1.   Define problems at hand and how you expect distributed applications to solve them. Identify the qualitative and quantitative criteria and gather data to make an educated selection of top-10 most valuable distributed applications (dApps). 

2.   Provide IT architecture design and detailed specifications for developers for each of top-10 selected distributed applications (dApps). Propose staged development processes with early PoC (proof-of-concept) releases.

The deliverable should include the following information (but not limited to):

  • Key components (user permissions, asset issuance and reissuance mechanism, atomic exchanges, consensus, key management and structure, parameters, signatures, hand-shaking and address formats, etc)
  • Main functionalities
  • Detailed specs for key functions
  • Parts of dApps executed on-chain and off-chain
  • APIs / SDKs / Modules 
  • Integration with existing ERP systems

WHO WE NEED

  • Enterprise architect with 10+ years of experience in building complex systems and products
  • Deep understanding of the consumer markets 
  • Vision on how to build valuable distributed applications powered by blockchain
  • Out-of-the-box and creative mindset

 DELIVERABLES

  • Presentation with a comprehensive rating / selection overview, backed by facts and figures, based on which you advise the most valuable distributed applications (dApps) to take forward to step 2.
  • Technical document sufficient for blockchain engineers to start the actual development of distributed applications (dApps).
Blockchain
Enterprise Architecture
Distributed Applications

$100/hr - $250/hr

Starts Jun 18, 2018

5 Proposals Status: CLOSED

Client: I***

Posted: Jun 18, 2018

Propose Requirements and Architecture for an Enterprise Grade Blockchain Platform

TASK

Propose requirements and architecture for a blockchain platform to be regarded as a solid enterprise grade solution with best capabilities in the market.

Note: We know very well why we need to develop a new blockchain, and we have many "killer" features that none of the existing blockchains has. The goal of this project is to find out if we missed something important.

COMPANY OVERVIEW

We are building the next generation high-performance scalable blockchain platform with security protocols and smart contracts designed with the express purpose to meet an immense business scope with focus on the consumer markets (retail, consumer goods and services, logistics, etc). The enterprise-grade distributed ledger cloud platform will help to increase business velocity, create new revenue streams, and reduce cost and risk by securely extending enterprise SaaS and on-premises applications to drive tamper-resistant transactions on a trusted business network.

Our blockchain platform will support public and private blockchains and be able to customize different blockchains for different applications. We will constantly provide common modules on the underlying infrastructure for different kinds of distributed scenarios (direct-to-consumer marketplace, product authenticity and provenance tracking, supply chain and inventory management, customer loyalty and rewards, trade promo management, trade finance, etc). Based on specific scenario requirements, we will continue to develop new common modules.

We value the expansion of the ecosystem which operates across chains, systems, industries and applications. With a range of protocols and modules, data and information will be connected to support various business scenarios. Our goal is to build the underlying blockchain infrastructure to bridge the real world and the distributed digital world. With this, companies from different industries will be able to develop applications for a range of scenarios and collaborate with other entities on the platform. We have laid the foundation of a thriving ecosystem, having partnered with dozens of dynamic disruptors and established leaders in the consumer and blockchain industries.

 

We initially aimed to build a single distributed application (distributed app or dApp): a direct-to-consumer marketplace. In the process of refining the go-to-market strategy, the team interacted with 100+ C-level executives from the largest companies in the consumer industries (retailers, CPG, logistics providers, etc.) and verified overwhelming interest in both the direct-to-consumer marketplace and other promising distributed applications (product authenticity and provenance tracking, supply chain and inventory management, customer loyalty and rewards, trade promo management, trade finance, warranties, etc.).

 

Our existing whitepaper (enclosed) is devoted to the direct-to-consumer marketplace application. Our ambition has evolved from developing a single dApp (the direct-to-consumer marketplace) to building an enterprise-grade blockchain platform for building dApps (3rd-party or in-house developed) for the consumer markets.

PROBLEMS TO SOLVE

1.   Propose requirements for a blockchain platform to be regarded as a solid enterprise grade solution with best capabilities in the market.

  • Performance / Enterprise-grade workloads (transactions per second, latency, etc)
  • Scalability
  • Reliability
  • Resiliency
  • Security
  • Compliance with regulation 
  • Industry standards
  • etc..

2.   Provide detailed specifications for the blockchain platform elements. 

  • Multichain (private / public blockchains)
  • Cross-chain interactions
  • Consensus algorithms
  • Permissioned / permissionless 
  • Sharding
  • Account management (EOS style)
  • Modules
  • APIs / Integration with existing enterprise systems
  • Protocols
  • etc..

 

WHO WE NEED

  • Enterprise architect with 10+ years of experience in building complex systems and products for large enterprises
  • Deep understanding of the advantages and disadvantages of public and private blockchains (both existing and under development)
  • Vision on how to build the best-in-class blockchain platform applicable for enterprises
  • Out-of-the-box and creative mindset

DELIVERABLES 

  • Internal technical document suited for blockchain developers to start development of the blockchain platform.
  • External technical document aimed to convince CTOs/CIOs in large enterprises to use our blockchain platform as the most advanced and ready-to-use enterprise-grade solution.
Blockchain
Enterprise Architecture
Requirements Analysis

$100/hr - $250/hr

Starts Jun 18, 2018

6 Proposals Status: CLOSED

Client: I***

Posted: Jun 18, 2018

Programmer needed to Convert "R" Random-Forest models into stand-alone Java or C/C++ Code

We are a medical devices startup that is working on accurate, affordable screening for the early detection of heart disease. We need a Data Scientist who can convert models developed in R into generic Java or C/C++ code that can be executed on an Android Platform.  The conversion methodology and/or tools used are left to the contractor’s discretion, as long as the results can be validated to be correct.  

This position is for an intermediate or senior experienced level individual who is competent in software programming as well as state of the art machine learning practices – preferably in a Medical Device setting.
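The conversion task above is tractable because a fitted random forest is just a collection of decision trees, and each tree flattens to nested threshold comparisons that can be re-emitted as dependency-free Java or C code. A minimal sketch (shown in Python with toy, hypothetical trees rather than a real exported R model) of the evaluation logic the converted code must reproduce:

```python
# Each tree node is a dict: internal nodes hold a feature index, a split
# threshold, and left/right children; leaves hold a class label. This is
# the same structure an R export (e.g. a node table) provides.
from collections import Counter

def predict_tree(node, x):
    while "leaf" not in node:  # descend until a leaf is reached
        branch = "left" if x[node["feature"]] <= node["threshold"] else "right"
        node = node[branch]
    return node["leaf"]

def predict_forest(trees, x):
    # Classification forests predict by majority vote over the trees.
    votes = Counter(predict_tree(t, x) for t in trees)
    return votes.most_common(1)[0][0]

# Two toy stumps standing in for exported trees (hypothetical labels):
trees = [
    {"feature": 0, "threshold": 0.5,
     "left": {"leaf": "normal"}, "right": {"leaf": "at_risk"}},
    {"feature": 1, "threshold": 2.0,
     "left": {"leaf": "at_risk"}, "right": {"leaf": "normal"}},
]
```

Validation of the conversion then reduces to running a shared test set through both the R model and the generated code and asserting identical predictions.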

Here are some detailed required and desired skills:

Must-Have:

5+ years of experience in developing models using Machine Learning techniques such as:

  • Random Forest (must have)
  • Gradient Boosting
  • Support-Vector-Machine (SVM)

5+ years of experience in Software Programming using one or more of the following languages:

  • C
  • C++
  • Java
  • Android Java

Experience with at least two or more of the following AI/Machine Learning platforms:

  • R (must have)
  • SAS Enterprise Miner
  • Weka
  • TensorFlow Lite
  • Salford Systems

Also desired (but optional):

3+ years of experience in the Medical Device Industry (or working in a regulated environment)

Experience with or knowledge of ECG machines

Statistical Analysis skills and generating documentation with the following:

  • p-values
  • Sensitivity/Specificity
  • AUC / ROC plots
  • PPV/NPV

Experience in Design Validation for Medical Devices

Experience with Atlassian tools (Jira, Stash/Git, Bamboo, etc.)

Familiarity with Android Applications and Java coding

Ability to work on-site in our Lewisville, TX facility 

Proposal Requirement

Please describe your experience with machine learning techniques (especially random forests), software programming, AI/machine learning platforms, and your R expertise.

Previous work that you have done that is relevant.

Healthcare
Pharmaceutical and Life Sciences
Software

$75/hr - $150/hr

Starts Jul 26, 2018

15 Proposals Status: IN PROGRESS

Client: H*************

Posted: Jun 11, 2018

Event Strategy & Data Analysis to Generate Insights using RFID, Survey Data and Social Listening

Company Summary:

We are a world-class experience marketing agency and event production company. We create distinct and data-driven experiences rooted in cultural context to convert consumer attention into action, sales, and absolute brand loyalty.

Problem We Are Trying to Solve:

Our agency is creating an experiential activation for a brand launching a new product at a large tent pole event later in the year. The experience has been designed to incorporate a variety of interactive elements and photo moments with the goal of educating consumers, creating excitement, and generating awareness for the launch.

Each attendee (around 2,500) will register for a unique RFID wristband that will alert us to the interactive touchpoints, photo moments, and survey responses that come from each guest.

Overall Client KPI’s/Objectives

  • Increase brand awareness of product 
  • Increase purchase intent 
  • Competitive landscape: how does the product stack up against others in the competitive set?
  • Generate Reach
  • Shift in Sentiment
  • Share of Voice at the Tentpole

The Ask / Company Expertise: 

  • Develop survey questions that help indicate awareness, purchase intent, and competitive landscape pre/post event
  • Ability to combine RFID data (survey & touchpoint data) with social, press, and user generated content info.

Types of Data Capture

RFID Data

  • Registration & Survey Data
  • Dwell Time, Attendance, Engagement
  • Interactive Touchpoints
  • Photo Booth

Press & Social Data Capture

  • Overall Impressions/Reach
  • Total Mention Volume Across Select Keywords
  • Demographic Information
  • Sentiment & Brand Lift Overall & Weekend Trend
  • Share of Voice: How the activation compared to other activations at the tent pole.
  • Purchase Intent: A determination of lift in purchase intent using combination of social listening and survey responses to determine if a person is expressing a desire for a product rather than generally talking about a brand or product. (Ex. a post about wanting to pre-order something before its release or a response indicating an eagerness to see a newly-released show.)
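One concrete piece of the measurement above, pre/post lift in purchase intent from the survey responses, can be sketched simply. The response labels below are hypothetical stand-ins for whatever scale the surveys actually use:

```python
# Hypothetical survey schema: purchase-intent lift is the change in the
# share of respondents giving a positive answer between the pre-event and
# post-event surveys, expressed in percentage points.
def intent_rate(responses, positive=("likely", "very likely")):
    return sum(r in positive for r in responses) / len(responses)

def intent_lift(pre, post):
    """Percentage-point lift in purchase intent, post vs. pre."""
    return round((intent_rate(post) - intent_rate(pre)) * 100, 1)
```

The same pre/post structure applies to the awareness and sentiment KPIs; social-listening signals would then be layered on top to corroborate the survey-based lift.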

Conversions

  • Email Capture (Lead Gen)
  • Coupon Redemption %

Data Sources

1. RFID

  • Registration Data (CSV Format)
  • Interactive Touchpoint Data (CSV Format)
  • Survey Response Data (CSV Format)
  • Photo Share Data (TBD)
  • Email Share (CSV Format)

2. Social Listening Data (PDF export from Brandwatch)

Deliverables:

  • Written Word Doc Report (5 - 10 pages)
  • Presentation Overview (High Level Overview in Keynote or PowerPoint Format)
  • Incorporate Visualization of Key Data Points & Findings
  • Report would need to be turned around in 2 weeks post-event in July

In your proposal please tell us more about your specific experience in customer analytics and event marketing analysis. Also share a sample of any previous relevant work, visualizations, or presentations done for clients. Only experts based in the USA will be considered.

Media and Advertising
Sentiment Analysis
Customer Analytics

$20,000 - $25,000

Starts May 24, 2018

8 Proposals Status: COMPLETED

Client: N*** **********

Posted: May 11, 2018

Machine Learning Algorithm to Predict Favorable Art Unit to be Assigned to Patent Application

About Us

We are a software and services company. We have serviced IP law firms for the past 15 years and have a deep understanding of the business and technology required to be successful in a competitive marketplace. We have approximately 50 employees, with an even split between software developers and implementation engineers. Our PLink platform enables firms to optimize efficiencies, reduce risk, and increase client satisfaction. PLink has an analytics module, and the algorithm developed here will ensure that we remain ahead of the market.

Project overview

Intellectual Property law firms are applying for patents every day. This process is called 'patent prosecution.' A major part of this process is surviving the review of USPTO patent examiners. It's their job to find problems with the patent application: it's too broad (you can't patent a table), too specific, or it already exists. Success is harder to achieve in certain patent categories (or 'art units') than in others. We want to develop a machine learning algorithm that analyzes existing approved patent abstracts and their assigned art units. We would then expose new patent abstracts to this algorithm to predict the most likely art unit to be assigned to the new application. We need the Experfy expert to develop this algorithm and train one of our resources to maintain it (i.e., continue to 'teach' it).

Objectives:

Based on a prior catalog of application claims, the classes and art units in which they were examined, and the class and subclasses that make up each art unit, extrapolate the most likely class and art unit for the proposed new application.

Present global statistics for the art unit as well as analytics on the examiners in the art unit.  Allow for adjustment of proposed language to re-evaluate the projected classification and art unit.
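To make the objective concrete: one simple baseline for predicting an art unit from an abstract is a bag-of-words classifier trained on previously assigned applications. The sketch below uses toy data and additive smoothing; a production version would likely use TF-IDF or embeddings, but the shape of the problem (text in, most likely art unit out) is the same:

```python
# Toy naive-Bayes-style classifier over abstracts (hypothetical data,
# not the real USPTO corpus).
from collections import Counter, defaultdict
import math

def train(examples):
    """examples: list of (abstract_text, art_unit). Returns per-unit word counts."""
    model = defaultdict(Counter)
    for text, unit in examples:
        model[unit].update(text.lower().split())
    return model

def predict(model, abstract, alpha=1.0):
    """Return the art unit maximizing the smoothed log-likelihood of the text."""
    words = abstract.lower().split()
    vocab = {w for counts in model.values() for w in counts}
    best_unit, best_score = None, float("-inf")
    for unit, counts in model.items():
        total = sum(counts.values())
        score = sum(
            math.log((counts[w] + alpha) / (total + alpha * len(vocab)))
            for w in words)
        if score > best_score:
            best_unit, best_score = unit, score
    return best_unit
```

The "adjust proposed language and re-evaluate" requirement falls out naturally: each edit to the abstract is just another call to `predict`, and the per-unit scores can also be surfaced as a ranked list rather than a single answer.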

Assumptions & Results

The outcome of this project will be a web service hosted in Azure that will allow PLink to pass a proposed application's abstract and claims language and return a proposed class, subclass, and art unit.

None of the proposed language entered by the practitioner will be stored. 

User Experience

Upon submission of the proposed application language, the PLink interface will present the user with the likeliest Class, Subclass and Art Unit, a summary of global and internal statistics for the Art Unit to include: overall allowance rate, allowance rate before final, allowance rate after final, allowance rate with appeal, allowance rate without appeal, and a list of examiners that have worked in the art unit.    

Data Sources: See attachment

In your proposal please tell us:

  1. Tell us more about the technique and methodologies you will use
  2. What technologies would you suggest? We are a .NET shop.
  3. How will you work with our in-house tech staff to ensure that we can maintain the algorithm moving forward?
  4. How would you design the solution so that we could build on it?  Our next project would be to recommend language changes to move a patent application into a ‘friendlier’ art unit.
Legal
Machine Learning
Predictive Modeling

$10,000 - $25,000

Starts Jun 13, 2018

16 Proposals Status: COMPLETED

Client: A******

Posted: May 04, 2018

Sr. Data Scientist + Software Engineering Lead

PwC is a network of firms committed to delivering quality in assurance, tax and advisory services.

We help resolve complex issues for our clients and identify opportunities. Learn more about us at www.pwc.com/us.

At PwC, we develop leaders at all levels. The distinctive leadership framework we call the PwC Professional (http://pwc.to/pwcpro) provides our people with a road map to grow their skills and build their careers. Our approach to ongoing development shapes employees into leaders, no matter the role or job title.

Are you ready to build a career in a rapidly changing world? Developing as a PwC Professional means that you will be ready

- to create and capture opportunities to advance your career and fulfill your potential. To learn more, visit us at www.pwc.com/careers.

It takes talented people to support the US firm of the largest professional services organization in the world. Not all of us work directly with external clients. Some of our best people choose to apply their talents inside PwC.

As part of Internal Firm Services, you're serving an organization on par with many of our external clients. Our Internal Firm Services team consists of first-rate marketers, human resource professionals, computer technologists, knowledge managers, accountants, financial planners, administrators and leaders. Internal Firm Services staff are the people who make it work for the people who make it work for our clients. 

Job Description

PwC established New Ventures to invest in new business models that leverage our knowledge and build solutions for the growing digital market.

New Ventures identifies, develops, and commercializes technology-enabled solutions that deliver PwC value, knowledge, and experience to our clients. Each new solution focuses on data-driven platforms or other IP-based solutions that leverage emerging technologies and new business models. Through the process of building new solutions, we foster a culture of innovation within the Firm, extend brand relevance in the market, and generate new revenue.

The New Ventures Engineering Team is responsible for establishing the technical vision with the Solution Architect team and managing developers (both onshore/offshore) to turn the vision into reality with an on-time and quality implementation.

The Tech Lead team supports concept development and product planning/estimation, and works with the Product Management team to drive product development as a critical member of a product leadership team. The Tech Lead team is expected to have full-stack development experience utilizing agile development techniques/methodologies. In addition, the Tech Lead team is responsible for working with multiple disciplines within the software development lifecycle (UI/UX, Product Management, QA Testing & DevOps).

For data intensive applications, a data science experience lead is desired to understand data ingestion and processing needs based on product requirements and goals.

Position/Program Requirements

Minimum Year(s) of Experience: 5 in software development, with at least 1 year leading developers in the delivery of software products.

Minimum Degree Required: Bachelor's degree in Engineering, Computer Science or related field.

Knowledge Preferred: 

Demonstrates extensive knowledge and/or a proven record of success in data science and modern software engineering approaches, technologies, and tools, including the following areas:

- Big Data or Analytics or AI tools; 

- Cloud-ready architectures utilizing infrastructure and platform cloud services for AWS, GCP, or Azure;

- Event-driven and microservices architectures;

- DevOps including virtualization, automation, continuous integration;

- Mobile/Web architecture stacks;

- Polyglot Persistence including RDBMS/NoSQL data stores and appropriate use cases;

- Rapid-prototyping workflows and development tools; 

- Languages including HTML/CSS, Javascript/NodeJS;

- Frameworks/Libraries such as Angular, React, D3;

- Database including noSQL (mongo, neo4j, firebase), relational (mySQL, postgres);

- Configuration Management such as Chef, Puppet, Ansible, Terraform;

- Messaging such as Kafka, RabbitMQ, Redis, GraphQL; and,

- Containers including Docker, Kubernetes.

Skills Preferred: 

Demonstrates extensive abilities and/or a proven record of success in technical lead roles involving the following areas:

- Working in a venture backed software startup developing greenfield products strongly recommended

- Building and optimizing ‘big data’ data pipelines, architectures and data sets, using

    - Big data tools: Hadoop, Spark, Kafka, etc.

    - Cloud PaaS big data services: AWS, GCP, or Azure

- Applying analytic skills related to working with unstructured datasets

- Communicating, verbally and in writing, with both business and technical stakeholders to achieve product engineering objectives;

- Leading across all aspects of a technology solution such as integration, data, services, front-end, back-end, network, deployment, scaling, security, performance and development;

- Managing rapid-prototyping efforts with new and emerging technologies leveraging agile development techniques;

- Designing successful technical/integration architectures for large-scale platforms with a mix of 3rd party vendor, open-source, custom software, including the documentation of technical assumptions and decisions;

- Contributing to and managing incubators/innovation lab environments, and working with small teams across a variety of new and emerging technologies;

- Working in an environment that leverages project management skills like planning and tracking, issue and risk management, multitasking, team organization, and activity prioritization; and,

- Developing front-end, back-end, and/or systems administration applications with strong proficiency in at least one scripting language (Javascript, Python, etc.).

A Sr. Data Scientist and software engineering lead will

- Provide and communicate a unified technical vision for software products and break down that vision into tangible tasks for developers

- Lead by demonstration of technical expertise (i.e. hands-on) across the full technology stack (front-end, back-end, data modeling, 3rd-party integration)

- Manage performance of offshore and onshore developers through effective task breakdown, management, prioritization, and alignment of work to resource capabilities

- Scale team productivity by decomposing user stories and features into individual units of work

- Directly contribute written code and provide code reviews to ensure adherence to the solution design

- Participate in the cost estimation process by recommending the skills and numbers of developers required, and by performing effort estimation given product requirements

- Coordinate with customer, product team disciplines (e.g. UX/UI, DevOps, QA) and other product-related teams to build, test, and deploy software products 

- Identify technical risks and proactively address issues that may have an impact on service levels or schedules

- Understand and apply agile software development techniques/methodologies to effect continuous quality improvement across people and processes

- Maintain responsibility for the quality and viability of software engineering deliverables by providing recommendations on technical solution including design, build/buy decisions, open-source tooling, etc.

- Collaborate with New Venture Product Engineering, and other Tech Leads to define the design, development, and support toolsets and processes to improve the overall efficacy of product teams

In your proposal please tell us why you might be a good fit and attach your resume.

Only for US-based experts. Remote + travel, and open to multiple locations: CA-San Francisco, CA-San Jose, IL-Chicago, NY-New York

Travel Requirements: 21-40%; 3-6 month contract.

Healthcare
Hi-Tech
Professional Services

$100/hr - $200/hr

Starts May 01, 2018

12 Proposals Status: CLOSED

Client: P***

Posted: Apr 26, 2018

Proof of Concept: Inventory Management Modeling

About Us: We are a metal distribution company with an extensive inventory within our metal market niche. We satisfy customer needs by stocking both commodity and hard-to-find grades, and by providing extensive value-added processing and custom cutting based on customer specifications.

The Problem: We have an extensive inventory offering, including 35 different grades of steel in 35 different thicknesses, and are looking to build out our existing inventory management system with demand forecasting, so that we avoid overstocking while still ensuring product availability to meet customer demand. Demand and pricing are both volatile, and the mills we source from can have lead times anywhere from 4 weeks to 6+ months depending on grade, mill location, and current demand. Inventories are currently replenished based on historical 3-month, 6-month, and 12-month usage data for a particular grade and thickness. In addition, certain grades and thicknesses carry minimum order sizes that can far exceed the weight we would ideally order based on historical usage. All of these factors can result in overstocking on our end to ensure we can meet customer demand, or understocking in the face of unforeseen surges in demand.
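As a rough illustration of the replenishment logic described above (the function, its parameters, and the blending rule are hypothetical sketches, not part of our actual system), a reorder quantity based on trailing usage averages, mill lead time, and a minimum order size might look like:

```python
def reorder_qty(monthly_usage, lead_time_months, on_hand, min_order=0.0):
    """Naive reorder sketch: blend trailing usage averages, project them
    over the mill lead time, and top stock up to that level, respecting
    the mill's minimum order size for this grade/thickness."""
    if not monthly_usage:
        return 0.0
    # Blend 3-, 6-, and 12-month trailing averages, as described above;
    # fall back to the full history if fewer months are available.
    windows = [w for w in (3, 6, 12) if len(monthly_usage) >= w] or [len(monthly_usage)]
    avg = sum(sum(monthly_usage[-w:]) / w for w in windows) / len(windows)
    target = avg * lead_time_months          # stock needed to cover lead time
    shortfall = max(0.0, target - on_hand)
    if shortfall == 0.0:
        return 0.0
    return max(shortfall, min_order)         # a minimum order size can force overstock
```

The last line is exactly the overstocking pressure mentioned above: when the mill's minimum order exceeds the projected shortfall, the larger amount must be ordered anyway.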

Deliverables: This first project consists of 2 goals. In your proposals please break out the cost and effort you expect each step to take.

1. An expert who understands inventory demand forecasting will review our systems and available data to help shape the roadmap and determine whether we have everything needed to build the model.

2. Proof of concept: Build an inventory and forecasting algorithm for a specific grade of material, taking current mill lead times into account, to ensure we have material in stock. The algorithm must not be static: it should adapt over time as market conditions and product demand change. In addition, we need assistance deploying the model within our infrastructure.

While these are the two goals for this initial proof of concept project, if the model is successful we envision the project expanding to other grades within our inventory.

Data sources: We have been collecting data for over a decade and can provide it in CSV format. The volume is modest, roughly 1,000+ lines of data per source, with the exception of the quoting activity data. We have our own custom inventory management software that runs on SQL Server. We can provide data such as:

• Historical inventory level

• Historical demand by month

• Quoting activity, quote conversion, and sales activity data

Metal prices are publicly available, and it would help to look at the supply/demand correlation.
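As a toy example of the price/demand correlation check mentioned above (the series below are made up, and a real analysis would use proper time-series tooling), a plain Pearson correlation would be a natural first look:

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length series, e.g. a
    published monthly steel price index vs. our monthly demand."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical illustration: three months of price index vs. tons demanded.
price_index = [100.0, 104.0, 111.0]
demand_tons = [42.0, 40.0, 36.0]
```

A value near -1 here would suggest demand falls as price rises, though lead-time and seasonality effects would need more careful treatment.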

Please propose how you would tackle this project, along with an estimate. We would also like to know about your past experience performing similar work. Lastly, we will require a non-disclosure agreement to be signed as part of the project. Applicants from the USA, UK, and Canada are preferred.

 

Manufacturing
Inventory Management
Supply Chain and Logistics

$5,000 - $15,000

Starts May 16, 2018

19 Proposals Status: COMPLETED

Client: S********* ***** *******

Posted: Apr 25, 2018

Algorithm to Assign Work to Truck Drivers based on Bidding Data

Project Overview:

We are looking for assistance developing an algorithm that assigns work to drivers in a competitive bid environment. Drivers will be able to indicate their interest in specific projects each day. A driver may bid on multiple opportunities per day, but will only be able to work one project at a time (if the project is expected to last an entire work day); a driver may be able to work several separate "smaller" projects in a single day.

Company Profile:

We are a local DFW, TX sand-and-gravel trucking broker primarily serving the construction industry. We do not own trucks or employ drivers; we work only with owner/operators.

Data Source:

A MySQL database hosted on AWS contains all the bid information and relevant historical data.

Deliverable:

An algorithm that integrates with our existing platform and assigns work to drivers considering the following scenarios:

  • No-shows - % chance the driver “wins” a bid and does not show up to complete the order
    • There will be a certain percentage of drivers that will win a bid, but still won’t show up
    • Specific drivers will be worse than others
    • We can “over-allocate” the # of drivers we assign to a specific order on each day based on historical “no show” averages and/or on whether the assigned drivers are known poor performers
      • Over time we’ll want to eliminate these drivers from the system
    • The larger the # of requested drivers (for an order by day), the better we may be able to account for no-shows
        • Ex. if only one driver is requested, it’s a little more difficult to assign another driver just in case the original driver does not show up
        • Ex.  if 20 drivers are requested, it’s easier to assign one or two extra drivers
    • Timeliness is another thing to take into consideration.
      • If a job starts at 7 AM, we want someone to be there at 7 AM – not 10 AM
  • Maximizing revenue vs. Maximizing performance
    • The lowest bid might not always be the most optimal outcome for completing the order
    • It may be more beneficial to pay a slightly higher overall price to guarantee the driver will complete the job, considering the competitive marketplace
        • This may prove seasonal, on both a monthly (summer = busy) and even daily (rain = slow) cycle, and should be flexible enough to handle new MSAs (as we expand to new cities):
        • Slow times = less competition for drivers from other trucking brokers
        • Busy times = more competition for drivers from other trucking brokers
  • A/B testing – Max Bid 
    • For same day “bids” we will show the driver the maximum amount they can bid to be assigned the order.
      • This “max bid” will equate to our current cost allocated for the trucker and is directly related to another project we are working on: a customer-specific Pricing Algorithm Based On Historical Pricing Data.
    • For orders tomorrow (and beyond) the maximum amount may or may not be shown to the driver
      • We’ll be able to control this on a driver by driver basis to test how bidding is affected if the driver sees our max bid or not
  • Est. Loads per Day – we will give the drivers an indication of the duration of orders, both in terms of A. expected time spent working per day and B. expected number of deliveries per day so that he may better frame his bid:
    • Time – do we expect the driver's work day to be a quarter, half, or full day? Work lasting an entire workday may be more attractive than work lasting only a few hours.
    • Pay – given all variables (load time, drive time, dump time, traffic, time of day, etc.), how many loads can we expect that driver to complete during a work day
      • Load time – will vary by customer's equipment (if a haul off) or pit (if a haul on)
      • Drive time – highway vs. in city driving, toll roads used or avoided
      • Dump time – will vary by customer's equipment (if a haul on) or dump site (if a haul off)
      • Time of day – the first load of the day / right after or during lunch may experience longer wait times compared to the average as truck bottlenecks occur
        • Ex.  Lots of trucks will line up to get loaded for the first load of the day each morning.   If driver A is 20th in line, he’ll wait longer than if he had arrived sooner
      • Traffic – construction on the roads?  Rush hour?
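The over-allocation idea in the no-show scenario above can be sketched roughly as follows (the driver names, show rates, and the "cover expected shows" rule are illustrative assumptions, not part of our platform):

```python
def drivers_to_assign(requested, candidates):
    """Pick winning bidders until the expected number who actually show
    up covers the requested count. `candidates` is a list of
    (driver, show_probability) pairs, already sorted by bid
    attractiveness; lower historical show rates naturally lead to
    assigning extra drivers."""
    assigned, expected_shows = [], 0.0
    for driver, p_show in candidates:
        if expected_shows >= requested:
            break
        assigned.append(driver)
        expected_shows += p_show
    return assigned

# Hypothetical bidders with historical show-up probabilities.
candidates = [("d1", 0.9), ("d2", 0.6), ("d3", 0.8), ("d4", 0.95)]
```

This also reflects the observation above that hedging is easier at scale: for a 20-driver order, one or two extra assignments absorb typical no-show rates, while a 1-driver order forces a second assignment that may turn out redundant.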

In your proposal, please share more details on your proposed solution and milestones, as well as your previous experience with optimization algorithms for supply chain or other industries.

 

Transportation and Warehousing
Sales
Inventory Management

$5,000 - $15,000

Starts Apr 24, 2018

14 Proposals Status: CLOSED

Client: A******** ********

Posted: Apr 24, 2018
