
Senior Data Architect and Project Manager for Airport-Related Use-Cases

We are looking for a PM/Data Architect to manage a project in SF.  The project manager should have experience with data integration, microservices-architected solutions, and big data.

The PM should have delivered at least 3 data integration and microservice-architected solutions in the last 5 years.

Phase 1: 2-3 hrs/week of remote consultation for the RFP until mid-May; after we win the project, Phase 2 would be a full-time contract opportunity, part onsite and part remote. For this RFP, the Project Manager should help with:

1. Architecture Validation: Validation of the Solution’s architecture against the Airport’s solution requirements (‘Validated Solution’);

2. Solution Design and Configuration: Detailed design & configuration of the Validated Solution; 

3. Change Management and Stakeholder Engagement: Change management and stakeholder engagement; 

4. Use Case Implementation: Working with Airport staff, use the Solution to deliver Airport Use Cases;

5. Training and Knowledge Transfer: Resource dedicated to the transfer of skills to Airport staff;

6. Support and Maintenance: Purchase initial service and support contracts for the selected Solution’s technologies; 

7. Security Protocol and Business Continuity: Ensure the delivered Solution is secure and reliable.

8. Software Support and Maintenance: On-going support and maintenance services for the Solution. 

To prove and validate the delivery, completeness and correctness of the Solution, the Airport requires the implementation of several business Use Cases over the contract term. Through the successful delivery of Use Cases, the Airport will begin to transform its business operations to a future state where brokered services for data, events and messages are delivered to support its operations and business stakeholder requirements.  

Apache Hadoop
Big Data and Cloud
SAP HANA

$95/hr - $200/hr

Starts May 02, 2017

3 Proposals Status: COMPLETED

Client: S****** ******* ***

Posted: Apr 11, 2017

Build Prediction Visualization

Summary

We are building a continuous learning algorithm that will be able to predict execution times of Ansible builds (Playbooks) based on historical Ansible build data.  In a complementary project we are developing the machine learning model, API and continuous learning environment.  The winner of this challenge will be asked to develop visualizations showing the predicted time-to-build and progress to completion for each playbook, play and task.

Scope of Work

The selected consultant will be responsible for:

•Implementing the visualizations from the wireframes using HTML, CSS and the d3.js JavaScript library (each visualization should be able to function alone, apart from the others)

•What is the status of this playbook (doughnut chart): This graph shows the total time prediction for the playbook and % completed.

•What is the status of plays within this playbook (pie chart): This graph shows the plays within a playbook and the progress/time remaining for each.

•What plays/roles/tasks is the playbook working on right now (network diagram): This graph shows the structure of all plays within the playbook (plays contain a combination of tasks or other plays).  Each node shows progress for that particular play/task, and you can mouse over a node to reveal the task/play name and % complete.

•Develop a mock API for the visualizations to call, which will provide mock JSON data to enable the visualizations to function (a sketch follows this list).

•Develop a view to show only the playbook (doughnut) visualization initially.  When clicking on this visualization it will expand to show all three visualizations on the same page.  Collapsing the view will again show only the doughnut chart by itself. 

•Follow the wireframes and material design standards for all work
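For the mock-API item above, here is a minimal sketch of one possible shape, assuming a Python/Flask backend (the posting does not prescribe a server technology); the route name, playbook/play/task names, and JSON fields are all hypothetical placeholders to be aligned with the real prediction API.

```python
# A minimal mock-API sketch; all names and the JSON shape are assumptions.
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical mock data: one playbook containing plays, each with tasks.
MOCK_STATUS = {
    "playbook": "deploy_web_stack",          # hypothetical playbook name
    "predicted_total_seconds": 420,
    "percent_complete": 35,
    "plays": [
        {"name": "provision_hosts", "percent_complete": 100, "seconds_remaining": 0,
         "tasks": [{"name": "create_vms", "percent_complete": 100}]},
        {"name": "configure_app", "percent_complete": 20, "seconds_remaining": 180,
         "tasks": [{"name": "install_packages", "percent_complete": 40},
                   {"name": "deploy_code", "percent_complete": 0}]},
    ],
}


@app.route("/api/playbook/status")
def playbook_status():
    # The doughnut, pie, and network visualizations would all poll this route.
    return jsonify(MOCK_STATUS)


if __name__ == "__main__":
    app.run(port=5000, debug=True)
```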

The attached presentation provides additional details about the broader project scope (including the data generation and machine learning projects).  Details completed by previous projects or otherwise out of scope for this Experfy project posting have been greyed out for scope clarity; however, the details may still be relevant to your implementation.

Proposal

As part of your proposal please answer the following questions:

•Please provide references to other design/visualization work you have done in the past, and specify whether these visualizations were built using d3.js or another framework.

•What data/methods will need to be provided by the API in order to make the visualizations fully functional?

•Please list all technology which would be part of your implementation/solution.

•How will you test/show that your visualizations are functioning as intended?

•Why should we choose you to develop the visualizations over someone else?

 

Business Intelligence and Visualization

$4,000 - $6,000

Starts Apr 17, 2017

6 Proposals Status: HIRING

Net 60

Client: C*******

Posted: Apr 10, 2017

Automate Data Collection Using Marketo API

We are looking for an expert with experience using APIs to get data and create an extract in CSV. Ideally the expert knows Marketo and its API (http://developers.marketo.com/rest-api/).

The file is structured like this:

  1. Each header column name is the name of a Marketo SmartList
  2. For each SmartList we need to extract one data point: the size of the list

The use-case is to automate the collection of this data so it is not manual.  Basically SELECT list_size FROM marketo_smartlist_name.  This needs to be done 50 times.

So I want a script that can use the Marketo API to get this data as pseudo-queried above and dump the output to CSV when it is run.  Ideally the output will be 50 column headers (one for each list) and one number under each heading (again, one number for each list).  Again, the column name is the SmartList name and the metric is the count of users on the list. Like this: https://screencast.com/t/5obb9gLAL

Make the same call using the Marketo API for the same data, but change the name of the list. Iterate this 50 times for 50 different lists, and then dump a CSV with the names of the lists as the column headers and the count of items in each list as the value below.  Does this make sense to you?  It really is that simple: it's one query for one datum, reused 50 times, changing the name of the list to get the data.
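As an illustration only, here is a minimal sketch of how such a script might be structured in Python, assuming valid Marketo REST credentials. The instance URL, credentials, and list names are placeholders, and the smart-list size lookup is deliberately left unimplemented because the exact endpoint depends on your Marketo instance and API version; only the token endpoint shown is Marketo's documented OAuth route.

```python
# Sketch: fetch one count per smart list and dump a single-row CSV.
import csv
import requests

BASE_URL = "https://XXX-XXX-XXX.mktorest.com"   # placeholder instance URL
CLIENT_ID = "your-client-id"                    # placeholder credentials
CLIENT_SECRET = "your-client-secret"

SMART_LISTS = ["List A", "List B"]              # the 50 smart list names go here


def get_token():
    # Marketo's documented OAuth token endpoint.
    resp = requests.get(
        f"{BASE_URL}/identity/oauth/token",
        params={"grant_type": "client_credentials",
                "client_id": CLIENT_ID,
                "client_secret": CLIENT_SECRET},
    )
    resp.raise_for_status()
    return resp.json()["access_token"]


def get_list_size(token, list_name):
    # Placeholder: replace with the actual Marketo call that returns the
    # number of people on the named smart list for your instance/API version.
    raise NotImplementedError(f"look up size of smart list {list_name!r}")


def main():
    token = get_token()
    sizes = {name: get_list_size(token, name) for name in SMART_LISTS}
    with open("smartlist_sizes.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(sizes.keys())    # one column per list name
        writer.writerow(sizes.values())  # one count per list


if __name__ == "__main__":
    main()
```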

Please provide your experience with Marketo, REST APIs and this type of project. 

Hi-Tech
CRM, ERP, Accounting, Operations, Marketing Automation
Scripts & Utilities

$4,000

Starts Apr 13, 2017

4 Proposals Status: COMPLETED

Net 7

Client: V********

Posted: Apr 06, 2017

Ansible Build Time Prediction

Summary

We would like to build a continuous learning algorithm that will be able to predict execution times of Ansible builds (Playbooks) based on historical Ansible build data.  In a complementary project we are generating Ansible build data from which the algorithm can learn.  The winner of this project will be responsible for the creation of the continuous learning algorithm and the API for making Ansible build predictions.

As part of your proposal please answer the following questions:

•What kind of machine learning algorithms would you use to solve this problem?

•What trade-offs are you making when choosing one algorithm over another?

•Which technology stack would you use for this challenge?

•What are the underlying assumptions about the training data set?

•How would you approach tuning the parameters for the chosen algorithm?

•How do you plan to evaluate the performance of the trained machine learning Model?

•How do you plan to develop the API?

Scope of Work

The selected consultant will be responsible for:

•Selecting the machine learning algorithm which optimally predicts execution times of Ansible builds

•Training the model on the historical data of the Ansible builds

•Building an API for evaluating the performance of the model and tuning it to improve further

•Executing the API on the actual test data

•Creating a feedback loop which provides a continuous machine learning environment

The primary output of this project is an API for predicting Ansible build times, with the implementation consisting of a continuous machine learning algorithm that ingests a data source of Ansible build factors.
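As an illustration of the kind of model the proposal questions above ask you to justify, here is a minimal sketch assuming a historical build table with hypothetical feature columns and a measured duration; a gradient-boosted regressor is only one candidate algorithm, not the required approach.

```python
# Sketch: regress build duration on hypothetical build factors.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical schema for the generated build data.
df = pd.read_csv("ansible_build_history.csv")   # placeholder path
features = ["playbook", "num_tasks", "num_hosts", "parallel_forks"]
target = "duration_seconds"

pre = ColumnTransformer(
    [("playbook", OneHotEncoder(handle_unknown="ignore"), ["playbook"])],
    remainder="passthrough",
)
model = Pipeline([("pre", pre),
                  ("reg", GradientBoostingRegressor(random_state=0))])

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[target], test_size=0.2, random_state=0)
model.fit(X_train, y_train)
print("MAE (seconds):", mean_absolute_error(y_test, model.predict(X_test)))

# The prediction API would wrap model.predict() behind an HTTP endpoint and
# periodically refit on newly collected builds to close the feedback loop.
```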

The attached presentation provides additional details around the environment and machine learning and gives additional context to the broader project scope (including the data generation project).  Details completed by previous projects or otherwise out of scope for this Experfy project posting have been greyed out for scope clarity; however, the details may still be relevant to your implementation.

Challenge Format

We plan to hire more than one expert to implement their model using a common initial data set.  The different approaches will be evaluated after initial implementation, and one of the two experts will be asked to continue with the project, refining their model and building the API.  The period for determining which approach will be used (and who will complete the final project deliverable) will be variable but is expected to last 1-2 weeks.  For your proposal, please specify a fixed cost for delivering the entire project.  Prior to hiring and starting the project, we will negotiate a partial payment (whether hourly or fixed) guaranteed to both experts upon completion of the initial model, regardless of which expert is selected to complete the project for full payment.

Application Deployment
System Provisioning & Configuration
Task Execution

$20,000 - $30,000

Starts Apr 07, 2017

10 Proposals Status: IN PROGRESS

Net 60

Client: C*******

Posted: Mar 31, 2017

Dataset Evaluation for Statistical Learning and Data Mining Methods

Data evaluation

The dataset (Pilot Project_SA170330) represents an extract of a greater data set which contains project data sets similar to the one presented here.


The task is to elaborate and evaluate which statistical learning and data mining methods would be appropriate to give insight into the hidden knowledge of the entire data set, provided that the other datasets are similar with respect to structure and quantity. Of special interest is whether the data would allow association rule-based learning. This includes the text data; hence, the data needs to be preprocessed with natural language tools in order to develop metrics for the text data that allow more sophisticated mining methods.
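As one illustration of the methods under consideration, here is a minimal sketch assuming the extract loads as a CSV with a hypothetical free-text column: TF-IDF turns the text into numeric features, and DBSCAN is one of the clustering candidates mentioned in the questions below. The file name, column name, and parameter values are assumptions.

```python
# Sketch: text preprocessing plus density-based clustering on the extract.
import pandas as pd
from sklearn.cluster import DBSCAN
from sklearn.feature_extraction.text import TfidfVectorizer

df = pd.read_csv("Pilot_Project_SA170330.csv")   # placeholder file name
texts = df["description"].fillna("")             # hypothetical text column

X = TfidfVectorizer(max_features=500, stop_words="english").fit_transform(texts)
labels = DBSCAN(eps=0.8, min_samples=5, metric="cosine").fit_predict(X)

df["cluster"] = labels
print(df["cluster"].value_counts())   # label -1 marks noise points
```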

Questions to be answered:

  • Can data mining methods provide valuable insight into the data, and which methods would that be (e.g. clustering with DBSCAN with the following parameters...)?
  • What can be expected with respect to the results?
  • What would be the costs (money and time)?
  • If the data does not meet the requirements of data mining, what should be changed (structural problems, amount of data, etc.)?
  • What would be the cost for this evaluation (time, money)?
  • What additional data, when integrated with the dataset, would allow analyses and conclusions not possible with the additional data alone?

Consumer Goods and Retail
Brand Equity
Brand Perceptual Mapping

$4,000 - $7,000

Starts Apr 19, 2017

13 Proposals Status: IN PROGRESS

Client: A******** ****

Posted: Mar 27, 2017

Data Visualization Code Review and UI Testing

We have two web apps that require code review and UI testing. The UX is basically a data visualization that serves as the UI; let us know what this means to you.

Our Criteria for the Ideal Expert

  1. Experience testing the user interface (PHP and D3.js) of a software product where the user clicks on elements of the user experience and triggers changes to the data visualization that is the UI.
  2. An emphasis on detail when exploring features for selecting things like buttons and areas of the screen
  3. Must have experience testing across browsers, including iOS and Android, using native apps and mobile sites
  4. Because the dataviz changes based on the data, you will need to provide a test plan and execute it across different data sets that you will load, as well as across all browsers and native apps, including mobile sites.
  5. Ability to log details in a bug tracking system or as a video recording so that we can see the problem
  6. Agile

Please share your experience building test plans and testing UIs across browsers and platforms.  How proficient are you in PHP and D3.js for performing code review?  Would you use any specific tools?

You must answer all questions and provide your approach to be considered.

Hi-Tech
Data Mashups
Statistical Graphics

$75/hr - $125/hr

6 Proposals Status: CLOSED

Net 7

Client: V********

Posted: Mar 26, 2017

Machine Learning and AI for E-recruitment

We are an Applicant Tracking System (ATS) solution for HR professionals and recruiters. 

Problem:  Identify the best talent based on historical data, social media and information submitted.

Expert and Skills:

Use existing unstructured and structured data from a variety of sources (documents, social profiles, resumes, skill setups, database fields), plus historical hiring information from the client, to build a statistical model and algorithm tailored to each client's specific hiring process and industry. Then use the model to perform machine learning and further refine the model and algorithm moving forward, so as to identify the top talent among new applicants.

The expert will have knowledge of various techniques and platforms (Hadoop, Spark, MapReduce, distributed computing, and predictive analytics with big data) to build a model/algorithm that can be trained, perform deep learning, and update based on the data set provided.
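As an illustration only, and not the deliverable, a model of this kind might combine resume text with structured fields and learn from historical hire outcomes, as in the following sketch; the file name, column names, and choice of classifier are all assumptions.

```python
# Sketch: combine text and structured applicant features to score candidates.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.read_csv("applicants.csv")    # placeholder: historical applicant data
X, y = df.drop(columns=["hired"]), df["hired"]   # hypothetical hire/no-hire label

pre = ColumnTransformer([
    ("resume", TfidfVectorizer(max_features=2000), "resume_text"),     # hypothetical column
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["industry", "role"]),
], remainder="drop")

model = Pipeline([("pre", pre), ("clf", LogisticRegression(max_iter=1000))])
model.fit(X, y)

# In production this would score new applicants; here it scores the training
# frame for brevity, ranking candidates by predicted probability of a hire.
df["score"] = model.predict_proba(X)[:, 1]
```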

Deliverables: The deliverable is an algorithm/model/platform that can be used to work with the existing HR system as a stand-alone product.

Human Resources
Artificial Intelligence
Machine Learning

$20,000 - $30,000

Starts Jan 02, 2018

21 Proposals Status: COMPLETED

Client: P******** *******

Posted: Mar 23, 2017

Healthcare Data Website with Visualization

We are an international database initiative that was founded in 2009 as a joint collaboration of several dialysis providers. The initiative forms a consortium in which a variety of academic and non-academic institutions from around the world work together on research projects to analyze primary clinical databases of dialysis patients. We are looking to create a website that will have the following pages:

Home Page – Our Story – First Page

  1. Introduction
  2. Vision statement
  3. Mission statement
  4. Video introduction (about 5 – 10 minutes)

What We Have – Our Product – Second Page

1. Data

  • General description of how our data is captured and what we have.
  • Interactive map showing our product in different geolocations around the world. Enclosed is the map of the regions for which we have data (in red). When the cursor points to a certain region, for example the USA, it would show some brief information: N of patients 5k, average age 65, percentage of males 54%.

2. Publications (Projects we had done)

  • Our published abstracts (including poster & oral presentations), papers published in journals, and talks given at national and international events.

Collaboration – Third Page

1. Current collaborating institutions

  • Description and link to website.
  • Member’s information.

2. Collaboration inquiry

  • Contact
  • Submit questions and inquiries.

Web Design
Web Development
Web Programming

$3,000 - $4,000

Starts May 05, 2017

10 Proposals Status: IN PROGRESS

Client: R***** ******** *********

Posted: Mar 22, 2017

Business Intelligence Dashboard for a Large Financial Advisory Practice

We are one of the largest groups of financial advisors in Australia, seeking to develop a dashboard that would provide greater insight into our business, which consists of a network of financial advisors (who are our clients). We have a number of sources of data that need to be combined to provide a coherent view and the ability to set alerts when anomalies are detected.

PROJECT OBJECTIVES

By cross analysing the data to create alerts, we are aiming to:

  • Minimise fraud.  Data feeds from XPlan draw directly from banks and fund managers.
  • Minimise overcharging by using data from funds under management & revenues
  • Minimise churning (re-writing insurance policies to generate continual upfront fees which are higher than ongoing fees) by analysing insurance policies and ‘new’ client data
  • Identify which advisers are generating strong investment portfolio returns and how they are doing it, by looking at portfolio returns and then individual funds within portfolios
  • Accurate funds under management reporting at a dealer level

We need a real-time view of all of the above data, with the ability to drill down to an adviser level and then by each individual field.  The alert and reporting system would ideally be customisable, as our requirements in terms of what we report on and how we cross-analyse data would have a set of initial parameters but would continually grow and change.

WHAT WE WANT IN A DASHBOARD

Funds under management (source is XPlan)

- Total $ value, drill down to Adviser, drill down to fund manager

Compliance (source is Accordance Systems)

- # advisers on ‘pass’

- # advisers on ‘watch list’

- # advisers on ‘fail’

Revenue (source is Revex)

- Revenue year to date, drill down to Adviser, drill down to source

Insurance policies (source is XPlan)

- Number of policies and type, drill down to Adviser, drill down to insurer

ALERTS AND REPORTS TO BE GENERATED

Alert : Unsatisfactory Compliance reports

Alert : Fees exceeding 1% of total funds under management (see the sketch after this list)

Alert : Upfront Risk revenues that don’t match up with number of new clients

Alert : Declining FUM, not related to market movements

Alert : Outperformance of portfolios (in comparison to a benchmark)

Alert : Correlation of mid range Compliance reports + poor investment returns

Reports : Dealer level FUM report

Reports : Monthly ‘Adviser’ report that summarises all fields as outlined above
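As an illustration, the fee-ratio alert above could be prototyped along these lines, assuming per-adviser extracts from Revex (revenue) and XPlan (funds under management) have already been pulled via their APIs; the file and column names are hypothetical placeholders, not the systems' actual field names.

```python
# Sketch: flag advisers whose fees exceed 1% of funds under management.
import pandas as pd

revenue = pd.read_csv("revex_revenue.csv")   # hypothetical columns: adviser, total_revenue
fum = pd.read_csv("xplan_fum.csv")           # hypothetical columns: adviser, funds_under_management

merged = revenue.merge(fum, on="adviser")
merged["fee_ratio"] = merged["total_revenue"] / merged["funds_under_management"]

# Alert rule: fees exceeding 1% of total funds under management.
alerts = merged[merged["fee_ratio"] > 0.01]
print(alerts[["adviser", "fee_ratio"]])
```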

DATA AVAILABLE FOR ANALYSIS AND SOURCES OF THIS DATA

CRM – Data source is ‘Accordance Systems’ plus 3 additional fields  

  • Adviser name (manual input)
  • Practice name (manual input)
  • Year they joined the industry (manual input)
  • Qualifications (manual input)
  • Inbound / Outbound communication with the dealer (draws from Outlook) *additional field
  • Dealer assisted projects (manual) *additional field
  • Audit reports (auto generated from Compliance system)
  • Monthly ‘dealer’ report (auto generated from LysensE) *additional field

REVENUE – Data source for everything is ‘Revex.’  

  • Total revenue
  • Total retained revenue
  • Total investment upfront
  • Total investment ongoing
  • Total Risk upfront
  • Total Risk ongoing
  • Total revenue – other
  • Comparison of the above, month on month
  • Revenue as a % of funds under management

INVESTMENTS – Data source for everything is ‘XPlan’ 

  • Total funds under management
  • Total investment returns across asset classes
  • Total investment returns across all portfolios
  • Total investment returns ‘vs’ new funds under management
  • Comparisons month on month
  • Number of insurance policies across Life, TPD, Trauma, Income protection

COMPLIANCE – Data source for everything is ‘Accordance Systems’ 

  • Traffic light system for compliance status

CLIENTS – Data source for everything is ‘XPlan’ 

  • Total clients
  • Total Risk only clients
  • Total investment only clients
  • Total clients – other
  • Total new clients

ACCESS TO DATA SOURCES

You will have full access to all data sources via APIs.  The only system that does not have an API is Accordance Systems.  They are willing to build an API based on your specific requirements because we have a good relationship with the vendor.  The time you will need to spend providing requirements to Accordance Systems should be factored into your bid.

TECHNOLOGY STACK

We are looking for a cloud-based solution and are open to all technologies.  We would like to build this initial system using existing dashboarding tools, and we are willing to pay reasonable licence fees for tools that may speed up the development.  We strongly prefer technologies that can scale, since our eventual goal is to productize this solution and sell it to others.  There will be additional phases to this project to add more features and functionality.

PROPOSAL

Please provide specific milestones and payment amounts for each, along with an approximate timeline.  Examples of other dashboarding work you have performed would be helpful.  If you intend to license a cloud-based solution, please provide the monthly cost.

Financial Services
Anomaly Detection
Risk and Compliance

$15,000 - $25,000

Starts Jun 10, 2017

15 Proposals Status: IN PROGRESS

Client: P****** ********* *****

Posted: Mar 15, 2017

SaaS Tool to AWS Benchmarking


We create software that can be activated from a vendor SaaS platform to start a process on AWS.  The AWS process returns results to the SaaS, which are then used to populate an interface.  


We seek a benchmark showing a) that the process supports over N rows, b) the total time from clicking "start" to the software populating with data, and c) how long each component (the SaaS and AWS) takes to process/run, as sketched after the list below.


The outcome is simple:

1) The ability to claim that our software processes N rows, where N is a number we will tell you after awarding the deal.
2) Identify the total time to process across the SaaS & AWS - from start to results populating the software
2a) The time it takes to process on the SaaS
2b) The time it takes to process only on AWS
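As an illustration, the measurements above could be captured with a timing harness along these lines; the trigger and polling hooks are placeholders to be replaced with the vendor's actual SaaS and AWS API calls, and the split between SaaS and AWS time is only a rough proxy until those hooks are defined.

```python
# Sketch: time the end-to-end run and the per-component portions.
import time


def trigger_saas_run(n_rows):
    raise NotImplementedError   # placeholder: call the SaaS "start" action


def wait_for_aws_done():
    raise NotImplementedError   # placeholder: poll AWS for process completion


def wait_for_saas_populated():
    raise NotImplementedError   # placeholder: poll the SaaS for populated results


def benchmark(n_rows):
    t0 = time.perf_counter()
    trigger_saas_run(n_rows)                      # click "start"
    wait_for_aws_done()
    t_aws = time.perf_counter() - t0              # elapsed when AWS reports done
    wait_for_saas_populated()
    t_total = time.perf_counter() - t0            # elapsed when results populate the software
    return {
        "rows": n_rows,
        "total_seconds": t_total,                 # outcome 2
        "aws_seconds": t_aws,                     # rough proxy for outcome 2b
        "saas_seconds": t_total - t_aws,          # rough proxy for outcome 2a
    }
```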

Data Management
Amazon Web Services
Big Data and Cloud

$100/hr

7 Proposals Status: IN PROGRESS

Net 7

Client: V********

Posted: Mar 08, 2017
