
381 Projects that match your criteria

Automated Garnishment Database

We are a law office requiring a web-based program that will allow the user to enter as much (or as little) information as is known about an individual (debtor) in order to initiate API searches of specific internet sites such as Facebook, LinkedIn, Twitter, Google, etc. The query should return matching data and store it in a host-encrypted database for subsequent reporting and data-mining analysis. The web application would be fully integrated with our CRM (Simplicity Collect).

Database Platform Functionalities Should Include:

  • Database capable of handling, storing, & mining large data sets
  • API integration into Simplicity Collect CRM
  • Open API (allowing for future developments & integrations)
  • Secure & encrypted database (AES-256, SSL, 2-factor authentication)
  • Single sign-on (like ForgeRock OpenAM or Mozilla Persona)
  • Every new query should also be run simultaneously against the existing (stored) database of results. Anomalies should be flagged, and all results should be assigned a confidence rating based on the information provided (see the sketch after this list)
  • Machine Learning algorithms should be utilized during the API search, as well as for mining the resulting data for useful information & patterns
  • Flat-file import/export to XLS, CSV, etc.
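
One way to read the cross-query requirement above is sketched below in Python: score each new result by the fraction of supplied query fields it matches, and flag fields that conflict with an already-stored record for what looks like the same debtor. All field names and the matching key are illustrative assumptions, not part of the specification.

```python
# A sketch, not the spec: confidence = fraction of supplied query fields a
# result matches; anomalies = fields that conflict with a stored record for
# the same debtor. Field names and the match key are illustrative assumptions.

def confidence(result: dict, query: dict) -> float:
    """Fraction of supplied (non-empty) query fields the result matches."""
    supplied = {k: v for k, v in query.items() if v}
    if not supplied:
        return 0.0
    hits = sum(1 for k, v in supplied.items()
               if str(result.get(k, "")).lower() == str(v).lower())
    return hits / len(supplied)

def flag_anomalies(result: dict, stored: list, key: str = "last_name") -> list:
    """Flag fields whose values conflict with an already-stored record,
    matched here on a single illustrative key."""
    flags = []
    for record in stored:
        if record.get(key) != result.get(key):
            continue
        for field, new in result.items():
            old = record.get(field)
            if old and new and old != new:
                flags.append(f"{field}: stored '{old}' vs new '{new}'")
    return flags

query = {"first_name": "Jane", "last_name": "Doe", "city": "Memphis"}
new_result = {"first_name": "Jane", "last_name": "Doe", "city": "Nashville"}
stored_db = [{"first_name": "Jane", "last_name": "Doe", "city": "Memphis"}]

print(round(confidence(new_result, query), 2))  # 0.67 - two of three fields match
print(flag_anomalies(new_result, stored_db))    # ["city: stored 'Memphis' vs new 'Nashville'"]
```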

Assumptions & Dependencies:

  • Assumes the application will be deployed on an SSL extended validation URL on an encrypted host machine with single sign-on functionality, and API integration with CRM.
  • Dependency: the ability to ensure ongoing reliability of API calls to sites (Facebook, LinkedIn, Google, etc.) depends on using the most recent API parameter list. Most of the social media sites will accept standard API call requests for data, but occasionally they modify their API call parameters without notice. Therefore, web-crawler functionality (with developer review) will be required as ongoing maintenance to ensure application integrity.

General Requirements:

  • Application will be designed as a web application accessible from any of the usual PC/Mac browsers (Mozilla Firefox, Chrome, Safari, Aviator, etc.)
  • After accessing the application via a standard browser interface, the user will be presented with a secure login page prompting for User ID & Password (requires 2-factor authentication functionality).
  • There should be an Admin login provided with the system that allows new users to be set up and maintained.
  • System should validate the entered user ID and password against the valid user IDs and passwords in the database and display an appropriate error message if invalid credentials are provided.
  • On successful login, present the user with the Home Page Main Menu. Initially this page will have only option 1 – Skip Trace
  • When user selects option 1 – Skip Trace, the Skip Trace Parameters Entry page will display:
  • Parameters:
      • First Name: (Required)
      • First Name Alias: (Optional)
      • Middle Name: (Optional)
      • Middle Name Alias: (Optional)
      • Last Name: (Required)
      • Last Name Alias: (Optional)
      • Home Address: (Optional)
      • City:
      • State:
      • Zip:
      • DOB: (Optional)
      • Home Phone: (Optional)
      • Cell Phone: (Optional)
      • Current Employer: (Optional)
      • Former Employer: (Optional)
      • Education: (Optional)
      • Relationship Status: (i.e., married, engaged, in a relationship, single… all social sites have this category)

Search Logic:

  • Search checkboxes: (At least one must be checked). Defaults to all checked
  • Email To Prompt: (Optional)
  • Go button: performs field validations, displays any error messages, or initiates the search
  • API Search Logic – use standard API searches for those websites with checkbox checked from Search screen
  • Search results should be ranked like a Google search: results matching the majority of the supplied search parameters should appear first, followed by those matching fewer of the supplied parameters, followed by those matching only the required fields (see the sketch after this list)
  • Machine Learning API – used to scrub automated API search results before presenting data (i.e., to refine the search results and provide a “confidence assessment” for the user)
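
As a concrete reading of the ranking rule above, here is a minimal Python sketch, assuming flat dicts for both the search parameters and each API result (the shapes are illustrative, not the actual payloads):

```python
# A minimal sketch of the ranking rule: count how many supplied parameters
# each result matches and sort descending. Shapes are illustrative assumptions.

def match_count(result: dict, params: dict) -> int:
    """Number of supplied (non-empty) parameters this result matches."""
    return sum(1 for k, v in params.items()
               if v and str(result.get(k, "")).lower() == str(v).lower())

def rank_results(results: list, params: dict) -> list:
    return sorted(results, key=lambda r: match_count(r, params), reverse=True)

params = {"first_name": "John", "last_name": "Smith", "city": "Knoxville", "dob": ""}
results = [
    {"first_name": "John", "last_name": "Smith"},                       # required fields only
    {"first_name": "John", "last_name": "Smith", "city": "Knoxville"},  # fuller match
]
for r in rank_results(results, params):
    print(match_count(r, params), r)  # prints the 3-field match before the 2-field one
```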

Additional features to consider:

  • Integration to Accurint-TLO API for vehicle search and logging of results... (see: http://www.lexisnexis.com/webserviceskit/ for general SDK info)
  • MindMeld API for intelligent voice search/queries
  • API integration to the primary data brokers such as: Acxiom, Experian, etc.

Milestones and Proposal

In your response, please break the project into 3-4 milestones and specify how long each would take. Please also indicate if any existing screen scraping APIs such as Grepser (www.grepser.com) or Mozenda (www.mozenda.com) may be used to simplify the development of the application.

Legal
Predictive Modeling
Machine Learning

$15,000

Starts Dec 08, 2014

6 Proposals Status: IN PROGRESS

Client: T*** *** ****** ** ******** *****

Posted: Sep 30, 2014

Mapping of triplicate carbonless paper form to dot matrix printer

We would like to automate the mapping of a triplicate carbonless paper legal form to a dot matrix printer. We are open to suggestions when it comes to purchasing specific printers.  

Background: Garnishments in Tennessee must be done the old-fashioned way, on a triplicate carbonless paper form. We are seriously handwriting these things---hundreds at a time! What I want is for my CRM (Simplicity Collect) to automate this process. I want to be able to initiate the printing process from Simplicity to a dot matrix printer that will then print the information onto the physical triplicate form. The triplicate legal form does not exist on the internet--there is no electronic template/version. Therefore, I will have to send you a hard copy of the form so you can re-create it electronically.

Legal
Data Mapping
Data Integration

$500 - $1,500

0 Proposals Status: CLOSED

Client: T*** *** ****** ** ******** *****

Posted: Sep 30, 2014

Distributed Data Expert required for Revolutionary Travel Technology Platform

Every day, millions of travellers change their trip plans. Each time a flight is delayed, a weather pattern shifts, or a personal choice is made to change a trip, a chain reaction of inefficient, largely offline tasks begins. This chain reaction of painful phone calls to flight, hotel, car, black car, taxi, train, experience and other travel suppliers is an intense problem for any traveller. For travel suppliers, it costs hundreds of millions of dollars in customer service and call centres.

What if there were a platform that could seamlessly and proactively provide any traveller with real-time notifications of impacts to their entire end-to-end trip and allow them to automatically rebook and reroute all legs, regardless of supplier?

This solution dramatically impacts the industry, drastically reducing consumer pain points, improving customer retention, creating new ancillary revenue generation opportunities and providing deep insight into consumer preferences and travel buying behavior across the trip.

This platform is called Switchforce: an ultra-high-performance, industrial-strength B2B data platform that powers end-to-end trip planning for the travel industry and provides real-time streaming analytics, platform-wide big data insights and revolutionary M2M services.

Switchforce has a Management team with over 40 years of travel technology domain expertise and 2 successful exits (sold to Travelclick and Pegasus) as well as execution expertise overseeing large scale ($30m) mission-critical technology platform deployments at a leading stock exchange. 

We have considerable traction with industry participants, including a strategic investment from a world-class enterprise technology provider, C-level interest from a travel operator with 40,000 agents, one of the top 5 car rental companies in the US, one of the top 3 black car companies in the US, two GDSs, and a major hotel channel manager with 6,000 hotels. And Google is a fan.

We are looking for a Distributed Data Expert with specific expertise in:

- Architecting Hadoop databases and cloud computing infrastructure - infrastructure architecture is a key requirement

- Well-versed in TIBCO Messaging products - the platform is built on the TIBCO stack, so previous TIBCO implementation expertise is a must

- Experience designing data products using real-time streaming analytics

- Experience with complex event processing and integration - we are using CEP for new use cases that will completely transform the booking path for  hundreds of millions of travelers

- Machine-to-machine services - we are using M2M technology for automating B2B transaction processing and require a candidate with truly advanced knowledge

Big Data
Data Architect
Complex Event Processing

$30,000 - $90,000

Starts Oct 30, 2014

9 Proposals Status: CLOSED

Client: S****** ****

Posted: Sep 29, 2014

Optimization of patient recruitment for clinical trials

GI Dynamics is currently recruiting patients for a clinical trial for an investigational, first of its kind, medical device for the treatment of Type 2 diabetes in an obese population. More information about the trial can be found here - http://www.endobarriertrial.com/

Over the last year, we have been spending a substantial amount on recruiting patients. The pathway from when a patient inquires about the trial, is screened, has bloodwork done, and is then enrolled in the trial is a complex one. We would like to develop a sophisticated approach to determine what is and isn't working on a city-by-city basis. This will then be used to optimize spend across different advertising mediums.

Currently, all patient inquiries are tracked in a portal on www.galenrecruitment.com. We receive weekly updates in Excel on patient progress through the recruitment pathway. For example, this will provide information as to whether the patient has filled out the questionnaire, whether they have been called by a screener for additional screening, whether they have been referred to the site, etc. In addition, there is another spreadsheet which tracks media campaign spend.


There are two stages to this project:

  • firstly, we need to better understand the substantial historical data which we have. This will help drive future decision making. 
  • secondly, we need to run analytics on a bi-weekly basis (at least) to determine which campaigns are and aren't working. This will support our continued learning and optimization. 

Below are some key metrics and questions we would like answered: 

  • On a site-by-site basis, time taken to go from inquiry to randomization
  • ROI on different advertising mediums by site (see the sketch after this list)
  • If we have $X to spend at a site in NY, what is the best medium to advertise on and how many patients will we enroll in the trial? (predictive analytics)
  • Correlation between recruitment rate by site and the prevalence of obesity and diabetes in the region
  • How do we increase conversion of visitors on the site?
  • Why are patients not qualifying, and what are ways to improve this?
  • Establishment of patient tracking
  • Time distribution from first contact by site following a website request
  • Is there a correlation between the amount of time taken to contact a patient and their qualifying for the trial?
  • What are realistic goals to set for each site?
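
As a hedged illustration of two of these analyses (per-site/per-medium ROI and the time-to-contact correlation), here is a small pandas sketch. The column names and numbers are invented stand-ins for the portal export and the media-spend spreadsheet:

```python
# Invented stand-in data for the portal export and the media-spend sheet;
# column names are assumptions, not the actual spreadsheet headers.
import pandas as pd

patients = pd.DataFrame({
    "site": ["NY", "NY", "BOS", "BOS"],
    "medium": ["radio", "web", "radio", "web"],
    "hours_to_contact": [2, 30, 5, 48],
    "enrolled": [1, 0, 1, 0],
})
spend = pd.DataFrame({
    "site": ["NY", "NY", "BOS", "BOS"],
    "medium": ["radio", "web", "radio", "web"],
    "dollars": [5000, 2000, 3000, 1500],
})

# Enrollments per advertising dollar, by site and medium (a crude ROI proxy).
enrollments = patients.groupby(["site", "medium"])["enrolled"].sum()
roi = enrollments / spend.set_index(["site", "medium"])["dollars"]
print(roi.rename("enrolled_per_dollar"))

# Point-biserial (Pearson) correlation between time-to-first-contact and
# whether the patient ultimately enrolled.
print(patients["hours_to_contact"].corr(patients["enrolled"]))
```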

What we learn at this stage will also facilitate our improved understanding of commercialization of this product. If this project goes well, there is an opportunity for an expanded role. 

Pharmaceutical and Life Sciences
Clickrate Optimization
Media Mix Analysis

$17,500

Starts Oct 06, 2014

16 Proposals Status: IN PROGRESS

Client: G** ********

Posted: Sep 29, 2014

Water Quality Characterization

We want to map water quality across the State of Maine and need a spatial statistics expert who can design an optimal spatial sampling scheme for a statewide water quality map that accurately identifies and spatially models water quality impact sources (a simple illustrative starting point is sketched below).
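
The optimal design is exactly what the expert would deliver; purely as an illustration of a common starting point, here is a jittered-grid candidate layout in Python. The bounding box only roughly covers Maine and all values are assumptions:

```python
# Not the requested optimal design, just a common baseline for spatial
# sampling: a regular grid over the state's bounding box with random jitter,
# which spreads samples evenly while avoiding grid-aligned bias.
import numpy as np

rng = np.random.default_rng(0)

lat_min, lat_max = 43.0, 47.5    # approximate latitude range of Maine
lon_min, lon_max = -71.1, -66.9  # approximate longitude range of Maine
n_rows, n_cols = 20, 15          # grid resolution -> 300 candidate sites

lats = np.linspace(lat_min, lat_max, n_rows)
lons = np.linspace(lon_min, lon_max, n_cols)
grid = np.array([(la, lo) for la in lats for lo in lons])

# Jitter each point by up to half a cell so sites don't align perfectly.
cell = np.array([(lat_max - lat_min) / n_rows, (lon_max - lon_min) / n_cols])
sites = grid + rng.uniform(-0.5, 0.5, grid.shape) * cell

print(sites.shape)  # (300, 2) candidate sampling locations as (lat, lon)
```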

Public Service
Cloudera
Hortonworks

$150/hr

Starts Nov 01, 2014

6 Proposals Status: COMPLETED

Client: M***** **** *****

Posted: Sep 28, 2014

Big Data Architect to Help Guide a Team Building a Social Network


I am a business guy with a working knowledge of technology, in need of good guidance. I am working with a skilled development team of about 8-15 people at any given time. All of them are located in New Delhi, India (part of a 400+ person company).


We are building a social network with all the typical data sets gathered (location, media, chat, streaming, analytics, etc.) and open APIs. I would like to find someone who has proven experience in architecting related systems and can suggest the right technology stack and architecture for us. We have decided to take the open source route. We are a self-funded startup and want to make sure we get this initial setup right so that we can scale quickly while maintaining performance. We think we have the right idea, but I want to take the guessing out of the effort and just get a third-party opinion from someone who has experience in massively parallel grid computing based on the third platform (which is all new to me). I think we only need this person for short stints of guidance, and I would like to build a long-term relationship with this individual.


We want some big data recommendations around technologies like Cassandra, Hadoop, HBase, RabbitMQ, graph databases, etc. (not to single out any particular one); we would like you to tell us why to use them, from your applied expertise with these or similar technologies. We would also like you to look at the database structure for any rigidness and review the code for possible recommendations. For example, there should be a latitude and longitude on all events and facts (geospatial awareness, aka geofencing the data; see the sketch below), both for security detection and to collect valuable location data (which is currently not present in the DB). Provide some guidance on making sure the architecture considers massively parallel grid computing, in-memory data grids, etc. Keep in mind that the social network has all the traditional features one would come to expect, as well as analytics on both structured and unstructured data sets.
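
For the geofencing point, here is a minimal sketch of what latitude/longitude on every event enables, assuming a circular fence and a simple haversine distance check (all names and values are invented):

```python
# A minimal geofence check: if every event row carries a latitude and
# longitude, a haversine distance against a fence centre tells you whether
# the event falls inside. Event shape and fence are illustrative assumptions.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_geofence(event, center_lat, center_lon, radius_km):
    return haversine_km(event["lat"], event["lon"], center_lat, center_lon) <= radius_km

event = {"user": "u42", "type": "checkin", "lat": 28.6139, "lon": 77.2090}  # New Delhi
print(in_geofence(event, 28.6139, 77.2090, 5.0))  # True - inside a 5 km fence
```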


Compensation is based on skill; I figure anywhere from $75-$150/hr. Initially we might only need 10 or so hours. An NDA is required to be signed.

graph database
Grid Computing
Hadoop

$125/hr

Starts Dec 10, 2014

7 Proposals Status: COMPLETED

Client: S*****

Posted: Sep 25, 2014

Automatically detect buildings using image and lidar data

Company Information

We are a startup company out of MIT. Our customized multi-sensor hardware captures large amounts of data for millions of buildings. Through the use of robotics, computer vision, and big data analysis, the captured data is transformed into meaningful information for our customers.

Data Challenge

1. Building localization

Data is recorded from thousands of buildings per evening. The vehicle-based data collection platform stores georeferenced long-wave infrared and near-infrared camera images and 3D point clouds, along with numerous other kinds of information. Determining which portions of the data localize individual buildings is a challenging computer vision and GIS effort. This challenge requires:

  • Using pre-existing training data to automatically determine the boundaries of buildings - specifically houses
  • Determining the depth and dimensions of such buildings using the calibrated LIDAR system
  • Isolating the pixels of the structure and removing the background

The challenge may include mapping pixels between cameras using 3D features obtained from the lidar data (see the sketch below).
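
As a hedged sketch of that cross-sensor mapping, the snippet below projects a lidar point into a camera image with the standard pinhole model; the intrinsics and extrinsics are placeholder values, not the platform's actual calibration:

```python
# Projecting georeferenced 3D lidar points into a calibrated camera's pixel
# grid via the pinhole model. K, R, t below are placeholder calibration
# values, purely for illustration.
import numpy as np

K = np.array([[1000.0, 0.0, 640.0],   # intrinsics: fx, fy, principal point
              [0.0, 1000.0, 480.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                          # lidar-to-camera rotation
t = np.array([0.1, 0.0, 0.0])          # lidar-to-camera translation (metres)

def project(points_lidar: np.ndarray) -> np.ndarray:
    """Map Nx3 lidar points to Nx2 pixel coordinates in the camera image."""
    cam = points_lidar @ R.T + t       # transform into the camera frame
    uvw = cam @ K.T                    # apply the camera intrinsics
    return uvw[:, :2] / uvw[:, 2:3]    # perspective divide

pts = np.array([[2.0, 1.0, 10.0]])    # one point 10 m in front of the rig
print(project(pts))                    # [[850. 580.]]
```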

2. Building component identification

Long-wave infrared and near-infrared cameras record nearly one million images of energy loss scenes nightly. These co-collected images differ by scene coverage, but this is reconciled by time of collection and by overlapping features in 3D point clouds. Each building has a limited set of features (less than 20) that make up the most interesting parts of our energy analysis computations. Features include:

  • Windows
  • Doors
  • Walls
  • Soffits
  • Roofs
  • Etc.

Finding such features can be a challenge, depending on the type of construction, geography, image condition, and obstructions. All available data including images and point clouds may be needed to determine these features. Tens of thousands of features have been tagged for training data. Challenge may include mapping pixels between cameras using 3D features obtained from lidar data.

Data

All of the deserialized information is stored in our cloud platform. Data is indexed by datetime and sensor. Both raw and normalized data are stored in the database and queryable.

Project is hourly

This project seeks the highest value, not the lowest cost. We are gauging the responses based on experience and qualifications. The hourly rate range is $50-$150 per hour and will be based on merit.

Computer Vision
Image Analysis
Image Processing and Computer Vision

$150/hr

Starts Oct 20, 2014

10 Proposals Status: COMPLETED

Client: E******* ****

Posted: Sep 23, 2014

Build Network/Cluster Visualization with TV Viewing Dataset

We have a dataset of the channels watched by 1,000 people and need to generate a graph/network visualization that reveals the clusters, or affinity groups, that TV channels fall into. This is a simple data set with an average of 10 channels per respondent, for a total of 10,000 records. We are ready to start immediately. You should use open source tools and provide, at completion, the full program used to generate the visualization.
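
A minimal open-source sketch of one possible approach, assuming each record is the set of channels one respondent watches: build a co-viewing graph weighted by shared viewers, then extract affinity groups with community detection (networkx here; any open-source stack would do):

```python
# Edges weighted by how many respondents watch both channels; greedy
# modularity communities approximate the affinity groups to visualize.
# The input format is an assumed shape for the dataset, not its real schema.
from itertools import combinations

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Each record: the set of channels one respondent watches (~10 per person).
respondents = [
    {"ESPN", "CNN", "HBO"},
    {"ESPN", "FS1", "HBO"},
    {"CNN", "MSNBC", "BBC"},
]

G = nx.Graph()
for channels in respondents:
    for a, b in combinations(sorted(channels), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

clusters = greedy_modularity_communities(G, weight="weight")
for i, group in enumerate(clusters):
    print(i, sorted(group))  # e.g. a sports/movies cluster and a news cluster
```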

Telecommunications
Customer Behavior Analysis
Consumer Experience

$500 - $1,000

14 Proposals Status: COMPLETED

Client: T********* *******

Posted: Sep 17, 2014

Media Monitoring: Big Data and Analytics Assessment

  • We are Africa’s leading media monitoring and market research company, and have invested in the latest technology to monitor, capture, filter and sort print, broadcast (radio & television) and Internet news, delivering electronically with speed and accuracy. We are also one of only two African members of FIBEP (Federation Internationale des Bureaux d'Extraits de Presse), the world's largest association for media intelligence and communications insight. We are looking to consolidate and streamline our data sources to scale our business processes. This is the first part of a multiphase project to conduct detailed requirements gathering and make recommendations on which systems to adopt for media monitoring in different formats, in order to create a roadmap that would lead to advanced analytics capabilities and reporting dashboards for our clients. The deliverable of this assessment phase would be a requirements document.

  • We are in the middle of recruiting an agency to help us position the company and, hopefully, re-brand. Client feedback has brought us to the realization that we need a platform that can provide market intelligence data from advertising and editorial media monitoring as well as research, and one that can also be used by our clients to interpret their own internal data. Hence we would position ourselves as a ‘business intelligence and strategy consultancy’, with the ability to inter-relate both internal and externally-sourced datasets and then give advice. We want to be regarded as a ‘critical strategic decision-making tool’.

  • We currently capture advertising and editorial publicity from the media for the following reasons:

      • Brand – management of brands, auditing of media/communication plans and understanding how brands are portrayed in the media. Track intelligence regarding communication activities, as well as those of partners and stakeholders.

      • Crisis Management – allows marketers to respond decisively to major threats to their organizations, as well as to stakeholders.

      • Lead generation – identify the topics and trends running throughout the media, region-wide.

  • Our current capture zone consists of Dar es Salaam, Kilimanjaro, Arusha, Mwanza, Mtwara, Zanzibar and Iringa. We are expanding fast and hope to set up in about 10 more commercially and strategically viable regions to cover close to 100% of broadcast media within Tanzania. In total, Tanzania consists of 26 regions.

  • Our digital broadcast capture technology allows for fingerprinting and matching of advertising content without human intervention. We use two parallel monitoring platforms: Volicon Observer and Vidvita (www.volicon.com and www.vidvita.com). Editorial monitoring remains a manual process.

  • Our newsprint monitoring system captures, converts and indexes text, and employs Optical Character Recognition (OCR) technology to identify pre-defined keywords related to the client brief (http://www.newbase.de). A toy keyword-matching sketch follows after this list.

  • In addition, we have just launched a market research service, which will see us collect data using handheld devices, pen-and-paper interviews, telephonic interviewing, online surveys, SMS-based surveys, etc. We will be looking to derive actionable insights on brand and media, direct from the consumer.

  • We are looking to automate the entire broadcast media (radio & TV) monitoring process by integrating speech-to-text technology: a system that will allow us to capture advertisements, as well as unstructured speech (editorial, mentions), automatically.

  • A system that can enable us to perform content, language and syntax analysis of our data.

  • A system that would allow us to capture data that will be used to create predictive analysis/forecasting.

  • A system that will allow us to integrate research data from multiple sources as described above, including audience research data.

  • An intelligent media planning platform that will combine the ‘unique’ sources of brand insights and trends from our advertising, editorial, online and audience research databases to simplify and standardize the consumption of audience research data in the country/region. It would also include published media rates.

  • A market planner/forward planning system that can capture data on future events from print, broadcast and online media.

  • A platform that allows the end-user to manipulate data and generate analysis and visualizations of complex data relationships. It has to be user-friendly but with a powerful engine. Our system also has to be very affordable to run, easy to manage, user-friendly on both the internal and client side, and easy and affordable to deploy to other markets/countries.

  • We would also like support on how we can best produce and analyse specific reports from multiple sources targeting different industries (e.g. banking, telecoms, etc.), and on how to extract maximum value from the datasets we are analysing, depending on the solution or insight we seek.

  • We have approached a number of potential system partners, including Düsseldorf’s Pressrelations and Tel Aviv’s Actus Digital. Actus is similar to Volicon in functionality, but they claim to be cheaper to implement, offering speech-to-text integration at USD 40,000, auto-translation to six other languages, and efficiencies in terms of administration. Pressrelations is a media monitoring company that also specializes in software for media monitoring companies. They offer a platform that would integrate the broadcast, print, online and research output into one, including storage and Tableau integration. Pressrelations also allows us to do deeper analysis of language and syntax and can work especially well with a system like Actus. Tableau is due for a training programme at our offices in Dar es Salaam next week. We have also spoken to HP about a system based on their Vertica and Autonomy solutions. Ninestars, of India, have expressed interest in working on this project.
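
To make the newsprint keyword step concrete, here is a toy Python sketch of matching a pre-defined client brief against OCR output; the brief and article text are invented examples:

```python
# A toy sketch of the keyword-identification step: once OCR has produced
# text, pre-defined client-brief keywords are matched against it.
# The brief and article text below are invented examples.
import re

client_brief = ["Vodacom", "mobile money", "M-Pesa"]

ocr_text = """Vodacom Tanzania announced that M-Pesa transactions grew
strongly this quarter, driven by wider mobile money adoption."""

def find_keywords(text: str, keywords: list) -> dict:
    """Count case-insensitive, whole-word occurrences of each keyword."""
    counts = {}
    for kw in keywords:
        pattern = r"\b" + re.escape(kw) + r"\b"
        counts[kw] = len(re.findall(pattern, text, flags=re.IGNORECASE))
    return {kw: n for kw, n in counts.items() if n > 0}

print(find_keywords(ocr_text, client_brief))
# {'Vodacom': 1, 'mobile money': 1, 'M-Pesa': 1}
```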

Hadoop
Professional Services
Customer Analytics

$6,000

Starts Sep 08, 2014

1 Proposal Status: COMPLETED

Client: P**** ********

Posted: Sep 08, 2014

Project Beaver

Pricing Optimization

Churn propensity

Customer segmentation

UPDATE: The client has already interviewed and shortlisted candidates. Please do not submit additional proposals.

Sports and Fitness
Customer Loyalty
Churn Analysis

$20,000 - $50,000

Starts Oct 01, 2014

14 Proposals Status: CLOSED

Client: L******** ******

Posted: Sep 05, 2014
