Automated Garnishment Database

Industry: Legal

Specialization or Business Function:

Technical Function: Data Warehousing (Data Mapping, Data Integration), Analytics (Predictive Modeling, Machine Learning)

Technology & Tools: Big Data and Cloud, Data Analysis and AI Tools


Project Description

We are a law office requiring a web-based application that will allow the user to enter as much (or as little) information as is known about an individual (debtor) in order to initiate an API web search of specific internet sites such as Facebook, LinkedIn, Twitter, Google, etc. The query should return matching data and store it in an encrypted database on the host for subsequent reporting and data-mining analysis. The web application would be fully integrated with our CRM (Simplicity Collect).

Database Platform Functionalities Should Include:

  • Database capable of handling, storing, & mining large data sets
  • API integration into Simplicity Collect CRM
  • Open API (allowing for future developments & integrations)
  • Secure & encrypted database (AES-256, SSL, 2-factor authentication)
  • Single sign-on (like ForgeRock OpenAM or Mozilla Persona)
  • Every new query should also be run simultaneously against the existing (stored) database of results. Anomalies should be flagged, and all results should be assigned a confidence rating based on the information provided
  • Machine Learning algorithms should be utilized during the API search, as well as mining the resulting data for useful information & patterns
  • Flat-file import/export to XLS, CSV, etc.
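To illustrate the cross-check requirement above, here is a minimal sketch of scoring a new result against the supplied query and flagging conflicts with previously stored results. The field names, the matching key, and the flat-dict record shape are illustrative assumptions, not a prescribed schema.

```python
# Sketch: confidence rating for a result plus anomaly flagging against
# stored records. All field names here are illustrative assumptions.

def confidence_score(query: dict, result: dict) -> float:
    """Fraction of the supplied (non-empty) query fields the result matches."""
    supplied = {k: v for k, v in query.items() if v}
    if not supplied:
        return 0.0
    hits = sum(1 for k, v in supplied.items()
               if str(result.get(k, "")).lower() == str(v).lower())
    return hits / len(supplied)

def flag_anomalies(new_result: dict, stored: list, key: str = "last_name") -> list:
    """Fields whose value conflicts with a stored record for the same person."""
    anomalies = []
    for old in stored:
        if old.get(key) != new_result.get(key):
            continue  # assume records are matched on this key for the sketch
        for field, value in new_result.items():
            if field in old and old[field] and value and old[field] != value:
                anomalies.append(field)
    return sorted(set(anomalies))
```

In practice the stored records would come from the encrypted database and the matching key would be more robust than a bare last name; this only shows the shape of the check.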

Assumptions & Dependencies:

  • Assumes the application will be deployed at an SSL extended-validation URL on an encrypted host machine, with single sign-on functionality and API integration with the CRM.
  • Dependency: The ongoing reliability of API calls to sites (Facebook, LinkedIn, Google, etc.) depends on using each site's most recent API parameter list. Most of the social media sites will accept standard API call requests for data, but they occasionally modify their API call parameters without notice. Therefore, web-crawler functionality (with developer review) will be required for ongoing maintenance to ensure application integrity.
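One way to catch the silent API changes described above is an integrity check on each response: if fields the application relies on disappear, flag that site for developer review and web-crawler fallback. The expected-field sets and response shape below are assumptions for illustration only, not the actual parameter lists of any site.

```python
# Sketch: detect schema drift in third-party API responses so a site can
# be flagged for developer review / crawler fallback.
# EXPECTED_FIELDS is an illustrative assumption, not real API parameters.

EXPECTED_FIELDS = {
    "facebook": {"id", "name", "location"},
    "linkedin": {"id", "firstName", "lastName"},
}

def schema_drift(site: str, response: dict) -> set:
    """Return the expected fields missing from an API response."""
    return EXPECTED_FIELDS.get(site, set()) - response.keys()

def needs_crawler_fallback(site: str, response: dict) -> bool:
    """True when the response no longer carries the fields we depend on."""
    return bool(schema_drift(site, response))
```

The maintenance loop would run this check on every call and surface non-empty drift sets to a developer rather than failing silently.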

General Requirements:

  • Application will be designed as a web application accessible from any of the usual PC/Mac browsers (e.g., Internet Explorer, Mozilla Firefox, Chrome, Safari, Aviator)
  • After accessing the application via a standard browser interface, the user will be presented with a secure login page prompting for User ID & Password (requires 2-factor authentication functionality).
  • There should be an Admin login provided with the system that allows new users to be set up and maintained.
  • System should validate the entered user ID and password against the valid credentials in the database and display an appropriate error message if invalid credentials are provided.
  • Upon successful login, present the user with the Home Page Main Menu. Initially this page will only have option 1 – Skip Trace
  • When user selects option 1 – Skip Trace, the Skip Trace Parameters Entry page will display:
  • Parameters:
      • First Name: (Required)
      • First Name Alias: (Optional)
      • Middle Name: (Optional)
      • Middle Name Alias: (Optional)
      • Last Name: (Required)
      • Last Name Alias: (Optional)
      • Home Address: (Optional)
      • City:
      • State:
      • Zip:
      • DOB: (Optional)
      • Home Phone: (Optional)
      • Cell Phone: (Optional)
      • Current Employer: (Optional)
      • Former Employer: (Optional)
      • Education: (Optional)
      • Relationship Status: (i.e., married, engaged, in a relationship, single…all social sites have this category)
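The credential-validation requirement above implies that passwords are stored hashed, never in plaintext, and compared in constant time. Here is a minimal standard-library sketch; the in-memory user table stands in for the application's encrypted database, and all names are illustrative.

```python
# Sketch: validate a User ID & Password against stored credentials.
# Passwords are stored as (salt, PBKDF2 digest), never plaintext.
# The `users` dict is an illustrative stand-in for the encrypted database.
import hashlib
import hmac
import os

def hash_password(password: str, salt=None):
    """Return (salt, digest) for storage; generates a fresh salt if none given."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_login(users: dict, user_id: str, password: str) -> bool:
    """True only for a known user with a matching password.

    The caller shows one generic error message on failure so the page
    does not reveal whether the User ID or the Password was wrong.
    """
    record = users.get(user_id)
    if record is None:
        return False
    salt, stored_digest = record
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(stored_digest, candidate)
```

The 2-factor step would run after this check succeeds; it is omitted here because the posting does not specify a 2FA mechanism.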

Search Logic:

  • Search checkboxes: at least one must be checked; defaults to all checked
  • Email To Prompt: (Optional)
  • Go button: performs field validations, then displays any error messages or initiates the search
  • API Search Logic – use standard API searches for those websites whose checkbox is checked on the Search screen
  • Search results should be ordered similarly to a Google search: results that match the majority of the supplied search parameters should appear first, followed by those that match fewer of the supplied parameters, followed by those that match only the required fields.
  • Machine Learning API – used to scrub automated API search results before presenting the data (i.e., to refine the search results and provide a “confidence assessment” for the user).
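The ordering rule above can be sketched as a simple ranking: filter to results that at least match the required fields (First Name, Last Name), then sort by how many of the supplied parameters each result matches. Field names are illustrative assumptions; the ML scrubbing layer would refine this further.

```python
# Sketch of the result-ordering rule: most parameter matches first,
# and every returned result must match the required fields.
# Field names are illustrative assumptions, not a fixed schema.

REQUIRED = ("first_name", "last_name")

def rank_results(query: dict, results: list) -> list:
    """Order results by descending count of matched, supplied parameters."""
    supplied = {k: v for k, v in query.items() if v}

    def matches(result: dict) -> int:
        return sum(1 for k, v in supplied.items()
                   if str(result.get(k, "")).lower() == str(v).lower())

    # Keep only results that match on the required fields.
    eligible = [r for r in results
                if all(str(r.get(k, "")).lower() == str(query[k]).lower()
                       for k in REQUIRED)]
    return sorted(eligible, key=matches, reverse=True)
```

A production version would break ties (e.g., by source reliability) and attach the per-result confidence score rather than discarding it after sorting.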

Additional features to consider:

  • Integration with the Accurint-TLO API for vehicle search and logging of results (see http://www.lexisnexis.com/webserviceskit/ for general SDK info)
  • MindMeld API for intelligent voice search/queries
  • API integration with the primary data brokers such as Acxiom, Experian, etc.

Milestones and Proposal

In your response, please break the project into 3-4 milestones and specify how long each would take. Please also indicate if any existing screen scraping APIs such as Grepser (www.grepser.com) or Mozenda (www.mozenda.com) may be used to simplify the development of the application.

Project Overview

  • Posted
    September 30, 2014
  • Planned Start
    December 08, 2014
  • Delivery Date
    March 01, 2015
  • Preferred Location
    United States
