In general, the process of generating plans from PDFs entailed extracting line and object data from the PDFs and comparing that information against other similar datasets to support the later steps.
  To extract this information, each document is first binarized with a global threshold, then run through a connected-component analysis to extract text and remove noise.
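The thresholding and noise-removal step above can be sketched in pure Python. This is a minimal illustration, not the project's actual implementation (a real pipeline would likely use a library such as OpenCV); the function names, the 4-connectivity choice, and the `min_size` noise cutoff are all assumptions for the example.

```python
from collections import deque

def binarize(gray, thresh=128):
    """Binary-threshold a grayscale raster (nested lists of 0-255 values):
    dark pixels (ink) become 1, light pixels (paper) become 0."""
    return [[1 if px < thresh else 0 for px in row] for row in gray]

def connected_components(binary, min_size=2):
    """4-connected component labelling via BFS. Components smaller than
    min_size are treated as speckle noise and dropped."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    components = []
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                queue, comp = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) >= min_size:
                    components.append(comp)
    return components
```

On a small test page, a three-pixel ink blob survives while an isolated single dark pixel is discarded as noise.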
  The symbols are then segmented with a second connected-component analysis and matched against shape descriptors. The remaining lines are extracted with the Hough line transform. All extracted data is stored and then classified with a variety of learning models; the extracted lines are stored as coordinates to be deployed in the BIM model.
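The Hough line step votes each foreground pixel into (rho, theta) parameter space and reads lines off the accumulator peaks. Below is a minimal pure-Python sketch of that idea; in practice one would use an existing implementation such as OpenCV's `HoughLines`, and the binning here (integer rho, one-degree theta steps) is an illustrative simplification.

```python
import math
from collections import Counter

def hough_peak(points, n_theta=180):
    """Minimal Hough transform: each (x, y) point votes for every line
    rho = x*cos(theta) + y*sin(theta) passing through it, with rho rounded
    to integer bins. Returns the best-supported line as
    (rho, theta_in_degrees, votes)."""
    acc = Counter()
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(rho, t)] += 1
    (rho, t), votes = acc.most_common(1)[0]
    return rho, 180.0 * t / n_theta, votes
```

For ten collinear pixels along the horizontal row y = 3, the peak lands near theta = 90 degrees with rho = 3, collecting one vote per pixel.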
  To convert the data into a BIM model for a site in Austin, setback offsets are calculated from the boundary data and lot coverages are computed. A simple algorithm trading off building height against footprint spread determines the maximum contiguous volume(s) and the maximum number of Kasita units that could legally be placed on the site. The premodeled Kasitas are then placed in a 3D BIM model, which is the final output.
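The height-versus-spread trade-off can be sketched as follows. This is a hedged toy model, not the project's actual algorithm: every parameter (setbacks, coverage ratio, height cap, unit footprint) is a hypothetical stand-in, and real Austin zoning rules involve many more constraints than a single rectangular lot.

```python
def max_units(lot_w, lot_d, setback, coverage_ratio, height_cap,
              unit_w, unit_d, unit_h):
    """Toy height-vs-spread optimizer: shrink the lot by the setback on all
    sides, cap the per-floor footprint by the allowed coverage ratio, then
    stack identical stories up to the height cap."""
    build_w = max(0, lot_w - 2 * setback)   # buildable width after setbacks
    build_d = max(0, lot_d - 2 * setback)   # buildable depth after setbacks
    max_footprint = coverage_ratio * lot_w * lot_d
    # Units per floor: limited by both the buildable grid and lot coverage.
    per_floor = min((build_w // unit_w) * (build_d // unit_d),
                    int(max_footprint // (unit_w * unit_d)))
    stories = int(height_cap // unit_h)     # whole stories under height cap
    return per_floor * stories
```

For example, a 30 x 30 lot with 5-unit setbacks, 40% coverage, a height cap of 9, and 4 x 8 x 3 units yields 10 units per floor over 3 stories.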
[Figure: sp - 3d-1.png]