Spring 2019 CS543/ECE549

Assignment 1: Shape from Shading

Due date: Monday, February 11, 11:59:59 PM


The goal of this assignment is to implement shape from shading as described in Lecture 4 (see also Section 2.2.4 of Forsyth & Ponce 2nd edition), and to start becoming comfortable with python image and matrix processing and display functions.
  1. You need the following python libraries: numpy, matplotlib, jupyter, Pillow.

  2. Download the data and use this ipython notebook. The data consists of 64 images each of four subjects from the Yale Face database. The light source directions are encoded in the file names. We have provided utilities to load the input data and display the output. Your task will be to implement the functions preprocess, photometric_stereo and get_surface in the ipython notebook, as explained below.

  3. For each subject (subdirectory in croppedyale), read in the images and light source directions. This is accomplished by the function LoadFaceImages. LoadFaceImages returns the images for the 64 light source directions and an ambient image (i.e., image taken with all the light sources turned off).

  4. Preprocess the data: subtract the ambient image from each image in the light source stack, set any negative values to zero, rescale the resulting intensities to between 0 and 1 (they are originally between 0 and 255).

    Hint: these operations can be done without using any loops. You may want to look into the concept of array broadcasting in numpy. A minimal sketch of this step is given after this list.

  5. Estimate the albedo and surface normals. For this, you need to fill in photometric_stereo, a function that takes as input the image stack corresponding to the different light source directions and the matrix of light source directions, and returns an albedo image and surface normal estimates. The latter should be stored in a three-dimensional matrix: if your original image dimensions are h x w, the surface normal matrix should be h x w x 3, where the third dimension corresponds to the x-, y-, and z-components of the normals. To solve for the albedo and the normals, you will need to set up a linear system as shown in slide 20 of Lecture 4 (a sketch of the vectorized solve is given after this list).

    Hints:
    • To get the least-squares solution of a linear system, use the numpy.linalg.lstsq function.
    • If you directly implement the formulation of slide 20 of the lecture, you will have to loop over every image pixel and separately solve a linear system in each iteration. Instead, you can stack the unknown g vectors for every pixel into a 3 x npix matrix and obtain all the solutions with a single call to the numpy solver.
    • You will most likely need to reshape your data in various ways before and after solving the linear system. Useful numpy functions for this include reshape, expand_dims and stack.

  6. Compute the surface height map by integration. The method is shown in slide 23 of Lecture 4, except that instead of continuously integrating the partial derivatives over a path, you will simply sum their discrete values. Your code implementing the integration should go in the get_surface function. As stated in the slide, to get the best results you should compute integrals over multiple paths and average the results. You should implement the following variants of integration (a sketch covering all four is given after this list):
    1. Integrating first the rows, then the columns. That is, your path first goes horizontally along the top row to the pixel's column, and then vertically down to the pixel. It is possible to implement this without nested loops using the cumsum function.
    2. Integrating first along the columns, then the rows.
    3. Average of the first two options.
    4. Average of multiple random paths. For this, it is fine to use nested loops. You should determine the number of paths experimentally.

  7. Display the results using functions display_output and plot_surface_normals included in the notebook.
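The sketches below illustrate one possible way to structure the three functions you need to write. They are not the required implementation; any argument names, array shapes, or sign conventions not stated in the notebook should be treated as assumptions.

A minimal sketch of the preprocessing in step 4, assuming the function receives the h x w ambient image and an h x w x Nimages stack of light-source images with intensities in [0, 255]:

    import numpy as np

    def preprocess(ambimage, imarray):
        # Sketch of step 4. Assumed shapes: ambimage is h x w, imarray is
        # h x w x Nimages, with intensities originally in [0, 255].
        # Subtract the ambient image from every light-source image
        # (broadcasting over the third dimension), clip negative values to
        # zero, and rescale to [0, 1].
        processed = imarray - ambimage[:, :, np.newaxis]
        processed = np.clip(processed, 0, None) / 255.0
        return processed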
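A minimal sketch of the vectorized least-squares solve in step 5, assuming the image stack is h x w x Nimages and the light source directions are given as an Nimages x 3 matrix:

    import numpy as np

    def photometric_stereo(imarray, light_dirs):
        # Sketch of step 5. Assumed shapes: imarray is h x w x Nimages,
        # light_dirs is Nimages x 3.
        h, w, n = imarray.shape
        # Stack all pixel intensities into an Nimages x npix matrix so that
        # light_dirs @ G = I can be solved for every pixel at once.
        I = imarray.reshape(h * w, n).T
        G, _, _, _ = np.linalg.lstsq(light_dirs, I, rcond=None)  # G is 3 x npix
        # Albedo is the magnitude of g; the unit normal is g divided by its magnitude.
        albedo = np.linalg.norm(G, axis=0)
        normals = G / np.maximum(albedo, 1e-8)  # guard against zero-length g
        return albedo.reshape(h, w), normals.T.reshape(h, w, 3)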
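A sketch of get_surface covering the four integration variants in step 6. The argument names, the method labels, the random-path scheme, and the choice fx = nx/nz, fy = ny/nz (which may need a sign flip depending on the slide's parameterization) are all assumptions:

    import numpy as np

    def get_surface(surface_normals, integration_method='average', n_paths=100):
        # Sketch of step 6. surface_normals is h x w x 3; returns an h x w height map.
        # Partial derivatives of the height map (a sign flip may be needed
        # depending on the slide's parameterization).
        fx = surface_normals[:, :, 0] / surface_normals[:, :, 2]
        fy = surface_normals[:, :, 1] / surface_normals[:, :, 2]
        h, w = fx.shape

        # Variant 1: along the top row (cumsum of fx), then down each column
        # (cumsum of fy); the top row of fy is dropped because the vertical
        # leg starts below row 0.
        rows_first = np.cumsum(fx[0:1, :], axis=1) + np.cumsum(fy, axis=0) - fy[0:1, :]
        # Variant 2: down the first column, then along each row.
        cols_first = np.cumsum(fy[:, 0:1], axis=0) + np.cumsum(fx, axis=1) - fx[:, 0:1]

        if integration_method == 'row':
            return rows_first
        if integration_method == 'column':
            return cols_first
        if integration_method == 'average':
            # Variant 3: average of the two deterministic paths.
            return (rows_first + cols_first) / 2.0

        # Variant 4: average over random monotone paths from (0, 0) to each pixel.
        # Each pass picks, for every pixel, whether the last step came from the
        # left (adding fx) or from above (adding fy).
        rng = np.random.default_rng(0)
        height_map = np.zeros((h, w))
        for _ in range(n_paths):
            accum = np.zeros((h, w))
            for i in range(h):
                for j in range(w):
                    if i == 0 and j == 0:
                        continue
                    if i == 0:
                        accum[i, j] = accum[i, j - 1] + fx[i, j]
                    elif j == 0:
                        accum[i, j] = accum[i - 1, j] + fy[i, j]
                    elif rng.random() < 0.5:
                        accum[i, j] = accum[i, j - 1] + fx[i, j]
                    else:
                        accum[i, j] = accum[i - 1, j] + fy[i, j]
            height_map += accum
        return height_map / n_paths

The default n_paths above is arbitrary; as step 6 notes, the number of random paths should be determined experimentally.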

Extra Credit

On this assignment, there are not too many opportunities for "easy" extra credit. That said, here are some ideas for exploration:
  • Generate synthetic input data using a 3D model and a graphics renderer and run your method on this data. Do you get better results than on the face data? How close do you get to the ground truth (i.e., the true surface shape and albedo)?
  • Investigate more advanced methods for shape from shading or surface reconstruction from normal fields.
  • Try to detect and/or correct misalignment problems in the initial images and see if you can improve the solution.
  • Using your initial solution, try to detect areas of the original images that do not meet the assumptions of the method (shadows, specularities, etc.). Then try to recompute the solution without that data and see if you can improve the quality of the solution.
If you complete any work for extra credit, be sure to clearly mark that work in your report.

Grading Checklist

You should turn in both your code and a report (please refer to this template) discussing your solution and results. For full credit, your report should include a section for each of the following questions:
  1. Briefly describe your implemented solution, focusing especially on the more "non-trivial" or interesting parts of the solution. What implementation choices did you make, and how did they affect the quality of the result and the speed of computation? What are some artifacts and/or limitations of your implementation, and what are possible reasons for them?

  2. Discuss the differences between the integration methods you implemented in step 6 above. Specifically, you should choose one subject, display the outputs for all four integration variants (be sure to choose viewpoints that make the differences especially visible), and discuss which method produces the best results and why. You should also compare the running times of the different approaches. For the remaining subjects (see below), it is sufficient to simply show the output of your best method, and it is not necessary to give running times.

  3. For every subject, display your estimated albedo maps and screenshots of height maps (use display_output and plot_surface_normals). When inserting result images into your report, resize or compress them appropriately to keep the file size manageable -- but make sure that the correctness and quality of your output can be clearly and easily judged. For the 3D screenshots, be sure to choose a viewpoint that makes the structure as clear as possible (and/or feel free to include screenshots from multiple viewpoints). You will not receive credit for any results you have obtained but failed to include directly in the report PDF file.

  4. Discuss how the Yale Face data violate the assumptions of the shape-from-shading method covered in the slides. What features of the data can contribute to errors in the results? Feel free to include specific input images to illustrate your points. Choose one subject and attempt to select a subset of the input images that better matches the assumptions of the method. Show your results for that subset and discuss whether you were able to get any improvement over a reconstruction computed from all the input images.

Instructions for Submitting the Assignment

The files will be submitted through Compass 2g. Your submission should consist of the following:
  1. All your python code in a single ipython notebook, with output cleared. The filename should be lastname_firstname_a1.ipynb.
  2. Your report in a single PDF file with all your results (including images) and discussion. The filename should be lastname_firstname_a1.pdf.
Don't forget to hit "Submit" after uploading your files, otherwise we will not receive your submission. Multiple attempts will be allowed but only your last submission before the deadline will be graded. We reserve the right to take off points for not following directions.

Please refer to course policies on collaborations, late submission, and extension requests.