FAST FAQ

Please ask questions in the comments below.

Q: Where do I get FAST?

A: http://edwardrosten.com/work/fast.html

Q: Why is the code so hard to read?

A: It is machine generated. You aren’t supposed to read it :)

Q: How do I get N corners?

A: You have to adjust the threshold t automatically:

  1. If N is too high, increase t. If N is too low, decrease t.
  2. Perform a binary search over t to find the value which gives N corners (sketched below).
  3. Set t low enough that you get more than N corners, compute the corner score, sort the corners by score, and keep the N with the highest scores.
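
For example, a minimal Python sketch of the binary search in method 2, assuming a detect(image, t) function that returns the list of corners found at threshold t (such as a wrapper around the fast9 Python code):

  def threshold_for_n_corners(detect, image, n_target, t_lo=0, t_hi=255):
      # The corner count is non-increasing in t, so binary search finds
      # the smallest t whose count first drops to n_target or below.
      while t_lo < t_hi:
          t = (t_lo + t_hi) // 2
          if len(detect(image, t)) > n_target:
              t_lo = t + 1   # too many corners: raise the threshold
          else:
              t_hi = t       # few enough: try lower
      return t_lo

In practice no threshold may give exactly N corners; this returns the first t at which the count drops to N or below.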

Q: Why is the Python/MATLAB code slow?

A: The FAST algorithm has been designed to work as quickly as possible and is therefore targeted strongly towards compiled languages. The code is provided for ease of use in interpreted languages, but it will not be especially quick. If you need it to run quickly, you will need to interface with the C version.


111 Responses to FAST FAQ

  1. Andrea Caponio says:

    How can I use the MATLAB software to match corners between images, and not only to find corners?

    • edrosten says:

      FAST is just a system for detecting interest points.

      The type of matching that you want will depend on the application. A very simple scheme is to extract a small square of pixels (e.g. 11×11) from around the FAST interest points, as a vector. You can match two points by looking at the norm of the difference between them. Then, you have to compare every point in the first image to every point in the second to find the best match.
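
      To illustrate the scheme, a rough Python/NumPy sketch (11×11 patches and brute-force matching as described; the function names are ours):

      import numpy as np

      def extract_patches(img, points, r=5):
          # Cut a (2r+1)x(2r+1) square around each (x, y) point and flatten
          # it into a vector; points too close to the border are skipped.
          patches, kept = [], []
          for x, y in points:
              if r <= x < img.shape[1] - r and r <= y < img.shape[0] - r:
                  patches.append(img[y-r:y+r+1, x-r:x+r+1].astype(float).ravel())
                  kept.append((x, y))
          return np.array(patches), kept

      def match(patches1, patches2):
          # For each patch from image 1, the index of the patch in image 2
          # with the smallest norm of the difference (compares every pair).
          dists = np.linalg.norm(patches1[:, None, :] - patches2[None, :, :], axis=2)
          return dists.argmin(axis=1)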

  2. Wang Hai says:

    What’s the major difference between using non-maximal suppression and not using it? Which gives better results? Second question: can a threshold of 0 be used if the image intensity is low? Thanks.

    • edrosten says:

      The corner strength in FAST (and all other corner detectors) tends to be high in a region around a corner. E.g. consider a FAST corner which is a light spot on a black background. If the profile through the spot looks something like

      0 0 0 0 200 255 200 0 0 0

      then FAST strength will be high on the pixels with values 200 and 255. Nonmaximal suppression removes all but the highest ones in a small region, so only the corner centred on 255 will be kept.

      You usually want nonmaximal suppression because the non-maximal corners tend to provide little or no extra information and tend to be less stable.
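
      As a toy 1D illustration in Python of suppressing the non-maximal responses along that profile:

      # Corner strength along the profile above: high on the 200s and 255.
      strength = [0, 0, 0, 0, 200, 255, 200, 0, 0, 0]

      # Keep only strict local maxima relative to their neighbours.
      keep = [i for i in range(1, len(strength) - 1)
              if strength[i] > strength[i - 1] and strength[i] > strength[i + 1]]
      print(keep)   # [5] -- only the pixel with value 255 survives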

      It will depend on your application though. If in doubt, try both and see which works better.

  3. YK says:

    In FAST-9 you are detecting for a segment of at least 9 contiguous pixels. You also published a machine learned FAST-ER detector, and I was interested to see what your training arrived at (in terms of the decision tree). In other words, I’m quite curious as to what specifically makes it a more repeatable detector compared to FAST-9, and if there is an equivalent intuitive way of describing it (such as 9 contiguous pixels of different intensity around the circumference compared to the centre pixel).

    • edrosten says:

      You can download the decision tree in the FAST-ER source code distribution and take a look. The tree is very large however. I do not believe that there is an obvious intuitive explanation as to how it works. I think decision trees are much like other machine learning systems (SVMs, Neural Networks) in that they essentially create a black box which partitions up the high dimensional space of inputs.

  4. Shubham Sharma says:

    I would like to implement the FAST code in C# .NET 4.0. Is the data structure for the input image the same, or will it need some sort of conversion?

    • edrosten says:

      There are several ways you can go about implementing it in C#. If you can use some unmanaged code, then you may be able to link against the pure C implementation as-is.

      Probably a better choice is to generate some native C# code. The FAST-ER source code comes with some ready-made trees in a language-neutral format, and some scripts to turn them into C, C++, MATLAB and Python code. It should be straightforward to modify one of the scripts to generate C# code instead.

      The scripts are written in a mixture of GAWK and BASH. Since you’re using C#, you’re probably using Windows: you can get GAWK and BASH for Windows from either the Cygwin or MINGW projects.

      It should also be easy to rewrite the scripts in another language. The language-neutral trees are in a very simple format designed for easy conversion to new languages. The conversion can be done by simple textual search and replace, since the trees are essentially a list of if-else statements.

  5. MAN says:

    In your code (e.g. fast_9.cpp), the decision tree has already been converted into C code; it is a very long string of nested if-then-else statements. How can I make this? What is the machine learning behind it? Please explain the process to me.

    • edrosten says:

      It depends if you want to create code from a tree, or make a whole new tree.

      Either way, you will want the FAST-ER source code.

      Pre-made trees which are not yet converted into code are available. Look for the file “FAST_trees.tar.bz2” and untar it. The raw (unconverted) trees have the suffix .ftree. They contain trees in a language neutral format which should be easy to convert into source code using your language of choice. I’ve used shell scripts to perform the conversion. See fast_tree_to_CXX as an example.

      If you want to make your own trees from scratch, then you need to use the learn_fast_tree program. That takes as input a list of extracted features. You can generate all features for fast-N using fast_N_features to ensure your tree is exact. I have a program to extract fast-N features from images, but it isn’t currently in the distribution. Let me know if you need it and I will add it.

      Let me know if you need any more information.

  6. Vivek Anand says:

    You consider points which are approximately 3 pixels away from the point being tested to find out if it’s a feature or not. Will the radius of 3 pixels work for all image sizes, or do we need to change it based on the image size?

    • edrosten says:

      The radius you need depends on the scale of the features, rather than the size of the image. If the features are very blurry, then you will need a bigger ring. The easiest and most efficient way to do this is to subsample the image, e.g. by taking 2×2 squares, and averaging the pixels inside to make a single output pixel.
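
      A small NumPy sketch of that 2×2 averaging subsample (naming is ours):

      import numpy as np

      def half_sample(img):
          # Average each 2x2 block into one output pixel (odd-sized images
          # are cropped to even dimensions first).
          h, w = img.shape[0] & ~1, img.shape[1] & ~1
          img = img[:h, :w].astype(np.uint16)
          return ((img[0::2, 0::2] + img[0::2, 1::2] +
                   img[1::2, 0::2] + img[1::2, 1::2]) // 4).astype(np.uint8)

      Each application halves the feature scale, so you can run the detector on a small pyramid of such images.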

  7. Alex Michael says:

    I just ported FAST-12 to Java for my Master’s thesis. I have a question though: do we have to convert the input image to grayscale before running FAST on it, or can we do it on a coloured image by checking each channel separately?

    Thanks.

    • edrosten says:

      So far, I have only used FAST on greyscale images. The proper way is to convert to grey by using the CIE weightings. The easiest/quickest way is to use the green channel, which is not a bad approximation of the CIE weightings.

      It is possible to run it on every channel separately, but the colour reproduction on most cameras is quite poor and is heavily biased in favour of green, anyway.
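
      For illustration, a NumPy sketch of both options, assuming an RGB channel order; the weights here are the Rec. 709 luminance coefficients, which is what we take the CIE weightings to mean:

      import numpy as np

      def to_grey(rgb):
          # Weighted channel sum; note how heavily green is weighted, which
          # is why the green channel alone is a fair approximation.
          return (0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1]
                  + 0.0722 * rgb[..., 2]).astype(np.uint8)

      def to_grey_quick(rgb):
          return rgb[..., 1]   # the easiest/quickest way: just green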

      Would you be interested in releasing the Java code? I can make it available on the FAST page if you are interested.

      • Alex Michael says:

        Hi,

        Thanks for your response. I will look into CIE weightings.

        I am interested in releasing the code, but it is not very mature yet and it lacks the non-maximal suppression step. When it is fully ported and optimised, I will send you an email about it. Is that OK?

        Alex

        • edrosten says:

          There is some non-maximal suppression code packaged with the FAST corner detector. It should port to Java relatively easily because almost all of the work is done using array indices, rather than pointers.

      • Rob Z says:

        I would really be interested in the Java code. I am building an open source image feature extraction library in java.
        Thanks!

  8. Guy says:

    Hi,

    In the “Fusing Points and Lines for High Performance Tracking” paper, you mentioned that the test to check “if the intensities of 12 contiguous pixels are all above or all below the intensity of the center pixel” can be optimized by examining pixels 1, 9, 5 and 13. However, the auto-generated code does not implement this optimization. I was just wondering if there was a reason behind that.

    Regards
    Guy

    • edrosten says:

      The ID3 algorithm is better at finding a good sequence of tests than I am. The four tests (1, 9, 5 and 13) are quite good, but there are better choices. Also, they only work for N=12. The ID3 algorithm is used to automatically learn an ordering of the tests.

      There is a visualisation of the tests in the following presentation:

      http://mi.eng.cam.ac.uk/~er258/work/rosten_2008_faster_presentation.pdf

      Slides 40–52 show you the ordering of the tests, assuming that the pixels tested are all significantly brighter than the centre.

      Slides 53–56 show what happens if the first pixel tested is too similar, or the first is bright but the second is too similar, or the first two are bright and the third is too similar, etc.

  9. A says:

    Is the Python code released on 2010-09-17 a learned detector, or do I have to train it first? I read that you have learned FAST detectors but am unsure which ones to download.

    Thanks

    • edrosten says:

      The python code released on 2010-09-17 is a learned detector and should be ready to use as-is.

      Any file with a name like fast-X-src.tar.gz or fast-X-src.zip is a learned detector which is ready to use.

      The learning code is in fast-er-1.4.tar.gz, which also contains ready made detectors.

      • A says:

        Thanks for your fast reply :), and excuse my noob question!

      • Russel says:

        Is there an example of using the FAST corner detection from Python? I ran fast9.py with an image opened by OpenCV 2.2 in Python and it took a good few minutes to run. Below is the code:

        import cv
        import fast9 as fast

        greyimg = cv.imread('Capture.PNG', 2)
        corners = fast.detect(greyimg, 30, 1)
        print len(corners), "points found"
        cv.waitKey(0)

        • edrosten says:

          It looks like you’ve used it correctly.

          Currently FAST in python is implemented in pure python, so it will be very slow, since python is rather slow at loops compared to C.

          One option is to use cython instead of python. I think that it will be a question of adding a few cdef’s to the FAST python code. If you get that working, please let me know, and I’ll add it to the source code.

          Another option is to use the FAST detector which is now in OpenCV.

          • Russel says:

            The only problem is that there does not seem to be a python version of the FAST detector in OpenCV 2.2. I will investigate cython and let you know if I can work it out.

  10. Russel says:

    Is there an example of using the FAST corner detection from Python? I ran fast9.py with an image opened by OpenCV 2.2 in Python and it took a good few minutes to run. Below is the code:

    import cv
    import fast9 as fast

    greyimg = cv.imread('Capture.PNG', 2)
    corners = fast.detect(greyimg, 30, 1)
    print len(corners), "points found"
    cv.waitKey(0)

    • edrosten says:

      The Python code implements the FAST algorithm, but it won’t run especially quickly, because Python is much less efficient than a compiled language for this type of code.

      If you want the corner detection to run quickly then you will need to use a compiled language. I expect the simplest way to do that is to convert the Python code into Pyrex by putting cdef’s in front of the loop indices.

  11. abdelwahed says:

    Is it possible to use FAST features for hand shape classification?

    • edrosten says:

      If there is a hand shape classification algorithm which uses corners then you could probably replace the corner detector with FAST. However, FAST will only make up a small part (if any) of a hand shape classification algorithm.

  12. J05HYYY says:

    Hello, I see that FAST has been recently added to OpenCV but unfortunately I can’t find any documentation on how to use it! When you merged the code, did you add a sample program too? If so – I would like to know the name! If not could you send me some code or alternatively, point me in the right direction on how to use this magical tool?

    Thanks for all your hard work … you truly are a wizard,

    Josh

    • edrosten says:

      There is a sample program in the OpenCV tests directory:

      tests/cv/src/fast.cpp

      which makes use of the FAST detector. This is different from the test program I submitted, since the submitted code matches the OpenCV 1.x style but the OpenCV authors updated the FAST code to match the rather better OpenCV 2.x style.

  13. JongSeok Lee says:

    Can I get a revised version of the FAST corner detection algorithm which uses only a 3×3 mask?
    -1 0 1
    -1 0 1
    -1 0 1
    I tried to change your source code, but it was very difficult to do :(

    • JongSeok Lee says:

      P.S. I’m using MATLAB.

    • edrosten says:

      I’m not entirely sure I understand the question. What would a corner look like?

      You won’t be able to modify the FAST code, since it is machine generated. However, it is possible to generate your own code by training FAST for different mask sizes.

  14. Bizman says:

    Hi, I have noticed the FAST iPhone app by Success Software Solutions. Has this code been integrated into other apps or other commercial uses? Are there any restrictions on doing so?

    • edrosten says:

      The FAST corner detector has certainly been used in a number of commercial ventures. The FAST detector itself is available under the BSD license which places no restrictions on usage. Licensing details of the Success Labs port are available on the Success Labs page.

  15. MJ says:

    Hi,
    I am testing the source code of the iPhone corner detection app. I use Xcode 4.2 and get the corner points as output. I would like to have the image plus the dots, but could not figure out how to do that (in other words, the option “Camera” is always off). Can you help with that?
    Thanks
    MJ

  16. Chih-Hung Pan says:

    Dear Dr. Rosten,

    Nice to meet you. I am a graduate student in Taiwan.

    I deeply appreciate your wonderful method, FAST.

    I tried the MATLAB version. It is good.

    But now, I have to implement my method, including FAST, in C++.

    Therefore, I call FAST in the OpenCV library.

    But the arguments are hard to understand. Do you have some example code?

    I am eager for the help. Thanks so much.

    I hope this is not too much of a disturbance.

    Best Regards,

    ————————————–
    潘 志 宏 (Peter Pan)Chih-Hung Pan

    • edrosten says:

      The function prototype is:

      void FAST(const Mat& image, vector<KeyPoint>& keypoints, int threshold, bool nonmaxSuppression=true)

      You need to provide an OpenCV image (for instance loaded from a file) as the first argument.

      The second argument returns the list of detected KeyPoint structs.

      For the threshold, you will need to pick a number that gives a reasonable number of corners (e.g. 500–2000 for a 640×480 image). This will depend on the type of image: for a video it may be as low as 20; for a high-contrast photograph, perhaps as high as 100.

  17. Kang-Kook Kong says:

    Hi, I am interested in FAST-ER

    I ran ‘./configure && make’ in fast-er-1.4, but I got the error message:

    checking libCVD… yes
    checking for dgetri_ in -lcvd… no
    configure: error: libCVD must be compiled with lapack support.

    Can you help me?

    • edrosten says:

      Do you have the LAPACK development libraries installed on your machine? If so, libCVD should pick them out automatically. Try the following:

      1. Check the configure output from libcvd. If it doesn’t find LAPACK, then install the LAPACK development libraries.

      2. Check the built libcvd using ldd /usr/local/lib/libcvd.so and see if it has a reference to lapack. If it’s not there, then libCVD has failed to pick up LAPACK properly. If 1 is OK and 2 isn’t, then let me know.

      3. As a workaround, run export LDFLAGS=-llapack before running ./configure to get LAPACK manually.

    • dorian says:

      Hi,
      I’m running into the same problem when compiling FAST-ER. I have liblapack-dev installed via Ubuntu’s package manager, but I’ve also tried installing lapack manually and exporting the LDFLAGS variable.

      I get this when configuring libCVD:
      checking if Accelerate framework is needed for LAPACK…
      checking for dgesvd_… no
      checking for dgesvd_ in -llapack… yes
      so I guess it is finding the lapack library. But later, ldd libcvd.so doesn’t return any link to liblapack.

      In the end, I get the same error as Kang. Any clue? Thanks!

      • oscar says:

        Hi,
        I also have this problem and do not know how to fix it. I have already installed liblapack, but I do not know why ldd libcvd.so doesn’t return any links to liblapack.
        OS: Ubuntu 12.04
        libCVD: libcvd-20120202

      • RogerKing says:

        I also have this problem. I found that LAPACK is already installed and I can find it, but when I run ldd libcvd.so I can’t find any reference to lapack…

  18. Adam Skubel says:

    First, I must express my appreciation for this algorithm. It truly is fast: an average execution time of 20 ms for an 800×480 image on a modern smartphone, and it doesn’t even need a binarized image.

    Questions:

    1. Does FAST store the type of corner it finds? Outer corners (where majority of pixels are bright) versus inner corners (where majority of pixels are dark). If it does store this, can this be accessed in the OpenCV version? If it doesn’t store this, would it be a significant change for someone without prior knowledge of the FAST code (i.e. myself) to make?

    2. In the KeyPoint structure returned from FAST feature detection, there are a number of fields, but the only one that changes besides the location is “response”. Can you tell me what the value of this field means in the context of FAST? The values were larger for corners farther from the camera, and the range (in my tests) was 10 to 30.

    Thank you.

    • edrosten says:

      1. FAST doesn’t currently store the type of corner. It is certainly possible, though the current code isn’t set up to do it. I think it would be possible as a hack by post-processing the FAST trees. There are copies of the trees in the fast-er codebase which are designed to make postprocessing as easy as possible.

      Probably the easiest way is to examine the ring of 16 pixels after detecting the corner. Count the number of pixels >= corner + threshold and the number of pixels <= corner - threshold. One will dominate (since it is a corner) and that will tell you which type it is (see the sketch below). The cost should be tiny compared to the cost of doing corner detection.

      2. The response is the highest threshold at which the points are still detected as corners. Ones with a higher response are stronger in that they will likely be detected more reliably under adverse conditions.
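
      A Python sketch of the ring-counting post-processing from point 1, using the 16 offsets of the radius-3 circle listed in the paper and source code (‘outer’/‘inner’ follow the terminology of the question):

      RING = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
              (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

      def corner_type(img, x, y, t):
          # Count ring pixels much brighter / much darker than the centre;
          # one count will dominate, giving the polarity of the corner.
          c = int(img[y, x])
          brighter = sum(int(img[y + dy, x + dx]) >= c + t for dx, dy in RING)
          darker = sum(int(img[y + dy, x + dx]) <= c - t for dx, dy in RING)
          return 'outer' if brighter > darker else 'inner'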

  19. Vinny says:

    Hey

    I just wanted to tell you that you are my life saver. I just used your code for my final year project and it works like a charm. Actually I was on the verge of failing my project and was very worried about it until I saw your code. Thanks a lot. :’)

    But the thing is, I do not understand the program at all. At least, could you give me the gist of what the code actually does? Because I have to explain it in my report and also to my professor.

    Your help is greatly appreciated! :)

    Vinny

    • edrosten says:

      I’m glad it was so helpful.

      The code is almost impossible to read because it is machine generated.

      What it does is tell you if there is a contiguous arc of 9 or more pixels which are either all much brighter or all much darker than the centre pixel.

      Instead of looking at pixels in the circle in sequence, the program uses a decision tree in order to reject non-corners as quickly as possible.

  20. Vinny says:

    By the way, with regard to my message above, I am referring to FAST version 1.

    Thanks
    Vinny

  21. Diniden says:

    Phenomenal algorithm.

    I’m trying to compile the faster.cc decision tree. Any estimate of how long the compile is ‘supposed’ to take? I started the compile a while ago and am still waiting for it to finish (it’s been a few hours now). Just want to make sure it didn’t hang on me.

    Thanks!

    • Diniden says:

      And a follow-up on this: the compiler was indeed hung. It turns out the FAST-ER decision tree is much too large for the arm7 compiler; it gets a branch-out-of-range error. If there are any options to help fix this I’d appreciate it, but I suspect the architecture just can’t handle it.

      • edrosten says:

        There’s no direct way that I know of to solve the problem. My suggestion is that you try FAST-ER on a PC to see if the extra complexity is worth the effort. It is possible to run an interpreted version of FAST-ER, but it’s probably under 1/10 of the speed.

  22. adil says:

    Hi Mr. Rosten,

    I’m returning to the question of descriptors for the points detected by FAST: is there anything new on the extraction of a reliable descriptor? The proposal made at the beginning (taking the 11×11 pixel neighbourhood as a descriptor vector) is likely to have low robustness against the usual changes (rotation, brightness, change of scale). Is it possible to use the quarter arc of the circle (16 pixels) to compute the difference in intensity between this arc and the rest of the circle?
    Thanks for your commitment.

  23. ary says:

    Just curious… I’ve observed in the paper that when considering pixels on the circle the test order goes darker-similar-brighter, while the implementation provided goes brighter-darker-similar. Does it make any difference at all? Thanks!

  24. Hello Dr. Rosten,
    I actually need to use the FAST detector in OpenCV, and I want to extract my own descriptor using the 16-pixel ring which FAST uses as each interest point is detected. I need to do this in an efficient way. In addition, I don’t have access to the .cpp files in OpenCV, just the .hpp files.
    Your help on how I can make this change and define my own descriptor would be much appreciated.
    Thanks

    • edrosten says:

      The cpp files should be in the OpenCV distribution. You can download this from the OpenCV website. Alternatively, you can get the source code for FAST from the FAST web site.

      The FAST code gives you an array of x-y locations. You can simply copy the 16 pixels around each x-y location into a 16 element array to form your descriptor. A description of the offsets you need for the 16 pixels is in the paper and source code of FAST.
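
      For example, a Python sketch of such a descriptor (the offsets are the radius-3 ring from the paper and source code):

      RING = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
              (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

      def ring_descriptor(img, x, y):
          # Copy the 16 circle pixels around the corner into a 16-element vector.
          return [int(img[y + dy, x + dx]) for dx, dy in RING]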

  25. Perikles Rammos says:

    Hi, and thanks for this very nice algorithm.
    My questions are mainly on the construction of the decision tree from the training images.
    1. Is the minimization of the entropy function used only for creating the decision tree?
    2. Does the training set need to be annotated? (I guess not, since all 16 pixels are checked.)
    3. I assume that with very few training samples (corners), a very simple decision tree will be created. When new corners are trained, how is the tree expanded? (Maybe the answer is in the paper; I haven’t been able to figure it out yet.)
    Thanks.

    • edrosten says:

      To answer your questions:

      1. Yes.
      2. The training set does need to be annotated. There are two ways of doing this:
        1. Classify points in an image using a slow, simple algorithm that checks all 16 pixels.
        2. Generate all possible corners and non-corners. Since each pixel can have 3 states, there are 3^16 ≈ 43 million possible corners and non-corners.

        I actually use a combination of both methods. I generate all possible corners to get complete coverage of the data. I then extract corners and non-corners from a number of images, so that the pixel statistics are represented in the detector.

        Bear in mind that by the time it gets as far as training, each pixel is either b (brighter), d (darker) or s (similar) relative to the centre, so corner candidates can be thought of as a list of 16 elements like bsdbbddbsdbsbdsb (see the sketch after this reply).

        There is a program in the FAST-ER distribution to generate exactly one of each type of corner and non-corner, along with the classification.

      3. Yes, if you have too few training samples, then the tree will not be representative of the segment test criterion. The tree is never expanded, because the tree is not trained incrementally. I collect examples of corners first, then train the tree, then emit source code and then use the tree. In principle incremental training could be done, but I’m not sure how useful that would be.
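
      As a sketch of the representation in item 2, in Python: each training example is a 16-character string over b/d/s, and the FAST-9 segment test classifies it (all names here are ours):

      from itertools import product

      def is_corner_bds(example, n=9):
          # A corner needs a contiguous, wrap-around run of at least n
          # pixels that are all 'b' (brighter) or all 'd' (darker).
          for state in 'bd':
              flags = [ch == state for ch in example]
              run, best = 0, 0
              for f in flags + flags:   # doubled to catch wrap-around arcs
                  run = run + 1 if f else 0
                  best = max(best, run)
              if best >= n:
                  return True
          return False

      print(is_corner_bds('bsdbbddbsdbsbdsb'))        # False: longest run is 2
      examples = (''.join(p) for p in product('bds', repeat=16))  # all 3^16 of them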

  28. David García Sánchez says:

    Hi! (My comment is rather long… sorry)

    I’m writing my Master’s thesis in Computer Science and I have several questions about the precompiled program and the source code you provide on the website; I’ve been studying your paper and code for several weeks now.

    My thesis is about reproducing the behaviour of the FAST algorithm in Hardware synthesis (using an FPGA) in order to accelerate and parallelize the basic operation: decide whether a pixel p is a feature or not.

    Here are my questions, given the same image, the same threshold, and FAST with nonmaximal suppression:

    - Your C source code displays a result set “A”.
    - Your Python source code displays a different* result set “B”.
    - Your precompiled program (Linux_x86-64) displays a different* result set “C”.
    - My conceptual implementation in C (which is a parallel implementation of the Hardware architecture concept) sometimes (small images) displays the same result set “C” as the precompiled program, and sometimes (big images) a different* result set “D”.

    * when I say “different” or “differences” here I refer to the number of features detected, but every result set shows the same pattern in big images.

    So, all the result sets are close to each other, but they’re not the same for the same image…

    Another question I have about the C source code concerns timing:
    I’ve prepared a main function to launch the fast9 function and mine, in order to measure their times, and I found something rather weird: if I test both functions with a small raw matrix of bytes (100 bytes, a 10×10 pixel grayscale image) hard-coded in the source, the fast9 function has a time T1 and my function a time T2, 4 times faster than T1.
    BUT if I test both functions with the same 10×10 grayscale image read from a file and stored dynamically with malloc calls, my function still reports a T2 execution time (approx.) but fast9 has a time T3 of the same order as T2, and sometimes faster than T2… why is this happening if the source code is the same and the image is the same? The only difference is the use of static or dynamic memory.

    I hope I explained myself clearly enough, that these are not too many questions and that you have the time to answer me soon.

    Thanks so much in advance for your time.

    • edrosten says:

      There are a number of reasons for the differences:

      The precompiled binary uses an old version of the algorithm. The old version is not quite exact (incomplete coverage of data). Also, it uses a rather ad-hoc method for scoring which will affect the nonmaximal suppression.

      I haven’t thoroughly tested the Python source code, but it seemed bug free. The C code from version 2 onwards should be correct. You could try doing a diff on the feature sets to see what the differences are.

      That’s a very strange result about the static and dynamic memory. A 10×10 image is very small. Have you tried it on something much larger (e.g. 1000×1000) to see if the results are similar in nature?

  29. Richard Smith says:

    Please can you tell me how exactly to run the C code in Eclipse? I’m in a bit of a tight spot and really need this code to run!

    • edrosten says:

      I’m not familiar with eclipse, but it’s standard C code so you should just be able to add it and have it compile. Once it’s added you just need to supply a pointer to the image data and the image size.

  30. mmn says:

    Hello,
    can you please explain the working of the FAST corner detection algorithm in simple words? I’m finding it difficult to understand.

    • edrosten says:

      FAST asks the following question:

      Is there an arc of 9 or more pixels which are all much brighter than the centre, or all much darker?

      If so, then the pixel is a corner.

      The remainder of the FAST algorithm is involved in making that test much faster than a straightforward implementation.
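
      For reference, a direct but slow Python version of that test (the released decision-tree code answers the same question much faster; the ring offsets are from the paper and source code):

      RING = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
              (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

      def is_corner(img, x, y, t, n=9):
          # True if some contiguous arc of >= n ring pixels is all much
          # brighter (>= c + t) or all much darker (<= c - t) than the centre.
          c = int(img[y, x])
          ring = [int(img[y + dy, x + dx]) for dx, dy in RING]
          for flags in ([p >= c + t for p in ring], [p <= c - t for p in ring]):
              run, best = 0, 0
              for f in flags + flags:   # doubled to catch wrap-around arcs
                  run = run + 1 if f else 0
                  best = max(best, run)
              if best >= n:
                  return True
          return False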

      • mmn says:

        Thank you.

      • PaC says:

        Hi,

        I read the “Fusing Points…” paper multiple times and also whatever I could find on the internet about FAST.

        Still, I couldn’t find anywhere the answer to the following question: “WHY is a pixel a corner if there is an arc of 9 or more pixels brighter or darker than the pixel?”

        Can you please clear this out for me?

        • edrosten says:

          Intuitively, if you have a black region next to a white region separated by anything sharper than a straight line, then the corner point will be a FAST corner. FAST finds the tips of wedge-shaped things.

          In a more general sense, anything that can be localized in 2D is a candidate for a corner, and FAST fits that criterion.

          In particular, FAST-12 was originally chosen (the 90 degree version) because it could be hand written to be efficient: testing the 4 compass points by hand. FAST-9 came out of a generalization of that. After moving to a decision tree, there was no need to use 12 pixels for a fast test. It turned out 9 was faster and more repeatable.

  31. jin says:

    Hi, I ran ‘./configure && make’ in fast-er-1.5, but I got the error message:
    configure: creating ./config.status
    config.status: creating Makefile
    g++ -g -O2 -Wall -Wextra -W -ansi -pedantic -DUSESUSAN -DJIT -DNDEBUG -c -o warp_to_png.o warp_to_png.cc
    warp_to_png.cc:50: error: ‘TooN’ is not a namespace-name
    warp_to_png.cc:50: error: expected namespace-name before ‘;’ token
    make: *** [warp_to_png.o] Error 1

    Can you help me? Thanks!

  32. Lloyd says:

    Hi,

    I am busy writing the FAST detector code into an OpenGL shader (GLSL) so that I can run it as part of my SLAM pipeline on an iPhone for my Master’s. I have got the basic 12-point FAST detector working, but there are thousands of features in tight clusters, so I want to compute a score for each corner and then run a second shader to extract only the top feature in every 10×10 pixel area. What method would you suggest for computing the score of a corner?

    Later I would like to add tracking of corners; what method would you recommend for tracking corners reliably?

  33. edrosten says:

    The scoring criterion I use now is “what is the highest threshold at which the point is still a corner?”. This has the advantage that the score is directly related to the threshold, and is monotonic, etc. It also yields a tiny but measurable improvement in repeatability over the old ad-hoc scoring system.

    There are two ways this can be computed that I know of, and both involve running the detector repeatedly. The first is straight-up binary search over the threshold. The second is a bit more involved: for each if statement (is pixel a brighter than pixel b?), you can work out by how much the test passes. For a given detection, you can then find the minimum of those margins over all tests taken, add that to the threshold, and re-detect.

    I would imagine that binary search would be marginally easier, since you can (with sufficient mangling of comparison and multiplication) do the tests without taking any branches, and there’s less stuff to modify like that.

    Of course neither of those are especially helpful since it is back to front compared to the algorithm you want. I haven’t devised an algorithm yet to compute the score without effectively doing repeated detections at different thresholds.
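
    As a sketch of the binary-search option in Python, assuming some is_corner(img, x, y, t) predicate (e.g. the segment test):

    def corner_score(img, x, y, t_detect, is_corner, t_max=255):
        # The score is the highest threshold at which the point is still
        # detected as a corner; it is known to be one at t_detect.
        lo, hi = t_detect, t_max
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if is_corner(img, x, y, mid):
                lo = mid          # still a corner: push the threshold up
            else:
                hi = mid - 1
        return lo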

    When it comes to tracking corners, that largely depends on the task: are you trying to track some large object or a collection of smaller ones? Do you want to track from corner to corner, or do you want to just use the corners as a starting point to run something like a KLT tracker?

    • Lloyd says:

      Thanks for the insights. I have a few ideas on how to compute the score for each corner and then, in a second pass, discard the weaker corners. I will post a link to the code once it works.

      The tracking is to be used to replace feature detection and matching in a sparse structure-from-motion pipeline, so I will probably use something along the lines of a KLT tracker.

  34. Lyn says:

    Hi there,

    I would like to try FAST on Linux (Ubuntu 11.10) and I’ve downloaded fast-Linux-i686. I’m wondering how I can use it. I’ve put a JPG image into the folder and tried to run “fast 1.jpg corner.jpg”, but it says there is no such command.

    Cheers,
    Lyn

    • edrosten says:

      I expect the problem is that you do not have the current directory in your PATH.

      After untarring, change into the new directory, then type (for the 32 bit version):

      ./fast-Linux-i686 1.jpg > result.ppm

      Then view result.ppm in an image viewer.

  35. Etto Gawa says:

    Hi,
    I want to ask about the FAST type in Emgu CV, because I can’t find any information about it. Thanks!

    • edrosten says:

      I believe that EmguCV is a C# wrapper of OpenCV. I have never used EmguCV, so you’d be better off asking any questions about that on the EmguCV mailing list or forum.

      Apart from the specific API, the usage of FAST should be the same as in OpenCV and the downloadable FAST code.

  36. olivier says:

    Hi,
    I have read, though I do not remember where, that an average performance of 2.3 tests per pixel is achievable (after the learning stage). This figure takes into account that most pixels are not keypoints and so need only 2 tests to be rejected.
    Do you know how many nested tests, on average, are needed for a keypoint (one that is not rejected)?
    Thank you.

  37. edrosten says:

    The figure came from section 2.4 of this paper: http://www.edwardrosten.com/work/papers.html#rosten_2006_machine

    I haven’t actually tested what you’re asking about. In order to do this, you’d have to instrument the tree to count the number of tests performed, which is how I found the original figure.

  38. fahime says:

    Hi, dear Dr. Rosten,
    I used the fast9 MATLAB code. I want to know how I can assign scores to corners and choose some of them according to their score. Also, with the default threshold, different images give different numbers of corners; how should I set the threshold?

    In addition, I want to compare the results of FAST with other corner detectors such as Harris, SUSAN and Förstner, so I need to know how to set the thresholds for those detectors to obtain a reasonable comparison.
    Regards,
    Fahime

    • edrosten says:

      The score is returned as a second value. So use:

      [corners, scores] = fast9(image, threshold, 1);

      to get the scores.

      To choose the threshold, I would generally select a threshold to find a particular number of corners per image. For example, 500 corners in a 640×480 image would be appropriate.

      For comparison to other detectors I would certainly perform comparisons with a fixed number of corners per image. This is because the thresholds for the different detectors are in no way comparable. A threshold is essentially a parameter determining sparseness. Since that is rather indirect, it is best to compare the results at the same level of sparseness.

  39. Jim says:

    Hi, I’m working on my special project, called Image Mosaicing. I’m planning to implement FAST-ER for the corner detection part, but we may only use Java in the development. Can you suggest a good way to convert the provided source code to Java? I can’t convert the entire code by hand; or maybe there is a way I can use the code in Eclipse Juno? Thank you :)

    • edrosten says:

      There are several options.

      One is converting the C-code using search/replace. This is probably quite painful.

      A better option is probably to use the intermediate trees which are in the FAST-ER distribution. These contain the same information, but in a form which is easier to manipulate.

      The way that the Java code will work will depend on how the image is laid out in memory. Usually it will just be an array of bytes, though some care is required because images are generally unsigned but Java bytes are signed.

      If you follow up with some details of the image class, I may be able to offer more specific advice.

      • Jim says:

        Thank you, Ed, for the quick response. I don’t understand how your suggestion answers my question; should I make my own Java interpretation of the code by hand? How do the intermediate trees work? Thank you.

  40. Sejin Kangsong says:

    Hi, I have a question about the machine learning approach that enhances the FAST algorithm; I think I got stuck somewhere in the middle of the material. It’s going to be a long series of questions, so my apologies first.

    I think I have a clear picture of the previously proposed, so-called ‘segment test’ method and the several weaknesses it has. The problem is, when it comes to the decision tree construction that is said to fix those weaknesses, I’m failing to get the gist of it.

    According to the publication, Wikipedia and some references I found, the very first step (apart from the segment test over all pixels) of the machine learning approach is to partition the 16 pixels into 3 states, which are brighter, darker and similar in terms of intensity with respect to the corner candidate ‘p’. And then, it seems one pixel ‘x’ is chosen out of those 16 pixels, and the whole set of pixels ‘P’ (capital) is partitioned into those 3 states with respect to ‘x’.

    So, if I understood correctly, one of the 16 pixels has the most information about the centre ‘p’ being a corner or not, by dividing the whole set P correctly, because each of the 16 pixels has a different entropy value.

    And here come my questions.

    So, now we know which one of the 16 pixels has the most information about ‘p’ being a corner, and we can rank those 16 pixels by information gain. Then why bother to extend the decision tree? Should it stop there? From what I have understood, the whole purpose of building the decision tree is finding a set of pixels other than 1, 9, 5, 13, because those 4 pixels constrain the distribution of corner appearances. But if we already had 4 pixels other than them, haven’t we already solved our problem?

    What was the purpose of dividing the 16 pixels into 3 states? One of those, ‘x’, was going to partition the whole set P. Isn’t that meaningless?

    How can it be applied to an actual input image (not the training images)?

    So, basically, I failed to understand the whole thing. Could you please break it down for me?

    Sincerely, Sejin Kangsong

    • Sejin Kangsong says:

      I get it now: ID3, the decision tree, the whole concept of it. A truly brilliant algorithm you built, sir; cheers for that. Now that I understand it (even if only briefly), I can also see how naive my initial question was.

      Anyways, thanks for developing this awesome algorithm.
      Sincerely, Sejin Kangsong

  41. Selma says:

    Hello,
    I’m currently writing a multi-threaded version of FAST and I would like to reuse your code.
    A simple question: does your code require a padded image as input or not? I have tried so far with a non-padded image, and the interest points I get are all concentrated in the upper part of the image (no matter what the image is), so I thought this might be due to the lack of padding.
    Thanks,
    Regards,
    Selma

    • edrosten says:

      The FAST code from 2.0 onwards (the latest version is here: http://www.edwardrosten.com/work/fast-C-src-2.1.tar.gz) supports images with padding.

      The function prototype looks like this:

      xy* fast9_detect_nonmax(const byte* im, int xsize, int ysize, int stride, int b, int* ret_num_corners)

      xsize corresponds to the number of pixels horizontally in the image excluding padding.

      stride is the number of bytes per row including padding.

      If the position of the corners does not seem to relate well to anything in the image, then you might have the padding parameter set incorrectly.
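
      For illustration, a hypothetical Python ctypes binding of that prototype; it assumes the C sources have been built into libfast.so and that xy is a struct of two ints, as in fast.h:

      import ctypes

      class XY(ctypes.Structure):
          _fields_ = [('x', ctypes.c_int), ('y', ctypes.c_int)]

      lib = ctypes.CDLL('./libfast.so')              # assumed build of the C code
      lib.fast9_detect_nonmax.restype = ctypes.POINTER(XY)

      def detect(img, threshold):
          # img is a 2D numpy uint8 array; strides[0] is the bytes per row,
          # which is exactly the stride parameter, so row padding is handled.
          ysize, xsize = img.shape
          n = ctypes.c_int()
          ret = lib.fast9_detect_nonmax(
              img.ctypes.data_as(ctypes.POINTER(ctypes.c_ubyte)),
              xsize, ysize, img.strides[0], threshold, ctypes.byref(n))
          # (In real use, free the returned buffer with the C library's free.)
          return [(ret[i].x, ret[i].y) for i in range(n.value)]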

  42. Slavi Ivanov says:

    Hello

    I want to profile your source code (using gprof, for instance) in order to measure the time needed by each of the functions to complete its task(s). So I have downloaded fast-C-src-2.1.tar.gz, but when I try to compile it (using gcc) I receive an error:
    (.text+0x20): undefined reference to ‘main’
    collect2: ld returned 1 exit status

    Obviously the error occurs because there is no main function in your source code. As I am an inexperienced C programmer, I need your help: could you please provide a main function so that I can finish the profiling?

    Thank you in advance for your time!

    • edrosten says:

      Hi,

      The FAST code in fast-C-src is just code to perform the corner detection. In order to use the code, you will have to provide a main() function that loads images from somewhere and then provides them to the corner detection code.

      You may find this substantially easier in C++ using libCVD. Below is a minimal C++ program using libCVD to load an image called “test.png” and perform corner detection on it.

      #include <cvd/image_io.h>
      #include <cvd/fast_corner.h>
      
      #include <vector>
      
      int main()
      {
          // Load an image and detect FAST-9 corners (threshold 15)
          // with non-maximal suppression.
          CVD::Image<CVD::byte> image = CVD::img_load("test.png");
          std::vector<CVD::ImageRef> corners;
          CVD::fast_corner_detect_9_nonmax(image, corners, 15);
          //Do something with corners here.
      }
      
      • Slavi says:

        Hi again,

        I was asking because I saw that there is a precompiled version uploaded for Linux (fast-Linux-x86_64.tar.gz). That’s why I was thinking you have the main function and it would be easy for you to provide it. Anyway, thank you for your prompt support!

        Regards,
        Slavi

        • edrosten says:

          I see what you mean. Those have been up a while and I think the main function is built against a very old version of the library so it may not be of much use. I’ll see if I can dig it out though.

          The code I posted is more or less equivalent however.

          • slaviivanov says:

            Hello again Dr. Rosten,

            I’m trying to compile the application under Scientific Linux using GCC. Unfortunately, an error occurs: the application can’t find header files from libCVD, although the files actually exist. Have you tested the library on Scientific Linux? I suspect the problem comes from the fact that the application uses relative paths. Is there a quick fix for that?

            Thank you for your readiness to help me!
            Slavi

  43. Prashanth says:

    Hi Dr. Ed Rosten,
    I am using FAST-9 in a tracking application. I have implemented a brute-force version of FAST-9 which checks, for every pixel, whether 9 contiguous pixels out of the 16 around the arc satisfy the bright or dark condition, with no early-exit logic. I am comparing the corner points given by this method against your MATLAB implementation of the decision tree and I found some differences. One such point had centre = 89, and the 16 arc pixels in order are (97, 235, 235, 235, 199, 89, 79, 182, 93, 235, 235, 235, 235, 235, 235, 192). The threshold used was 80. This point doesn’t satisfy the 9-contiguous-arc condition for a threshold of 80, hence it is not detected as a corner in my brute-force implementation. However, the decision tree detects it as a corner.
    Could it possibly be a bug in the decision tree implementation?

    • edrosten says:

      Hi,

      Which version are you using?

      The original version didn’t actually have one of every example in the training set, so the decision tree is merely a very good approximation of the fast 9 function.

      In later versions, the training set was augmented with one of each type of feature, resulting in a perfect instance of the FAST-9 function.

      There is no practical difference in performance between the two versions, either in repeatability or speed.

