Please ask questions in the comments below.

Q: Where do I get FAST?


Q: Why is the code so hard to read?

A: It is machine generated. You aren’t supposed to read it 🙂

Q: How do I get N corners?

A: You have to adjust the threshold t. Some options:

  1. If N is too high, increase t; if N is too low, decrease t. Repeat until N is close enough.
  2. Perform a binary search over t to find the value which gives N corners.
  3. Set t low enough that N is too high, compute the corner score and sort the corners by score. Then keep the N with the highest scores.
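
The adjustment in options 1 and 2 can be sketched as a binary search over t. This is a hypothetical sketch: `count_corners` stands in for whatever detector call you are using (e.g. wrapping fast.detect), and is assumed to be non-increasing in t.

```python
def threshold_for_n_corners(count_corners, n, t_lo=0, t_hi=255):
    """Find the smallest threshold t giving at most n corners.

    count_corners: callable t -> number of corners detected at threshold t.
    Assumed monotonically non-increasing in t (true for FAST).
    """
    while t_lo < t_hi:
        t_mid = (t_lo + t_hi) // 2
        if count_corners(t_mid) > n:
            t_lo = t_mid + 1   # too many corners: raise the threshold
        else:
            t_hi = t_mid       # few enough corners: try a lower threshold
    return t_lo
```

In practice you would call this with something like `lambda t: len(fast.detect(img, t, 1))`.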

Q: Why is the Python/MATLAB code slow?

A: The FAST algorithm has been designed to work as quickly as possible and is therefore targeted strongly towards compiled languages. The code is provided for ease of use in interpreted languages, but it will not be especially quick. If you need it to run quickly you will need to interface to the C version.

Q: Where can I ask a question?

A: In the comments of this post.


194 thoughts on “FAST FAQ”

    1. FAST is just a system for detecting interest points.

      The type of matching that you want will depend on the application. A very simple scheme is to extract a small square of pixels (e.g. 11×11) from around each FAST interest point, as a vector. You can match two points by looking at the norm of the difference between the vectors. Then you compare every point in the first image to every point in the second to find the best match.
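
      A minimal sketch of that matching scheme in plain Python (the patch size and sum-of-squared-differences comparison follow the description above; the function names are made up for illustration):

```python
def extract_patch(img, x, y, r=5):
    """Flatten the (2r+1) x (2r+1) square around (x, y) into a vector.
    img is a list of rows of pixel values."""
    return [img[y + dy][x + dx] for dy in range(-r, r + 1)
                                for dx in range(-r, r + 1)]

def best_match(patch, candidates):
    """Index of the candidate with the smallest sum of squared differences
    (the squared norm of the difference described above)."""
    def ssd(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(range(len(candidates)), key=lambda i: ssd(patch, candidates[i]))
```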

      1. Hi Mr Rosten,

        I'm a student and I have a homework assignment about “operating logic, fundamentals, application areas and MATLAB codes of the FAST algorithm.” You are a master of FAST. Please HELP ME with documentation and MATLAB code, because I couldn't find that information anywhere from Turkey… 😦

  1. What’s the major difference between using non-maximal suppression and not using that? Which one has better result? The second question: can threshold 0 to be used if the image intensity is low? Thanks.

    1. The corner strength in FAST (and all other corner detectors) tends to be high in a region around a corner. E.g. consider a FAST corner which is a light spot on a black background. If the profile through the spot looks something like

      0 0 0 0 200 255 200 0 0 0

      then FAST strength will be high on the pixels with values 200 and 255. Nonmaximal suppression removes all but the highest ones in a small region, so only the corner centred on 255 will be kept.

      You usually want nonmaximal suppression because the non-maximal corners tend to provide little or no extra information and tend to be less stable.

      It will depend on your application though. If in doubt, try both and see which works better.

  2. In FAST-9 you are detecting for a segment of at least 9 contiguous pixels. You also published a machine learned FAST-ER detector, and I was interested to see what your training arrived at (in terms of the decision tree). In other words, I’m quite curious as to what specifically makes it a more repeatable detector compared to FAST-9, and if there is an equivalent intuitive way of describing it (such as 9 contiguous pixels of different intensity around the circumference compared to the centre pixel).

    1. You can download the decision tree in the FAST-ER source code distribution and take a look. The tree is very large however. I do not believe that there is an obvious intuitive explanation as to how it works. I think decision trees are much like other machine learning systems (SVMs, Neural Networks) in that they essentially create a black box which partitions up the high dimensional space of inputs.

  3. I would like to implement the FAST code in C# .NET 4.0. Is the data structure for the input image the same, or will it need some sort of conversion?

    1. There are several ways you can go about implementing it in C#. If you can use some unmanaged code, then you may be able to link against the pure C implementation as-is.

      Probably a better choice is to generate some native C# code. The FAST-ER source code comes with some ready-made trees in a language-neutral format, and some scripts to turn them into C, C++, MATLAB and Python. It should be straightforward to modify one of the scripts to generate C# code instead.

      The scripts are written in a mixture of GAWK and BASH. Since you’re using C#, you’re probably using Windows: you can get GAWK and BASH for Windows from either the Cygwin or MINGW projects.

      It should be easy to rewrite the scripts in another language. The language-neutral trees are in a very simple format designed for easy conversion to new languages. The conversion can be done by a simple textual search and replace, since the trees are essentially a list of if-else statements.

  4. In your code (e.g. fast_9.cpp), the decision tree has already been converted into C code. It is a very long string of nested if-then-else statements. How was it made? What is the machine learning involved? Please explain the process to me.

    1. It depends if you want to create code from a tree, or make a whole new tree.

      Either way, you will want the FAST-ER source code.

      Pre-made trees which are not yet converted into code are available. Look for the file “FAST_trees.tar.bz2” and untar it. The raw (unconverted) trees have the suffix .ftree. They contain trees in a language neutral format which should be easy to convert into source code using your language of choice. I’ve used shell scripts to perform the conversion. See fast_tree_to_CXX as an example.

      If you want to make your own trees from scratch, then you need to use the learn_fast_tree program. That takes as input a list of extracted features. You can generate all features for fast-N using fast_N_features to ensure your tree is exact. I have a program to extract fast-N features from images, but it isn’t currently in the distribution. Let me know if you need it and I will add it.

      Let me know if you need any more information.

            1. That link won't work: it's a dead link. Please use the link I gave instead. Can you tell me where you found the link? I'd like to fix it.

              The FAST-ER code contains code to generate the 16 pixel FAST-9 detector. In principle you could use the code to generate a 20 pixel FAST detector.

                1. Could you describe in a little more detail what you’re trying to achieve? Are you going for a 20 pixel ring? If so, where are the pixel locations, and also, how many contiguous pixels do you require?

                  1. I want to stitch two images with a cascade of FAST corner detectors in OpenCV, with 12, 16 and 20 contiguous-pixel rings

                    1. Image stitching is way beyond the scope of this FAQ, I’m afraid. FAST is just the very first step in a quite long pipeline. I believe OpenCV has image stitching built in with a high level class providing the functionality that you want. I’ve never used it so I can’t provide advice on that specifically.

                      With regards to FAST, I don’t believe OpenCV goes up as high as a 20 pixel ring. I think it only goes as far as 9 contiguous pixels in a 16 pixel ring.

  5. You consider points which are approximately 3 pixels away from the point being tested to find out if it's a feature or not. Will the radius of 3 pixels work for all image sizes, or do we need to change it based on the image size?

    1. The radius you need depends on the scale of the features, rather than the size of the image. If the features are very blurry, then you will need a bigger ring. The easiest and most efficient way to do this is to subsample the image, e.g. by taking 2×2 squares, and averaging the pixels inside to make a single output pixel.
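
      The 2×2 averaging subsample described above might look like this in plain Python (a sketch, not the implementation in any particular library):

```python
def subsample(img):
    """Halve an image by averaging each 2x2 block into one output pixel.
    img is a list of rows of equal length; odd trailing rows/columns
    are dropped."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2 * y][2 * x] + img[2 * y][2 * x + 1] +
              img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) // 4
             for x in range(w)] for y in range(h)]
```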

  6. I just ported FAST-12 to Java for my Master's thesis. I have a question though: do we have to convert the input image to greyscale before running FAST on it, or can we do it on a coloured image by checking each channel separately?


    1. So far, I have only used FAST on greyscale images. The proper way is to convert to grey by using the CIE weightings. The easiest/quickest way is to use the green channel, which is not a bad approximation of the CIE weightings.

      It is possible to run it on every channel separately, but the colour reproduction on most cameras is quite poor and is heavily biased in favour of green, anyway.
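
      A sketch of the grey conversion mentioned above, in plain Python. Note the exact weights are an assumption: these are the common Rec. 601 luma weights, one standard choice of CIE-style weighting.

```python
def to_grey(rgb_img):
    """Convert rows of (r, g, b) tuples to greyscale with Rec. 601 weights.
    The quick approximation mentioned above is simply taking g instead."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_img]
```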

      Would you be interested in releasing the Java code? I can make it available on the FAST page if you are interested.

      1. Hi,

        Thanks for your response. I will look into CIE weightings.

        I am interested in releasing the code, but it is not very mature yet and it lacks the non-maximal suppression step. When it is fully ported and optimised, I will send you an email about it. Is that ok?


        1. There is some non-maximal suppression code packaged with the FAST corner detector. It should port to Java relatively easily because almost all of the work is done using array indices, rather than pointers.

      2. I would really be interested in the Java code. I am building an open source image feature extraction library in Java.

  7. Hi,

    In the “Fusing Points and Lines for High Performance Tracking” paper, you mentioned that the test to check “if the intensities of 12 contiguous pixels are all above or all below the intensity of the center pixel can be optimized by examining pixels 1, 9, 5 and 13”. However the auto-generated code does not implement this optimization. I was just wondering if there was a reason behind that.


    1. The ID3 algorithm is better at finding a good sequence of tests than I am. The four tests (1, 9, 5, 13) are quite good, but there are better choices. Also, they only work for N=12. The ID3 algorithm is used to automatically learn an ordering of the tests.

      There is a visualisation of the tests in the following presentation:

      Slides 40–52 show you the ordering of the tests, assuming that the pixels tested are all significantly brighter than the centre.

      Slides 53-56 show what happens if the first pixel tested is too similar, or the first is bright but the second is too similar, or the first two are bright and the third is too similar, etc…

      [Post edited to update stale URL]

          1. Thanks, it's very well explained. I am confused about why FAST-ER would have different repeatability than FAST-9 or FAST-12. I imagine FAST-ER uses machine learning to generate efficient if-else code for either FAST-9 or FAST-12, so the results should be identical but computationally faster, unless the difference is because of a phase of non-max suppression. Thanks,

            1. The standard FAST uses machine learning to learn fast detectors for FAST-9 and FAST-12.

              FAST-ER is an extension which uses more pixels and attempts to explicitly optimize the repeatability.

      1. Why do the four tests work only for N=12? From what I understand, the ID3 algorithm gets the best possible ordering of tests, so the four tests will not improve the processing speed. Am I correct?

        1. You are correct. The original version of FAST used N=12 and had the 4 handwritten tests. That's in the “Fusing points and lines” paper. After doing some more work on FAST, we figured that it would be better to work out the tests automatically, using ID3. Once that was done, there was no longer any point using the original 4 tests.

  8. Is the Python code released on 2010-09-17 a learned detector, or do I have to train it first? I read that you have learned FAST detectors but am unsure which ones to download.


    1. The Python code released on 2010-09-17 is a learned detector and should be ready to use as-is.

      Any file with a name like fast-X-src.tar.gz is a learned detector which is ready to use.

      The learning code is in fast-er-1.4.tar.gz, which also contains ready made detectors.

      1. Is there an example of using the FAST corner detection from Python? I ran it with an image opened by OpenCV 2.2 in Python and it took a good few minutes to run. Below is the simple code:

        import cv
        import fast9 as fast

        greyimg = cv.imread('Capture.PNG', 2)
        corners = fast.detect(greyimg, 30, 1)
        print len(corners), "points found"

        1. It looks like you’ve used it correctly.

          Currently FAST in Python is implemented in pure Python, so it will be very slow, since Python is rather slow at loops compared to C.

          One option is to use cython instead of python. I think that it will be a question of adding a few cdef’s to the FAST python code. If you get that working, please let me know, and I’ll add it to the source code.

          Another option is to use the FAST detector which is now in OpenCV.

          1. The only problem is that there does not seem to be a Python version of the FAST detector in OpenCV 2.2. I will investigate Cython and let you know if I can work it out.

  9. Is there an example of using the FAST corner detection from Python? I ran it with an image opened by OpenCV 2.2 in Python and it took a good few minutes to run. Below is the simple code:

    import cv
    import fast9 as fast

    greyimg = cv.imread('Capture.PNG', 2)
    corners = fast.detect(greyimg, 30, 1)
    print len(corners), "points found"

    1. The Python code implements the FAST algorithm, but it won't run especially quickly, because Python is much less efficient than a compiled language for this type of code.

      If you want the corner detection to run quickly then you will need to use a compiled language. I expect the simplest way to do that is to convert the Python code into Pyrex by putting cdef's in front of the loop indices.

    1. If there is a hand shape classification algorithm which uses corners then you could probably replace the corner detector with FAST. However, FAST will only make up a small part (if any) of a hand shape classification algorithm.

  10. Hello, I see that FAST has been recently added to OpenCV, but unfortunately I can't find any documentation on how to use it! When you merged the code, did you add a sample program too? If so, I would like to know its name! If not, could you send me some code or, alternatively, point me in the right direction on how to use this magical tool?

    Thanks for all your hard work … you truly are a wizard,


    1. There is a sample program in the OpenCV tests directory:


      which makes use of the FAST detector. This is different from the test program I submitted, since the submitted code matches the OpenCV 1.x style but the OpenCV authors updated the FAST code to match the rather better OpenCV 2.x style.

  11. Can I get the revised version of fast corner detection algorithm, which uses only 3×3 mask?
    -1 0 1
    -1 0 1
    -1 0 1
    I tried to change your source code, but that was very difficult to do that 😦

    1. I’m not entirely sure I understand the question. What would a corner look like?

      You won't be able to modify the FAST code, since it is machine generated. However, it is possible to generate your own code by training FAST for different mask sizes.

  12. Hi, I have noticed the FAST iPhone app by Success Software Solutions. Has this code been integrated into other apps or other commercial uses? Are there any restrictions on doing so?

    1. The FAST corner detector has certainly been used in a number of commercial ventures. The FAST detector itself is available under the BSD license which places no restrictions on usage. Licensing details of the Success Labs port are available on the Success Labs page.

  13. Hi,
    I am testing the source code of the iPhone corner detection app. I use Xcode 4.2 and get the corner points as an output. I would like to have the image plus the dots but could not figure out how to do that (in other words, the option “Camera” is always off). Can you help with that?

  14. Dear Dr. Edrosten ,

    Nice to see you, Edrosten . I am a graduate student in Taiwan.

    I deeply appreciate your wonderful method, FAST.

    I tried the Matlab version. It is good.

    But now, I have to implement my method including FAST in C++.

    Therefore, I call the FAST in the OpenCV library.

    But the arguments are so hard to understand. Do you have some example code?

    I am eager for the help. Thanks so much.

    I hope this does not disturb you too much.

    Best Regards,

    潘 志 宏 (Peter Pan)Chih-Hung Pan

    1. The function prototype is:

      void FAST(const Mat& image, vector<KeyPoint>& keypoints, int threshold, bool nonmaxSupression=true)

      You need to provide an OpenCV image (for instance loaded from a file) as the first argument.

      The second argument returns the list of detected KeyPoint structs.

      For the threshold, you will need to pick a number that gives a reasonable number of corners (e.g. 500–2000 for a 640×480 image). This will depend on the type of image. For a video, it may be as low as 20; for a high contrast photograph, perhaps as high as 100.

  15. Hi, I am interested in FAST-ER

    I run ‘./configure && make’ in fast-er-1.4, but I got the error message:

    checking libCVD… yes
    checking for dgetri_ in -lcvd… no
    configure: error: libCVD must be compiled with lapack support.

    Can you help me?

    1. Do you have the LAPACK development libraries installed on your machine? If so, libCVD should pick them out automatically. Try the following:

      1. Check the configure output from libcvd. If it doesn’t find LAPACK, then install the LAPACK development libraries.

      2. Check the built libcvd using ldd /usr/local/lib/ and see if it has a reference to lapack. If it's not there, then libCVD has failed to pick up lapack properly. If 1 is OK and 2 isn't, then let me know.

      3. As a workaround, run export LDFLAGS=-llapack before running ./configure to get LAPACK manually.

      1. I also met this problem… I found that lapack is already installed and I can find it, but when I run ldd I can't find any reference to lapack.
        Can you help me?

    2. Hi,
      I’m running into the same problem when compiling FAST-ER. I have liblapack-dev installed via Ubuntu’s package manager, but I’ve also tried installing lapack manually and exporting the LDFLAGS variable.

      I get this when configuring libCVD:
      checking if Accelerate framework is needed for LAPACK…
      checking for dgesvd_… no
      checking for dgesvd_ in -llapack… yes
      so I guess it is finding the lapack library. But later, ldd doesn’t return any link to liblapack.

      In the end, I get the same error as Kang. Any clue? Thanks!

      1. Hi,
        I also met this problem and do not know how to fix it. I have already installed liblapack but I do not know why ldd doesn't return any links related to liblapack.
        OS: Ubuntu 12.04
        libCVD: libcvd-20120202

  16. First, I must express my appreciation for this algorithm. It truly is fast: an average execution time of 20 ms for an 800×480 image on a modern smartphone, and it doesn't even need a binarized image.


    1. Does FAST store the type of corner it finds? Outer corners (where the majority of ring pixels are bright) versus inner corners (where the majority are dark). If it does store this, can it be accessed in the OpenCV version? If it doesn't, would it be a significant change for someone without prior knowledge of the FAST code (i.e. myself) to make?

    2. In the KeyPoint structure returned from FAST feature detection, there are a number of fields, but the only one that changes besides the location is “response”. Can you tell me what the value of this field means in the context of FAST? The values were larger for corners farther from the camera, and the range (in my tests) was 10 to 30.

    Thank you.

    1. 1. FAST doesn’t currently store the type of corner. It is certainly possible, though the current code isn’t set up to do it. I think it would be possible as a hack by post-processing the FAST trees. There are copies of the trees in the fast-er codebase which are designed to make postprocessing as easy as possible.

      Probably the easiest way is to examine the ring of 16 pixels after detecting the corner. Count the number of pixels >= centre+threshold and the number of pixels <= centre-threshold. One will dominate (since it is a corner) and that will tell you which type it is. The cost should be tiny compared to the cost of doing the corner detection.

      2. The response is the highest threshold at which the points are still detected as corners. Ones with a higher response are stronger in that they will likely be detected more reliably under adverse conditions.
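
      The counting step in point 1 might be sketched as follows. The ring offsets here are assumed to be the standard radius-3 Bresenham circle used by FAST; check them against the source before relying on them.

```python
# Assumed 16 (dx, dy) offsets of the radius-3 ring used by FAST.
RING = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
        (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def corner_type(img, x, y, t):
    """Classify a detected corner as 'bright' (ring mostly brighter than the
    centre) or 'dark', by counting as described above. img is a list of rows."""
    centre = img[y][x]
    brighter = sum(1 for dx, dy in RING if img[y + dy][x + dx] >= centre + t)
    darker = sum(1 for dx, dy in RING if img[y + dy][x + dx] <= centre - t)
    return 'bright' if brighter > darker else 'dark'
```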

  17. Hey

    I just wanted to tell you that you are my life saver. I just used your code for my final year project and it works like a charm. Actually I was on the verge of failing my project and was very worried about it until I saw your code. Thanks a lot. :’)

    But the thing is, I do not understand the program at all. At least, could you give me the gist of what the code actually does? Because I have to explain it in my report and also to my professor.

    Your help is greatly appreciated! 🙂


    1. I’m glad it was so helpful.

      The code is almost impossible to read because it is machine generated.

      What it does is tell you if there is a contiguous arc of 9 or more pixels which are either all much brighter or all much darker than the centre pixel.

      Instead of looking at pixels in the circle in sequence, the program uses a decision tree in order to reject non-corners as quickly as possible.
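
      The question above, written out as a deliberately straightforward (and slow) segment test in plain Python. The ring offsets are an assumption (the standard radius-3 circle); the machine-generated code answers exactly the same question, just much faster.

```python
# Assumed 16 (dx, dy) offsets of the radius-3 ring around the centre pixel.
RING = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
        (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_corner(img, x, y, t, n=9):
    """Is there an arc of n or more contiguous ring pixels all brighter than
    centre + t, or all darker than centre - t?"""
    centre = img[y][x]
    ring = [img[y + dy][x + dx] for dx, dy in RING]
    for brighter in (True, False):
        run = 0
        for v in ring + ring[:n - 1]:          # allow arcs to wrap around
            passes = v > centre + t if brighter else v < centre - t
            run = run + 1 if passes else 0
            if run >= n:
                return True
    return False
```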

  18. Phenomenal algorithm.

    I'm trying to compile the decision tree. Any estimate of how long the compile is 'supposed' to take? I started the compile a while ago and am still waiting for it to finish (it's been a few hours now). Just want to make sure it didn't hang up on me.


    1. And a follow-up on this: the compiler was indeed hung up. It turns out the FAST-ER decision tree is much too large for the arm7 compiler; it gets a branch-out-of-range error. If there are any options to help fix this I'd appreciate it, but I'm suspecting the architecture just can't handle it.

      1. There’s no direct way that I know of to solve the problem. My suggestion is that you try FAST-ER on a PC to see if the extra complexity is worth the effort. It is possible to run an interpreted version of FAST-ER, but it’s probably under 1/10 of the speed.

  19. Hi Mr. Rosten

    Indeed, I return to the descriptor of the points detected by FAST: is there something new on the extraction of a reliable descriptor? The proposal made at the beginning (take the 11×11 neighbourhood of pixels as a descriptor vector) is likely to have low robustness against the usual changes (rotation, brightness, change of scale). Is it possible to use a quarter arc of the circle (16 pixels) to calculate the difference in intensity between this arc and the rest of the circle?
    Thanks for your engagement

  20. Just curious… I’ve observed in the paper that when considering pixels in the circle the test order goes darker-similar-brighter while in the implementation provided goes brighter-darker-similar. Does it make any difference at all? Thanks!

  21. Hello Dr. Rosten,
    Actually I need to use the FAST detector in OpenCV, and I want to extract my own descriptor using the 16-pixel ring which FAST examines while each interest point is being detected. I need to do this in an efficient way. In addition, I don't have access to the .cpp file in OpenCV, just the .hpp file.
    Your help would be much appreciated on how I can make this change and define my own descriptor.

    1. The cpp files should be in the OpenCV distribution. You can download this from the OpenCV website. Alternatively, you can get the source code for FAST from the FAST web site.

      The FAST code gives you an array of x-y locations. You can simply copy the 16 pixels around each x-y location into a 16 element array to form your descriptor. A description of the offsets you need for the 16 pixels is in the paper and source code of FAST.
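
      Copying the ring into a descriptor might look like this sketch (the offsets are assumed to be the standard radius-3 ring; verify them against the paper or source):

```python
# Assumed 16 (dx, dy) offsets of the radius-3 ring; check against the source.
RING = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
        (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def ring_descriptor(img, x, y):
    """16-element descriptor: the ring pixel values around a detected corner."""
    return [img[y + dy][x + dx] for dx, dy in RING]
```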

  22. Hi, and thanks for this very nice algorithm.
    My questions are mainly on the construction of the decision tree from the training images.
    1. The minimization of the entropy function is used only for creating the decision tree?
    2. Does the training set need to be annotated? (I guess not, since all 16 pixels are checked)
    3. I assume that with very few training samples (corners), a very simple decision tree will be created. When new corners are trained, how is the tree expanded? (maybe the answer is in the paper, I haven’t been able to figure it out yet)

    1. To answer your questions:

      1. yes
      2. The training set does need to be annotated. There are two ways of doing this.
        1. Classify points in an image using a slow, simple algorithm that checks all 16 pixels
        2. Generate all possible corners and non-corners. Since each pixel can have 3 states, there are 3^16 ≈ 43 million possible corners and non-corners.

        I actually use a combination of both methods. I generate all possible corners to get a complete coverage of the data. I then extract corners and non corners from a number of images, so that the pixel statistics are represented in the detector.

        Bear in mind that when it gets as far as training, the pixels are either b (brighter), d (darker) or s (similar) relative to the centre, so corner candidates can be thought of as a list of 16 elements like bsdbbddbsdbsbdsb

        There is a program in the FAST-ER distribution to generate exactly one of each type of corner and non-corner, along with the classification.

      3. Yes, if you have too few training samples, then the tree will not be representative of the segment test criterion. The tree is never expanded, because the tree is not trained incrementally. I collect examples of corners first, then train the tree, then emit source code and then use the tree. In principle incremental training could be done, but I'm not sure how useful that would be.

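
      The 16-character b/d/s candidate format described in point 2 can be segment-tested directly. A sketch (the function name is made up; the FAST-ER tools do this with their own programs):

```python
def is_corner_bds(candidate, n=9):
    """Segment-test a candidate of 16 characters from {'b', 'd', 's'}
    (brighter/darker/similar to the centre), e.g. 'bsdbbddbsdbsbdsb'.
    A corner needs n or more contiguous 'b's or 'd's, wrapping around."""
    doubled = candidate + candidate[:n - 1]    # handle arcs that wrap
    return 'b' * n in doubled or 'd' * n in doubled

# Enumerating all 3**16 (about 43 million) candidates, as described above:
# for c in itertools.product('bds', repeat=16):
#     label = is_corner_bds(''.join(c))
```
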
  23. Hi! (My comment is rather long… sorry)

    I’m writing my Master Thesis in Computer Science and I have several questions about the precompiled program and the source codes you provide in the website, I’ve been studying your paper and code for several weeks now.

    My thesis is about reproducing the behaviour of the FAST algorithm in Hardware synthesis (using an FPGA) in order to accelerate and parallelize the basic operation: decide whether a pixel p is a feature or not.

    Here are my questions, given the same image, the same threshold and the same FAST variant with nonmaximal suppression:

    – Your C source code displays a result set “A”.
    – Your Python source code displays a different* result set “B”.
    – Your precompiled program (Linux_x86-64) displays a different* result set “C”.
    – My conceptual implementation in C (which is a parallel implementation of the Hardware architecture concept) displays sometimes (small images) a result set “C” as the precompiled and sometimes (big images) a different* result set “D”.

    * when I say here “different” or “differences” I refer to the number of features detected, but every result set shows the same pattern in big images.

    So, all the result sets are close between them, but they’re not the same for the same image…

    Another question I have is about the C source code, regarding timings:
    I've prepared a main function to launch the fast9 function and mine, in order to measure their times, and I found something rather weird: if I test both functions with a small raw matrix of bytes (100 bytes, a 10×10 pixel greyscale image) hard-coded in the source, the fast9 function takes a time T1 and my function a time T2, four times faster than T1.
    BUT if I test both functions with the same 10×10 greyscale image read from a file and stored dynamically with malloc calls, my function still reports approximately the same T2 execution time, but fast9 now has a time T3 of the same order as T2, and sometimes faster than T2. Why is this happening if the source code is the same and the image is the same? The only difference is the use of static or dynamic memory.

    I hope I explained myself clearly enough, that these are not too many questions and that you have the time to answer me soon.

    Thanks so much in advance for your time.

    1. There are a number of reasons for the differences:

      The precompiled binary uses an old version of the algorithm. The old version is not quite exact (incomplete coverage of data). Also, it uses a rather ad-hoc method for scoring which will affect the nonmaximal suppression.

      I haven’t thoroughly tested the Python source code, but it seemed bug free. The C code from version 2 onwards should be correct. You could try doing a diff on the feature sets to see what the differences are.

      That’s a very strange result about the static and dynamic memory. A 10×10 image is very small. Have you tried it on something much larger (e.g. 1000×1000) to see if the results are similar in nature?

  24. Please can you tell me how exactly to run the C code in Eclipse? I'm in a bit of a tight spot and really need this code to run!

    1. I’m not familiar with eclipse, but it’s standard C code so you should just be able to add it and have it compile. Once it’s added you just need to supply a pointer to the image data and the image size.

  25. Hello,
    can you please explain the working of the FAST corner detection algorithm in simple words? I'm finding it difficult to understand.

    1. FAST asks the following question:

      Is there an arc of 9 or more pixels which are all much brighter than the centre, or all much darker?

      If so, then the pixel is a corner.

      The remainder of the FAST algorithm is involved in making that test much faster than a straightforward implementation.

      1. Hi,

        I read the “Fusing Points…” paper multiple times and also whatever I could find on the internet about FAST.

        Still, I couldn’t find anywhere the answer to the following question: “WHY is a pixel a corner if there is an arc of 9 or more pixels brighter or darker than the pixel?”

        Can you please clear this out for me?

        1. Intuitively, if you have a black region next to a white region and they're separated by anything sharper than a straight line, then the corner point will be a FAST corner. FAST finds the tips of wedge-shaped things.

          In a more general sense, anything that can be localized in 2D is a candidate for a corner, and FAST fits that criterion.

          In particular, FAST-12 was originally chosen (the 90 degree version) because it could be hand written to be efficient: testing the 4 compass points by hand. FAST-9 came out of a generalization of that. After moving to a decision tree, there was no need to use 12 pixels for a fast test. It turned out 9 was faster and more repeatable.

  26. Hi, I ran './configure && make' in fast-er-1.5, but I got this error message:
    configure: creating ./config.status
    config.status: creating Makefile
    g++ -g -O2 -Wall -Wextra -W -ansi -pedantic -DUSESUSAN -DJIT -DNDEBUG -c -o warp_to_png.o
    error: ‘TooN’ is not a namespace-name
    error: expected namespace-name before ‘;’ token
    make: *** [warp_to_png.o] Error 1

    Can you help me? Thanks!

    1. That is a very strange error, and I’m not sure what’s causing it. Can you drop me an email instead, since I’ll probably need you to attach various files to help me find the error.

  27. Hi,

    I am busy writing the FAST detector code as an OpenGL shader (GLSL) so that I can run it as part of my SLAM pipeline on an iPhone for my masters. I have got the basic 12-point FAST detector working, but there are thousands of features in tight clusters, so I want to compute a score for each corner and then run a second shader to extract only the top feature in every 10×10 pixel area. What method would you suggest for computing the score of a corner?

    Later I would like to be able to add tracking to corners. What method would you recommend for tracking corners reliably?

  28. The scoring criteria I use now is “what is the highest threshold at which the point is a corner”. This has the advantage in that the score is directly related to the threshold, and is monotonic etc. It also yields a tiny but measurable improvement in repeatability over the old ad-hoc scoring system.

    There are two ways I know of to compute this, and both involve running the detector repeatedly. The first is straight-up binary search over the threshold. The second is a bit more involved: for each test (is pixel a brighter than pixel b?), you can work out by how much the test passes. For a given detection, you can then find the minimum of those margins over all tests taken, add that to the threshold and re-detect.

    I would imagine that binary search would be marginally easier, since you can (with sufficient mangling of comparison and multiplication) do the tests without taking any branches, and there’s less stuff to modify like that.

    Of course neither of those are especially helpful since it is back to front compared to the algorithm you want. I haven’t devised an algorithm yet to compute the score without effectively doing repeated detections at different thresholds.

    When it comes to tracking corners, that largely depends on the task: are you trying to track some large object or a collection of smaller ones? Do you want to track from corner to corner, or do you want to just use the corners as a starting point to run something like a KLT tracker?

    1. Thanks for the insights, I have a few ideas on how to compute the score for each corner and then in a second pass discard weaker corners. I will post a link to the code once it works.

      The tracking is to be used to replace feature detection and matching in a sparse structure from motion pipeline. So I will probably use something along the lines of a KLT tracker.

  29. Hi there,

    I would like to try FAST in Linux (Ubuntu 11.10) and I’ve downloaded fast-Linux-i686. I’m wondering how I can use it. I’ve put a JPG format image into the folder, and tried to run “fast 1.jpg corner.jpg”, but it says there is no such command.


    1. I expect the problem is that you do not have the current directory in your PATH.

      After untarring, change into the new directory, then type (for the 32 bit version):

      ./fast-Linux-i686 1.jpg > result.ppm

      Then view result.ppm in an image viewer.

    1. I believe that EmguCV is a C# wrapper of OpenCV. I have never used EmguCV, so you’d be better off asking any questions about that on the EmguCV mailing list or forum.

      Apart from the specific API, the usage of FAST should be the same as OpenCV and the downloadable FAST code.

  30. Hi,
    I have read, though I do not remember where, that an average performance of 2.3 tests per pixel is achievable (after the learning stage). This figure takes into account that most pixels are not keypoints and so need only 2 tests to be rejected.
    Do you know how many nested tests on average are needed for a keypoint (one that is not rejected)?
    Thank you.

  31. Hi Dr. Rosten,
    I used the fast9 MATLAB code. I want to know how I can give a score to corners and choose some of them according to their score. Also, with the default threshold the detector gives different numbers of corners in different images; how should I set the threshold?

    In addition, I want to compare the results of FAST with other corner detectors such as Harris, SUSAN and Förstner, so I need to know how to set the thresholds for those detectors to make a reasonable comparison among them.

    1. The score is returned as a second value. So use:

      [corners, scores] = fast9(image, threshold, 1);

      to get the scores.

      To choose the threshold, I would generally select a threshold to find a particular number of corners per image. For example, 500 corners in a 640×480 image would be appropriate.

      For comparison to other detectors I would certainly perform comparisons with a fixed number of corners per image. This is because the thresholds for the different detectors are in no way comparable. A threshold is essentially a parameter determining sparseness. Since that is rather indirect, it is best to compare the results at the same level of sparseness.
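As a concrete illustration of the binary-search recipe from the FAQ at the top, here is a Python sketch. `detect` is a hypothetical stand-in for whatever detector you use (any function returning a list of corners); the search relies on the corner count decreasing monotonically as the threshold rises.

```python
def threshold_for_n(detect, image, n_target, t_lo=0, t_hi=255):
    """Find the lowest threshold giving at most n_target corners.
    detect(image, t) -> list of corners (hypothetical signature)."""
    while t_lo < t_hi:
        t = (t_lo + t_hi) // 2
        n = len(detect(image, t))
        if n > n_target:
            t_lo = t + 1   # too many corners: raise the threshold
        else:
            t_hi = t       # few enough: try lower
    return t_lo
```

Alternatively, as point 3 of the FAQ says, you can set the threshold low, score all corners, and keep the N highest-scoring ones.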

  32. Hi, I’m working on my special project called Image Mosaicing. I’m planning to implement FAST-ER in the corner detection part, but we are only allowed to use Java in the development. Can you suggest a good way to convert the provided source code to Java? I can’t convert the entire code by hand. Or is there perhaps a way I can use the code in Eclipse Juno? Thank you 🙂

    1. There are several options.

      One is converting the C-code using search/replace. This is probably quite painful.

      A better option is probably to use the intermediate trees which are in the FAST-ER distribution. These contain the same information, but in a form which is easier to manipulate.

      The way that the Java code will work will depend on how the image is laid out in memory. Usually it will just be an array of bytes, though some care is required because images are generally unsigned but Java bytes are signed.

      If you follow up with some details of the image class, I may be able to offer more specific advice.

      1. Thank you Ed for the quick response. I don’t understand how your suggestion answers my question; should I make my own Java interpretation of the code by hand? How do the intermediate trees work? Thank you.

  33. Hi, I have a question about the machine learning approach that enhances the FAST algorithm and I think I got stuck somewhere in the middle of the context. It’s gonna be a long series of questions so, my apologies first.

    I think I got the clear picture of the previously proposed so called ‘segment test’ method and several weaknesses it has. The problem is, when it comes to the decision tree making that has been said to fix those weaknesses, I’m failing to get the gist of it.

    According to the publication, Wikipedia and some references I found, the very first step of the machine learning approach (apart from the segment test for all the pixels) is to partition the 16 pixels into 3 states (brighter, darker and similar) according to the intensity of the pixel with respect to the corner candidate ‘p’. Then, it seems they chose 1 pixel ‘x’ out of those 16 pixels and partitioned the whole set of pixels ‘P’ (capital) into those 3 states with respect to ‘x’.

    So, if I understood correctly, one of the 16 pixels has the most information about the centre ‘p’ being a corner or not, because it divides the whole set P best; each of the 16 pixels has a different entropy value.

    and here comes my questions.

    So, now we know which one of the 16 pixels has the most information about ‘p’ being a corner, and we can rank those 16 pixels by information gain. Then, why bother to extend the decision tree? Shouldn’t it be stopped by then? From what I have understood, the whole purpose of building the decision tree is finding a set of pixels other than 1, 9, 5, 13, because those 4 pixels constrain the distribution of the corner appearances. But if we already had 4 pixels other than them, haven’t we already solved our problem?

    What was the purpose of dividing the 16 pixels into 3 states? One of them, ‘x’, was going to partition the whole set P. Isn’t that meaningless?

    How can it be applied to an actual subject image (not a training image)?

    So, basically I failed to understand the whole part of it. Could you please break it down for me?

    Sincerely, Sejin Kangsong

    1. I get it now. ID3, Decision Tree the whole concept of it. Truly brilliant algorithm that you built sir, cheers for that. Now that I understand it -even just a brief of it though- I also became able to see how stupid my initial question was.

      Anyways, thanks for developing this awesome algorithm.
      Sincerely, Sejin Kangsong

        1. Personally, the question did not arise, but now to the contrary of Mr Sejin, I do not (yet) understand. So, if you could provide an explanation,

  34. Hello,
    I’m currently writing a multi-threaded version of FAST and I would like to reuse your code.
    A simple question: does your code require a padded image as input? So far I have tried with a non-padded image and the interest points I get are all concentrated in the upper part of the image (no matter what the image is), so I thought that this might be due to the lack of padding.

    1. The FAST code from 2.0 onwards supports images with padding.

      The function prototype looks like this:

      xy* fast9_detect_nonmax(const byte* im, int xsize, int ysize, int stride, int b, int* ret_num_corners)

      xsize corresponds to the number of pixels horizontally in the image excluding padding.

      stride is the number of bytes per row including padding.

      If the position of the corners does not seem to relate well to anything in the image, then you might have the padding parameter set incorrectly.
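To make the xsize/stride distinction concrete, here is a small Python sketch (my own illustration, not part of the FAST code): a 4-pixel-wide image stored with 6 bytes per row, so 2 bytes of padding. Pixel (x, y) lives at index y*stride + x in the flat buffer.

```python
xsize, ysize, stride = 4, 3, 6          # 4 visible pixels, 6 bytes per row
buf = bytearray(ysize * stride)         # flat buffer including padding

def get_pixel(buf, x, y, stride):
    # Index into a row-padded buffer: rows are 'stride' bytes apart,
    # pixels within a row are 1 byte apart.
    return buf[y * stride + x]

buf[2 * stride + 3] = 200               # set pixel (3, 2)
```

Passing `xsize` where `stride` is expected (or vice versa) is exactly the kind of mistake that makes corner positions drift relative to the image content.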

  35. Hello

    I want to profile your source code (using gprof, for instance) in order to measure the time needed by each of the functions to complete its task. So I have downloaded fast-C-src-2.1.tar.gz, but when I try to compile it (using gcc) I receive the error:
    (.text+0x20): undefined reference to ‘main’
    collect2: ld returned 1 exit status

    Obviously the error occurs because there is no main function in your source code. As I am an inexperienced C programmer, I need your help. Could you please provide a main function so that I can finish the profiling?

    Thank you in advance for your time!

    1. Hi,

      The FAST code in fast-C-src is just code to perform the corner detection. In order to use the code, you will have to provide a main() function that loads images from somewhere and then provides them to the corner detection code.

      You may find this substantially easier in C++ using libCVD. Below is a minimal C++ program using libCVD to load an image called “test.png” and perform corner detection on it.

      #include <cvd/image_io.h>
      #include <cvd/fast_corner.h>
      int main()
      {
          CVD::Image<CVD::byte> image = CVD::img_load("test.png");
          std::vector<CVD::ImageRef> corners;
          fast_corner_detect_9_nonmax(image, corners, 15);
          //Do something with the corners here.
          return 0;
      }
      1. Hi again,

        I was asking you because I saw that there is a precompiled version for Linux uploaded (fast-Linux-x86_64.tar.gz). That’s why I was thinking you have the main function and it would be easy for you to provide it. Anyway, thank you for your prompt support!


        1. I see what you mean. Those have been up a while and I think the main function is built against a very old version of the library so it may not be of much use. I’ll see if I can dig it out though.

          The code I posted is more or less equivalent however.

          1. Hello again Dr Edrosten,

            I’m trying to compile the application under Scientific Linux using GCC. Unfortunately, an error occurs: the application can’t find header files from libCVD, but the files actually do exist. Have you tested the library on Scientific Linux? I suspect the problem comes from the fact that the application uses relative paths. Is there a quick fix for that?

            Thank you for your readiness to help me!

            1. Hi,

              I’ve not tried it on that platform. Can you post the errors that you’re getting?


                1. Hi,

                  If you’re building FAST through libCVD, then the easiest way to do that is to run ./configure && make in the libCVD directory. This should build everything correctly using the correct include paths.

                  The standalone C source code can be built without using libCVD.


  36. Hi Dr.Ed Rosten,
    I am using Fast9 in a tracking application. I have implemented a brute force implementation of Fast9 which checks for every pixel, if 9 out of 16 pixels around the arc satisfy the bright or dark condition with no early exit logic. I am comparing the corner points given by this method against your matlab implementation of decision tree and I found some differences. One such point was center = 89, and the 16 arc pixels in order are (97, 235, 235, 235, 199, 89, 79, 182, 93, 235, 235, 235, 235, 235, 235, 192). The threshold used was 80. This point doesn’t satisfy the 9 contiguous arc condition for the threshold of 80. Hence it is not detected as a corner in my brute force implementation. However, the decision tree detects this as a corner.
    Could it possibly be a bug in the decision tree implementation?

    1. Hi,

      Which version are you using?

      The original version didn’t actually have one of every example in the training set, so the decision tree is merely a very good approximation of the fast 9 function.

      In later versions, the training set was augmented with one of each type of feature, resulting in a perfect instance of the FAST-9 function.

      There is no practical difference in performance between the two versions, either in repeatability or speed.


  37. Hello Dr. Rosten,

    I have a question regarding the training part.

    When you say (in your ECCV 2006 paper) “Let Kp be a boolean variable which is true if p is a corner and false otherwise”,
    are the labels for Kp provided manually?


  38. Hi Dr Rosten,
    first, I must say that I am amazed at the amount of support you provide for your algorithm, and as a grad student it is an inspiring example of what I hope to be able to do with my future research.
    Now, I would like to have your thoughts and insights on the use of a pre-trained detection tree/branch (as you provide in your sample code) vs. training a detection tree for a specific application. Maybe my question arises from a misunderstanding of the underlying theory…
    I mean, is your learned detector tailored to a specific set of predetermined wedge types (C-like, V-like, T-like, etc.), or is it more of a general detector aiming to describe almost any wedge shape as efficiently as possible? Just as a reference, I have been working lately on a different type of feature descriptor, the kAdjacentSegments (kAS) by Ferrari et al., which specifically aims to describe chained edgels as discrete types of geometric features, and I wondered if the approach was similar.


    1. As I read in “Machine learning for high-speed corner detection”,
      “This creates a decision tree which can correctly classify all corners seen in the
      training set and therefore (to a close approximation) correctly embodies the rules
      of the chosen FAST corner detector.”
      So, do you have any experience with image sets where the decision tree included in the code you provide would not be relevant? To be more direct, I am trying to build an object detector, and I wondered whether it is necessary (or recommended) to learn a decision tree for every object model I want to create. Say, learn one decision tree for cars, one for birds, etc.


  39. Dr. Rosten,
    in the course of optimizing your FAST algorithm specifically for ARM NEON, I managed to create a “unified” algorithm where the results are stored in a matrix with the actual numbers of maximum pixels in a row that meet the threshold bound conditions, without any kind of performance hit. This algorithm consistently requires roughly 12 MHz per frame of computing power at VGA resolution, regardless of the image’s properties. I’m curious whether this kind of variation already exists and/or further studies have been done. If not, it might be a starting point for the next step IMO, and I’d appreciate your opinion on this. Thanks in advance.

    1. Hi Jake,

      I’m not sure what you mean by “actual numbers of maximum pixels”: is this the number of pixels tested in the ring which match or exceed the threshold? Also, what is that information used for? It sounds related to, but not identical to, the information used for creating the corner scores.

  40. Hello,
    I hope I’m in the right place to ask my question. I am trying some new optimizations of FAST. Is there a way to get a version of FAST without the machine-learning step? I am trying to optimize FAST for color images, but I have only found machine-generated code which I can’t understand, or the FAST-ER code with the machine-learning step. Is there a way to get only the detection step with the criterion checks?

    1. Hi,

      Yes, there is, though it has been optimized for simplicity rather than speed. If you download the FAST-ER code, there is a file called “”. This contains a function “is_corner()”, which does the detection. It needs to be called twice: once for brighter pixels, once for darker pixels.

      1. Hello Dr.Rosten,

        Thanks a lot for the fast reply. It was a great help. Sadly, I have another problem. I work with an OpenCV Mat and am trying to convert it to an image in the CVD format, but without great success. Is there a way to do this?


        1. Hi,

          What problems are you having specifically?

          I think you probably want to convert an OpenCV Mat into a CVD::SubImage. The SubImage constructor accepts a pointer, a size (as an ImageRef) and a stride, which is the number of pixels between rows.

  41. Hi Dr.Rosten
    I plan to train a new decision tree for my project. How can I get the code for training? What kind of file will I get for the raw decision tree? And since the decision tree code is incredibly long in C, how can I transform it into C automatically?

    1. Hi,

      The code for training new trees is in the FAST-ER code distribution:
      There is code in there to extract FAST-N features from images, to generate all FAST-N features and code to train a tree from that data.

      The trees are output in a custom intermediate format. There are scripts in that code which will convert the learned tree to a variety of different languages.

      Instructions for use are here:

      You need to follow steps 1 and 4. Steps 2 and 3 can be ignored.

  42. Hi Dr. Rosten
    I’ve also read the AGAST paper, which claims that there is no need to train a new decision tree for a new scene and that it is faster than other algorithms.
    Do you think AGAST has better performance than FAST on a specific scene?
    If I use one of the algorithms in an AR detector, which one do you think will have better repeatability in a general scene?

    1. In terms of repeatability, the FAST and AGAST algorithms will perform identically: both of them simply implement a fast version of the segment test algorithm. That said, the AGAST (but not the FAST) paper also explores the segment-test algorithm with smaller rings than the original 3.4-pixel ring. However, I don’t believe that complete repeatability studies have been done for the smaller ring sizes.

      For FAST (and I believe AGAST), the available code has been trained against a complete set of corners, so the resulting trees will be an exact representation of the segment test algorithm.

      In terms of speed, it appears that the tree learned using AGAST is faster than the tree learned using FAST. However, there are some additional implementations of the FAST algorithm which use hand-chosen tests but make heavy use of SSE2 for speed that are considerably faster than the scalar tree based code. There’s an implementation of this in libCVD for example. It is likely that any reasonable SIMD implementation will beat even the most highly optimized trees because corners are sparse and so a SIMD algorithm can reject batches of 8 or 16 corners very quickly, using on average less than one instruction per pixel.
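As a rough illustration of why batch rejection works so well, here is a Python/NumPy sketch using whole-array operations as a stand-in for SSE2/NEON. It applies a necessary condition for FAST-9: any arc of 9 contiguous ring pixels must include at least 2 of the 4 compass pixels at offsets (0,-3), (3,0), (0,3), (-3,0). This is my own sketch, not the libCVD implementation.

```python
import numpy as np

def candidate_mask(img, t):
    """For each interior pixel, True only if it survives the fast compass
    test (at least 2 of the 4 compass pixels brighter than centre + t, or
    at least 2 darker than centre - t). Everything else is rejected in bulk."""
    img = img.astype(np.int16)          # avoid uint8 wrap-around
    h, w = img.shape
    c = img[3:h-3, 3:w-3]               # interior centres
    compass = [img[0:h-6, 3:w-3],       # offset (0, -3)
               img[3:h-3, 6:w],         # offset (+3, 0)
               img[6:h, 3:w-3],         # offset (0, +3)
               img[3:h-3, 0:w-6]]       # offset (-3, 0)
    brighter = sum((p > c + t).astype(np.int8) for p in compass)
    darker   = sum((p < c - t).astype(np.int8) for p in compass)
    return (brighter >= 2) | (darker >= 2)
```

On typical images, the overwhelming majority of pixels fail this mask, so the expensive full segment test (or tree) only runs on the few survivors; the vectorized version processes whole rows of pixels per operation, mirroring how an SSE2 register rejects 16 candidates at once.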

      1. Yes. Fast detector in OpenCV makes use of SIMD and it is incredibly fast. I tried to get a list of developers that were involved in implementing that. I only got your name.
        Did you implement the SIMD optimization for FAST in OpenCV?
        If yes, could you direct me to some link or literature that would get me started with SIMD programming?
        If not, could you direct me to the people who did?


  43. Dear Dr Rosten,
    I have been challenged recently by colleagues about the whole idea of using keypoint-descriptors for object detection and I would like to know if my understanding of the concept is correct.

    When doing object detection in complex settings, we are looking for a method more efficient and robust than simple template matching. Some researchers wondered if there was a type of feature that could be robustly and quickly identified in an image, with a high probability that the same feature would be found in another image, even if the object was rotated, translated, subject to some change in illumination, and perhaps seen with a little perspective, among other things.

    So, some researchers (including you) found that the use of a type of keypoint (the loosely named “corners”) would be robustly detectable among different images. You and others devised methods in order to quickly identify the keypoints present in an image (FAST, FAST-ER, then AGAST, etc.). Then, some people built binary “descriptors” (BRIEF, ORB, BRISK, etc.) and used efficient method to match the keypoints of 2 images. Classification algorithm such as RANSAC may then be used to see if a whole set of matches indeed represent the original object.

    FAST and AGAST imply the use of a threshold, which in itself affects the number of keypoints detected. Processing speed aside, we are debating the number of keypoints appropriate for a given application. My understanding leads me to think that it is better to have a relatively large number of keypoints on a model image (the picture of the object to be found), since we can’t know beforehand which ones will be found in the scene image (the picture of the setting in which the object is present). I am afraid (and I observe experimentally) that using too low a number of keypoints yields poorer detection, since my program doesn’t find enough corresponding keypoints between the two images. On the other hand, I then rely more heavily on classification algorithms such as RANSAC to find the matches that fit the model and to remove the outliers.

    A colleague, who has pretty good experience with classification in general (and whom I respect a lot) but who is not a vision specialist, believes that reducing the number of keypoints (and matches) will ease the task of RANSAC. In general, it is true that having less to classify increases the performance of classification. The problem is that this only holds if the keypoints you keep are the most “relevant” ones (say, 10) in the model image. But I don’t see how that is possible. If I blindly select 10 keypoints on a model image, spread across the whole image, I believe it is quite possible that I won’t even find those keypoints in the scene image. Moreover, I am working on an application where a new object and a new scene are presented quite often, so it is not really possible to handcraft a detector tuned to a specific object. So I must really blindly choose ~10 keypoints and hope that they will very often be found in the scene image. To me, this is not coherent with my understanding of the principles of keypoint-descriptor matching.

    I hope I was clear enough! Don’t hesitate to ask for clarifications. Neither my colleagues nor I are specialists in keypoint descriptors, so we may simply be misunderstanding some of the concepts.

    Best regards

    1. I believe I understand your question.

      In all of these systems, corners are thresholded by some kind of score (as are feature matches). The score is some measure of quality, so one assumes that the higher the score, the better the feature. When it comes to repeatability, the message is somewhat mixed. Generally, increasing the score threshold increases the repeatability. That is partly due to blind chance, and partly because the corner score is not very stable. If you look in the original paper, there are graphs of repeatability against threshold (or corner count). For some detectors there is a marked peak, beyond which increasing the number of features makes things worse. FAST is not generally one of those.

      When it comes to RANSAC, there are two factors. Firstly, the probability of selecting an inlier set is related to the probability of having good matches. However, if there are too few matches, then it is often the case that the matches are nearby in the image. In that case, even selecting all inliers can result in a bad estimate. Furthermore, having too few points can make evaluation of the hypothesis sets inaccurate.

      Otherwise, having more keypoints generally slows things down a bit, but not very much. Using either guided RANSAC (or DESAC for a more deterministic version), Nistér’s breadth-first RANSAC or a combination of both makes the growth very much sub-linear.

      My own experience suggests that one is better off using quite a lot of keypoints (hundreds or even thousands) per image, and letting RANSAC sort them out.

  44. Hi Dr. Rosten
    I would like to reuse your FAST code but without the machine-learning step.

    For the detection step, I have used the function is_corner, there is no problem.
    For the non-maximum suppression step, there is only the function fast9_corner_score to get scores, and it uses the machine learning.

    Can you provide an original score function, a slow version?

    Thank you in advance.

    Best Regards.


    1. Hi,

      If you want a slow way of computing the score, you can run is_corner with successively higher thresholds until the point turns from a corner to a non corner.

      To increase the speed substantially, you can use binary search over thresholds in order to find the highest threshold at which a point is a corner
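That binary search can be sketched in a few lines of Python. Here `is_corner` is a hypothetical predicate with the monotonicity property described above (a corner at threshold t is also a corner at every lower threshold), and the sketch assumes the point is already a corner at t_min.

```python
def corner_score(is_corner, img, x, y, t_min=0, t_max=255):
    """Return the highest threshold at which (x, y) is still a corner.
    is_corner(img, x, y, t) is a hypothetical, monotone-in-t predicate;
    the point is assumed to be a corner at t_min."""
    lo, hi = t_min, t_max
    while lo < hi:
        mid = (lo + hi + 1) // 2       # round up so the loop terminates
        if is_corner(img, x, y, mid):
            lo = mid                   # still a corner: score is at least mid
        else:
            hi = mid - 1               # not a corner: score is below mid
    return lo
```

This takes about log2(t_max - t_min) detector calls per corner instead of one call per candidate threshold.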


      1. Thank you very much.

        I have tested the slow algorithm with the functions is_corner and fast_detect, and I found that is_corner does not detect exactly the same corners in an image.

        1. Which version of the code are you using? If it’s the latest version then there must be a bug. Can you send me a small image patch containing one corner on which the bug occurs?

      1. That’s an interesting suggestion. I just implemented the approach where you iteratively estimate the sub-pixel corner position using local gradients, as used by OpenCV in their cornerSubPix method. So far, that seems to work pretty well, but it would be interesting to compare it with your suggestion.

  45. It seems like most people calculate their FAST corners using grayscale only. Does anyone know of any work on color images, where you detect and match keypoints in multiple color channels? Intuitively, I would guess that there exist corners in a color channel that disappear if you convert the image to grayscale or use only one color channel.

    1. It’s perfectly possible to separate the image into three channels and run FAST on the channels independently. It would also be fairly straightforward to make it work on a 3 channel image in place: you’d need to edit the “pixel” array so that pixels within a row also had a stride of 3, then treat the data as an x*3 by y image.
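A NumPy sketch of that stride trick (my own illustration; `run_fast` is a hypothetical detector entry point, not a real function in the FAST distribution): an interleaved RGB image of shape (h, w, 3) can be viewed so that each channel is an h×w grid whose pixels are 3 bytes apart within a row, with no copying.

```python
import numpy as np

h, w = 480, 640
rgb = np.zeros((h, w, 3), dtype=np.uint8)   # interleaved R,G,B bytes
rgb[100, 200] = (10, 200, 30)

for ch in range(3):
    channel = rgb[:, :, ch]                 # a strided view, no copy
    # Row stride is w*3 bytes, pixel stride is 3 bytes, exactly the
    # "pixels within a row also have a stride of 3" layout described above.
    assert channel.strides == (w * 3, 3)
    # run_fast(channel) would go here (hypothetical detector call)
```

A C detector taking a start pointer, xsize, ysize and strides could be pointed at each channel the same way, offsetting the base pointer by 0, 1 or 2 bytes.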

      1. Yes, I just haven’t seen a lot of people actually do that, but, on the other hand, I’m not very experienced in keypoint extraction. One might think that some research comparing keypoint extraction using different types of color spaces/channels would exist. Perhaps it depends too much on what they are actually used for, or you seldom gain that much compared to the increase in computation time.

  46. Hi,
    I want to know about the images which were used to construct the tree.

    What was the size of each image?
    How many images were used for the tree?
    Were the images of indoor or outdoor scenes?
    How many corners were included in an image?

    1. I’m afraid I don’t actually have a huge amount of details on this any more. The good news however is my experiments showed that it didn’t make a huge amount of difference.

      I believe I originally used an image sequence of indoor scenes taken with an analog Pulnix video camera with a BT878 based video card giving images of about 640×480, or 768×576, split up into pairs of 768×288 to remove interlacing.

      The later version would have probably used similar sequences but from a webcam instead (640×480 not interlaced), and I think both sequences were probably of indoor scenes. The later version however is augmented with one copy of every possible corner to ensure coverage of the dataset.

      I think in both cases, I used a few tens of thousand images with around 500-2000 features per frame, giving a few million corners in total.

  47. Hello, thanks for your great work. Do you have a first version of the algorithm without machine learning, and without using the binary search? Thank you

    Ph.D Candidate

    1. The first version of the algorithm is here:

      It’s only FAST-12, and was hand written (no machine learning). The scoring mechanism does not use binary search. Note that the newer scoring mechanism is slightly but measurably better than the old one.

      This code may be of historical interest, but the new versions are substantially better.




        1. I’m interested in implementing the FAST algorithm in hardware (FPGA) without machine learning, so first I want to implement it in software to show the results of my overall algorithm. With this version, can I change N=12 to 10, for example? And what about the radius of the non-maximal suppression, can I change it?

        Thank you again for your quick response,

        Ph.D Candidate

        1. I’m not sure I really follow. N=9 seems to be the detector which gives best results. You can set N to any number greater than 8 to get a corner detector. If you’re after a version which allows you to select N, but isn’t efficient, then look here:

          the is_corner function, and its use on line 160.

          As for non-max suppression, the fast code doesn’t do non max suppression itself. You need to detect corners and then score them in order to do nonmax suppression. There’s some code here for nonmax:

          but it’s very hard-wired for a 3×3 region. You’ll have to write your own code if you want to use different region sizes.

  48. Another question: while reading your code, I did not notice the part of the FAST algorithm that first tests I1 and I9, and then I1, I3 and I9, to make the algorithm faster. Do you have another version that takes this point into account?

    Do you have any document that clearly explains the code?

    Thank you so much

  49. Hello,
    I think it is a stupid question, but I don’t understand the purpose of imp, high_row, low_row, line_max, line_min and pointer_dir in the code.

    Can you explain them to me briefly, please?

    Thank you

    1. I’m not sure I follow. In the original (and not so good) FAST algorithm, 12 consecutive points out of 16 are needed. In the improved one, only 9 consecutive points are required.

  50. Yes, you are right, but I think the code in the original version doesn’t take into account the contiguity of the pixels; it uses only 12 pixels out of 16, not necessarily consecutive.

    Thank you

    1. You are mistaken. Without the contiguity requirement, it would not be possible to reject candidates after testing only a few pixels, so it wouldn’t be fast. All versions have required contiguous pixels.

  51. Hi,
    I just read your code. In your paper, you mentioned that ID3 can generate a decision tree that gives better performance. In your MATLAB code, are the if/else parts generated by this ID3? Could I get the ID3 code, so that I can generate and train my own decision tree? Or have you already embedded the ID3 into the code? I am very confused.


  52. I just read your paper and your shared code. Thanks for your sharing the good idea. In order to make sure I understand it well, please take a look at my comments in the below paragraph and point it out if I misunderstand it.

    May I assume that the accuracy of the detector is mainly decided by the number (for example, n = 12 or 9) of consecutive points? I am also assuming that the purpose of introducing the ID3 decision tree is to speed up the test only. Whether or not the machine-learning (ID3) improvement is introduced into FAST, the accuracy of the detector will be the same.

    Please let me know your comments.

    thanks a lot

    1. May I assume that the accuracy of the detector is mainly decided by the number (for example, n = 12 or 9) of consecutive points?

      That’s actually a very interesting question and the answer is not simple or obvious. I’ve actually tried several different things, and there’s a few different bits of evidence. So, generally it’s worth rating corners by “quality”. It’s useful for non maximal suppression (otherwise you need to do connected components and centroiding which is slower and can give some odd results in certain cases) and it’s also useful for getting rid of corners if your detection threshold is too low and you have too many.

      It makes some degree of sense to make the threshold and quality the same (i.e. quality = highest threshold for which the point is detected as a corner), because then removing low quality points is identical to having used a higher threshold. The earlier paper used a different definition which was quite closely related, but it turns out that the more obvious one actually gives marginally better results.

      Then there’s your question of the number of consecutive points (arc length). I never used that as a quality measure since it’s too highly quantized (values are only 9–16). If you look in Figure 8 of the paper, you can see I tested different segment lengths. Detectors with longer lengths are worse (i.e. FAST-9 is the best). However, the main reason for that is that I’m measuring at a fixed number of corners per frame, and by definition there are no more FAST-10 points than FAST-9 ones for a given threshold. Usually there are many fewer. That means to get the same number of points you need to lower the threshold, so you’re more likely to pick up junk. That’s probably the dominant effect.

      However, that doesn’t really give a complete picture. I would expect that for a given threshold, points which are FAST-12 are probably better on average than ones which are only FAST-9. You can flip it around and consider how likely they are to cease being corners as the threshold is raised: unless you make unrealistic assumptions about the pixel distribution, points which start as FAST-12 are likely to remain FAST-9 corners for longer as the threshold is raised.

      So the answer is “it depends”. Personally, I’ve never found any reason to use anything other than FAST-9. That’s mostly been based on the results of my experiments in the FAST paper and, to be fair, I’ve not looked into it much. Georg Klein however favoured FAST-10 in PTAM. The results of that are undeniably top notch, and he found FAST-10 to be the best option (efficient versions for 9, 10, 11 and 12 were available at the time). So my guess would be that it depends on the downstream processing (that, incidentally, is the main failure of repeatability as a test), though in any case for general purpose corner detection it’s unlikely that the higher number FAST detectors will be better than the lower numbered ones (i.e. 9, 10, maybe 11).

      I am also assuming that the purpose of introducing the ID3 decision tree is to speed up the test only.

      Yes, that’s 100% correct. You can write a slow FAST (i.e. a plain segment test detector) in a few lines of code. In fact that’s what I use to generate the training data for ID3.
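      For reference, a slow segment test really does fit in a few lines. Here is an illustrative Python sketch operating on one candidate pixel (the function name is mine, not from the released code, which instead evaluates an ID3-optimised decision tree computing the same function):

```python
def is_corner_slow(circle, center, t, n=9):
    """Direct segment test: True if there exist n contiguous pixels
    on the 16-pixel Bresenham circle that are all brighter than
    center + t, or all darker than center - t."""
    for brighter in (True, False):
        passed = [(p > center + t) if brighter else (p < center - t)
                  for p in circle]
        # double the list so runs that wrap around the circle
        # appear as one contiguous run
        run = 0
        for ok in passed + passed:
            run = run + 1 if ok else 0
            if run >= n:
                return True
    return False
```

      Running this over every pixel of an image is slow but trivially correct, which is exactly what makes it useful as a reference and as a source of training data.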

      Whether or not the machine-learning (ID3) improvement is introduced into FAST, the accuracy of the detector will be the same.

      Yes, that’s correct. In fact one of the automated tests in libCVD is to extract corners using FAST, then extract corners using the slow algorithm, and compare the results to check they’re identical.

      That said… In the first versions I trained, the training set didn’t actually have complete coverage of all possible FAST corners, so the detector learned was not precisely the same as the direct test. In later versions, as well as extracting data from images, I also generated one of every possible corner type to ensure that the result of the tree was exactly the same as the direct test.

      Interestingly, that gave a small but consistent improvement in repeatability.

    1. The short answer is “yes”.

      FAST is very efficient in the average case, but you can construct pathological cases where it’s slow, so it’s not hard realtime. It is very much suitable for soft realtime processing.

  53. Hi. I am curious about the score of a detected corner. I think it is different from the description in the paper. Can you explain the background, and the reason why you changed the logic for computing the score?

    1. Well, there are several papers. The most recent versions of the code use the score defined in the most recent papers: the score is the maximum threshold for which a point is detected as a corner.

      I changed the score for two reasons:
      1. It gave very slightly better repeatability
      2. It means that filtering detected corners by score is identical to re-running the detector with a higher threshold.
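      Since raising the threshold only ever removes corners, the score as defined above (the maximum threshold for which a point is still detected) can be found by bisection. A hedged sketch in Python (the function names are illustrative, not from the released code, which computes the score more directly):

```python
def corner_score(circle, center, is_corner, t_min=0, t_max=255):
    """Score = maximum threshold t for which `is_corner` still fires.
    `is_corner(circle, center, t)` is any boolean segment test; it is
    assumed to be monotone in t (true at t implies true at t-1)."""
    if not is_corner(circle, center, t_min):
        return t_min  # not a corner even at the lowest threshold
    lo, hi = t_min, t_max
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if is_corner(circle, center, mid):
            lo = mid  # still a corner: the score is at least mid
        else:
            hi = mid - 1  # no longer a corner: the score is below mid
    return lo
```

      With this definition, discarding corners with score below s gives exactly the set you would have detected with threshold s in the first place.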

      1. Thank you for the reply. What I had seen was the maximum of the sums over the brighter and darker points. You changed it for repeatability, which means it is more useful for feature tracking etc. Am I right?

  54. Hi Dr.Ed Rosten,

    I am an MEng student working on a project using a webcam to track a specific pattern attached to a hand-held landmine detector, to monitor its position. I designed a corner pattern and use your FAST algorithm with OpenCV and Python to track it, and now I have a few questions:

    1. Is the FAST algorithm able to return the pixel coordinates of detected corners?

    2. I tried different types of FAST and tweaked the threshold value, but cannot get the result that I want, and I know that the FAST algorithm in OpenCV is learned. So is there any way that I can re-train the FAST algorithm on my own pattern to improve the corner detection?

    1. Hi,

      1. The basic FAST algorithm only computes the pixel coordinates: for each pixel in the image, it tests whether it passes the segment test criterion. I’m not familiar with the OpenCV API though, so I don’t know how to extract that information.

      2. The learning part of the FAST algorithm is there to optimize the order of the tests; it doesn’t actually learn different appearances. There’s some learned FAST code in libCVD, and the regression tests compare the results to a slow, hand-written detector and show that they produce exactly the same results. Also, I believe that OpenCV doesn’t actually use a learned FAST any more. They instead use a hand-written SSE based one (as does libCVD in many cases, in fact!).



  55. Hello Mr.Edrosten,

    I am doing my Master’s thesis and I am working with the FAST feature detector. I added your code to my existing code and found that for my video input the features were all detected incorrectly, whereas if I replace this function with OpenCV’s FAST, I get a clear output where the features are detected properly. Also, it doesn’t throw any errors while compiling. The only problem is the mismatched output. I am stuck finding where it has gone wrong. Could you please help me?

    Thanks in advance

    1. Hi,

      My guess is that you’ve got an error in how you’re using the API. Given that the basic FAST API is in C, that’s easy to do. The FAST code is pretty well tested, so I think it’s unlikely there’s an error in the code itself.

      I would say that if the OpenCV one works, you may as well use it. As far as I know, it’s back to being faster than the basic, non vectorized C code. There’s also a FAST detector in libCVD. Much like the OpenCV one, it removes the problem of messing around with pointers, strides and so on, making it easy to use.

      If using the pure C code is important to your project, I would suggest that you try debugging by creating a non-square image with a black right-angle on a white background so you know that there’s exactly one FAST corner and you know exactly where it is.
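      Such a test image takes a few lines to generate. A sketch in Python/NumPy (the dimensions and the helper name are my own choices for illustration):

```python
import numpy as np

def make_test_image(h=48, w=64):
    """Non-square white image with a black axis-aligned right angle:
    a black block filling one quadrant has exactly one interior
    corner, away from the image border, at a known location. The
    non-square shape helps catch x/y and stride mix-ups."""
    img = np.full((h, w), 255, dtype=np.uint8)
    img[:h // 2, :w // 2] = 0  # black block in the top-left quadrant
    # the only interior corner of the block, as (row, col):
    return img, (h // 2 - 1, w // 2 - 1)
```

      If the detector reports the corner anywhere other than near the returned location (up to the detector’s own offset conventions), the image pointer, stride or coordinate order is probably being passed incorrectly.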


  56. Hi Dr. Ed. Rosten,
    I am very impressed with your engagement and dedication on this forum. It took me a while to figure out why you machine-learnt the decision tree, but I can see it is a work of extreme elegance. I am building a video graph product to improve learning outcomes in the classroom by discretizing learning into the smallest instructional blocks that can be reassembled to match different learning curves. As part of this work, I am planning to add edge detection to my vision pipeline on iOS Metal, and I was considering either writing a converter from the intermediate form that would generate Metal code, or hand-porting the non-machine-learnt code to Metal. There are pros and cons to both approaches: the non-machine-learnt code is more compact, while, as expected, the machine-learnt code would be more efficient since the decision tree reaches a decision sooner. Given the compute available on the GPU, it’s not obvious to me how significant the optimization would be.
    That said, it appears that I would need two passes: a first pass to get the corners, which could score them immediately after detection, and a second pass to do non-max suppression, which can’t be done until the first pass is complete and the local neighbourhood has been fully computed. Would you agree that two passes is potentially the optimal approach for the GPU?
