[analyzer] exploration strategies and paths

Re: [analyzer] exploration strategies and paths

Dennis Luehring via cfe-dev


On Jan 31, 2018, at 10:15 PM, Anna Zaks <[hidden email]> wrote:
A high jump in basic blocks coverage could be explained by your argument. (By the way, have you measured that?)

On sqlite3, with trunk:

===-------------------------------------------------------------------------===
                         Miscellaneous Ungrouped Timers
===-------------------------------------------------------------------------===

   ---User Time---   --System Time--   --User+System--   ---Wall Time---  --- Name ---
  379.8440 (100.0%)   6.6433 (100.0%)  386.4873 (100.0%)  391.7896 (100.0%)  Analyzer Total Time
  379.8440 (100.0%)   6.6433 (100.0%)  386.4873 (100.0%)  391.7896 (100.0%)  Total

===-------------------------------------------------------------------------===
                          ... Statistics Collected ...
===-------------------------------------------------------------------------===

    1153 AnalysisConsumer - The maximum number of basic blocks in a function.
   22876 AnalysisConsumer - The # of basic blocks in the analyzed functions.
    1552 AnalysisConsumer - The # of functions at top level.
     581 AnalysisConsumer - The # of functions and blocks analyzed (as top level with inlining turned on).
      62 AnalysisConsumer - The % of reachable basic blocks.
    4091 BugReporter      - The maximum number of bug reports in the same equivalence class
     607 BugReporter      - The maximum number of bug reports in the same equivalence class where at least one report is valid (not suppressed)
    9411 CoreEngine       - The # of paths explored by the analyzer.
     210 CoreEngine       - The # of times we reached the max number of steps.
52196353 CoreEngine       - The # of steps executed.
  752599 ExprEngine       - The # of times we inlined a call
   59461 ExprEngine       - The # of aborted paths due to reaching the maximum block count in a top level function
   64855 ExprEngine       - The # of times we reached inline count maximum
12580704 ExprEngine       - The # of times RemoveDeadBindings is called
     310 ExprEngine       - The # of times we re-evaluated a call without inlining

With the new approach (resulting in 7 more reports):

===-------------------------------------------------------------------------===
                         Miscellaneous Ungrouped Timers
===-------------------------------------------------------------------------===

   ---User Time---   --System Time--   --User+System--   ---Wall Time---  --- Name ---
  265.6685 (100.0%)   4.5682 (100.0%)  270.2367 (100.0%)  423.9704 (100.0%)  Analyzer Total Time
  265.6685 (100.0%)   4.5682 (100.0%)  270.2367 (100.0%)  423.9704 (100.0%)  Total

===-------------------------------------------------------------------------===
                          ... Statistics Collected ...
===-------------------------------------------------------------------------===

    1153 AnalysisConsumer - The maximum number of basic blocks in a function.
   22876 AnalysisConsumer - The # of basic blocks in the analyzed functions.
    1552 AnalysisConsumer - The # of functions at top level.
     448 AnalysisConsumer - The # of functions and blocks analyzed (as top level with inlining turned on).
      77 AnalysisConsumer - The % of reachable basic blocks.
     576 BugReporter      - The maximum number of bug reports in the same equivalence class
     576 BugReporter      - The maximum number of bug reports in the same equivalence class where at least one report is valid (not suppressed)
    7717 CoreEngine       - The # of paths explored by the analyzer.
     143 CoreEngine       - The # of times we reached the max number of steps.
36012220 CoreEngine       - The # of steps executed.
  527875 ExprEngine       - The # of times we inlined a call
   33368 ExprEngine       - The # of aborted paths due to reaching the maximum block count in a top level function
   37822 ExprEngine       - The # of times we reached inline count maximum
 8551997 ExprEngine       - The # of times RemoveDeadBindings is called
     295 ExprEngine       - The # of times we re-evaluated a call without inlining

We can also see a roughly sevenfold decrease (4091 to 576) in the maximum number of bug reports within the same equivalence class,
which further supports the hypothesis that the budget is no longer spent on going around the loop and finding the same bug over and over again.
Moreover, even the user time is significantly decreased (I would assume because we run out of budget faster as paths get longer).


However, whenever I see that we start reporting hundreds of new bugs on a single codebase, it looks suspicious. Will the developers really want and need to fix more than three hundred bugs in postgresql? Often when we dig deeper, we find either that we became too aggressive in reporting what people will view as false positives, or that we are reporting duplicate reports with the same underlying cause. It’s possible postgresql is really that buggy, but we need to be careful here.


On Jan 31, 2018, at 9:52 PM, Anna Zaks <[hidden email]> wrote:



On Jan 31, 2018, at 9:41 PM, George Karpenkov <[hidden email]> wrote:



On Jan 31, 2018, at 8:46 PM, Anna Zaks <[hidden email]> wrote:


On Jan 31, 2018, at 5:00 PM, George Karpenkov <[hidden email]> wrote:

The list did not like a posting with many images,
so I have posted an evaluation to phabricator: https://reviews.llvm.org/M1

The text part was:

After fixing a few bugs, another evaluation of the approach shows considerably better results.

On openssl:

  • 9 reports added
  • 1 report removed

On postgresql:

  • 377 reports added
  • 43 reports removed

On sqlite3 + a few other misc files:

  • 239 reports added
This is a lot of additional reports! Are there actually that many bugs in that codebase that need to be fixed?

Out of 239 or in general? The 239 reports mostly come from non-sqlite files.

Can you provide data just for sqlite?

For postgresql the number of additional reports also seems very high.

In general, for most practical purposes C codebases provide an infinite supply of bugs :P

On a more serious note, I don’t think this is very surprising.
The previous emails in this chain have shown that for loops, the analyzer has a very high chance of entering
a degenerate behavior where the longest path through the loop will be evaluated first,
and a large chunk of the analyzer budget is then spent on going around the loop in circles.

Under the new exploration strategy, paths which increase coverage are explored first, and then we can actually find
bugs with the budget that is no longer spent going in circles.

I think we need to manually evaluate these reports.

Yes, I’ve looked through quite a few of them; the false-positive ratio actually seems to be getting lower,
as the probability of a report being a false positive grows with path length,
and path lengths are getting much shorter.

Also, we have to make sure uniquing works. Do all regression tests pass?

Yes.

Note on histograms (here and below)

-> Histograms only show the ratio for the same bugs (compared using the issue hash);
that is, if the histogram says “decrease by a factor of three”, the new approach finds the *same* bug
with a path 1/3 the length of the original.
-> Histograms omit data points where the path length has remained the same
(as otherwise they would completely dominate the histogram).
-> Relative histograms are provided both as the ratio and as the logarithm of the ratio.
Log-ratios are convenient because they are symmetric when changes balance out
(e.g. log(1/2) = -log(2/1)).

On Jan 30, 2018, at 4:23 PM, George Karpenkov <[hidden email]> wrote:

Preliminary evaluation of a patch which prefers to first explore nodes associated with statements that have not been seen before:

On openssl:

 - Adds four reports
 - Removes four reports 
 - Path lengths before: 317, 75, 75, 72, 70, 58, 50, 50, 44, 36, 23, 23, 21, 21, 20, 20, 19, 19, 19, 19, 18, 18, 18, 16, 15, 15, 15, 14, 13, 13, 12, 11, 11, 9, 7, 7, 6, 4
 - Path lengths after: 72, 60, 59, 53, 53, 52, 46, 38, 37, 30, 29, 28, 23, 21, 20, 19, 19, 19, 19, 19, 18, 16, 15, 15, 15, 15, 13, 13, 12, 12, 11, 9, 8, 7, 7, 7, 6, 4

The quality of the added reports seems higher, mainly because the report paths are shorter.

On postgresql:

 - Added 80 reports
 - Removed 154 reports
   -> Out of those, 72 are reports on the yacc/bison autogenerated files, so whatever the cause is, it is good that they are gone
 - The overall number of reports is 1188
 - Path lengths are lower overall, but not as dramatically
 - For many reports, I am quite confused as to why they got removed

On sqlite:

 - 7 inserted, 7 removed

On Jan 30, 2018, at 1:10 PM, Artem Dergachev <[hidden email]> wrote:



On 30/01/2018 12:40 PM, Gábor Horváth via cfe-dev wrote:
Hi George, Artem,

I am glad that you are looking into this problem!

On 30 January 2018 at 01:12, George Karpenkov via cfe-dev <[hidden email] <[hidden email]>> wrote:

   Hi All,

   I was recently investigating bug reports with very long analyzer
   paths (more than a few hundred nodes).
   In many such cases the path is long for no good reason: namely,
   the analyzer would go around the loop 3 times before
   going further.
   The issue is surprisingly common, and it was exacerbated by a
   recent bump of the analyzer thresholds.

   The problem is reproduced on the following file:

   ```
    extern int coin();  // opaque to the analyzer: the result is unknown

    int foo() {
        int *x = 0;
        while (coin()) {
            if (coin())
                return *x;  // null pointer dereference reported here
        }
        return 0;
    }

    void bar() {
        while (coin())
            if (coin())
                foo();
    }
   ```

    While the shortest path to the error does not loop around, the
    current version of the analyzer
    will go around the loop three times before going further
    (and we are quite fortunate that the unrolling limit for loops is
    three; with a higher limit it would keep going until that limit
    is reached).

   Multiple issues were discovered during the investigation.

    1. The analyzer queue has no concept of priority, and
    performs a simple DFS by default.
    Thus if the successor of the if-branch under the loop in “bar"
    containing the desired destination is generated second,
    it will not be evaluated until the loop exploration limit is
    exhausted.

    2. The previous issue slows down the exploration, but is not
    enough to produce the pathological behavior of ultra-long paths.
    The second problem is a combination of:
    a) The block counter is not part of a node's identity, so node A
    with a small block counter can be merged into a node B with a
    large block counter,
    and the resulting node will keep the block counter associated
    with B.


Sorry for the questions, just wanted to clarify some things. You mean ExplodedNodes? By merge, you mean the same thing as "caching-out"?

Yeah, George noticed that if we construct the same ExplodedNode on two different paths that have different block counts, we'd cache out on the latter path, while the worklist element of the first path would still possess the original block count.

Which happens a lot when we're evaluating foo() conservatively in this example.

This isn't directly related to our problem though, as i noticed in http://lists.llvm.org/pipermail/cfe-dev/2018-January/056719.html .


    b) The issue in (a) is triggered by our heuristic of abandoning
    the function’s exploration and switching to conservative evaluation
    if the block limit is reached while we are already *inside* the
    function.

   Issue (1) combined with (2-b) causes the problematic behavior: the
   issue is discovered on the longest path first,
   and by the time the shortest path gets to “bar”, the block limit
   is already reached, and the switch to conservative evaluation is
   performed.

   Thus there are two mitigation strategies currently being evaluated:

   i) Remove the heuristic in (2-b)
   ii) Use a priority queue to hold nodes which should be explored;
   prefer nodes which give new source code coverage over others
   (or alternatively prefer nodes with least depth of loop stack)

    Artem and I have evaluated option (i), and the results were
    surprisingly good: some reports disappear, and slightly more
    reports appear.
    The quality of the new reports seems to be slightly better, and I
    am still trying to figure out the exact reasons.
    I suspect merges resulting from heuristic (2-b) cause us to lose
    some actually valid reports.


I also find the results surprising. If you have more information about the reasons, please do not forget to follow up in this thread. We are curious :)


    Option (ii) has not been fully evaluated yet, but current
    experiments show slightly more reports (5-10%) and a radical
    decline in report lengths
    (e.g. from 400+ to <100 for the largest reports).

   Are there any thoughts on the matter?

    Personally I think we should do both (i) and (ii), even if they
    would shake up the results.
    - The original idea behind heuristic (2-b) was to be able to
    produce a report even when we are out of budget, but since it
    actually results in fewer reports,
    I think the data does not validate the approach.

    - Option (ii) is AFAIK how most similar engines work, and it
    should get us much larger coverage (and shorter paths) for the
    same node budget,
    even at the cost of the O(log N) overhead of the priority queue.
    Moreover, not having a priority queue will bite us later if we
    ever decide to further
    increase the analyzer budget or the unroll limit.


I wonder what the performance implications will be. But I also like the idea of having a priority queue. If we find that we get more and better reports
but also worse performance, we can consider reducing the analysis budget slightly.

Regards,
Gábor


   George



   _______________________________________________
   cfe-dev mailing list
   [hidden email] <[hidden email]>
   http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
   <http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev>










Re: [analyzer] exploration strategies and paths

Dennis Luehring via cfe-dev


On 1 February 2018 at 23:39, George Karpenkov <[hidden email]> wrote:


We can also see a factor of 8 decrease in the maximum number of bug reports within the same equivalence class,
which further validates the hypothesis that the budget is no longer spent on going around the loop and finding the same bug over and over again.
Moreover, even the user time is significantly decreased (I would assume because we run out of budget faster as paths get longer).

I think it would be good to know the reason behind the significant decrease in running time. Maybe we discover the shorter paths first (with less looping around) and cache out on the longer paths later? Or, if the reason is that we exhaust one of the analysis budgets earlier, it might be worth considering altering the default budget limits.



Re: [analyzer] exploration strategies and paths

Dennis Luehring via cfe-dev
After additional investigation (summarized in the table below), we have decided to flip the switch.
Some additional notes on postgres: most additional reports come from the auto-generated bison file,
and the increase in memory comes from that same file (we simply have a lot more reports for a single invocation).
Once https://reviews.llvm.org/D43421 ([analyzer] Do not analyze bison-generated files) lands, those should be suppressed.

Relevant differential revisions are:

 - https://reviews.llvm.org/D43354 [analyzer] Exploration strategy prioritizing unexplored nodes first
 - https://reviews.llvm.org/D43782 [analyzer] Switch the default exploration strategy to priority queue based on coverage



Project           | Total User Time (s) | Total Coverage (%) | Max Resident Set Size (MB) | Total Reports            | Max Path Length
------------------|---------------------|--------------------|----------------------------|--------------------------|----------------
sqlite3 (dfs)     | 439                 | 62                 | 196                        | 34                       | 79
sqlite3 (cqueue)  | 236 (-46%)          | 80 (+29%)          | 211 (-7%)                  | 46 (+14 / -2) (+35%)     | 118 (+49%)
XNU (dfs)         | 5237                | 71                 | 2222                       | 1942                     | 833
XNU (cqueue)      | 5046 (-3%)          | 82 (+15%)          | 2224 (+0%)                 | 2194 (+275 / -23) (+12%) | 360 (-56%)
openSSL (dfs)     | 810                 | 81                 | 187                        | 184                      | 465
openSSL (cqueue)  | 862 (+6%)           | 86 (+6%)           | 144 (-22%)                 | 193 (+9 / -0) (+4%)      | 170 (-63%)
postgres (dfs)    | 3638                | 73                 | 272                        | 979                      | 294
postgres (cqueue) | 3703 (+2%)          | 85 (+16%)          | 543 (+100%)                | 1352 (+390 / -17) (+38%) | 199 (-32%)
adium (dfs)       | 2805                | 93                 | 222                        | 338                      | 94
adium (cqueue)    | 3075 (+10%)         | 95 (+2%)           | 209 (-5%)                  | 347 (+9 / -0) (+3%)      | 94 (-0%)

On Feb 4, 2018, at 4:43 AM, Gábor Horváth <[hidden email]> wrote:



On 1 February 2018 at 23:39, George Karpenkov <[hidden email]> wrote:


On Jan 31, 2018, at 10:15 PM, Anna Zaks <[hidden email]> wrote:
A high jump in basic blocks coverage could be explained by your argument. (By the way, have you measured that?)

On sqlite3, with trunk:

===-------------------------------------------------------------------------===
                         Miscellaneous Ungrouped Timers
===-------------------------------------------------------------------------===

   ---User Time---   --System Time--   --User+System--   ---Wall Time---  --- Name ---
  379.8440 (100.0%)   6.6433 (100.0%)  386.4873 (100.0%)  391.7896 (100.0%)  Analyzer Total Time
  379.8440 (100.0%)   6.6433 (100.0%)  386.4873 (100.0%)  391.7896 (100.0%)  Total

===-------------------------------------------------------------------------===
                          ... Statistics Collected ...
===-------------------------------------------------------------------------===

    1153 AnalysisConsumer - The maximum number of basic blocks in a function.
   22876 AnalysisConsumer - The # of basic blocks in the analyzed functions.
    1552 AnalysisConsumer - The # of functions at top level.
     581 AnalysisConsumer - The # of functions and blocks analyzed (as top level with inlining turned on).
      62 AnalysisConsumer - The % of reachable basic blocks.
    4091 BugReporter      - The maximum number of bug reports in the same equivalence class
     607 BugReporter      - The maximum number of bug reports in the same equivalence class where at least one report is valid (not suppressed)
    9411 CoreEngine       - The # of paths explored by the analyzer.
     210 CoreEngine       - The # of times we reached the max number of steps.
52196353 CoreEngine       - The # of steps executed.
  752599 ExprEngine       - The # of times we inlined a call
   59461 ExprEngine       - The # of aborted paths due to reaching the maximum block count in a top level function
   64855 ExprEngine       - The # of times we reached inline count maximum
12580704 ExprEngine       - The # of times RemoveDeadBindings is called
     310 ExprEngine       - The # of times we re-evaluated a call without inlining

With the new approach (resulting in 7 more reports):

===-------------------------------------------------------------------------===
                         Miscellaneous Ungrouped Timers
===-------------------------------------------------------------------------===

   ---User Time---   --System Time--   --User+System--   ---Wall Time---  --- Name ---
  265.6685 (100.0%)   4.5682 (100.0%)  270.2367 (100.0%)  423.9704 (100.0%)  Analyzer Total Time
  265.6685 (100.0%)   4.5682 (100.0%)  270.2367 (100.0%)  423.9704 (100.0%)  Total

===-------------------------------------------------------------------------===
                          ... Statistics Collected ...
===-------------------------------------------------------------------------===

    1153 AnalysisConsumer - The maximum number of basic blocks in a function.
   22876 AnalysisConsumer - The # of basic blocks in the analyzed functions.
    1552 AnalysisConsumer - The # of functions at top level.
     448 AnalysisConsumer - The # of functions and blocks analyzed (as top level with inlining turned on).
      77 AnalysisConsumer - The % of reachable basic blocks.
     576 BugReporter      - The maximum number of bug reports in the same equivalence class
     576 BugReporter      - The maximum number of bug reports in the same equivalence class where at least one report is valid (not suppressed)
    7717 CoreEngine       - The # of paths explored by the analyzer.
     143 CoreEngine       - The # of times we reached the max number of steps.
36012220 CoreEngine       - The # of steps executed.
  527875 ExprEngine       - The # of times we inlined a call
   33368 ExprEngine       - The # of aborted paths due to reaching the maximum block count in a top level function
   37822 ExprEngine       - The # of times we reached inline count maximum
 8551997 ExprEngine       - The # of times RemoveDeadBindings is called
     295 ExprEngine       - The # of times we re-evaluated a call without inlining

We can also see a roughly sevenfold decrease (4091 to 576) in the maximum number of bug reports within the same equivalence class,
which further validates the hypothesis that the budget is no longer spent on going around the loop and finding the same bug over and over again.
Moreover, even the user time is significantly decreased (I would assume because we run out of budget faster as paths get longer).

I think it would be good to know the reason behind the significant decrease in running time. Maybe we discover the shorter paths first (with less looping around), and we cache out on the longer paths later? Or, in case the reason is that we exhaust one of the analysis budgets earlier, maybe it would be worth considering altering the default budget limits.
 


However, whenever I see that we start reporting hundreds of new bugs on a single codebase, it looks suspicious. Will developers really want and need to fix more than three hundred bugs in postgresql? Often, when we dig deeper, we find out that either we have become too aggressive in reporting what people will view as false positives, or we are reporting duplicate reports that have the same underlying cause. It's possible postgresql is really that buggy, but we need to be careful here.


On Jan 31, 2018, at 9:52 PM, Anna Zaks <[hidden email]> wrote:



On Jan 31, 2018, at 9:41 PM, George Karpenkov <[hidden email]> wrote:



On Jan 31, 2018, at 8:46 PM, Anna Zaks <[hidden email]> wrote:


On Jan 31, 2018, at 5:00 PM, George Karpenkov <[hidden email]> wrote:

The list did not like a posting with many images,
so I have posted an evaluation to phabricator: https://reviews.llvm.org/M1

The text part was:

After fixing a few bugs, another evaluation of the approach shows considerably better results.

On openssl:

  • 9 reports added
  • 1 report removed

On postgresql:

  • 377 reports added
  • 43 reports removed

On sqlite3 + a few other misc files:

  • 239 reports added
This is a lot of additional reports! Are there actually that many bugs in that codebase that need to be fixed?

Out of 239 or in general? The 239 reports mostly come from non-sqlite files.

Can you provide data just for sqlite?

For postgresql the number of additional reports also seems very high.

In general, for most practical purposes C codebases provide an infinite supply of bugs :P

On a more serious note, I don’t think this is very surprising.
The previous emails in this chain have shown that for loops, the analyzer has a very high chance of entering
a degenerate behavior where the longest path through the loop will be evaluated first,
and a large chunk of the analyzer budget is then spent on going around the loop in circles.

Under the new exploration strategy, paths which increase coverage are explored first, and then we can actually find
bugs with the budget that is no longer spent going in circles.

I think we need to manually evaluate these reports.

Yes, I’ve looked through quite a few of them, and the false positive ratio actually seems to be getting lower:
the probability of a report being a false positive grows with its path length,
and path lengths are getting much shorter.

Also, we have to make sure uniquing works. Do all regression tests pass?

Yes.
  • 1 report removed

Note on histograms (here and below)

-> Histograms only show the ratio for the same bugs (compared using issue hash);
that is, if the histogram says “decrease by a factor of three”, it means the new approach finds the *same* bug
with a path one third the length of the original
-> Histograms omit data points where the path length has remained the same
(as otherwise they would completely dominate the histogram)
-> Relative histograms are provided as both the ratio and the logarithm of the ratio.
Logarithms of the ratio are convenient as they are symmetric when changes balance out
(e.g. log(1/2) = -log(2/1))
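The log-ratio symmetry described in the note can be sketched in a few lines (illustrative Python; the path lengths below are made-up numbers, not data from the evaluation):

```python
import math

# Hypothetical (before, after) path lengths for the *same* bug.
samples = [(300, 100), (80, 160), (50, 50)]

for before, after in samples:
    ratio = after / before
    # The log-ratio is symmetric: a halving and a doubling land at
    # -log(2) and +log(2), equidistant from zero, whereas the plain
    # ratios (0.5 vs 2.0) are not symmetric around 1.
    print(f"{before} -> {after}: ratio={ratio:.2f}, log-ratio={math.log(ratio):+.3f}")
```

This is why a halved path and a doubled path cancel out visually on a log-ratio histogram but not on a plain-ratio one.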

On Jan 30, 2018, at 4:23 PM, George Karpenkov <[hidden email]> wrote:

Preliminary evaluation of a patch which prefers exploring nodes associated with statements which weren’t seen before first:

On openssl:

 - Adds four reports
 - Removes four reports 
 - Path lengths before: 317, 75, 75, 72, 70, 58, 50, 50, 44, 36, 23, 23, 21, 21, 20, 20, 19, 19, 19, 19, 18, 18, 18, 16, 15, 15, 15, 14, 13, 13, 12, 11, 11, 9, 7, 7, 6, 4
 - Path lengths after: 72, 60, 59, 53, 53, 52, 46, 38, 37, 30, 29, 28, 23, 21, 20, 19, 19, 19, 19, 19, 18, 16, 15, 15, 15, 15, 13, 13, 12, 12, 11, 9, 8, 7, 7, 7, 6, 4

The quality of the added reports seems higher, mainly due to the fact that report length is shorter.

On postgresql:

 - Added 80 reports
 - Removed 154 reports
   -> Out of those, 72 are reports on the yacc/bison autogenerated files, so whatever the cause is, good thing they are gone
 - The overall number of reports is 1188
 - Path lengths are lower on overall, but not in such a dramatic way
 - For many reports, I am quite confused as to why they got removed

On sqlite:

 - 7 inserted, 7 removed

On Jan 30, 2018, at 1:10 PM, Artem Dergachev <[hidden email]> wrote:



On 30/01/2018 12:40 PM, Gábor Horváth via cfe-dev wrote:
Hi George, Artem,

I am glad that you are looking into this problem!

On 30 January 2018 at 01:12, George Karpenkov via cfe-dev <[hidden email] <[hidden email]>> wrote:

   Hi All,

   I was recently investigating bug reports with very long analyzer
   paths (more than a few hundred nodes).
   In many such cases the path is long for no good reason: namely,
   the analyzer goes around the loop three times before
   going further.
   The issue is surprisingly common, and it was exacerbated by a
   recent bump of the analyzer thresholds.

   The problem is reproduced on the following file:

   ```
   extern int coin();

   int foo() {
       int *x = 0;
       while (coin()) {
           if (coin())
               return *x;
       }
       return 0;
   }

   void bar() {
       while(coin())
           if (coin())
               foo();
   }
   ```

   While the shortest path to the error does not go around the loop,
   the current version of the analyzer
   will go around the loop three times before going further
   (and we are quite fortunate that the unrolling limit for loops is
   three, otherwise it would keep going until that limit is reached).

   Multiple issues were discovered during the investigation.

   1. The analyzer queue has no concept of priority, and
   performs a simple DFS by default.
   Thus, if the successor of the if-branch under the loop in “bar"
   containing the desired destination is generated second,
   it will not be evaluated until the loop exploration limit is
   exhausted.

   2. The previous issue slows down the exploration, but is not
   enough to get a pathological behavior of ultra-long paths.
   The second problem is a combination of:
   a) Block counter is not a part of a node's identity, and node A
   with a small block counter can be merged into a node B with a
   large block counter,
   and the resulting node will have a block counter associated with B.


Sorry for the questions, just wanted to clarify some things. You mean ExplodedNodes? By merge, you mean the same thing as "caching-out"?

Yeah, George noticed that if we construct the same ExplodedNode on two different paths that have different block counts, we cache out on the latter path, while the worklist element of the first path still possesses the original block count.

Which happens a lot when we're evaluating foo() conservatively in this example.

This isn't directly related to our problem though, as i noticed in http://lists.llvm.org/pipermail/cfe-dev/2018-January/056719.html .
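A minimal toy model of this cache-out interaction (illustrative Python, not the actual ExplodedGraph code; all names here are invented for the sketch):

```python
# Toy model: an "exploded node" is identified by (program_point, state)
# only; the block count lives on the worklist item, *outside* the node's
# identity, so whichever path arrives first fixes the count.
graph = {}      # (program_point, state) -> block count of first arrival
worklist = []

def enqueue(program_point, state, block_count):
    key = (program_point, state)
    if key in graph:
        return False            # cache out: the later path is merged away
    graph[key] = block_count    # the first arrival's count "wins"
    worklist.append(key)
    return True

# DFS reaches the node first via the long path around the loop...
assert enqueue(program_point=5, state=0, block_count=90)
# ...so the short path caches out and its small count is discarded.
assert not enqueue(program_point=5, state=0, block_count=3)
assert graph[(5, 0)] == 90
```

The point of the sketch is only that the surviving node retains the large block count from the long path, which is the behavior described above.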


   b) The issue in (a) is triggered due to our heuristic to abandon
   the function’s exploration and switch to conservative evaluation
   if we are already *inside* the function and the block limit has
   been reached.

   Issue (1) combined with (2-b) causes the problematic behavior: the
   issue is discovered on the longest path first,
   and by the time the shortest path gets to “bar”, the block limit
   is already reached, and the switch to conservative evaluation is
   performed.

   Thus there are two mitigation strategies currently being evaluated:

   i) Remove the heuristic in (2-b)
   ii) Use a priority queue to hold nodes which should be explored;
   prefer nodes which give new source code coverage over others
   (or alternatively prefer nodes with least depth of loop stack)
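Strategy (ii) can be sketched roughly as follows (a simplified model, not the actual patch; priorities are fixed at insertion time, and the statement identifiers are hypothetical):

```python
import heapq
import itertools

visited_stmts = set()
counter = itertools.count()   # tie-breaker: FIFO among equal priorities
worklist = []                 # min-heap of (priority, insertion order, stmt)

def push(stmt):
    # Priority 0: the node reaches a not-yet-covered statement;
    # priority 1: it merely revisits already-covered code.
    priority = 0 if stmt not in visited_stmts else 1
    heapq.heappush(worklist, (priority, next(counter), stmt))

def pop():
    _, _, stmt = heapq.heappop(worklist)
    visited_stmts.add(stmt)
    return stmt

visited_stmts.add("loop-body")    # the loop body was already explored once
push("loop-body")                 # another trip around the loop
push("after-loop")                # a node reaching a fresh statement
assert pop() == "after-loop"      # the coverage-increasing node wins
```

Even though the loop-body node was pushed first, the node that increases coverage is dequeued first, which is the intended effect of the priority queue.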

   Artem and I have evaluated option (i), and the results were
   surprisingly good: some reports disappear, and slightly more
   reports appear.
   The quality of the new reports seems to be slightly better, and I
   am still trying to figure out the exact reasons.
   I suspect merges resulting from heuristic (2-b) cause us to lose
   some actually valid reports.


I also find the results surprising. If you learn more about the reasons, please do not forget to follow up on this thread. We are curious :)


   Option (ii) has not been evaluated fully yet, but current
   experiments show slightly more reports (5-10%), and a radical
   decline in report lengths
   (e.g. from 400+ to <100 for largest reports)

   Are there any thoughts on the matter?

   Personally, I think we should do both (i) and (ii), even if they
   would shake up the results.
   - The original idea behind heuristic (2-b) was to be able to produce
   a report even if we are out of budget, but since it actually
   results in fewer reports,
   I think the data does not validate the approach.

   - Option (ii) is, AFAIK, how most similar engines work, and should
   get us much larger coverage (and shorter paths) for the same node
   budget,
   even at the cost of the log(N) overhead of the priority queue.
   Moreover, not having a priority queue will bite us later if we
   ever decide to further
   increase the analyzer budget or the unroll limit.


I wonder what the performance implications will be. But I also like the idea of having a priority queue. If we find that we get more and better reports
but also worse performance, we can also consider reducing the analysis budget slightly.

Regards,
Gábor


   George



   _______________________________________________
   cfe-dev mailing list
   [hidden email] <[hidden email]>
   http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
   <http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev>



