Still Problems with Impacted Tests detection
Der-Albert.com
#1 Posted : Wednesday, December 12, 2018 7:56:10 AM(UTC)
Still problems with the detection of impacted tests in CompareIL mode, and this time not with async/await. It was a new machine, so I hadn't changed it to WatchText mode, which I normally do because CompareIL does not work as I expect with async/await.


I was doing a live coding session on the new Microsoft.Extensions.Options version 2.2 in a .NET Core 2.2 project, so I have a video and will try to describe what happened.

Here is the point where I started to write the code under test, which does not get executed:

https://youtu.be/pj9lIxhkV68?t=3586

The test should fail after adding the Required attribute, but it does not; even when I make changes in the unit test, it does not fail.

It begins to fail at this point:

https://youtu.be/pj9lIxhkV68?t=3698

After the test failed (I had changed other stuff), I didn't notice when changing back to "Default Name" that the test was still failing, so I searched Google to see whether I had done something wrong.

It took me a while:

https://youtu.be/pj9lIxhkV68?t=3832

Around that point I realised that the test had run in the meantime (in reality it had run earlier, as noted). But I discovered that NCrunch had not detected the test as impacted, and because of that the test was not run, and I looked like a stupid guy (also because I had said one or two episodes before that NCrunch is the best). Eventually I figured it out and switched back to running all tests.

The code is here: http://github.com/DerAlbertLive/ExtensionsOptions. But unlike in the video, not everything is in one file anymore.
Der-Albert.com
#2 Posted : Wednesday, December 12, 2018 8:01:26 AM(UTC)
Btw., I also looked stupid around here: https://youtu.be/pj9lIxhkV68?t=562. I was not able to create a custom engine mode live on tape (normally I run new, impacted, pinned, failed and not-run tests). Because of the new dialog, I did not find the right places in a timely fashion. In the meantime I think I have found the right filter, but in that moment the dialog was terrible.

Remco
#3 Posted : Wednesday, December 12, 2018 12:41:49 PM(UTC)
Hi Albert,

I'm sorry to hear that NCrunch embarrassed you in this instance.

NCrunch's impact detection is based on IL-level comparisons correlated with covered source lines. This system can work very well in some scenarios, but it does not work at all for reflection-based situations, such as when you have framework code dynamically calling your code based on attributes.

Reflection is extremely difficult to reliably handle using impact detection because of its nature. At present, I haven't yet found an acceptable way to handle it.

The impact detection system isn't 100% reliable, and never will be. I strongly recommend using the 'Run all tests automatically' engine mode in situations where you are relying on reflection-based frameworks, or in situations where excluding tests via impact detection doesn't give a significant benefit in terms of system resource usage.

I also apologise that the new engine mode dialog frustrated you during your coding session. This dialog was significantly changed in the V3 release to include features from the new configuration system. The original ability to select which tests to target with an engine mode can be controlled under the 'Filters', 'Tests to execute automatically' setting.
Der-Albert.com
#4 Posted : Wednesday, December 12, 2018 2:05:29 PM(UTC)
Hi Remco,

but there are some differences here. I changed code inside a test, but the change did not trigger a new test run; this has nothing to do with reflection in my eyes (I can't reproduce this now). Also, I changed an attribute on a property, and NCrunch knows that the property is covered by that specific test. So why not rerun that test?

Why not this behaviour?

1.) The user changes a line which is not clearly covered by a test (like the attribute on a property).
2.) Find the nearest surrounding lines which are covered.
3.) Run those tests (see the sketch below).

This is better than running nothing at all.
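
Just to make the idea concrete, here is a minimal sketch of that fallback; the coverage map, line numbers and test names are all made up for illustration:

Code:
    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Sketch of the proposed fallback: if the changed line has no coverage of
    // its own, fall back to the tests covering the nearest covered line.
    class NearestCoverageSketch
    {
        // Maps a source line number to the tests known to cover it
        // (hard-coded here; in reality this would come from the coverage data).
        static readonly Dictionary<int, string[]> Coverage = new Dictionary<int, string[]>
        {
            { 10, new[] { "OptionsValidationTest" } },
            { 14, new[] { "OptionsValidationTest", "OptionsBindingTest" } },
        };

        static IEnumerable<string> TestsForChangedLine(int changedLine)
        {
            if (Coverage.TryGetValue(changedLine, out var direct))
                return direct;

            if (Coverage.Count == 0)
                return Enumerable.Empty<string>();

            // No direct coverage: use the closest covered line above or below the change.
            var nearest = Coverage.Keys.OrderBy(line => Math.Abs(line - changedLine)).First();
            return Coverage[nearest];
        }

        static void Main()
        {
            // Line 12 (say, the new [Required] attribute) is not covered itself,
            // so the tests covering the nearest covered line are chosen instead.
            Console.WriteLine(string.Join(", ", TestsForChangedLine(12)));
        }
    }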

I like to run only the impacted tests because I get feedback earlier, have fewer tests to run, and consume less energy.

Btw., looking like an idiot is not a problem for me; if I were afraid of that, I wouldn't live-stream anything ;) In my eyes it was more of a problem for NCrunch, which I highlight often.
Remco
#5 Posted : Wednesday, December 12, 2018 11:33:37 PM(UTC)
Did the detector fail to notice a non-attribute change during your stream? Sorry I tried to check through the links you gave me but I wasn't able to notice it .. sometimes it isn't very obvious (I guess this is part of the problem).

NCrunch's behaviour around attributes is to consider them a part of the method. This means that if you have a method that is covered by tests and you add an attribute to it, all tests that cover the method will be considered impacted. So basically the attributes form a part of the method's hash signature and they are included in the detection.
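
To illustrate the idea (this is only a rough sketch, not NCrunch's actual implementation), a method's change signature could be computed by folding its attribute names into a hash together with its IL, so adding or removing an attribute changes the hash:

Code:
    using System;
    using System.Linq;
    using System.Reflection;
    using System.Security.Cryptography;
    using System.Text;

    static class MethodSignatureSketch
    {
        // Hypothetical illustration: combine a method's IL bytes with the names
        // of its custom attributes so that an attribute change alters the hash.
        public static string ComputeChangeSignature(MethodInfo method)
        {
            var attributeNames = method.GetCustomAttributesData()
                .Select(a => a.AttributeType.FullName)
                .OrderBy(name => name);

            var ilBytes = method.GetMethodBody()?.GetILAsByteArray() ?? new byte[0];

            var input = Encoding.UTF8.GetBytes(string.Join(";", attributeNames))
                .Concat(ilBytes)
                .ToArray();

            using (var sha = SHA256.Create())
            {
                return BitConverter.ToString(sha.ComputeHash(input));
            }
        }
    }

With a scheme like this, any test whose coverage includes the method is re-run whenever the method's signature changes.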

The above approach doesn't work for situations where you have a method or property that is not covered by tests (i.e. black markers), and you add an attribute to it which is used as a marker by other code to dynamically execute it. For example, consider the following code:

Code:

            // Requires System, System.Linq and System.Reflection in scope.
            // Scan every type in the current assembly for properties carrying the
            // [Required] attribute and read their values via reflection (simplified).
            foreach (var type in GetType().Assembly.GetTypes())
            {
                var instance = Activator.CreateInstance(type);

                // Attribute class names end in "Attribute", so match on "RequiredAttribute".
                foreach (var requiredProperty in type.GetProperties().Where(p => p.CustomAttributes.Any(a => a.AttributeType.Name == "RequiredAttribute")))
                {
                    var propertyValue = requiredProperty.GetGetMethod().Invoke(instance, null);

                    // Do validation on propertyValue here
                }
            }


In concept, this is probably quite similar to the validation code used during your live stream. This code uses reflection to find properties that carry a specific attribute. The impact detection rule required to handle this scenario would need to detect when a user has added the 'Required' attribute to a property, and then treat all tests making use of the validation framework as impacted so they are re-run.

Consider that in this scenario, simply searching for covered code around the new attribute isn't enough. This is because when the attribute is first being added to the property, the property isn't covered at all, so we have no baseline data to tell us which tests should be considered. When you remove the attribute from the property, we likely will have coverage in place and we will probably detect the change.

It's easy to think of impact detection as being a process of choosing which tests are likely to be affected by a change made to your code, but actually the better way to think of it is as a way of excluding the tests that you know haven't been affected. Given a full suite of tests and a set of code changes, how can you tell which tests can be safely excluded from execution? Once tests start making use of reflection to determine which areas of code they interact with, the process of exclusion becomes very difficult because our exclusion rules become coupled to the logic around how reflection is used.

We could, for example, decide that any code calling into Assembly.GetTypes() could be considered as 'covering' the surface of all types in the given assembly, but then every time we make a structural change to any type in the assembly, all tests touching this code would be impacted, so we'd have a large number of tests running all the time and the process of exclusion wouldn't work well at all.

We then have further extensions of this, such as how to handle this kind of code when it appears in 3rd party assemblies, or even in the runtime itself.

This is also not the only situation where reliability becomes a problem for impact detection. Consider the following code:

Code:
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class UnitTest1
    {
        private static string _myStringValue;

        private static string LazyLoadValue()
        {
            // This code is only run once per test process
            // (MyDatabase stands in for some expensive external resource)
            return MyDatabase.LoadStringFromDatabase();
        }

        private static string MyString
        {
            get
            {
                if (_myStringValue == null)
                    _myStringValue = LazyLoadValue();

                return _myStringValue;
            }
        }

        [TestMethod]
        public void TestMethod1()
        {
            Assert.IsTrue(MyString.StartsWith("123"));
        }

        [TestMethod]
        public void TestMethod2()
        {
            Assert.IsTrue(MyString.EndsWith("abc"));
        }
    }


The above class contains a method (LazyLoadValue) that both tests logically depend on, but most likely will only ever be 'covered' by one of them. This is because both tests probably execute quite quickly and will likely be run by NCrunch in the same physical process.

This sort of situation presents a problem for impact detection, because as far as NCrunch is concerned, the only test actually dependent on LazyLoadValue is the first one to use it inside the test process. Since both tests have different pass/fail criteria for what they receive from this method, it's possible for you to make a change to LazyLoadValue that should trigger a test failure but doesn't because NCrunch only runs one of the two tests automatically.

There are ways to handle this sort of situation. For example, instead of using code coverage to determine which tests might be impacted, we could use structural data from the test instead. We would simply look at each of our test methods, see which methods they call, then form a large dependency tree for each test based on what it might call without any consideration for tracked coverage. The problem with this approach is that it's extremely broad. If you change even the smallest method down the bottom of your dependency tree, you'll likely impact all of your tests even though none of them are actually covering the code at all. This approach might still be better than 'Run all tests automatically' though, so we might add it some time in future as an extra option.
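
As a rough sketch of that structural alternative (the call graph below is hard-coded for illustration; a real implementation would build it by analysing the IL of each method):

Code:
    using System;
    using System.Collections.Generic;

    // Sketch: mark a test as impacted if any method in its static call graph
    // changed, regardless of whether the test actually covered it at runtime.
    class StructuralImpactSketch
    {
        // Maps a method to the methods it calls (hard-coded for illustration).
        static readonly Dictionary<string, string[]> CallGraph = new Dictionary<string, string[]>
        {
            { "TestMethod1", new[] { "MyString.get" } },
            { "TestMethod2", new[] { "MyString.get" } },
            { "MyString.get", new[] { "LazyLoadValue" } },
            { "LazyLoadValue", new[] { "MyDatabase.LoadStringFromDatabase" } },
        };

        static bool IsImpacted(string testMethod, HashSet<string> changedMethods)
        {
            var queue = new Queue<string>();
            var seen = new HashSet<string>();
            queue.Enqueue(testMethod);

            while (queue.Count > 0)
            {
                var current = queue.Dequeue();
                if (!seen.Add(current))
                    continue;
                if (changedMethods.Contains(current))
                    return true;
                if (CallGraph.TryGetValue(current, out var callees))
                    foreach (var callee in callees)
                        queue.Enqueue(callee);
            }
            return false;
        }

        static void Main()
        {
            var changed = new HashSet<string> { "LazyLoadValue" };

            // Both tests are reported as impacted, even though coverage tracking
            // would only ever attribute LazyLoadValue to one of them.
            Console.WriteLine(IsImpacted("TestMethod1", changed)); // True
            Console.WriteLine(IsImpacted("TestMethod2", changed)); // True
        }
    }

The breadth problem described above shows up immediately: a change to a method at the bottom of the graph impacts every test that can reach it through any call path, covered or not.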

Der-Albert.com
#6 Posted : Friday, December 14, 2018 12:03:15 PM(UTC)
Remco wrote:

Did the detector fail to notice a non-attribute change during your stream? Sorry I tried to check through the links you gave me but I wasn't able to notice it .. sometimes it isn't very obvious (I guess this is part of the problem).

NCrunch's behaviour around attributes is to consider them a part of the method. This means that if you have a method that is covered by tests and you add an attribute to it, all tests that cover the method will be considered impacted. So basically the attributes form a part of the method's hash signature and they are included in the detection.

This may be the theory, but it is not what happened. I created a new video to show the problem. I don't think this has anything to do with reflection, because the property is already known to NCrunch for this test, i.e. covered.

Here is the OneDrive Link

https://1drv.ms/v/s!AmXNl8dqs5HUpYhbBM0sZ03uznqwJw

Here is the repository

https://github.com/DerAlbertLive/ExtensionsOptions

I failed to reproduce it on the first try (because of the engine mode), but I didn't re-record the video. So after my own mistake, the NCrunch failure appears.
Remco
#7 Posted : Friday, December 14, 2018 11:35:36 PM(UTC)
Thanks! I've reproduced this as described. It looks like the issue is with property-based attributes; they aren't being included in the IL comparison.

NCrunch does include the attributes applied to methods (i.e. the get/set methods), but not the properties themselves. It looks like this will require some special consideration.
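
For reference, the distinction is roughly the one below; MyMarkerAttribute is just a made-up attribute to show the accessor-level case, while the property-level [Required] is the case from the stream:

Code:
    using System;
    using System.ComponentModel.DataAnnotations;

    // Made-up attribute, only here to show an attribute on an accessor method.
    [AttributeUsage(AttributeTargets.Method)]
    public class MyMarkerAttribute : Attribute { }

    public class ServerOptions
    {
        // Attribute on the property member itself: the case that was not being
        // picked up by the IL comparison.
        [Required]
        public string Name { get; set; }

        // Attribute on the get accessor (a method): already included, because
        // accessor methods are compared like any other methods.
        public string Host { [MyMarker] get; set; }
    }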

We're in the middle of rebuilding this area of the product at the moment to improve performance, so I'll try to get this included with those changes. It will probably be fixed when the performance improvements are rolled out early next year.