VIDEO_TDR_FAILURES after S3 sleep - Windows 7 x64

tcnuk

Hello all,

I'm having a little BSOD trouble with VIDEO_TDR_FAILURE crashes that I'd be grateful for some help with.

The crashes usually occur around 5-15 minutes after I resume from S3 sleep (and, one time, after I rebooted the machine with the reset button following one of these crashes), but never from a clean startup, from which I can happily game for hours without issue. After a resume from S3 sleep, they can occur while I'm gaming or just web browsing; sometimes everything is fine. They started a little over a month ago (as you'll see from the attached minidump files), roughly around the time that I installed a beta driver for my GPU provided by NVidia (but this may be unrelated). I cleaned this driver off my machine and reverted to an earlier one, but the problem wasn't solved. I updated my video BIOS, but that didn't help either, so I got used to shutting down my computer cleanly each night for a while until NVidia released a new WHQL driver for my GPU. I installed this and everything seemed to be fine... until yesterday, when the problem started up again. I can't identify any obvious reason for it springing back to life now.

I had a glance through the memory dumps that were generated but couldn't find anything terribly useful (although I'm not terribly experienced with kernel-mode debugging; most of my WinDbg experience is with SOS for .NET applications).

Aside from the driver and BIOS updates described above, here are some other things I've looked at that haven't helped:

-Ran a MemTest: everything looks OK (not that surprised since everything works from a clean startup).
-Disabled/disconnected the two pieces of hardware I was most suspicious about: a Cambridge Audio DACMagic DAC attached by USB (mostly only suspicious because my usage patterns might roughly fit in with the crashes); and a Creative WebCam that is running drivers from 2005 that were made for the x64 version of Windows XP (although these have been working absolutely fine for me for years).
-Ran the system with verifier.exe. No forced verifier BSODs, and I didn't see any useful extra information in the one crash that happened while it was running (see 062014-28672-01.dmp). A rough sketch of the verifier commands is just below this list.
-Checked out temperatures (although seems unlikely for same reasons as MemTest). Everything looks good.
-Turned off PCI Express link power saving in Windows. No effect.
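For anyone wanting to reproduce this, Driver Verifier can also be driven from an elevated command prompt rather than the GUI; something along these lines should do it (the flags are from memory, so double-check them before relying on this):

Code:
verifier /standard /all
verifier /standard /driver nvlddmkm.sys
verifier /query
verifier /reset

The first enables the standard checks on all drivers (the second form targets just the display driver instead), /query shows what's currently being verified, and /reset clears everything once you're done testing.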

Here's some system information as requested (although I guess you'll find some of it in the attached files too):

-Windows 7 x64. It is a retail version cleanly installed to a new SSD several months back.
-Age of system varies from component to component. The newest component is the SSD which is only a few months old. GPU is 2 years old. CPU/motherboard/RAM are a little under 5 years old. PSU is 7.5 years old. Floppy disk drive is 22 years old ;-) (yes, really!)
-CPU is i7 860. GPU is NVidia GTX 670. Motherboard is Gigabyte P55-UD3R. PSU is Antec NeoHE 500W. As you probably guessed by now, this is a custom build desktop.

Any guidance offered is much appreciated!

View attachment 8361
 
Code:
BugCheck 116, {fffffa800d4183d0, fffff88010056d1c, ffffffffc000009a, 4}

0x116 indicates a video TDR failure has occurred, which means an attempt to reset the display driver and recover from a timeout has failed.
When the reset is successful you just get a screen flash and a notification at the bottom right saying something along the lines of "The display driver has successfully recovered."

fffff88010056d1c is a pointer into the driver responsible, which is your NVidia display driver.
Now the most interesting part is the ffffffffc000009a code, which means insufficient resources prevented the API from completing; this is normally caused by memory leaks in the display driver.
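If you want to confirm the meaning of that third parameter yourself, WinDbg will decode NTSTATUS values for you; from memory the output looks roughly like this:

Code:
6: kd> !error c000009a
Error code: (NTSTATUS) 0xc000009a (3221225626) - Insufficient system resources exist to complete the API.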

Code:
6: kd> u fffff88010056d1c
nvlddmkm+0x98bd1c:
fffff880`10056d1c 48ff25ddb9edff  jmp     qword ptr [nvlddmkm+0x867700 (fffff880`0ff32700)] <-- jump to stay in the loop, it may be the cause of memory leaks.
fffff880`10056d23 cc              int     3
fffff880`10056d24 48ff25b5b9edff  jmp     qword ptr [nvlddmkm+0x8676e0 (fffff880`0ff326e0)]
fffff880`10056d2b cc              int     3
fffff880`10056d2c 48ff25b5b9edff  jmp     qword ptr [nvlddmkm+0x8676e8 (fffff880`0ff326e8)]
fffff880`10056d33 cc              int     3
fffff880`10056d34 48ff25b5b9edff  jmp     qword ptr [nvlddmkm+0x8676f0 (fffff880`0ff326f0)]
fffff880`10056d3b cc              int     3


Given it's jumping around a lot, it might be the cause of the memory leaks, but someone else might come along and correct me.

Code:
6: kd> kv
Child-SP          RetAddr           : Args to Child                                                           : Call Site
fffff880`02befb88 fffff880`05105140 : 00000000`00000116 fffffa80`0d4183d0 fffff880`10056d1c ffffffff`c000009a : nt!KeBugCheckEx <-- BSOD
fffff880`02befb90 fffff880`050d8867 : fffff880`10056d1c fffffa80`0ba80000 00000000`00000000 ffffffff`c000009a : dxgkrnl!TdrBugcheckOnTimeout+0xec <-- If it doesn't respond, bugcheck.
fffff880`02befbd0 fffff880`05104f4f : fffffa80`ffffd84d ffffffff`fffe7960 fffffa80`0d4183d0 fffff880`051d3f3c : dxgkrnl!DXGADAPTER::Reset+0x2a3
fffff880`02befc80 fffff880`051d403d : fffffa80`098be320 00000000`00000080 00000000`00000000 fffffa80`0ba7f410 : dxgkrnl!TdrResetFromTimeout+0x23 <-- trying to reset the display driver
fffff880`02befd00 fffff800`02d6973a : 00000000`01f9998f fffffa80`0ba614c0 fffffa80`068a7890 fffffa80`0ba614c0 : dxgmms1!VidSchiWorkerThread+0x101
fffff880`02befd40 fffff800`02abe8e6 : fffff800`02c48e80 fffffa80`0ba614c0 fffff800`02c56cc0 fffff880`03962201 : nt!PspSystemThreadStartup+0x5a
fffff880`02befd80 00000000`00000000 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : nt!KxStartSystemThread+0x16

Code:
6: kd> lmvm nvlddmkm
start             end                 module name
fffff880`0f6cb000 fffff880`1032f000   nvlddmkm T (no symbols)           
    Loaded symbol image file: nvlddmkm.sys
    Image path: \SystemRoot\system32\DRIVERS\nvlddmkm.sys
    Image name: nvlddmkm.sys
    Timestamp:        Tue May 20 00:08:44 2014 (537A8EFC)
    CheckSum:         00C1EA41
    ImageSize:        00C64000
    Translations:     0000.04b0 0000.04e4 0409.04b0 0409.04e4

I've seen a lot of problems with the latest versions, so try rolling back to 314.22, as I believe that's the most stable driver; if it works, you can then try to find a newer version that's also stable.
If it doesn't work, try a few other driver versions; if they don't change the frequency of the crashes, then try FurMark.

FurMark: VGA Stress Test, Graphics Card and GPU Stability Test, Burn-in Test, OpenGL Benchmark and GPU Temperature | oZone3D.Net

It will stress test your GPU. I recommend running it for around 30 minutes, but if your GPU starts to overheat before then, stop the test, as overheating may well be the cause.

Post back how everything goes :)
 
Hi Jared,

Thanks so much for taking a look and particularly for stepping through your analysis so thoroughly. It's reassuring to hear that you think a driver issue is the most likely cause, which was my initial thought. I checked back through my download logs and found that the driver I downloaded before this started was 337.50, way back on 26th April, which was several weeks before the first crash. I then changed to 335.23 on 18th May after the first crash and 337.88 on 27th May (roughly when it was released). Rolling back to 314.22 doesn't sound like a great long-term option for me, as those drivers are over a year old and likely to cause crashes in some of my newer games (plus I'd lose a lot of the performance enhancements that NVidia have put in over the last year). However, it might be worth trying it for diagnostic purposes and, like you say, trying to find a newer version if it helps. The problem, of course, is the difficulty of reproducing the issue consistently; I thought it had gone away for good until last night. Any thoughts on why I'm getting such inconsistency?

Given your thoughts, I wonder if it's worth trying to send these details on to NVidia... The difficulty is finding somebody to contact who'll know what to make of it (ideally somebody with some debug symbols for nvlddmkm!)

I looked through all of the dump files and noticed that one of them didn't indicate insufficient resources but rather a less useful STATUS_IO_TIMEOUT. This was the one taken with driver verifier enabled. Can I assume that this is perhaps a side effect of the wrapper functions that the verifier injects?

I did some disassembling around the pointer into the NVidia driver and there seem to be a lot of those jmp instructions in both directions. I don't really have much of a handle on what to make of that, though. What I found more odd was that when I tried to read the memory at nvlddmkm+0x867700 (fffff880`0ff32700), WinDbg would only give me "?"s. Am I doing something wrong? I normally expect this to produce an access violation, but like I say, I'm not too experienced with kernel mode debugging (or even native code debugging really).

I'll give the Furmark test a go. Heat seems an unlikely problem to me since I a) get the problem after just low intensity work like web browsing; and b) can game successfully for hours after a clean boot. However, I'll keep an eye on it.

Thanks again!
 
Oh, a couple of things I can add:

1) I don't get any indication of a successful TDR (i.e. the "display driver has recovered" message) at any point.
2) However, each blue screen is preceded, a couple of minutes earlier, by a 0x117 LiveKernelReport. I expect that the delay between them is little more than the TDR reset attempts plus the time taken to write the full bugcheck. I'm attaching the 117 memory dumps in case they're useful.

Edit: I just looked more closely at these and they make even less sense to me. Firstly, there are apparently two 117s that occurred without being followed by a 116. Secondly, I can't even read the memory at the faulting IP in these ones (more "?"s). In light of those two apparently successful TDRs, I wonder if it would be worth increasing my TDR timeout to try to prevent the BSODs, although that would be dodging the problem rather than fixing it.
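For reference, if I do go down that route, my understanding is that the timeout is the TdrDelay value that Microsoft documents under the GraphicsDrivers key; something like this (untested on my part, so please double-check) should raise it from the default 2 seconds to 10, followed by a reboot:

Code:
reg add "HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers" /v TdrDelay /t REG_DWORD /d 10 /f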

View attachment WATCHDOG.zip
 
OK, I gave Furmark a run for a little less than 30 minutes. No crashes or obvious visual artefacts and temperatures stabilised well below the GPU thermal limit with the GPU fan only running at around 60%. CPU temperatures also looked fine.
 
A little update...

No issues today, despite a crash yesterday and nothing having changed since then. Yesterday's crash came even after a clean boot with a full power down, so I'm starting to wonder if the S3 thing is a red herring.

I submitted a support ticket to EVGA (the manufacturer of my GPU) at the weekend, before I posted here, to get input from them. They asked me to download a tool they produce called "OC Scanner", which runs a series of tests, including a wrapper for FurMark tests with an artifact scanner. Most of the tests ran fine, including the main FurMark ones, but the FurMark "GPU memory burner" test reported hundreds of thousands of artifacts immediately after it started, and it does this every time. I'm not completely convinced: there doesn't seem to be anything visually wrong with the image, and while the number of artifacts it reports is slightly different each time, it never reports any more after the test has started. Has anybody else used this tool?

I'm still in dialogue with EVGA, but they seem to be pushing towards me replacing my GPU under warranty. I guess I'll soon find out whether they're right in saying that it's a faulty GPU if I swap back to my old one.
 
Yes, I would get it replaced if it's under warranty. Given you did a clean install, it may well be the card itself, although I've thought the same about my own PC because of crashes and it all turned out to be display driver related.
If you can get a free replacement then go for it.

Let us know how everything goes :)
 
I still wasn't convinced by this EVGA test result because of the way all the artifacts appeared at the start of the test (and only on the one test type), so I put my old GTX 260 back in my machine and ran the test again. Same result, so now I'm very suspicious that the result is not correct. I'm going to get a friend to run the same test on his machine tonight, but if anyone else fancies trying it out, it's the "Furry E (GPU memory burner)" test in this application. Any resolution, any antialiasing setting, with the artifact scanner turned on. I get a report of artifacts within about a second of the test starting on both my GPUs, although the number does vary slightly each time (and is also naturally dependent on the chosen resolution).

Not sure quite where to take this now! Yesterday, I was again free of the problem despite several restarts from sleep. As good as being free of it is, it does make it rather difficult to diagnose.
 
Discovered something interesting: EVGA FAQ Help Center

I say it's interesting because I have a TV connected to my GPU via HDMI that is usually off when I run my computer. My usage patterns could well fit in with my crashes (in particular, having the TV on when my computer went into S3 sleep and off when it came back). Quite a bit of testing is going to be required to check this out...
 
Hmm, I think everything is back on the table again. I had a crash just now with my old GTX 260 in the machine (which rules out the HD audio thing). Circumstances were a little different this time - no BSOD or minidump, just a total freeze up of the machine after I'd been gaming for a few minutes. There was a LiveKernelReport generated which is attached. I can't get anything sensible out of it, much like the ones I posted earlier.

I guess I'm going to have to start messing around with drivers, but do you have any other ideas or can you make anything of the attached?

View attachment WD-20140624-2211.zip
 
Code:
BugCheck 117, {fffffa800d23d4e0, fffff8800f79c530, 0, 0}

This is a mini kernel dump, which isn't really a proper bugcheck; it means the display driver stopped responding.

Does the PC just freeze up and lock completely?
 
Yep, it just froze completely as usual with no response to input (including ctrl+alt+del), but this time it didn't write a minidump (and so presumably didn't bugcheck).
 
OK, here's the latest. I put my GTX 670 back in and had another go at a clean driver install. This time I rebooted the computer over and over and manually uninstalled every last driver that my OS had cached, then topped that off with a clean install of the latest NVidia driver; I found reports that this had resolved similar issues for some people, although those seemed to be mostly with older driver versions. I also disabled the HD audio device for the GPU referred to in the link I posted before. Since then I haven't had any crashes, but I have occasionally observed some quick flickers on my displays (without an associated TDR), so I suspect something is still not right. I don't think the GPU is at fault, since I got a crash with the old GTX 260. I also don't think there's anything wrong with my memory, motherboard or CPU, or else I'd expect to be seeing problems other than TDR failures and I don't think MemTest would complete successfully.

So what does that leave? I think my options are as follows:

-Since I haven't had a crash in a few days now, I could just leave it and see how things go...
-Drivers are still on the table and I still need to try going back to the early driver you suggested and working my way forwards from there. Like I said, that's not really a long-term fix for me though as it'll leave me unable to play my most modern games.
-There could still be another piece of hardware to blame, although I have no idea which. I'd need to strip back to a minimal system, removing my PCI cards and disconnecting all my external hardware.
-There's a possibility my PSU could be to blame. This seems unlikely since I suffer issues under both low and high power loads. However, the PSU is one of my oldest components at 7.5 years, and it is fairly stretched for the system I have installed, so it may be slowly on the way to failing.

I've been thinking of upgrading my case and PSU for a while anyway to give me more head-room for the future (and to stop me having to rely on improvising space in the case for my SSD that I don't really have), so I've decided to do this now. The new PSU has lots of extra wattage headroom (again, future-proofing), and installing into a new case has the advantage of stripping the system back to its raw parts and rebuilding, which should iron out any possible dodgy connection issues.

Will keep you posted.

By the way, I was reflecting on my earlier comment about not being able to read the memory that the jmp instruction was pointing to in the disassembly at the TDR_FAILURE address. In hindsight, I presume that I couldn't read this memory because it wasn't included in the kernel memory dump? (can you tell that I'm used to user-mode dumps with everything I need in them?!) Does this sound right, and if so, why is this address not part of the dump? Appreciate any comments you might have as I'm always eager to learn more.
 
That dump file is a mini kernel dump generated by the live report, I believe; it contains little to no information beyond what relates to that particular bugcheck.

Different dump files contain different information. The reason is partly size limitation, as full memory dumps are massive if you have lots of RAM, and partly that not all information is useful for every kind of bugcheck.
For example, IRPs won't be very useful for 0x116 errors; although I like acquiring kernel memory dumps because they contain very useful information, they're quite unnecessary for these bugchecks.
 
Sorry, I wasn't very clear, but I was talking about my earlier comment for the 116 bugcheck dumps, not the 117 live report dump. As you pointed out in your initial post, the pointer into the GPU driver is sitting at a jump instruction. That jump instruction is trying to get the memory address to jump to out of [nvlddmkm+0x867700 (fffff880`0ff32700)] which is inside the load range for nvlddmkm (fffff880`0f6cb000-fffff880`1032f000), but if I try to read the memory at fffff880`0ff32700, WinDbg just gives me "?"s. Usually when I'm debugging user-mode dumps, this means that this isn't a valid memory address which is what led me to originally comment that I would have expected an access violation. However, the other possibility I didn't consider originally was that that range of memory simply wasn't included in the memory dump. If not, then why not? It's a loaded kernel-mode driver and we can disassemble the pointer that the exception was looking at, so at least some of nvlddmkm has made it into the dump.

P.S. I'm now running on my new PSU and lovely new case. Will keep you posted.
 
Oh right, sorry, I understand what you mean now. It's a minidump, so most memory isn't recorded in order to save file size; you will need kernel memory dumps to access those memory addresses.

A bit off topic, but be wary when you try to access registers or memory addresses: just because they aren't visible doesn't mean they are corrupt.
For example, a mov instruction might want to move data from the rax register, yet when you look at it, it might appear zeroed. That doesn't mean it actually was zero; it could be that the memory simply wasn't recorded.
This is normally the case with minidumps, whereas kernel memory dumps contain all the kernel-mode memory loaded at the time.
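An easy way to tell which kind of dump you've actually got is the banner WinDbg prints when it loads the file; from memory it's along these lines:

Code:
Mini Kernel Dump File: Only registers and stack trace are available

Kernel Dump File: Kernel address space is available, User address space may not be available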
 
That's the odd thing: I thought I had complete kernel dumps. My options for bugcheck dumps seem to be "(none)", "Small memory dump (256 KB)" and "Kernel memory dump" which is the option I have selected.

Off topics are welcome! Always keen to learn more. Thanks for the advice.
 
Well Kernel memory dumps are found in:

Code:
C:\Windows\MEMORY.DMP

and are too large to upload directly here; they must be uploaded to a file-sharing site like OneDrive and then shared using a download link.
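If you want to double-check what Windows is configured to write, the settings live under the CrashControl registry key; something like this should show them (the value names are from memory, so verify against your own registry):

Code:
reg query "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v CrashDumpEnabled
reg query "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v DumpFile

A CrashDumpEnabled value of 2 means a kernel memory dump and 3 means a small (mini) dump.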
 
Aha, so that was the link I was missing. I presumed they all went in the same place. I guess the clue is in the directory name, "minidump". Looking at that file, it looks like I can now read things properly and I even have some export symbols for nvlddmkm. Looks like the pointer into the GPU driver is specifically for some crash handling code. In the dump I'm looking at, I have

Code:
BugCheck 116, {fffffa800bd5d010, fffff8800f804530, ffffffffc00000b5, a}

When I disassemble,

Code:
4: kd> u fffff8800f804530 L20
nvlddmkm+0x14f530:
fffff880`0f804530 48ff25d9817100  jmp     qword ptr [nvlddmkm!nvDumpConfig+0x188200 (fffff880`0ff1c710)]
fffff880`0f804537 cc              int     3
fffff880`0f804538 e94fa2f0ff      jmp     nvlddmkm+0x5978c (fffff880`0f70e78c)
fffff880`0f80453d cc              int     3
fffff880`0f80453e cc              int     3
fffff880`0f80453f cc              int     3
fffff880`0f804540 48ff2589817100  jmp     qword ptr [nvlddmkm!nvDumpConfig+0x1881c0 (fffff880`0ff1c6d0)]
fffff880`0f804547 cc              int     3
fffff880`0f804548 e94ba7f0ff      jmp     nvlddmkm+0x59c98 (fffff880`0f70ec98)
fffff880`0f80454d cc              int     3
fffff880`0f80454e cc              int     3
fffff880`0f80454f cc              int     3
fffff880`0f804550 48ff25c1827100  jmp     qword ptr [nvlddmkm!nvDumpConfig+0x188308 (fffff880`0ff1c818)]
fffff880`0f804557 cc              int     3
fffff880`0f804558 e9d3a8f0ff      jmp     nvlddmkm+0x59e30 (fffff880`0f70ee30)
fffff880`0f80455d cc              int     3
fffff880`0f80455e cc              int     3
fffff880`0f80455f cc              int     3
fffff880`0f804560 48ff25d1817100  jmp     qword ptr [nvlddmkm!nvDumpConfig+0x188228 (fffff880`0ff1c738)]
fffff880`0f804567 cc              int     3
fffff880`0f804568 48ff2559817100  jmp     qword ptr [nvlddmkm!nvDumpConfig+0x1881b8 (fffff880`0ff1c6c8)]
fffff880`0f80456f cc              int     3
fffff880`0f804570 e99bb1f0ff      jmp     nvlddmkm+0x5a710 (fffff880`0f70f710)
fffff880`0f804575 cc              int     3
fffff880`0f804576 cc              int     3
fffff880`0f804577 cc              int     3
fffff880`0f804578 e9fbb3f0ff      jmp     nvlddmkm+0x5a978 (fffff880`0f70f978)
fffff880`0f80457d cc              int     3
fffff880`0f80457e cc              int     3
fffff880`0f80457f cc              int     3
fffff880`0f804580 4183f801        cmp     r8d,1
fffff880`0f804584 0f85f1040000    jne     nvlddmkm+0x14fa7b (fffff880`0f804a7b)

And when I read that memory address,

Code:
4: kd> dq nvlddmkm!nvDumpConfig+0x188200
fffff880`0ff1c710  fffff880`0f70e178 fffff880`0ff9f890
fffff880`0ff1c720  fffff880`0ff9e86c fffff880`0ffa0000
fffff880`0ff1c730  fffff880`0ff9d738 fffff880`0f70f100
fffff880`0ff1c740  fffff880`0ffa245c fffff880`0ff9b750
fffff880`0ff1c750  fffff880`0ffa2a44 fffff880`0ffa0360
fffff880`0ff1c760  fffff880`0ffa0614 fffff880`0ff9e274
fffff880`0ff1c770  fffff880`0ffa274c fffff880`0ff9bae0
fffff880`0ff1c780  fffff880`0ff9c7e4 fffff880`0ff9d224

And disassembling at the memory address that that is pointing to,

Code:
4: kd> u fffff880`0f70e178 L10
nvlddmkm+0x59178:
fffff880`0f70e178 488bc4          mov     rax,rsp
fffff880`0f70e17b 48895810        mov     qword ptr [rax+10h],rbx
fffff880`0f70e17f 48897018        mov     qword ptr [rax+18h],rsi
fffff880`0f70e183 48897820        mov     qword ptr [rax+20h],rdi
fffff880`0f70e187 4157            push    r15
fffff880`0f70e189 4881ec80000000  sub     rsp,80h
fffff880`0f70e190 4533ff          xor     r15d,r15d
fffff880`0f70e193 488bfa          mov     rdi,rdx
fffff880`0f70e196 488bd9          mov     rbx,rcx
fffff880`0f70e199 4885c9          test    rcx,rcx
fffff880`0f70e19c 0f845e020000    je      nvlddmkm+0x59400 (fffff880`0f70e400)
fffff880`0f70e1a2 4885d2          test    rdx,rdx
fffff880`0f70e1a5 0f8455020000    je      nvlddmkm+0x59400 (fffff880`0f70e400)
fffff880`0f70e1ab 488d4808        lea     rcx,[rax+8]
fffff880`0f70e1af e8503c0900      call    nvlddmkm+0xece04 (fffff880`0f7a1e04)
fffff880`0f70e1b4 4438bb98960000  cmp     byte ptr [rbx+9698h],r15b

I haven't quite decided what to make of all that yet. Is there a way for me to reconstruct the callstack that led to fffff8800f804530 in nvlddmkm? I'm not quite sure what the threading model is in kernel-mode, but the only callstack I've seen so far is the one that led to the bugcheck, which in this case was as follows:

Code:
Child-SP          RetAddr           : Args to Child                                                           : Call Site
fffff880`02befa68 fffff880`05356140 : 00000000`00000116 fffffa80`0bd5d010 fffff880`0f804530 ffffffff`c00000b5 : nt!KeBugCheckEx
fffff880`02befa70 fffff880`05355f1b : fffff880`0f804530 fffffa80`0bd5d010 fffffa80`0bae5350 fffffa80`0ba42010 : dxgkrnl!TdrBugcheckOnTimeout+0xec
fffff880`02befab0 fffff880`0520ff13 : fffffa80`0bd5d010 00000000`c00000b5 fffffa80`0bae5350 fffffa80`0ba42010 : dxgkrnl!TdrIsRecoveryRequired+0x273
fffff880`02befae0 fffff880`05239cf1 : 00000000`ffffffff 00000000`00000a9f 00000000`00000000 00000000`00000000 : dxgmms1!VidSchiReportHwHang+0x40b
fffff880`02befbc0 fffff880`0520b2e1 : fffffa80`0ba42010 ffffffff`00000000 00000000`00000a9f 00000000`00000000 : dxgmms1!VidSchiCheckHwProgress+0x71
fffff880`02befbf0 fffff880`05237ff6 : 00000000`00000000 fffffa80`0bae5350 00000000`00000080 fffffa80`0ba42010 : dxgmms1!VidSchiScheduleCommandToRun+0x1e9
fffff880`02befd00 fffff800`02d7073a : 00000000`01eb24a1 fffffa80`0ba564c0 fffffa80`068a7890 fffffa80`0ba564c0 : dxgmms1!VidSchiWorkerThread+0xba
fffff880`02befd40 fffff800`02ac58e6 : fffff800`02c4fe80 fffffa80`0ba564c0 fffff800`02c5dcc0 fffff880`03962201 : nt!PspSystemThreadStartup+0x5a
fffff880`02befd80 00000000`00000000 : fffff880`02bf0000 fffff880`02bea000 fffff880`02bef7f0 00000000`00000000 : nt!KxStartSystemThread+0x16

Clearly there's some work scheduling going on here, but how can I look at the actual work taking place?
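My first guess, and it really is just a guess on my part, would be to try enumerating the other kernel threads to see whether any of them are sitting inside nvlddmkm or dxgmms1, with something like:

Code:
4: kd> !stacks 2 nvlddmkm
4: kd> !thread <address-of-interesting-thread>

but I don't know how much of that a kernel memory dump will actually give me, so happy to be corrected.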

Forgive me indulging my interest here; don't feel obliged to comment if you don't have the time!
 
