That bit of filtering made sense back in the data slicer days, but when feeding a sound card it's not required. A software decoder will never see any residual RF, and any unwanted high-frequency junk (like static) can be filtered digitally.
You should be more concerned with how you are going to properly match your signal level and impedance to a sound card's input. As it stands, your signal, which is probably somewhere in the 0.5 to 1 Vpp range, is far too hot for a mic input, and it isn't ideal for a (600 ohm?) line-in either.
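To put rough numbers on it: a simple resistive divider knocks the discriminator output down to something a line-in can swallow. The values below are illustrative assumptions, not a recommended design, and the real attenuation will shift once the sound card's own input impedance loads the bottom leg:

```python
def divider_ratio(r_top: float, r_bottom: float) -> float:
    """Attenuation of a simple resistive voltage divider (unloaded)."""
    return r_bottom / (r_top + r_bottom)

v_disc = 1.0                     # assumed discriminator output, Vpp
r_top, r_bottom = 10e3, 4.7e3    # hypothetical divider values, ohms
v_out = v_disc * divider_ratio(r_top, r_bottom)

# About 0.32 Vpp into the card, a far more line-in friendly level
print(f"{v_out * 1000:.0f} mVpp into the sound card")
```

In practice you'd pick the bottom resistor small relative to the card's input impedance so the loaded ratio stays close to the unloaded one.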
Why is the Internet so chock full of bad discriminator tap instructions? "Yeah, just run a wire, maybe add a 10k series resistor; add a series cap if it makes you happy - 0.1uF is more than enough." It's not like a tiny cap will distort the hell out of the signal, especially when fed to a line in, is it? Oh wait, 0.1uF into a 600 ohm load - do the math - that's a high-pass corner way up around 2.6 kHz - oops. And nobody ever wants to decode low speed data like LTR, etc., so no worries if those low frequencies can't get through a tiny cap. And we can ignore the fact that high speed data - Moto 3600, EDACS, P25 - also has low frequency content, cuz we're rebels, damn it! And proper impedance matching? Clearly, that's just for sissies...