The typical method is to evaluate the expected value of each choice you might make and take the maximum. But the EV calculations are quite expensive. To calculate an exact EV for a particular two-card discard you must evaluate (naively) 47C2 draws for you, up to 133459 opponent starting hands, and 40C2 draws for the opponent. This is about 113 billion comparisons. The number can be reduced somewhat by pre-calculating your opponent's final hand distribution (ignoring your draw), generating a table of size 47C7 = 62,891,499. Then for each draw you make you can calculate the EV by summing your result against the better (or worse) hands in this table, filtering out those that overlap with your draw. Doing this in the simple fashion results in 34 billion comparisons. Using inclusion-exclusion on the two cards you draw brings it down to something more like 3.4 billion calculations per two-card discard. (Unfortunately, the Y hand tables must be recalculated for every X hand considered, and three-card inclusion-exclusion tables won't fit in memory on a typical computer. Using the same Y tables for all two-card draws would require 7-card inclusion-exclusion, which is not at all feasible.)
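For the record, here's where those counts come from (the 133459 opponent-hand figure is taken from the text as given):

```python
from math import comb

your_draws = comb(47, 2)   # your two-card draws from the 47 unseen cards
opp_hands = 133459         # opponent starting hands, as stated above
opp_draws = comb(40, 2)    # opponent's two-card draws from the remaining 40

print(your_draws)                          # 1081
print(opp_draws)                           # 780
print(your_draws * opp_hands * opp_draws)  # 112529959620, i.e. ~113 billion
print(comb(47, 7))                         # 62891499, the Y-table size
```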
However, we don't actually need an accurate EV calculation to make the decision. We just need to prove, to some degree of confidence, that one draw is better than another. This suggests using a Monte Carlo method. The EV can be thought of as a random variable in the following way:
The EV is a sum of probabilities times outcome values:
EV = p_1 * v_1 + p_2 * v_2 + ... + p_n * v_n
But, by simple algebra
EV = ( p_1 * v_1 * n + p_2 * v_2 * n + ... + p_n * v_n * n ) / n
so we can view it as the mean of the uniform distribution over the n values
( p_1 * v_1 * n, p_2 * v_2 * n, ..., p_n * v_n * n )
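A quick numeric sanity check of this view, using made-up probabilities and outcomes: sampling indices uniformly and averaging n * p_i * v_i converges to the exact EV.

```python
import random

random.seed(0)

ps = [0.5, 0.3, 0.2]   # made-up probabilities
vs = [1.0, -2.0, 4.0]  # made-up outcome values
n = len(ps)

exact_ev = sum(p * v for p, v in zip(ps, vs))  # 0.7

# Sample uniformly from the n values p_i * v_i * n; the plain sample
# mean is an unbiased estimate of the EV.
draws = (random.randrange(n) for _ in range(200_000))
samples = [n * ps[i] * vs[i] for i in draws]
estimate = sum(samples) / len(samples)
print(exact_ev, estimate)
```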
Given two options A and B, we can take a random sample of each to get EV_a (= mean value of the sample), EV_b, and their associated standard errors (= sample standard deviation / sqrt(sample size)) err_a, err_b. Then we look at the difference EV_a - EV_b; the standard error for this difference is sqrt( err_a^2 + err_b^2 ). When the confidence interval for EV_a - EV_b does not include zero, we know (to some degree of confidence) that EV_a > EV_b or EV_a < EV_b.
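A minimal sketch of that comparison. The payoff simulators `play_a` and `play_b` are hypothetical Gaussian stand-ins; a real version would deal cards and score hands. 2.576 is the two-sided z value for 99% confidence.

```python
import math
import random

random.seed(1)

def summarize(xs):
    """Sample mean and standard error."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    return mean, math.sqrt(var / n)

# Hypothetical payoff simulators for two candidate draws.
def play_a():
    return random.gauss(0.10, 1.0)

def play_b():
    return random.gauss(0.05, 1.0)

a = [play_a() for _ in range(100_000)]
b = [play_b() for _ in range(100_000)]
ev_a, err_a = summarize(a)
ev_b, err_b = summarize(b)

diff = ev_a - ev_b
err = math.sqrt(err_a ** 2 + err_b ** 2)

# 99% confidence: the interval diff +/- 2.576 * err excludes zero.
if abs(diff) > 2.576 * err:
    print("A better" if diff > 0 else "B better")
else:
    print("not yet distinguishable")
```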
I wrote a quick version of this with the distribution consisting of all 52 ** 9 9-card sequences. Then p_i == 0 (if the sequence is impossible) or 1/(47P2 * 45P5 * 40P2) (if the sequence is possible). It should be feasible to use a smaller distribution with fewer zeros, but it's important to calculate n correctly (and, as I discovered, easy to screw up).
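A sketch of that sampler under stated assumptions: cards are integers 0..51, `MY_HAND`, `KEEP`, the opponent's discard rule, and `score` are all hypothetical placeholders, not the real evaluator or Y strategy. Each sequence is 9 uniform card picks: my 2 draws, the opponent's 5-card hand, the opponent's 2 draws; impossible sequences contribute 0.

```python
import random
from math import perm

random.seed(2)

MY_HAND = [0, 1, 2, 3, 4]  # hypothetical known hand
KEEP = [0, 1, 2]           # hypothetical: keep three, discard two

N = 52 ** 9                                        # size of the sequence space
P = 1 / (perm(47, 2) * perm(45, 5) * perm(40, 2))  # p_i for a possible sequence

def score(mine, theirs):
    # Placeholder evaluator: compare highest card. A real one ranks poker hands.
    return 1.0 if max(mine) > max(theirs) else -1.0

def one_sample():
    seq = [random.randrange(52) for _ in range(9)]
    # Impossible sequences (repeats, or overlap with my known hand) have p_i = 0.
    if len(set(seq)) != 9 or set(seq) & set(MY_HAND):
        return 0.0
    my_final = KEEP + seq[0:2]
    # Dummy Y strategy: the opponent always discards their first two cards.
    opp_final = seq[4:7] + seq[7:9]
    return N * P * score(my_final, opp_final)

samples = [one_sample() for _ in range(50_000)]
ev_estimate = sum(samples) / len(samples)
print(ev_estimate)
```

Note that the zeros are part of the average: each valid sequence contributes p_i * v_i * n with n = 52 ** 9, so dropping the impossible sequences without adjusting n is exactly the "easy to screw up" part.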
This technique found that 7c3c2h is a better draw than 7c3c2c against a simple (and bad) Y strategy, with 99% confidence, using only 50000-200000 samples. Half those samples are "reusable" to compare against one of the other 32 possible draws.
Getting 1% of your draws wrong is not all that great, but I think that fictitious play will be somewhat forgiving of a small fraction of incorrect draws on each iteration, as long as the errors are independent. Going to higher confidence levels can get expensive, but 99.9% may be feasible.
Also, it seems important to develop a test for deciding when two draws should be considered equally good; for example, we shouldn't spend a lot of time trying to find a difference between 7c3c2h and 7c3c2s.
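One way to handle that is a stopping rule with an indifference threshold: keep sampling until the confidence interval on EV_a - EV_b either excludes zero (one draw wins) or becomes narrower than some epsilon (declare a tie). This sketch uses hypothetical Gaussian simulators again; the function names and the epsilon value are my own, not from a real implementation.

```python
import math
import random

random.seed(3)

def _var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def compare(sample_a, sample_b, z=2.576, epsilon=0.05, batch=10_000, max_n=200_000):
    """Sample two payoff simulators in batches until the confidence
    interval on EV_a - EV_b excludes zero, or is narrower than the
    indifference threshold epsilon (in which case we call it a tie)."""
    a, b = [], []
    while len(a) < max_n:
        a.extend(sample_a() for _ in range(batch))
        b.extend(sample_b() for _ in range(batch))
        diff = sum(a) / len(a) - sum(b) / len(b)
        err = math.sqrt(_var(a) / len(a) + _var(b) / len(b))
        if abs(diff) > z * err:
            return "a" if diff > 0 else "b"
        if z * err < epsilon:
            return "tie"
    return "undecided"

# Stand-ins: first a clear favorite, then a 7c3c2h-vs-7c3c2s style pair
# with identical EVs.
print(compare(lambda: random.gauss(0.5, 1.0), lambda: random.gauss(0.0, 1.0)))
print(compare(lambda: random.gauss(0.0, 1.0), lambda: random.gauss(0.0, 1.0)))
```

One caveat with sequential checks like this: testing at 99% after every batch inflates the overall error rate a little, so in practice epsilon and z would need tuning against the cost of extra samples.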