Publication bias threatens the validity of meta‐analyses. It is often assessed with the funnel plot: an asymmetric plot suggests small‐study effects, and publication bias is one possible cause of the asymmetry. Egger’s regression test is a widely used tool for quantitatively assessing such asymmetry. It examines the association between the observed effect sizes and their sample standard errors (SEs); a strong association indicates small‐study effects. However, its false positive rates may be inflated when such an association exists intrinsically, even in the absence of small‐study effects, particularly in meta‐analyses of odds ratios (ORs). Various alternatives have been proposed to address this problem; they typically replace the predictor or the response in Egger’s regression with other measures and, consequently, are powerful only in specific settings. We propose a Bayesian approach to assessing small‐study effects in meta‐analyses of ORs. It controls false positive rates by using latent “true” SEs, rather than sample SEs, in the Egger‐type regression, thereby avoiding the intrinsic association between ORs and their SEs. Although the “true” SEs are unknown in practice, they can be modeled within the Bayesian framework. We compare the various methods using simulated and real data. When ORs are far from 1, the proposed method may achieve high power with controlled false positive rates, whereas Egger’s test has seriously inflated false positive rates; in other settings, however, some alternative methods may be superior. Overall, the proposed method may serve as an alternative that rules out potential confounding caused by the intrinsic association between ORs and their SEs when assessing small‐study effects.
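
For context, the sketch below is a minimal implementation of the classical Egger‐type test discussed above: a weighted regression of observed log‐ORs on their sample SEs, with the slope testing funnel‐plot asymmetry. The function name `egger_test`, the toy data, and the multiplicative dispersion estimate are illustrative assumptions, not details taken from the paper; the proposed Bayesian method would instead replace the sample SEs with latent “true” SEs modeled hierarchically, which is not shown here.

```python
# Sketch of the classical Egger-type regression that the abstract critiques.
# Weighted regression of observed effects (log-ORs) on their sample SEs;
# a nonzero slope indicates funnel-plot asymmetry (small-study effects).
import numpy as np
from scipy import stats

def egger_test(y, se):
    """Egger-type regression test for small-study effects.

    y  : observed effect sizes (e.g., log-ORs)
    se : their sample standard errors
    Returns the regression slope on SE and its two-sided p-value.
    """
    y, se = np.asarray(y, float), np.asarray(se, float)
    w = 1.0 / se**2                                  # inverse-variance weights
    X = np.column_stack([np.ones_like(se), se])      # intercept + SE predictor
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y) # weighted least squares
    resid = y - X @ beta
    df = len(y) - 2
    phi = (w * resid**2).sum() / df                  # multiplicative dispersion
    cov = phi * np.linalg.inv(X.T @ W @ X)
    t_slope = beta[1] / np.sqrt(cov[1, 1])
    p = 2 * stats.t.sf(abs(t_slope), df)
    return beta[1], p

# Toy example: eight hypothetical studies (illustrative numbers only).
y  = np.array([0.9, 0.7, 0.5, 0.8, 0.4, 0.6, 1.1, 0.3])
se = np.array([0.45, 0.38, 0.20, 0.41, 0.15, 0.30, 0.50, 0.12])
slope, pval = egger_test(y, se)
print(f"Egger slope = {slope:.3f}, p = {pval:.3f}")
# The paper's Bayesian proposal would swap the sample SEs above for latent
# "true" SEs, breaking the intrinsic OR-SE association that inflates false
# positives in this classical test.
```

Because the sample SE of a log‐OR is computed from the same cell counts as the OR itself, the two are intrinsically correlated when the OR is far from 1, which is the mechanism by which the classical test above can flag asymmetry even without small‐study effects.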