
Talk:Log-normal distribution


See also: multiplicative calculus ??


I followed the link to the Wikipedia page Multiplicative calculus and spent about an hour trying to determine whether it has any legitimacy. I concluded that it does not. In contrast, log-normal distributions are unarguably legitimate. The multiplicative calculus link under 'See also' should be removed, for it is a waste of readers' time to refer them to a dubious article with no clear relevance to log-normal distributions, save the shared multiplicative basis.

If you look at Talk:Multiplicative calculus you will find lengthy disputes about the legitimacy of the content with most contributors concluding that the article should be deleted. Moreover, all the talk, save two entries, is a decade old. Cross-linking to distracting rubbish serves no one, even if the rubbish manages to cling to some tiny thread of legitimacy that prevents its deletion. — Preceding unsigned comment added by 70.68.18.243 (talk) 19:03, 20 September 2019 (UTC)[reply]

Error in lognormal pdf in box?


The pdf given in the box at the right-hand side of the page seems wrong and does not match the pdf given in the article. In particular, it seems like the first term of the pdf should be 1/(x sigma root(2 pi)); i.e., a factor of 1/x seems to be missing, so the term should be \frac{1}{x\sigma \sqrt{2 \pi}}? Hesitant to edit as this is my first foray into Wikipedia talk and edits, plus it's not obvious to me how to edit the box. --QFC JRB (talk) 19:45, 2 May 2017 (UTC)[reply]
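For reference, the density with the 1/x factor in place (matching the article body) is:

$$f(x) = \frac{1}{x\sigma\sqrt{2\pi}}\, \exp\!\left(-\frac{(\ln x - \mu)^2}{2\sigma^2}\right), \qquad x > 0.$$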

I'm also noticing that while μ is listed as 0, the pdf appears centered about 1. Is this an error? Tommy2024 (talk) 22:17, 20 March 2024 (UTC)[reply]

Checked again and it was updated as suggested above. --QFC JRB (talk) 19:50, 3 May 2017 (UTC)[reply]

Derivation of Log-normal Distribution


How do you derive the log-normal distribution from the normal distribution?

By letting X ~ N(\mu, \sigma^2) and finding the distribution of Y = exp X.

D. Clason — Preceding unsigned comment added by 128.123.198.136 (talk) 00:21, 11 November 2011 (UTC)[reply]

I found this derivation hard to follow. This explanation (prop. 8, p. 12, from http://norstad.org/finance/normdist.pdf) was easier to follow, since it shows the pdf/variable change. I'd like to change the derivation on this page if I get the time (not quite sure how to work the math font). Corwinjoy (talk) 20:03, 17 February 2017 (UTC)[reply]
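For anyone comparing, a minimal sketch of that change-of-variables argument, with $X \sim \mathcal{N}(\mu,\sigma^2)$ and $Y = e^X$:

$$F_Y(y) = P(e^X \le y) = P(X \le \ln y) = F_X(\ln y),$$

so differentiating with respect to $y$,

$$f_Y(y) = \frac{1}{y}\, f_X(\ln y) = \frac{1}{y\sigma\sqrt{2\pi}}\, \exp\!\left(-\frac{(\ln y - \mu)^2}{2\sigma^2}\right), \qquad y > 0.$$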

Old talk


Hello. I have changed the intro from "log-normal distributions" to "log-normal distribution". I do understand the notion that, for each pair of values (mu, sigma), it is a different distribution. However, common parlance is to call all the members of a parametric family by a collective name -- normal distribution, beta distribution, exponential distribution, .... In each case these terms denote a family of distributions. This causes no misunderstandings, and I see no advantage in abandoning that convention. Happy editing, Wile E. Heresiarch 03:42, 8 Apr 2004 (UTC)

In the formula for the maximum likelihood estimate of the logsd, shouldn't it be over n-1, not n?

Unless you see an error in the math, I think it's OK. The n−1 term usually comes in when doing unbiased estimators, not maximum likelihood estimators.
You're right; I was confused.

QUESTION: Shouldn't there be a square root at the ML estimation of the standard deviation? User:flonks

Right - I fixed it, thanks. PAR 09:15, 27 September 2005 (UTC)[reply]
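For the record, the estimators under discussion (note the 1/n rather than 1/(n−1), and the square root) are:

$$\hat\mu = \frac{1}{n}\sum_{i=1}^n \ln x_i, \qquad \hat\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^n \left(\ln x_i - \hat\mu\right)^2}.$$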

Could I ask a question?


If Y = a^2, and a has a log-normal distribution, then what kind of distribution does Y have?

a is a lognormal distribution
so log(a) is a normal distribution
log(a^2) = 2 log(a) is also a normal distribution
a^2 is a lognormal distribution --Buglee 00:47, 9 May 2006 (UTC)[reply]

One should say rather that a has---not is---a lognormal distribution. The object called a is a random variable, not a probability distribution. Michael Hardy 01:25, 9 May 2006 (UTC)[reply]
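In symbols, the argument above reads:

$$\ln a \sim \mathcal{N}(\mu, \sigma^2) \;\Longrightarrow\; \ln(a^2) = 2\ln a \sim \mathcal{N}(2\mu, 4\sigma^2),$$

so $Y = a^2$ has a log-normal distribution with parameters $2\mu$ and $4\sigma^2$.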


Maria 13 Feb 2007: I've never written anything in Wikipedia before, so I apologise if I am doing the wrong thing. I wanted to note that the following may not be clear to the reader: in the formulas, E(X)^2 represents the square of the mean, rather than the second moment. I would suggest one of the following solutions: 1) skip the parentheses around X and represent the mean by EX; then it is clear that (EX)^2 will be its square, though one might wonder about EX^2 (which should represent the second moment). 2) skip the E operator and put a letter there, i.e. let m be the mean and s the standard deviation; then there will be no confusion. 3) add a line at some point in the text giving the notation: i.e. that by E(X)^2 you mean the square of the first moment, while the second moment is denoted by E(X^2) (I presume). I had to invert the formula myself in order to figure out what it was supposed to mean.

I've just attended to this. Michael Hardy 00:52, 14 February 2007 (UTC)[reply]

A mistake?


I think there is a mistake here: the density function should include a term in sigma squared divided by two, and the mean of the log-normal variable becomes mu − sigma^2/2. Basically, I think the author forgot the Ito term.

I believe the article is correct. See for example http://mathworld.wolfram.com/LogNormalDistribution.html for an alternate source of the density function and the mean. They are the same as shown here, but with a different notation. (M in place of mu and S in place of sigma). Encyclops 00:23, 4 February 2006 (UTC)[reply]
Either the graph of the density function is wrong, or the expected value formula is wrong. As you can see from the graph, as sigma decreases, the expected value moves towards 1 from below. This is consistent with the mean being exp(mu - sigma^2/2), which is what I recall it as. 69.107.6.4 19:29, 5 April 2007 (UTC)[reply]
Here's your mistake. You cannot see the expected value from the graph at all. It is highly influenced by the fat upper tail, which the graph does not make apparent. See also my comments below. Michael Hardy 20:19, 5 April 2007 (UTC)[reply]

I've just computed the integral and I get $\operatorname{E}(X) = e^{\mu + \sigma^2/2}$.

So with μ = 0, as σ decreases to 0, the expected value decreases to 1. Thus it would appear that the graph is wrong. Michael Hardy 19:57, 5 April 2007 (UTC)[reply]

...and now I've done some graphs by computer, and they agree with what the illustration shows. More later.... Michael Hardy 20:06, 5 April 2007 (UTC)[reply]

OK, there's no error. As the mode decreases, the mean increases, because the upper tail gets fatter! So the graphs and the mean and the mode are correct. Michael Hardy 20:15, 5 April 2007 (UTC)[reply]

You're right. My mistake. The mean is highly influenced by the upper tail, so the means are actually decreasing to 1 as sigma decreases. It just looks like the means approach from below because the modes do. 71.198.244.61 23:50, 7 April 2007 (UTC)[reply]
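A quick numerical illustration of this point (a sketch in Python; the parameter values are arbitrary):

```python
import numpy as np

mu = 0.0
for sigma in [2.0, 1.0, 0.5, 0.25]:
    mode = np.exp(mu - sigma**2)        # where the density peaks
    median = np.exp(mu)                 # always 1 for mu = 0
    mean = np.exp(mu + sigma**2 / 2)    # pulled up by the fat upper tail
    print(f"sigma={sigma}: mode={mode:.3f}, median={median:.3f}, mean={mean:.3f}")
```

As sigma decreases the mode rises toward 1 while the mean falls toward 1, which is exactly the visual effect discussed above.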

Question on the example charts on the right. Don't these have μ of 1, not 0 (as listed)? They're listed as 1. If the cdf hits 0.5 at 1 for all of them, shouldn't expected value be 1? —Preceding unsigned comment added by 12.17.237.67 (talk) 18:28, 15 December 2008 (UTC)[reply]

The expected value is $e^{\mu + \sigma^2/2}$, not μ. /Pontus (talk) 19:19, 16 December 2008 (UTC)[reply]
Yet the caption indicates the underlying µ is held fixed at 0. In which case we should see the expected value growing with sigma. —Preceding unsigned comment added by 140.247.249.76 (talk) 09:13, 29 April 2009 (UTC)[reply]
Expected value is not the value y at which P[X<y] = P[X > y]. Rookie mistake. —Preceding unsigned comment added by 140.247.249.76 (talk) 09:24, 29 April 2009 (UTC)[reply]
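In short: for a log-normal variable the point where the cdf crosses 0.5 is the median, not the mean,

$$\operatorname{median}(X) = e^{\mu} \quad (= 1 \text{ when } \mu = 0), \qquad \operatorname{E}(X) = e^{\mu + \sigma^2/2},$$

so the curves can all cross 0.5 at x = 1 while the means grow with σ.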

A Typo


There is a typo in the PDF formula, a missing '['

Erf and normal cdf


There are formulas that use Erf and formulas that use the cdf of the normal distribution, IMHO this is confusing, because those functions are related but not identical. Albmont 15:02, 23 August 2006 (UTC)[reply]
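For reference, the two are related by

$$\Phi(x) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{x}{\sqrt{2}}\right)\right],$$

so either convention can be rewritten in the other.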

Technical


Please remember that Wikipedia articles need to be accessible to people like high school students, or younger, or without any background in math. I consider myself rather knowledgeable in math (had it at college level, and still do) but (taking into account English is not my native language) I found the lead of this article pretty difficult. Please make it more accessible. -- Piotr Konieczny aka Prokonsul Piotrus | talk 22:48, 31 August 2006 (UTC)[reply]

To expect all Wikipedia math articles to be accessible to high-school students is unreasonable. Some can be accessible only to mathematicians; perhaps more can be accessible to a broad audience of professionals who use mathematics; others to anyone who's had a couple of years of calculus and no more; others to a broader audience still. Anyone who knows what the normal distribution is, what a random variable is, and what logarithms are, will readily understand the first sentence in this article. Can you be specific about what it is you found difficult about it? Michael Hardy 23:28, 31 August 2006 (UTC)[reply]

I removed the "too technical" tag. Feel free to reinsert it, but please leave some more details about what specifically you find difficult to understand. Thanks, Lunch 22:18, 22 October 2006 (UTC)[reply]

Skewness formula incorrect?


The formula for the skewness appears to be incorrect: the leading exponent term you have is not present in the definitions given by MathWorld and NIST; see http://www.itl.nist.gov/div898/handbook/eda/section3/eda3669.htm and http://mathworld.wolfram.com/LogNormalDistribution.html.

Many thanks.

X log normal, not normal.


I think the definition of X as normal and Y as lognormal at the beginning of the page should be changed. The rest of the page treats X as the log-normal variable. —The preceding unsigned comment was added by 213.115.25.62 (talk) 17:40, 2 February 2007 (UTC).[reply]

The skewness is fine but the kurtosis is wrong - the last term in the kurtosis is -3 not -6 —Preceding unsigned comment added by 129.31.242.252 (talk) 02:08, 17 February 2009 (UTC)[reply]

Yup, I picked up that mistake too and have changed it. The Wolfram website also has it wrong, although if you calculate it from their central moments you get −3. I've sent them a message too. Cheers Occa —Preceding unsigned comment added by Occawen (talkcontribs) 21:19, 1 December 2009 (UTC)[reply]

Partial expectation


I think that there was a mistake in the formula for the partial expectation: the last term should not be there. Here is a proof: http://faculty.london.edu/ruppal/zenSlides/zCH08%20Black-Scholes.slide.doc See Corollary 2 in Appendix A 2.

I did put my earlier correction back in. Of course, I may be wrong (but, right now, I don't see why). If you change this again, please let me know why I was wrong. Thank you.

Alex —The preceding unsigned comment was added by 72.255.36.161 (talk) 19:39, 27 February 2007 (UTC).[reply]

Thanks. I see the problem. You have the correct expression for

$$g(k) = \int_k^\infty x f(x)\,dx,$$

while what I had there before would be correct if we were trying to find

$$\operatorname{E}\!\big[(X-k)^+\big] = \int_k^\infty (x-k) f(x)\,dx,$$

which is (essentially) the B-S formula but is not the partial mean (or partial expectation) by my (or your) definition. (Actually I did find a few sources where the partial expectation is defined as $\operatorname{E}(X \mid X>k)$, but this usage seems to be rare. For ex. [1]). The term that you dropped occurs in $\operatorname{E}[(X-k)^+]$ but not in $g(k)$, the correct form of the partial mean. So I will leave the formula as it is now. Encyclops 00:47, 28 February 2007 (UTC)[reply]

  • The rest of the page uses rather than , I suggest that also be used here (in addition to which is a nice way to put it). I didn't add it myself since with 50 percent probability I'd botch a . --Tom3118 (talk) 19:29, 16 June 2009 (UTC)[reply]

Generalize distribution of product of lognormal variables


About the distribution of a product of independent log-normal variables:

Wouldn't it be possible to generalize it to variables with different averages (mu not the same for every variable)?

the name: log vs exponential


The name "log normal" is sometimes a little confusing for me, so a little note here:

For a variable Y, if X = log(Y) is normal, then Y is log normal, which says that after the log is taken, it becomes normal. Similarly, there might be an "exponential normal": a variable Z such that exp(Z) is normal. However, exp(Z) can never be normal, hence the name log normal. Furthermore, if X is normal, then log(X) is undefined.

In other cases, where a variable X has whatever distribution (XXX), we need a name for the distribution of Y = log(X) (in the case it is defined), with X = exp(Y). Such a name should be "exponential XXX". For instance, if X is in IG, then Y = log(X) is in exponential IG. Jackzhp 15:37, 13 July 2007 (UTC)[reply]

Mean, and


The relationship given for in terms of Var(x) and E(x) suggests that is undefined when . However, I see no reason why must be strictly positive. I propose defining the relationship in terms of such that

I am suspicious that this causes to be... well, wrong. It suggests that two different values for could result in the same , which I find improbable. In any case, if there is a way to calculate when , then we should include it; if not, we need to explain this subtlety. In my humble opinion. --Phays 20:35, 6 August 2007 (UTC)[reply]

I'm not fully following your comment. I have now made the notation consistent throughout the article: X is the random variable that's log-normally distributed, so E(X) must of course be positive, and μ = E(Y) = E(log(X)).
I don't know what you mean by "E2". It's as if you're squaring the expectation operator. "E2(X)" would mean "E(E(X))", but that would be the same thing as E(X), since E(X) is a constant. Michael Hardy 20:56, 6 August 2007 (UTC)[reply]

Maximum Likelihood Estimation


Are there mistakes in the MLE? It looks to me as though the provided method is an MLE for the mean and variance, not for the parameters $\mu$ and $\sigma$. If that is so, it should be changed to the parameters estimated, $\hat{\mu}$ and $\hat{\sigma}$, and then a redirect to extracting the parameter values from the mean and variance. --Phays 20:40, 6 August 2007 (UTC)[reply]

The MLEs given for μ and σ2 are not for the mean and variance of the log-normal distribution, but for the mean and variance of the normally distributed logarithm of the log-normally distributed random variable. They are correct MLEs for μ and σ2. The "functional invariance" of MLEs generally is being relied on here. Michael Hardy 20:47, 6 August 2007 (UTC)[reply]
I'm afraid I still don't fully understand, but it is simple to explain my confusion. Are the parameters being estimated the μ and σ2 of
$$f(x) = \frac{1}{x\sigma\sqrt{2\pi}}\exp\!\left(-\frac{(\ln x - \mu)^2}{2\sigma^2}\right),$$
or are these estimates describing the mean and variance? In other words, if $Y = e^X$ is log-normally distributed with parameters μ and σ2, then is $\operatorname{E}(Y) = \mu$? It is my understanding that the parameters in the above equation, namely μ and σ, are not the mean and standard deviation of $Y$. They may be the mean and standard deviation of $X = \ln Y$. --Phays 01:16, 7 August 2007 (UTC)[reply]
The answer to your first question is affirmative. The expected value of Y = exp(X) is not μ; its value if given elsewhere in the article. Michael Hardy 16:10, 10 August 2007 (UTC)[reply]
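Spelling out the functional-invariance point in formulas:

$$\operatorname{E}(Y) = e^{\mu + \sigma^2/2}, \qquad \widehat{\operatorname{E}(Y)} = e^{\hat\mu + \hat\sigma^2/2},$$

i.e. the MLE of any function of the parameters is that function evaluated at the MLEs $\hat\mu$, $\hat\sigma^2$.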

8/10/2007:

It is my understanding that confidence intervals use the standard error of a population in the calculation, not the standard deviation (sigma).

Therefore I do not understand how the table is using 2 sigma etc. for confidence interval calculation as it pertains to the log-normal distribution.

Why is it shown as 2*sigma?

Angusmdmclean 12:35, 10 August 2007 (UTC) angusmdmclean[reply]


Hi. The formula relating the density of the log normal to that of the normal -- where does the product come from on the r.h.s.? I think this is a typo. Should it not read $f_L = \frac{1}{x} \times f_N$?

This page lacks adequate citations!!


Wikipedia policy (see WP:CITE#HOW) suggests citation of specific pages in specific books or peer-reviewed articles to support claims made in Wikipedia. Surely this applies to mathematics articles just as much as it does to articles about history, TV shows, or anything else?

I say this because I was looking for a formula for the partial expectation of a lognormal variable, and I was delighted to discover that this excellent, comprehensive article offers one. But how am I supposed to know if the formula is correct? I trust the competence of the people who wrote this article, but how can I know whether or not some mischievous high schooler reversed a sign somewhere? I tried verifying the expectation formula by calculating the integral myself, but I got lost quickly (sorry! some users of these articles are less technically adept than the authors!) I will soon go to the library to look for the formula (the unconditional expectation appears in some books I own, but not the partial expectation) but that defeats the purpose of turning to Wikipedia in the first place.

Of course, I am thankful that Wikipedia cites one book specifically on the lognormal distribution (Aitchison and Brown 1957). That reference may help me when I get to the library. But I'm not sure if that was the source of the formula in question. My point is more general, of course. Since Wikipedia is inevitably subject to errors and vandalism, math formulas can never be trusted, unless they follow in a highly transparent way from prior mathematical statements in the same article. Pages like this one would be vastly more useful if specific mathematical statements were backed by page-specific citations of (one or preferably more) books or articles where they could be verified. --Rinconsoleao 15:11, 28 September 2007 (UTC)[reply]

Normally I do not do this because I think it is rude, but I really should say {{sofixit}} because you are headed to the library and will be able to add good cites. Even if we had a good source for it, the formula could still be incorrect due to vandalism or transcription errors. Such is the reality of Wikipedia. Can you write a program to test it, perhaps? Acct4 15:23, 28 September 2007 (UTC)[reply]
I believe Aitchison and Brown does have that formula in it, but since I haven't looked at that book in many years I wouldn't swear by it. I will have to check. I derived the formula myself before adding it to Wikipedia; unfortunately there was a slip-up in my post, which was caught by an anonymous user and corrected. FWIW, at this point I have near-100% confidence in its correctness. And I am watching this page for vandalism or other problems. In general your point is a good one. Encyclops 22:34, 28 September 2007 (UTC)[reply]

Why has nobody mentioned whether the mean and standard deviation are calculated from x or y, if y = exp(x)? The mean and stdev are from the x values. Book by Athanasios Papoulis. Siddhartha here. —Preceding unsigned comment added by 203.199.41.181 (talk) 09:26, 2 February 2008 (UTC)[reply]

Derivation of Partial Expectation


As requested by Rinconsoleao and others, here is a derivation of the partial expectation formula. It is tedious, so I do not include it in the article itself.

We want to find

$$g(k) = \int_k^\infty x f(x)\,dx$$

where f(x) is the lognormal distribution

$$f(x) = \frac{1}{x\sigma\sqrt{2\pi}}\, \exp\!\left(-\frac{(\ln x - \mu)^2}{2\sigma^2}\right)$$

so we have

$$g(k) = \frac{1}{\sigma\sqrt{2\pi}} \int_k^\infty \exp\!\left(-\frac{(\ln x - \mu)^2}{2\sigma^2}\right) dx.$$

Make a change of variables $y = \ln x$, so $x = e^y$ and $dx = e^y\,dy$, giving

$$g(k) = \frac{1}{\sigma\sqrt{2\pi}} \int_{\ln k}^\infty e^y \exp\!\left(-\frac{(y - \mu)^2}{2\sigma^2}\right) dy;$$

combine the exponentials together

$$g(k) = \frac{1}{\sigma\sqrt{2\pi}} \int_{\ln k}^\infty \exp\!\left(y - \frac{(y - \mu)^2}{2\sigma^2}\right) dy;$$

fix the quadratic by 'completing the square'

$$y - \frac{(y-\mu)^2}{2\sigma^2} = \mu + \frac{\sigma^2}{2} - \frac{\big(y - (\mu + \sigma^2)\big)^2}{2\sigma^2};$$

at this point we can pull some stuff out of the integral

$$g(k) = e^{\mu + \sigma^2/2}\, \frac{1}{\sigma\sqrt{2\pi}} \int_{\ln k}^\infty \exp\!\left(-\frac{\big(y - (\mu + \sigma^2)\big)^2}{2\sigma^2}\right) dy;$$

one more change of variable, $z = \dfrac{y - (\mu + \sigma^2)}{\sigma}$ and $dy = \sigma\,dz$, gives

$$g(k) = e^{\mu + \sigma^2/2}\, \frac{1}{\sqrt{2\pi}} \int_{(\ln k - \mu - \sigma^2)/\sigma}^\infty e^{-z^2/2}\, dz.$$

We recognize the integral and the fraction in front of it as the complement of the cdf of the std normal rv,

$$g(k) = e^{\mu + \sigma^2/2} \left[1 - \Phi\!\left(\frac{\ln k - \mu - \sigma^2}{\sigma}\right)\right];$$

using $1 - \Phi(z) = \Phi(-z)$ we finally have

$$g(k) = e^{\mu + \sigma^2/2}\, \Phi\!\left(\frac{\mu + \sigma^2 - \ln k}{\sigma}\right).$$

Regards, Encyclops (talk) 21:49, 29 August 2009 (UTC)[reply]
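The closed form can be spot-checked numerically; here is a sketch using scipy quadrature (the parameter values are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

mu, sigma, k = 0.3, 0.8, 1.5

def integrand(x):
    # x * f(x): the 1/x in the lognormal density cancels against x
    return np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

numeric, _ = quad(integrand, k, np.inf)
closed = np.exp(mu + sigma**2 / 2) * norm.cdf((mu + sigma**2 - np.log(k)) / sigma)
print(numeric, closed)  # agree to quadrature precision
```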

examples for log normal distributions in nature/economy?


Some examples would be nice! —Preceding unsigned comment added by 146.113.42.220 (talk) 16:41, 8 February 2008 (UTC)[reply]

One example is neurological reaction time. This distribution has been seen in studies on automobile braking and other responses to stimuli. See also mental chronometry.--IanOsgood (talk) 02:32, 26 February 2008 (UTC)[reply]
This is also useful in telecom. in order to compute slow fading effects on a transmitted signal. -- 82.123.94.169 (talk) 14:42, 28 February 2008 (UTC)[reply]

I think the Black–Scholes option model uses a log-normal assumption about the price of a stock. This makes sense, because it's the percentage change in the price that has real meaning, not the price itself. If some external event makes the stock price fall, the amount that it falls is not very important to an investor; it's the percent change that really matters. This suggests a log-normal distribution. PAR (talk) 17:13, 28 February 2008 (UTC)[reply]

I recall reading on wiki that high IQs are log-normally distributed. Also, incomes (in a given country) are approximately as well. Elithrion (talk) 21:26, 2 November 2009 (UTC)[reply]

Parameter boundaries?


If the relationship between the log-normal distribution and the normal distribution is right, then I don't understand why $\mu$ needs to be greater than 0 (since $\mu$ is expected to be a real with no boundary in the normal distribution). At least, it can be null, since it's the case with the graphs shown for the pdf and cdf (I've edited the article in consequence). Also, it's not $\sigma$ that needs to be greater than 0, but $\sigma^2$ (which simply means that $\sigma$ can't be null, since it's a real number). -- 82.123.94.169 (talk) 15:04, 28 February 2008 (UTC)[reply]

Question: What can possibly be the interpretation of, say, $\sigma = -1$, as opposed to $\sigma = 1$? By strong convention (and quite widely assumed in derivations) standard deviations are taken to be in the domain $\sigma > 0$, although I suppose in this case $\sigma$ can algebraically be negative... It's confusing to start talking about negative sds, and unless there's a good reason for it, please don't. --128.59.111.72 (talk) 22:59, 10 March 2008 (UTC)[reply]

Yes, you're right: $\sigma$ can't be negative or null (it's also obvious reading the PDF formula). I was confused by the Normal Distribution article, where only $\sigma^2$ is expected to be positive (which is also not sufficient there). Thanks for your answer, and sorry for that. I guess $\mu$ can't be negative as well, because that would be meaningless if it was (even if it would be mathematically correct). -- 82.123.102.83 (talk) 19:33, 13 March 2008 (UTC)[reply]

Logarithm Base


Although yes, any base is OK, the derivations and moments, etc. are all done assuming a natural logarithm. Although the distribution would still be lognormal in another base b, the details would all change by a factor of ln(b). A note should probably be added in this section, that we are using by convention the natural logarithm here. (And possibly re-mention it in the PDF.) --128.59.111.72 (talk) 22:59, 10 March 2008 (UTC)[reply]
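In symbols: if $\log_b X \sim \mathcal{N}(\mu, \sigma^2)$, then since $\ln X = (\ln b)\log_b X$,

$$\ln X \sim \mathcal{N}\big(\mu \ln b,\; \sigma^2 (\ln b)^2\big),$$

which is the factor-of-ln(b) change described above.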

Product of "any" distributions


I think it should be highlighted in the article that the Log-normal distribution is the analogue of the normal distribution in this way: if we take n independent distributions and add them we "get" the normal distribution (NB: here I am lazy on purpose, the precise idea is the Central Limit Theorem). If we take n positive independent distributions and multiply them, we "get" the log-normal (also lazy). Albmont (talk) 11:58, 5 June 2008 (UTC)[reply]

This is to some extent expressed (or at least suggested) where the article says "A variable might be modeled as log-normal if it can be thought of as the multiplicative product of many small independent factors". Perhaps it could be said better, but the idea is there. Encyclops (talk) 14:58, 5 June 2008 (UTC)[reply]
So we're talking about the difference between "expressed (or at least suggested)" on the one hand, and on the other hand "highlighted". Michael Hardy (talk) 17:39, 5 June 2008 (UTC)[reply]
Yes, the ubiquity of the log-normal in Finance comes from this property, so I think this property is important enough to deserve being stated in the initial paragraphs. Just MHO, of course. Albmont (talk) 20:39, 5 June 2008 (UTC)[reply]
The factors need to have small departure from 1 ... I have corrected this, but can someone think of a rephrasing for the bit about "the product of the daily return rates"? Is a "return rate" defined so as to be close to 1 (no profit =1) or close to zero (no profit=0)? Melcombe (talk) 13:49, 11 September 2008 (UTC)[reply]
The "return rate" should be the one "close to 1 (no profit == 1)." The author must be talking about discount factors rather than rates of return. Rates of return correspond to specific time periods and are therefore neither additive nor multiplicative. Returns are often thought of as normally distributed in finance, so the discount factor would be lognormally distributed. I'll fix this. Alue (talk) 05:14, 19 February 2009 (UTC)[reply]
Moreover, it would be nice to have a reference for this section. 188.97.0.158 (talk) 14:21, 4 September 2012 (UTC)[reply]
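A small simulation of the multiplicative analogue being described (a sketch; the uniform factors are an arbitrary choice of positive i.i.d. factors with small departure from 1):

```python
import numpy as np

rng = np.random.default_rng(0)
n_factors, n_samples = 200, 10_000

# positive i.i.d. factors close to 1
factors = rng.uniform(0.95, 1.05, size=(n_samples, n_factors))
products = factors.prod(axis=1)

# the CLT applies to log(products), so the product itself is ~lognormal
logs = np.log(products)
print(logs.mean(), logs.std())  # compare a histogram of `logs` to a normal fit
```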

Why would the pdf value be greater than 1 in the pdf picture?


Why would the pdf value be greater than 1 in the pdf picture? Am I missing something here? I am really puzzled. —Preceding unsigned comment added by 208.13.41.5 (talk) 01:55, 11 September 2008 (UTC)[reply]

Why are you puzzled? When probability is concentrated near a point, the value of the pdf is large. That's what happens here. Could it be that you're mistaking this situation for that of probability mass functions? Those cannot exceed 1, since their values are probabilities. The values of a pdf, however, are not generally probabilities. Michael Hardy (talk) 02:20, 11 September 2008 (UTC)[reply]
Just to put it another way, the area under a pdf is equal to one, not the curve itself. Encyclops (talk) 03:01, 11 September 2008 (UTC)[reply]
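A worked example of how large the peak can get: the density is maximized at the mode $x = e^{\mu - \sigma^2}$, where

$$f\!\left(e^{\mu - \sigma^2}\right) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{\sigma^2/2 - \mu},$$

so for μ = 0 and σ = 1/4 the peak value is about 1.65, even though the total area is still 1.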

Now I am unpuzzled. Thanks ;-) —Preceding unsigned comment added by 208.13.41.5 (talk) 16:54, 11 September 2008 (UTC)[reply]


The moment problem


In the article it should really be mentioned that the log-normal distribution suffers from the moment problem (see for example Counterexamples in Probability, Stoyanov). Basically, there exist infinitely many distributions that have the same moments as the LN but have a different pdf. In fact (I think), there are also discrete distributions which have the same moments as the LN distribution. ColinGillespie (talk) 11:45, 30 October 2008 (UTC)[reply]

The moment generating function is defined as $M_X(t) = \operatorname{E}\!\left(e^{tX}\right)$.
It does not exist on the whole domain R, but it certainly exists for t=0, and so it does for any t<0. So why don't we try to find the domain on which it exists, the set {t: M_X(t) < ∞}? Jackzhp (talk) 14:39, 21 January 2009 (UTC)[reply]
The cumulant/moment generating function g(t) is convex, and 0 belongs to the set {t: g(t) < ∞}. If the interior of that set is not empty, then g(t) is analytic and infinitely differentiable there; on the set, g(t) is strictly convex and g'(t) is strictly increasing. Please edit Cumulant#Some_properties_of_cumulant_generating_function or moment generating function. Jackzhp (talk) 15:09, 21 January 2009 (UTC)[reply]
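For concreteness: every moment exists,

$$\operatorname{E}(X^n) = e^{n\mu + n^2\sigma^2/2},$$

yet $\operatorname{E}(e^{tX}) = \infty$ for every $t > 0$, so the set $\{t : M_X(t) < \infty\}$ is $(-\infty, 0]$. Since 0 is not in the interior of that set, the mgf is not analytic at 0, which is consistent with the moments failing to determine the distribution.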

Why is this even relevant? The distribution is not bounded, and therefore there is no guarantee the infinite set of moments uniquely describes the distribution. — Preceding unsigned comment added by Conduit242 (talkcontribs) 22:35, 8 October 2014 (UTC)[reply]

My edit summary got truncated


Here's the whole summary to my last edit:

Two problems: X is more conventional here, and the new edit fails to distinguish between capital for the random variable and lower case for the argument to the density function.

Michael Hardy (talk) 20:15, 18 February 2009 (UTC)[reply]

The problem you refer to here is still (again?) present in the section about the density. Below I wrote about it under the title 'Density', but it is not understood I'm afraid.Madyno (talk) 14:24, 10 May 2017 (UTC)[reply]

Are the plots accurate?


Something seems a bit odd with the plots. In particular, the CDF plot appears to demonstrate that all the curves have a mean at about 1, but if the underlying parameter µ is held fixed, we should see P = 0.5 at around x=3 for sigma = 3/2, at around 1.35 for sigma = 1, and all the way out at e^50 for sigma = 10. The curves appear to have been plotted with the mean of the lognormal distribution fixed at (µ+σ^2/2)=1? ~confused~

Don't confuse the expected value with the point at which the probability is one-half. The latter is well-defined for the Cauchy distribution, while the former is not; thus although x=1 is the point at which all these distributions have P[x < 1] = 1/2; it's not the expected value. Hurr hurr, let no overtired idiots make this mistake again. (signed, Original Poster) —Preceding unsigned comment added by 140.247.249.76 (talk) 09:26, 29 April 2009 (UTC)[reply]
We had this discussion on this page in considerable detail before, a couple of years ago. Yes, they're accurate; they're also somewhat counterintuitive. Michael Hardy (talk) 16:40, 17 June 2009 (UTC)[reply]

I think it's worth pointing out the that the formula in the code that generates the pdf plots is wrong. The numerator in the exponent is log(x-mu)^2, when it should be (log(x)-mu)^2. It doesn't actually change the plots, because they all use mu=0, but it's an important difference, in case someone else used and modified the code. Sorry if this isn't the place to discuss this - this is my first time discussing on wikipedia. Crichardsns (talk) 01:25, 19 February 2011 (UTC)[reply]
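For anyone regenerating the figures, a corrected sketch of the density code (with the grouping (log(x) − mu)^2, not log(x − mu)^2, and fine sampling near 0):

```python
import numpy as np

def lognormal_pdf(x, mu=0.0, sigma=1.0):
    # correct grouping: (log(x) - mu)**2, not log(x - mu)**2
    return np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2)) / (x * sigma * np.sqrt(2 * np.pi))

x = np.linspace(0.01, 5.0, 2000)   # dense sampling avoids a spurious kink near 0
y = lognormal_pdf(x, mu=0.0, sigma=1.0)
```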

The pdf plot is wrong if for no other reason (unless I've missed something important) than that one of the curves exceeds 1 and can't be a proper pdf. —Preceding unsigned comment added by 203.141.92.14 (talk) 05:31, 1 March 2011 (UTC)[reply]

The aesthetics of the plots look terrible.

Nonsense about confidence intervals


I commented out this table:

Confidence interval bounds | log space | geometric
3σ lower bound | $\mu - 3\sigma$ | $e^{\mu - 3\sigma}$
2σ lower bound | $\mu - 2\sigma$ | $e^{\mu - 2\sigma}$
1σ lower bound | $\mu - \sigma$ | $e^{\mu - \sigma}$
1σ upper bound | $\mu + \sigma$ | $e^{\mu + \sigma}$
2σ upper bound | $\mu + 2\sigma$ | $e^{\mu + 2\sigma}$
3σ upper bound | $\mu + 3\sigma$ | $e^{\mu + 3\sigma}$

The table has nothing to do with confidence intervals as those are normally understood. I'm not sure there's much point in doing confidence intervals for the parameters here as a separate topic from confidence intervals for the normal distribution.

Obviously, you cannot use μ and σ to form confidence intervals. They're the things you'd want confidence intervals for! You can't observe them. If you could observe them, what would be the point of confidence intervals? Michael Hardy (talk) 16:43, 17 June 2009 (UTC)
Michael Hardy, I think I agree with you. I think the confidence intervals on parameters are sometimes called "credibility intervals". They are obtained through a Bayesian analysis, using a prior, and where the posterior distribution on the parameters gives the credibility interval. Correct me if I'm wrong. Attic Salt (talk) 00:50, 6 June 2020 (UTC)[reply]

This edit was a colossal mistake that stood for almost five years!! Whoever wrote it didn't have a clue what confidence intervals are. Michael Hardy (talk) 16:49, 17 June 2009 (UTC)[reply]

Characteristic function


Roy Leipnik [1] obtained a series formula for the characteristic function, whose coefficients involve the Taylor expansion of the reciprocal gamma function and Hermite functions.

  1. ^ Leipnik, R. (1991). On lognormal random variables: I—the characteristic function. J. Austral. Math. Soc. Ser. B, 32, pp. 327–347.


Scaling & inverse


In the relation section, we should mention the scaling & inverse of a log normal variable:

  • If $X \sim \operatorname{Lognormal}(\mu, \sigma^2)$, then $X + c$ is called shifted log-normal. E(X+c)=E(X)+c, var(X+c)=var(X)
  • If $Y = aX$ with $a > 0$, then $Y$ is also log normal with parameters $(\mu + \ln a, \sigma^2)$, and E(Y)=aE(X),
  • If $Y = 1/X$, then $Y$ is called inverse log normal, $Y \sim \operatorname{Lognormal}(-\mu, \sigma^2)$,

and EY=?, var(Y)=?

Jackzhp (talk) 12:53, 28 July 2009 (UTC)[reply]

If Y=aX then , formulas for ƒ and F are immediate application of the formulas from the beginning of the article. If Y=1/X then , and again formulas immediately follow. It's actually much easier to work with this representation because one may want to calculate not only mean+variance, but other quantities as well. ... stpasha » talk » 18:32, 28 July 2009 (UTC)[reply]
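In the notation of the bullets above, the missing pieces would be (standard manipulations, for $a > 0$):

$$aX \sim \operatorname{Lognormal}(\mu + \ln a,\, \sigma^2), \qquad 1/X \sim \operatorname{Lognormal}(-\mu,\, \sigma^2),$$

so $\operatorname{E}(1/X) = e^{-\mu + \sigma^2/2}$ and $\operatorname{var}(1/X) = \left(e^{\sigma^2} - 1\right) e^{-2\mu + \sigma^2}$.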

Partial expectation again


As User:Encyclops proves above, the formula in the "partial expectation" section is the quantity $g(k) = \int_k^\infty x f(x)\,dx$.

However, a recent edit defined the term "partial expectation" as a synonym for the "conditional expectation" E(x|x>k).

That doesn't seem correct; it seems unlikely that the uncommon term "partial expectation" would be a synonym for the more standard term "conditional expectation". Instead, it makes sense that "partial expectation" would mean part of the expectation, as this definition states.

Anyway, regardless of semantics, $g(k)$ is not E(x|x>k). Instead, $g(k)$ is E(x|x>k) prob(x>k).

Therefore, the current "partial expectation" section is incorrect. It is self-consistent if we instead define "partial expectation" as E(x|x>k)prob(x>k). So I will make that change. (unsigned edit by 213.170.45.3 )

Well that is one definition of "partial expectation" that you have found, and I can't find another. If you make the change to be a formal definition of the term then include the citation, otherwise you might change the text to avoid it being a "definition" at all. 08:54, 24 September 2009 (UTC)

Certainly E(X|X>k) must be greater than k, whereas the currently displayed formula for g(k) need not be, in particular when k is large and positive. So certainly something needs putting right in that section.Fathead99 (talk) 15:52, 2 January 2013 (UTC)[reply]

I agree that E(X|X>k) must be greater than k, but not g(k). Recall that in the definition of E(X|X>k) you divide the partial expectation term by the probability of the event {X>k}. AndreaGerali (talk) 11:47, 15 January 2013 (UTC)[reply]

Yes, my comment applied to a version before the recent edits: I'm happy with what's there now.Fathead99 (talk) 10:27, 16 January 2013 (UTC)[reply]

Properties?


I would like to start a new section on properties, where one of the properties is that data arising from the log-normal distribution have a symmetric Lorenz curve (see also Lorenz asymmetry coefficient). Any objections?

Christian Damgaard —Preceding unsigned comment added by Christian Damgaard (talkcontribs) 10:36, 13 October 2010 (UTC)[reply]

The present section "Characterization" might reasonably be split-up, some of it going into a new section headed "Properties". But "Characterization" doesn't mean here what it usually means so the rest could be renamed. Adding the info you suggest seems OK. Melcombe (talk) 12:24, 13 October 2010 (UTC)[reply]

I have made the section "Properties", but I hope that others will move relevant parts from the characterisation section.

Is the format of the reference OK? —Preceding unsigned comment added by Christian Damgaard (talkcontribs) 13:43, 13 October 2010 (UTC)[reply]

Is the median the same?


Is the median of the distribution of the random variable, after converting it to its logarithm, the same as the corresponding median after logarithmizing the whole distribution? Theoretically, it should, because the ranks of the values from smallest to largest should remain the same. If so, it should be mentioned in the article. Mikael Häggström (talk) 05:58, 1 March 2011 (UTC)[reply]
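For what it's worth, the median commutes with any strictly increasing transformation, so

$$\operatorname{median}(\ln X) = \ln\big(\operatorname{median}(X)\big) = \mu.$$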

Sum of i.i.d. log-normal variables

The approximate mean and variance for the sum , for i.i.d. log-normal , are given incorrectly, I think. The expressions below for and are meant to be for an approximately normally distributed . So that

and thus direct substitution (for constant ) gives as expected, since the variance of the sum of i.i.d. variables equals to the sum of variances for each variable.

I therefore suggest changing "approximated by another log-normal distribution Z" to "approximated by a normal distribution".

Please let me know if I am wrong. The references to Gao and to Fenton & Wilkinson should also be cited correctly. — Preceding unsigned comment added by Raiontov (talkcontribs) 05:43, 15 November 2011 (UTC)[reply]
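For reference, a sketch of the Fenton–Wilkinson moment-matching step as I understand it (match the exact mean and variance of the sum, then read off log-normal parameters; the function name is mine):

```python
import numpy as np

def fenton_wilkinson(mu, sigma2, n):
    """Lognormal parameters (mu_S, sigma2_S) approximating the sum of
    n i.i.d. Lognormal(mu, sigma2) variables by moment matching."""
    m = n * np.exp(mu + sigma2 / 2)                          # exact mean of the sum
    v = n * (np.exp(sigma2) - 1) * np.exp(2 * mu + sigma2)   # exact variance of the sum
    sigma2_S = np.log(1 + v / m**2)
    mu_S = np.log(m) - sigma2_S / 2
    return mu_S, sigma2_S
```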

Improvement of pictures


The pdf and cdf graphs of the normal distribution are very beautiful. If the log-normal ones were changed in the same way (thicker lines, grid, ...) I think the article would be more legible. Jbbinder (talk) 12:16, 18 June 2012 (UTC)[reply]

Confusion about location and shape in the Probability distribution table, row on parameters


In the table on the right it reads

Parameters: σ² > 0 — log-scale,
μ ∈ ℝ — shape (real)

Surely, the use of "shape" must be a mistake. The shape of the distribution is determined by σ², while the location is determined by μ. This fact can easily be verified by plotting the pdf normalized by its maximum value as a function of $x/e^{\mu}$: curves with varying μ will coincide. Changing σ², on the other hand, will change the shape of the pdf.

In my opinion it would be better if the table entry read

Parameters: σ² > 0 — log-scale (shape),
μ ∈ ℝ — location (real)

Comments? — Preceding unsigned comment added by 193.11.28.112 (talk) 13:33, 24 October 2013 (UTC)[reply]

Wrong formula for parameter μ as a function of the mean and variance


The formula is clearly wrong (although it was correctly copied from the reference given).

The formula given is:

But the fraction inside the logarithm is clearly not "dimensionless", and it should be.

I have done the calculation on my own and I arrived at a similar (and dimensionally consistent) result: G Furtado (talk) 00:34, 1 December 2013 (UTC)[reply]
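For comparison, the inversion I am aware of, writing $m = \operatorname{E}(X)$ and $v = \operatorname{Var}(X)$, is:

$$\mu = \ln\!\left(\frac{m^2}{\sqrt{v + m^2}}\right), \qquad \sigma^2 = \ln\!\left(1 + \frac{v}{m^2}\right).$$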

Power laws


Power-law distributions are very similar to, but not the same as, log-normal distributions. This is mentioned in the power-law article. It should also be brought up here. — Preceding unsigned comment added by 211.225.33.104 (talk) 05:08, 11 July 2014 (UTC)[reply]

External links modified

Hello fellow Wikipedians,

I have just added archive links to one external link on Log-normal distribution. Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:

When you have finished reviewing my changes, please set the checked parameter below to true to let others know.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers. —cyberbot IITalk to my owner:Online 10:23, 29 August 2015 (UTC)[reply]

Base-unspecific entropy incorrect


An error appears to have been made in converting the entropy to use a base-unspecific log.

gives rather than

I've made the same change in the page itself. — Preceding unsigned comment added by 208.78.228.100 (talk) 00:22, 23 February 2016 (UTC)[reply]
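For reference, the base-b entropy as I understand it should reduce to the natural-log form $\mu + \tfrac{1}{2}\ln(2\pi e \sigma^2)$ when b = e, i.e.

$$H_b(X) = \log_b\!\left(\sigma\, e^{\mu + 1/2} \sqrt{2\pi}\right).$$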

Notation/Grouping Clarification in Formula


Would it be clearer to group the argument to the ``ln`` (natural log) function together like this? In scipy.stats and many textbooks there is a ``loc`` (location) parameter that is equivalent to the mean for symmetric distributions, but not for the asymmetric log-normal distribution. The ``loc`` parameter must be included within the natural log operation, but the mean should not be (for the log-normal distribution only), like (ln(x - loc) - mu)^2. So being explicit about the grouping may help comprehension and comparison with the notation in other texts and code.


Hobsonlane (talk) 17:23, 18 April 2016 (UTC)[reply]
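To make the mapping to scipy concrete, a sketch (scipy's three-parameter form is ``lognorm(s, loc, scale)``, with ``s`` playing the role of σ and ``scale`` of e^μ):

```python
import numpy as np
from scipy.stats import lognorm

mu, sigma = 0.5, 0.75
X = lognorm(s=sigma, loc=0, scale=np.exp(mu))  # the standard two-parameter log-normal

print(X.median(), np.exp(mu))                  # both e^mu
print(X.mean(), np.exp(mu + sigma**2 / 2))     # both e^(mu + sigma^2/2)
# a nonzero loc shifts the support, giving the (ln(x - loc) - mu)^2 grouping above
```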

Graph explaining relation between normal and lognormal distribution


Proposal to use the following graph on the page about the Lognormal distribution:

Relation between normal and lognormal distribution. If $X$ is normally distributed, then $Y = e^X$ is lognormally distributed.

— Preceding unsigned comment added by StijnDeVuyst (talkcontribs) 14:15, 2 December 2016 (UTC)[reply]

I think this plot has a typo on the log normal distribution, shouldn't `X ~ lnN(mu, s^2)` say `lnX ~ N(mu, s^2)` ? Kornel.j.k (talk) 08:27, 8 April 2024 (UTC)[reply]

Neuroscience citations


User:Isambard Kingdom, would you share your thinking on your recent reversion of the neuroscience citations? User:Rune2earth, would you share your thinking on supplying those citations? 𝕃eegrc (talk) 15:49, 6 January 2017 (UTC)[reply]

Self-citations by "Rune". Isambard Kingdom (talk) 15:51, 6 January 2017 (UTC)[reply]
Maybe so for the eLife citation. However, the Cell Reports and Nature Reviews citations do not appear to be self-citations and are in respectable journals. I do not know about Progress in Neurobiology.
  • Mizuseki, Kenji; Buzsáki, György (2013-09-12). "Preconfigured, skewed distribution of firing rates in the hippocampus and entorhinal cortex". Cell Reports. 4 (5): 1010–1021. doi:10.1016/j.celrep.2013.07.039. ISSN 2211-1247. PMC 3804159. PMID 23994479.
  • Petersen, Peter C.; Berg, Rune W. (2016-10-26). "Lognormal firing rate distribution reveals prominent fluctuation–driven regime in spinal motor networks". eLife. 5: e18805. doi:10.7554/eLife.18805. ISSN 2050-084X. PMC 5135395. PMID 27782883.{{cite journal}}: CS1 maint: unflagged free DOI (link)
  • Buzsáki, György; Mizuseki, Kenji (2017-01-06). "The log-dynamic brain: how skewed distributions affect network operations". Nature reviews. Neuroscience. 15 (4): 264–278. doi:10.1038/nrn3687. ISSN 1471-003X. PMC 4051294. PMID 24569488.
  • Wohrer, Adrien; Humphries, Mark D.; Machens, Christian K. (2013-04-01). "Population-wide distributions of neural activity during perceptual decision-making". Progress in Neurobiology. 103: 156–193. doi:10.1016/j.pneurobio.2012.09.004. ISSN 1873-5118. PMID 23123501.
𝕃eegrc (talk) 17:22, 6 January 2017 (UTC)[reply]

I replaced these, but took out "Rune". Isambard Kingdom (talk) 18:02, 6 January 2017 (UTC)[reply]

Plots misleading due to coarse sampling


The sigma=1,mu=0 curve in the PDF plot is misleading, since it looks like the PDF is linear in a neighborhood of x=0. I think this is because it was plotted using too few sample points. The fact that the PDF is so small near zero is important in applications, so the graph should not hide it. — Preceding unsigned comment added by 77.88.71.157 (talk) 10:13, 9 February 2017 (UTC)[reply]

Density


The way the article derives the density is incomprehensible and seems to use some magic tricks. The right way is as follows:

According to the definition, the random variable $X$ is lognormally distributed if $\ln X$ is normally distributed. Hence for $X$ lognormally distributed:

$$\ln X \sim \mathcal{N}(\mu, \sigma^2),$$

which leads to:

$$P(\ln X \le y) = \Phi\!\left(\frac{y - \mu}{\sigma}\right).$$

The distribution function of $X$ is:

$$F_X(x) = P(X \le x) = P(\ln X \le \ln x) = \Phi\!\left(\frac{\ln x - \mu}{\sigma}\right), \qquad x > 0,$$

and the density

$$f_X(x) = F_X'(x) = \frac{1}{\sigma x}\,\varphi\!\left(\frac{\ln x - \mu}{\sigma}\right) = \frac{1}{x\sigma\sqrt{2\pi}}\, \exp\!\left(-\frac{(\ln x - \mu)^2}{2\sigma^2}\right).$$

I'll change it. Madyno (talk) 08:25, 10 May 2017 (UTC)[reply]

I changed it back. While your version is correct, the formulation we had before without terms that are not in as common usage needs to be shown. Rlendog (talk) 16:18, 8 May 2017 (UTC)[reply]

The problem is that this formulation is NOT correct. I kindly called it incomprehensible, but could better have called it nonsense. I don't know who wrote it, but while this person may have some understanding of mathematics, they have no knowledge of probability theory. As an example I give you the first sentence of the section:

A random positive variable $x$ is log-normally distributed if the logarithm of $x$ is normally distributed,

I do not think it is common usage to speak of 'a random positive variable', and it is not common usage to use the small letter $x$ for a r.v., although that's no crime; but it is a complete mistake to use the same letter for the real number in the formula of the density and treat it as the r.v. From there on no good can be done anymore. I hope you have enough knowledge of the subject to understand what I'm saying. Madyno (talk) 09:54, 9 May 2017 (UTC)[reply]

BTW, I checked again the article Cumulative distribution function, and the notation I used is completely in line with this article. The use of $\varphi$ and $\Phi$ for the pdf and cdf of the standard normal distribution comes straight from the article on that topic. Madyno (talk) 17:53, 9 May 2017 (UTC)[reply]

I think "Madyno" 's version is better. The use of to refer to the normal density is not standard, and Madyno's method is more elementary, straightforward, and self-contained, and is expressed in more standard language. Moreover, it is grotesquely incorrect to first use lower-case x to refer to a random variable and then in the very next line call it capital X without a word about the inexplicable change in notation. How then would we understand an expression like
in which capital X and lower-case x obviously refer to two different things? When you write
with lower-case x in ƒX(x) used as the argument and the capital X in the subscript, then you can tell what is meant by ƒX(3) and ƒY(3). Michael Hardy (talk) 20:04, 10 May 2017 (UTC)[reply]
I am comfortable with the change now. Thanks. Rlendog (talk) 21:04, 10 May 2017 (UTC)[reply]

Location and scale


It strikes me as odd to call μ and σ respectively location and scale parameter. They function as such in the underlying normal distribution, but not in the derived log-normal distribution. Madyno (talk) 20:37, 9 May 2017 (UTC)[reply]

I have expunged all of the statements to the effect that those are location and scale parameters for this family of distributions. That is far worse than "odd". Michael Hardy (talk) 20:49, 10 May 2017 (UTC)[reply]

"\ln\mathcal N" ?


Within this article I found the notation

$$\ln\mathcal{N}$$

apparently being used to refer to the lognormal density function, and earlier I found (and got rid of) the use of $\mathcal{N}$ for the normal density. If $\mathcal{N}$ is a normal density, then $\ln\mathcal{N}$ should denote the logarithm of the normal density, not the density of the lognormal. Michael Hardy (talk) 21:01, 10 May 2017 (UTC)[reply]

"location and scale"


The claim that μ and σ are location and scale parameters for the family of lognormal distributions is beyond horrible. Michael Hardy (talk) 21:06, 10 May 2017 (UTC)[reply]

I have added to the article the statement that eμ is a scale parameter for the lognormal family of distributions. Michael Hardy (talk) 21:36, 10 May 2017 (UTC)[reply]

Dimensions


If $X$ is a dimensional random variable (i.e. it has physical units, like a lot of the examples), then what are the units of quantities like $\ln X$ and its mean $\mu$? We can take the log of 1.85 but how do we take the log of 1.85 metres? Fathead99 (talk) 14:48, 24 July 2017 (UTC)[reply]

We don't take the log of such a quantity. Normally the log is taken from the dimensionless ratio of the quantity itself and some reference value. Madyno (talk) 20:40, 26 July 2017 (UTC)[reply]

PDF and CDF plots confusing


Colors and standard deviations do not match between the plots, which could be confusing to a casual reader — Preceding unsigned comment added by Mwharton3 (talkcontribs) 21:21, 22 August 2018 (UTC)[reply]

I made the colors match and also evaluated the plot at more points now. I think these are better illustration plots than the previous ones. If you notice any other deficiencies with the plots, please write me. It's probably quicker for me to change things, as I now have code to produce the plots. Xenonoxid (talk) 00:48, 28 January 2022 (UTC)[reply]

Geometric mean


Could someone explain what the geometric mean of a random variable is? Madyno (talk) 22:26, 25 January 2019 (UTC)[reply]

https://en.wikipedia.org/wiki/Geometric_mean It is the antilogarithm of the mean logarithm of the values of a random variate. 207.47.175.199 (talk) 17:41, 2 June 2022 (UTC)
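In formulas, for the log-normal case:

$$\operatorname{GM}(X) = e^{\operatorname{E}(\ln X)} = e^{\mu} = \operatorname{median}(X).$$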

log-Lévy distribution bug report


The section "Occurrence and applications" contains a misleading link. Specifically, "log-Lévy distributions" links to https://en.wikipedia.org/wiki/L%C3%A9vy_skew_alpha-stable_distribution, a URL that redirects to https://en.wikipedia.org/wiki/Stable_distribution, a page that does not have the annotation "Redirected from log-Lévy distributions" at its top. Now, consistent with the redirected URL, the Stable distribution page does contain the annotation "(Redirected from Lévy skew alpha-stable distribution)". But the Stable distribution page never actually says anything at all about "Lévy skew" anything, and for that matter, never anything at all about "log-Lévy" anything. There's just a complete, total disconnect. Page Notes (talk) 16:13, 12 March 2019 (UTC)[reply]

Is it really the max entropy distribution?


At the end of the intro, the article currently says "The log-normal distribution is the maximum entropy probability distribution for a random variate X—for which the mean and variance of ln(X) are specified."

Are we sure this is correct? I tried looking at the source, and it barely seems to mention the log-normal distribution at all, let alone argue that the log-normal distribution is the maximum entropy probability distribution for any random variable for which the mean and variance of the ln of the variable is defined.

I haven't spent lots of time looking into this, so sorry if I'm missing something. SanjayRedScarf (talk) 23:00, 18 February 2023 (UTC)[reply]

Location parameter modifications


Someone is changing the location parameter, which is $e^{\mu}$, to be just $\mu$, which is mistaken. Please someone review and block the user's IP. 45.181.122.234 (talk) 02:50, 27 November 2023 (UTC)[reply]

PDF plot mean value


The pdf plot in the top right states each distribution has mean = 0. Visually, it appears each distribution has mean = 1. Drewscottt (talk) 01:46, 20 March 2024 (UTC)[reply]