Talk:Lazy evaluation


Demonstration of lazy evaluation

I have a function in Haskell which can sort infinite lists. Of course it takes infinite time, but it shows the power of lazy evaluation:

import Data.List
import System.Random
import Control.Monad

sortingStream :: (Ord a) => [a] -> [[a]]
sortingStream = tail . iterate ss

showss r n = print $ take 30 $ sortingStream r !! n

main = do
        s <- newStdGen :: IO StdGen
        let r = randomRs (1,100000) s :: [Int]
        forM_ [0..] (showss r)

ss :: (Ord a) => [a] -> [a]
ss [] = []
ss [x] = [x]
ss [x,y]    | x <= y = [x,y]
            | otherwise = [y,x]
ss (x:y:xs) | x <= y  =  x: ss (y:xs)
            | otherwise =  y: ss (x:xs)

Would it be interesting to add? Or is it too long and not relevant?

Edgar Klerks —Preceding unsigned comment added by 195.241.100.248 (talk) 13:02, 24 June 2010 (UTC)

Expressions and variables

Surely it is expressions which are evaluated, not variables? -- The Anome

Yes, I think that's the usual way to say it. If you have referential transparency, as in Haskell, then it doesn't make much difference. It's been a while since I did much Scheme, and I don't think it had delayed variables then, but around R4RS there was a delay form that created a promise, delaying evaluation of an expression, and a force procedure to deliver on the promise.
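For concreteness, the promise idea mentioned above can be sketched in Haskell (Promise, delay and force here are illustrative names, not a standard API; because Haskell is lazy, the wrapped expression is already a thunk and the implementation is almost trivial):

```haskell
-- A rough Haskell analogue of Scheme-style promises (illustrative names).
-- The delayed expression is only evaluated when the promise is forced
-- and its value is actually demanded.
data Promise a = Promise a

delay :: a -> Promise a          -- analogous to Scheme's (delay expr)
delay = Promise

force :: Promise a -> a          -- analogous to Scheme's (force p)
force (Promise x) = x
```

For example, `force (delay (error "boom"))` only raises the error once the result is actually inspected, not when the promise is constructed.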

So, IMHO, we should talk about expressions first, with delayed variables as one possible way of expressing lazy evaluation.

We should also say something about graph versus tree reduction.

Tim Goodwin

X Window

I'm not sure the analogy to X Window's window managers is valid. AFAIK, the window manager in X is only responsible for the contents and behaviour of window borders, and manipulation of those. Exposing and redrawing window content is part of the X protocol, implemented by the 'widget set', a software library that helps abstract the drawing and manipulation of objects within a window.

Mark-Jan

C/C++ example

The C/C++ example with the compound predicate was wrong. I checked Stroustrup (it was closer to hand than K&R), and C++ guarantees the same behaviour with these operators as Java does. If your compiler doesn't, it's broken. --Robert Merkel 05:03, 20 Apr 2004 (UTC)

Actually I am not sure what the example is trying to show. Isn't it just an example of short-circuited evaluation? Besides, I don't see what a short-circuit has to do with lazy evaluation at all. -- Taku 05:11, Apr 20, 2004 (UTC)
I agree with Taku -- if C was a language that documented that it might short-circuit an expression as an optimisation, the example could be valid. If I have time I'll look for a better example. I guess the right way to say it is that short-circuiting is lazy evaluation at the meta level: "In theory I want to evaluate myfunc(b), but in fact it is never needed". -- Mark Hurd 16:09, 19 Jun 2004 (UTC)
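In a lazy language the connection is direct: short-circuiting is just ordinary lazy evaluation of the second operand, not a special rule. A Haskell sketch (myAnd is an illustrative name):

```haskell
-- A user-defined version of (&&) needs no built-in short-circuit rule:
-- the second argument is an ordinary lazy argument that is simply never
-- demanded when the first is False.
myAnd :: Bool -> Bool -> Bool
myAnd False _ = False   -- second argument left unevaluated
myAnd True  b = b
```

So `myAnd False (error "side effect")` returns False without touching its second argument; in C, by contrast, `&&` must be built into the language to obtain this behaviour.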

Lazy evaluation as a design pattern

I think the "lazy evaluation as design pattern" section is misguided. This kind of pattern is usually called either "event-driven" or "reactive" (depending on the community you're a part of). This design pattern is fundamentally based on a push model, where changes are actively propagated to their dependents, rather than the pull model that lazy evaluation uses.

Kimbly 16:46, 7 January 2006 (UTC)

I also think it's misleading to talk about design patterns (which are ad hoc techniques that programmers adopt when they are using languages that are inadequate to express the abstractions they need to express) in an article about lazy evaluation, which is a programming language design construct that is backed up with formal semantics. Perhaps it would be best to merge this section into Design patterns, and add a note about the "lazy evaluation as design pattern" idea under "See also" or such. Catamorphism 19:18, 7 January 2006 (UTC)
The terms "lazy evaluation" and "design pattern" are both very loaded words in certain parts of CS; the section on "lazy evaluation as a design pattern" doesn't even describe a design pattern as described in that article. The idea that section was trying to express seems to be well-covered in "minimal evaluation", which is already linked. --bmills 04:56, 8 January 2006 (UTC)

Evaluation strategies

I'm not sure I'm completely qualified to write it, but this page should probably mention terms like "call by need" and "leftmost outermost evaluation". The latter term tends to be used when describing lazy evaluation in the context of the lambda calculus, so a brief section on (or at least a link to) LC would be useful. Kimbly 19:36, 14 January 2006 (UTC)

Evaluation strategy covers the call-by-need/call-by-name distinction, so I added a paragraph linking to that. Catamorphism 21:10, 14 January 2006 (UTC)
I also edited Evaluation strategy to mention the terms "leftmost outermost" and "leftmost innermost". Catamorphism 21:25, 14 January 2006 (UTC)
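To illustrate why the leftmost-outermost (normal-order) strategy mentioned above matters, a small Haskell sketch (constK and the other names are illustrative):

```haskell
-- Under leftmost-outermost reduction, the outermost redex is contracted
-- first, so an argument the function discards is never evaluated,
-- even a non-terminating one. Innermost-first evaluation would diverge.
constK :: a -> b -> a
constK x _ = x

loop :: Int
loop = loop              -- reduces to itself forever

result :: Int
result = constK 42 loop  -- normal order discards loop
```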

Fuzziness

The article is really fuzzy with respect to actually defining lazy evaluation (probably in part because lazy programming literature tends to be a bit vague). The article explains what lazy evaluation does, but not what it is. Does lazy evaluation encompass all non-strict evaluation strategies? What about parallel call-by-need, which maximizes (rather than minimizes) the amount of computation performed? Similarly, delayed evaluation is also left undefined. —donhalcon 04:19, 7 March 2006 (UTC)


Narrower Definition

From the research that I've done, lazy evaluation refers to how expressions are evaluated when they are passed as arguments to functions, and entails the following three points:

  1. The expression is only evaluated if the result is required by the calling function.
  2. The expression is only evaluated to the extent required by the calling function.
  3. The expression is never evaluated more than once.
The article seems to be erroneous here. Following David A. Schmidt, "Denotational Semantics", p. 181: delayed evaluation is the first item above. The third item is, in David A. Watt, "Programming Language Concepts and Paradigms", p. 100, called applicative-order evaluation or eager evaluation; the opposite is called normal-order evaluation. Adding the two last items gives lazy evaluation. Call-by-need evaluation requires full evaluation once it has been started. Haberg 09:29, 17 October 2007 (UTC)
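The three numbered points can be demonstrated directly in Haskell (a sketch; the function names are illustrative):

```haskell
import Debug.Trace (trace)

-- Point 1: only evaluated if required. The discarded component of the
-- pair is never touched, so the error is never raised.
onlyIfNeeded :: Int
onlyIfNeeded = fst (1, error "never evaluated")

-- Point 2: only evaluated as far as required. Just two cells of the
-- infinite list are ever constructed.
partially :: [Int]
partially = take 2 [1 ..]

-- Point 3: never evaluated more than once. The shared binding x is
-- computed a single time, so "computed" is traced once, not twice.
atMostOnce :: Int
atMostOnce = let x = trace "computed" (1 + 2) in x + x
```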

This is something quite similar to call by need though the context is a bit different. The definition given in this entry is broader and is equivalent to what I would have called non-strict evaluation. My reading has been pretty much confined to functional programming so there may well be a bias here, but could someone provide a citation where the broader definition shown in this entry is given? To play fair, one citation for the narrower definition above is given in "Conception, Evolution, and Application of Functional Programming Languages", Paul Hudak, ACM Computing Surveys, Vol. 21, No. 3, September 1989, pp. 383-385. Abcarter 15:56, 14 December 2006 (UTC)

I'm continuing to survey the literature and it confirms that lazy evaluation is a type of non-strict evaluation, which is essentially equivalent to call-by-name. For example

  1. Concepts, Techniques, and Models of Computer Programming by Peter Van Roy, pp. 334-335.
  2. Practical Foundations for Programming Languages by Robert Harper, p. 268.
  3. Programming Languages by Mike Grant, Christian Skalka, and Scott Smith, p. 28.
  4. Type Theory and Functional Programming by Simon Thompson, p. 31.

I'm not saying this is an absolute, but it does appear to be the more common definition, a fact that should be taken note of in the entry. Abcarter 02:30, 19 December 2006 (UTC)
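The call-by-name versus call-by-need distinction discussed above can be made observable by counting how often a suspended expression is evaluated. A Haskell sketch, modelling the suspension as an IO action so the evaluations can be counted (byName and byNeed are illustrative names):

```haskell
import Data.IORef

-- Call-by-name: the suspended expression is re-run at every use.
byName :: IO Int -> IO Int
byName expr = do
  a <- expr            -- first use: evaluate
  b <- expr            -- second use: evaluate again
  return (a + b)

-- Call-by-need (lazy evaluation): evaluate once, then share the result.
byNeed :: IO Int -> IO Int
byNeed expr = do
  v <- expr            -- evaluated exactly once
  return (v + v)       -- later uses share v
```

Both produce the same answer for a pure expression; the difference shows up in how many times the work is done (twice for byName, once for byNeed).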

Should not be merged with delay

I think these are sufficiently distinct concepts that they ought to live in separate articles. --Saforrest 02:31, 14 April 2007 (UTC)

I agree, as I have never heard of "delay programming", while lazy evaluation is a somewhat important and basic topic in functional programming. If nobody thinks the articles should be merged, can we remove the banner from the front page? (Perhaps next time someone sees this note, if nobody has responded with disagreement, the banner should be removed.) --Piojo 19:30, 4 October 2007 (UTC)

Since it's been over a month since the previous comment, I removed the tag. Nibios 15:16, 9 November 2007 (UTC)

List of Lazy Languages

I am a Haskell neophyte, looking for a list of lazy programming languages. It seems to me that laziness and type inference are orthogonal features. Does laziness have any "low-level" implementations?... maybe in terms of the π-calculus, for example?

"As most programming languages are Turing-complete"

Most?

WikiWikiWeb:TuringComplete and WikiWikiWeb:SqlFlaws say that most programming languages are Turing-complete, with the major exception that (most versions of) SQL are not Turing complete. Does that answer your question? --68.0.124.33 (talk) 03:01, 4 March 2009 (UTC)
I think the original question is still valid. First of all, that's a pretty big claim, and there aren't any references for it. Moreover, there's no obvious connection between Turing-completeness and the ability to implement lazy evaluation in a language. I don't think the ternary operator is an example of this, because that's something that is provided by the language, not something that is written in the language. Danielx (talk) 16:58, 28 April 2009 (UTC)
The connection between Turing completeness and the ability to implement lazy evaluation is that in a Turing-complete language you can implement anything. Many lazy languages will have interpreters written in C/C++. 84.45.45.84 (talk) 10:39, 28 January 2010 (UTC)
So your point is that since C/C++ is Turing-complete you can support any control structure in the interpreter? That's not what the article says - it talks about defining control structures within the language. Turing-complete languages can compute any function, but they can't "implement anything". The article is talking about programming language constructs ("lazy control structures [...] defined normally"), and Turing-complete languages can differ a lot in that respect. For instance, a language does not need to support both loops and recursion in order to be Turing-complete.

To further exemplify that the article is incorrect, in Java you just can't implement a while loop as a function. In languages which support closures, you can have control structures, but you usually need to write the loop body as a closure (as done in Smalltalk, indeed, for all control structures offered by the language). In Scala, finally, you can use call-by-name parameters for that (which in Scala are just closures requiring no arguments, i.e. of arity 0).
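For contrast with the Java case, in a lazy language control structures really can be defined as ordinary functions, with no closures or call-by-name annotations required. A Haskell sketch (if' and while are illustrative names):

```haskell
-- An if-expression as a plain function: laziness guarantees the untaken
-- branch is never evaluated, so no special language support is needed.
if' :: Bool -> a -> a -> a
if' True  t _ = t
if' False _ e = e

-- A while loop as a plain function over an explicit state value.
while :: (s -> Bool) -> (s -> s) -> s -> s
while cond step s
  | cond s    = while cond step (step s)
  | otherwise = s

-- Example: sum the integers 1..10 with the while "loop".
sumTo10 :: Int
sumTo10 = snd (while (\(i, _) -> i <= 10)
                     (\(i, acc) -> (i + 1, acc + i))
                     (1, 0))
```

Note that while is just recursion and would also work in a strict language; it is if' (and generally any construct that must skip evaluating an operand) that depends on laziness.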

Therefore, I removed the mention of Turing-completeness and wrote a short summary of the above. --Blaisorblade (talk) 03:57, 22 February 2011 (UTC)

I think everyone here (except the anonymous contributor) is missing the point. By definition, any Turing complete language can be used to write an interpreter for any other Turing complete language. Thus, people have written LISP interpreters in FORTRAN 66 that could be called as subroutines when non-FORTRAN semantics were required, thereby implementing (among other things) recursion in a non-recursive language. Nowadays, the same thing is still done, embedding (for example) Lua into C. So, yes, in Java you *can* implement a while loop as a function, it's just implemented a bit differently than you might have expected. I'm reverting the removal. Vrmlguy (talk) 16:57, 25 February 2011 (UTC)

Control structures

The (pseudo?)code and its explanation are unclear; a plain-language interpretation of the code block would be helpful. chrylis (talk) 21:13, 20 February 2009 (UTC)

merge

The articles lazy evaluation, lazy initialization, and lazy loading currently don't clearly draw the distinction between them. Is there a clear distinction between them? If so, the articles should make it more obvious what that distinction is. If not, then we should merge the articles. --68.0.124.33 (talk) 03:33, 4 March 2009 (UTC)

I agree that lazy initialization and lazy loading need more definition than the examples currently provide.
Lazy evaluation is IMHO more often a compiler, or at least lower-level, term. Mark Hurd (talk) 06:59, 4 March 2009 (UTC)

Let's get rid of the "Delayed evaluation" section

According to the intro, "lazy evaluation" and "delayed evaluation" are synonymous. In that case, there shouldn't be a separate section on the latter; otherwise, there needs to be a clearer distinction between the two. Danielx (talk) 17:07, 28 April 2009 (UTC)

Fundamentally flawed

This article contains several huge mistakes. E.g. lazy evaluation is usually defined as normal-order (or non-strict) evaluation with sharing of common expressions. The article claims it is applicative-order evaluation, the exact opposite. —Ruud 22:16, 22 January 2011 (UTC)

External links modified

Hello fellow Wikipedians,

I have just added archive links to one external link on Lazy evaluation. Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:

When you have finished reviewing my changes, please set the checked parameter below to true to let others know.

An editor has reviewed this edit and fixed any errors that were found.

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—cyberbot II (Talk to my owner: Online) 11:30, 27 February 2016 (UTC)

Simulating laziness in eager languages

I have added a brief demonstration for simulating laziness in JavaScript (it is somewhat equivalent to a transpilation of a Haskell implementation using Control.Monad.Memo.Vector.Unsafe). I suggest, in the context of this article, it would benefit the reader if the demonstrations for other languages would also be expressed as brief demonstrations; not tutorials including details such as version differences, syntactic sugar, alternative approaches, or other notes. 85.255.233.198 (talk) 09:37, 21 February 2021 (UTC)
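For reference, the mechanism such simulations implement (a suspended computation paired with a memo cell, forced at most once) can be sketched in Haskell's IO layer, which makes the bookkeeping an eager language must perform explicit (the Thunk type and helper names are illustrative):

```haskell
import Data.IORef

-- A thunk is a suspended computation plus a memo cell:
-- Left act = not yet evaluated, Right v = evaluated, result cached.
newtype Thunk a = Thunk (IORef (Either (IO a) a))

-- Suspend a computation without running it.
mkThunk :: IO a -> IO (Thunk a)
mkThunk act = Thunk <$> newIORef (Left act)

-- Force a thunk: run the computation at most once, then reuse the result.
forceThunk :: Thunk a -> IO a
forceThunk (Thunk ref) = do
  state <- readIORef ref
  case state of
    Right v  -> return v          -- already evaluated: cached value
    Left act -> do
      v <- act                    -- first demand: evaluate now
      writeIORef ref (Right v)    -- memoize for later demands
      return v
```

This is essentially what the Java and JavaScript demonstrations do by hand, and what a lazy language's runtime does automatically.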

  • Looking over the "simulating laziness in other languages" examples, it is clear that there are several issues here:

    -- A Java example that shows, in enough detail to be broadly applicable, how lazy evaluation and memoization are actually implemented in functional programming languages. That example doesn't say that it serves this purpose, but it really does show the underlying implementation mechanism.
    -- Examples that show interesting "almost like lazy evaluation" mechanisms in other languages, for example, how to use coroutines to get an effect that isn't lazy but that is like being lazy. I wonder if these really belong here?
    -- Examples that merely illustrate laziness without explaining the mechanism. I wonder if these would be clearer if they all used their respective languages to solve the same problem?

    As it stands, the examples are something of a jumble. Douglas W. Jones (talk) 18:26, 27 April 2021 (UTC)