Hi,
I have a problem trying to iterate over a multimap (or rather map within map).
I have this:
But for a map within a map it won't work.

Code:

    #include <map>
    #include <string>
    using namespace std;

    int main()
    {
        map<string, map<string, string> > mymap;
        map<string, map<string, string> >::iterator itm;

        /* This is how I would iterate over a normal map:
        for( it=mymap.begin(); it!=mymap.end(); ++it) {
            cout << "first: \t" << it->first << "second: \t" << it->second << endl;
        }
        OR
        cout << mymap.find("ab")->second << endl;
        */

        return 0;
    }
Can someone help?
Thanks.

Source: https://cboard.cprogramming.com/cplusplus-programming/150918-iterating-over-multi-map.html
Python in a Nutshell, by Alex Martelli, 2003 O'Reilly, 636 pages. Perhaps the best book about Python ever written, this book is the perfect capstone to anyone's library of Pythonic books, and also the perfect introduction to Python for anyone well versed in other programming languages. For newbies to programming, this would still be a good second book after a good introductory book on Python, such as Learning Python by Mark Lutz. It covers programming with namespaces, iterators, generators, and new-style division. List comprehensions are made comprehensible. The passing of time may make this book no longer the most up-to-date reference on the newest features added to Python. But time cannot erase the quality craftsmanship and the sheer joy of reading such a well thought out masterpiece of Pythonic literature.

Ron Stephens

27+ reviews of books about Python
67+ links to online tutorials about Python and related subjects
Daily newsfeed of Pythonic web articles, new sourceforge projects, etc.
About fastcall22
fastcall22 replied to blueshogun96's topic in General and Gameplay Programming

There are some non-obvious problems with this setup, but let’s cover some CSS basics first. Every HTML element has a set of properties that describe how it should be laid out on a page and how it should be rendered. They can be accessed through its style property in Javascript. You can write Javascript or add a style attribute to each HTML element to give your page the look and feel, but it quickly becomes tedious and cumbersome.

Enter CSS. It allows you to apply styles to elements determined by rules. These rules can even be stored in a separate file, so you don’t have to clutter your HTML. Here’s a CSS cheatsheet.

Now, on to style matching. If you want to match styles, then open your browser’s developer tools and inspect random elements on the page. Poke at their CSS properties and find out what they’re made of. Take note of the font-family, font-size, line-height, color, background, padding and margin properties, as they are a significant chunk of the look and feel.

--

Next, let’s talk about RSS. When querying an RSS feed, you are met with an XML document with a summary of recent posts. Here’s an example RSS feed. The problem is that this XML isn’t usable HTML. It will need to be transformed into HTML. You’ve opted to use rssdog.com to do this for you. The query string parameters you pass to rssdog tell it which RSS feed to fetch and how to render that HTML, along with some basic styling. By using an iframe tag with this URL, the user’s browser will make a separate request on your homepage to rssdog with your parameters, and rssdog will fill that space with the resulting HTML. Browsers are particular about cross-domain interactions for good reason, so while this iframe is a part of your homepage, it is considered a separate web page and has protections against modification. More on this later.

--

So, options at this point:

1. Render the XML yourself, server side. This is the easiest option, as you can embed the resulting HTML directly into your page and you can retain your styling. You will need to research what server side languages are available with your web hosting provider. You will then need to insert a server side script on your homepage that will download the XML from the RSS feed (using CURL, for example), traverse the resulting XML document, and render the HTML.

2. Render the XML yourself, client side. Include some Javascript to invoke an XHR request to your feed, receiving the XML, and dynamically creating HTML elements from it. This requires that the server send an Access-Control-Allow-Origin header to grant access from shogun3d.net to blog.shogun3d.net. Check with your web hosting provider on how to send that header. After enabling the option, use the network tab in your developer tools to ensure that the header was sent. (Be sure to hard refresh to prevent your browser using a cached version of the page and headers.) The theme you have selected has jQuery included, so you can use jQuery.ajax to fetch your XML. Otherwise, you can use a raw XMLHttpRequest.

3. Manually sync your homepage with your blog. This is the most annoying option.
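As an illustrative sketch of option 2, here is roughly what the transform step might look like once the XML has been fetched (the fetching itself would use jQuery.ajax or XMLHttpRequest as described above; the function name and the sample feed below are hypothetical):

```javascript
// Turn raw RSS XML into a string of <li> elements, one per post title.
// This deliberately uses a simple regex rather than a full XML parser,
// so it runs anywhere; a real page might prefer DOMParser.
function rssTitlesToHtml(xml) {
  var titles = [];
  var re = /<title>([\s\S]*?)<\/title>/g;
  var m;
  while ((m = re.exec(xml)) !== null) {
    titles.push(m[1]);
  }
  // Skip the first <title>, which names the feed itself rather than a post.
  return titles.slice(1).map(function (t) {
    return "<li>" + t + "</li>";
  }).join("");
}

var sample =
  "<rss><channel><title>My Blog</title>" +
  "<item><title>Post one</title></item>" +
  "<item><title>Post two</title></item>" +
  "</channel></rss>";

console.log(rssTitlesToHtml(sample)); // <li>Post one</li><li>Post two</li>
```

In the page itself you would then insert the returned string into a container, e.g. `document.getElementById("feed").innerHTML = rssTitlesToHtml(xml);`, which keeps the markup under your own stylesheet instead of rssdog's.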
fastcall22 replied to mrpeed's topic in For Beginners

Since
fastcall22 replied to HunterBattles's topic in For Beginners

At this very moment, no, but it's still too early to tell! :^) Programming is an art and can more or less carry over to other languages; don't worry about choosing the "right language" and just focus on learning the fundamentals with one. Read up on Peter Norvig's Teach Yourself Programming in Ten Years, and reflect on where you are at now with Java. Good luck!
fastcall22 commented on Embassy of Time's blog entry in Creating Complexity

Of all the blog posts that need pictures, this needs it the most… !! ᕕ( ᐛ )ᕗ
- Gaffer addresses this by clamping the elapsed time per loop to a certain number of updates. There are other ways this can happen as well, such as pausing the game while you’re in the debugger. :^)
- That’s more or less the gist of the game loop. I recommend also supplementing your reading with Gaffer’s Fix Your Timestep. It is a similar implementation and may answer your other questions.
fastcall22 replied to Finalspace's topic in Coding Horrors

bool intersects(const AABB& a, const AABB& b) {
    AABB test = {
        min(max(a.min, b.min), b.max),
        min(max(a.max, b.min), b.max),
    };
    return (test.max - test.min).sq_len() > FLOAT_EPSILON;
}

Erm, I mean… Post each query to a math forum, using a rendering of the two AABBs on a 3D graph. If the account gets banned for spamming, just keep making new ones.
fastcall22 commented on riuthamus's blog entry in Valley of Crescent Mountain

Looks great!
fastcall22 replied to LennyLen's topic in GDNet Comments, Suggestions, and Ideas

It was one of Adam Sandler's better films, but it still wasn't that great of a movie… ᕕ( ᐛ )ᕗ
fastcall22 replied to Smarkus's topic in Graphics and GPU Programming

Then it must be a typo, consider the following:

struct Foobar {
    int x;

    void frobnicate() {
        cout << this->x;
    }
};

int main() {
    Foobar* f = new Foobar {3};

    if ( f = nullptr ) { // Compiler should warn about this line
        return -1;
    }

    f->frobnicate(); // This will always crash
    delete f;
}
fastcall22 replied to Shinrai's topic in For Beginners

Simply interpolate between the old and new positions in the grid and ignore input during the animation. Perhaps something like:

void Player::update(float dt) {
    if ( this->move_state == MoveState::STOPPED ) {
        int2 dir = this->process_move_input();
        if ( dir ) {
            this->move_state = MoveState::MOVING;
            this->move_timer = this->move_speed; // in tiles per second
            this->old_pos = this->pos;
            this->pos += dir;
        }
    } else if ( this->move_state == MoveState::MOVING ) {
        this->move_timer -= dt;
        if ( this->move_timer <= 0 ) {
            this->move_timer = 0;
            this->move_state = MoveState::STOPPED;
        }
    }
}

void Player::draw() {
    int2 pos_px;

    if ( this->move_state == MoveState::MOVING ) {
        // move_timer goes from 1 to 0, so math on positions is reversed
        // from the traditional A→B interpolation
        pos_px = this->pos + float2(this->old_pos - this->pos) * (this->move_timer / this->move_speed) * TILE_SIZE_PX;
    } else {
        pos_px = this->pos * TILE_SIZE_PX;
    }

    // draw avatar at pos_px
}
fastcall22 commented on khawk's blog entry in GameDev.net Staff Blog

Still getting the hang of where things have moved to, but other than that everything looks and feels great!
fastcall22 replied to Pilpel's topic in General and Gameplay Programming

Yes, you should process all events in the queue, instead of one per frame:

while (!quit) {
    // if (win.pollEvent(ev)) {
    while (win.pollEvent(ev)) {
fastcall22 replied to myvraccount's topic in For Beginners

The `EncryptionMode.Decrypt` logic doesn't initialize the decoding stream with the data provided by the argument. According to MemoryStream(byte[]) it should read something like:

ICryptoTransform decryptor = aesm.CreateDecryptor(aesm.Key, aesm.IV);
using (MemoryStream ms = new MemoryStream(data)) {
fastcall22 replied to Finalspace's topic in General and Gameplay Programming

Any* convention is fine as long as you are consistent.

*With a few exceptions
ASF GitHub Bot commented on ARTEMIS-1093:
-----------------------------------------
Github user tabish121 commented on a diff in the pull request:
--- Diff: tests/integration-tests/src/test/java/org/apache/activemq/artemis/tests/integration/amqp/AmqpSendReceiveTest.java
---
@@ -53,38 +56,61 @@
import org.apache.qpid.proton.engine.Sender;
import org.jgroups.util.UUID;
import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* Test basic send and receive scenarios using only AMQP sender and receiver links.
*/
+@RunWith(Parameterized.class)
--- End diff --
This seems like an unnecessary change to this test. The intent here is to test basic
AMQP protocol support not to test the FQQN support in the broker which it appears is already
done in the ProtonFullQualifiedNameTest. I'd prefer if we could keep tests targeted and not
try and make them test the entirety of the broker features in one monolithic test case as
that makes it harder to maintain. If you changed this one why didn't you change every single
AMQP test in the same fashion?
> Full qualified queue name support
> ---------------------------------
>
> Key: ARTEMIS-1093
> URL:
> Project: ActiveMQ Artemis
> Issue Type: Bug
> Components: Broker
> Affects Versions: 2.0.0
> Reporter: Howard Gao
> Assignee: Howard Gao
> Fix For: 2.next
>
>
> Broker should support full qualified queue names (FQQN) as well as bare queue names.
This means when clients access to a queue they have two equivalent ways to do so. One way
is by queue names and the other is by FQQN (i.e. address::qname) names. When accessing a queue
by its bare name, it is required that the name should be unique across all addresses.
> Otherwise a warning is given and client should use FQQN instead.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346) | http://mail-archives.apache.org/mod_mbox/activemq-issues/201704.mbox/%3CJIRA.13061608.1491361989000.6938.1492700284147@Atlassian.JIRA%3E | CC-MAIN-2018-26 | refinedweb | 300 | 59.4 |
I was experimenting with Prompt today and found that you can get a
"restricted monad" style of behavior out of a regular monad using Prompt:

> {-# LANGUAGE GADTs #-}
> module SetTest where
> import qualified Data.Set as S

Prompt is available from

> import Control.Monad.Prompt

"OrdP" is a prompt that implements MonadPlus for orderable types:

> data OrdP m a where
>    PZero :: OrdP m a
>    PRestrict :: Ord a => m a -> OrdP m a
>    PPlus :: Ord a => m a -> m a -> OrdP m a

> type SetM = RecPrompt OrdP

We can't make this an instance of MonadPlus; mplus would need an Ord
constraint. But as long as we don't import it, we can overload the name.

> mzero :: SetM a
> mzero = prompt PZero

> mplus :: Ord a => SetM a -> SetM a -> SetM a
> mplus x y = prompt (PPlus x y)

"mrestrict" can be inserted at various points in a computation to optimize
it; it forces the passed in computation to complete and uses a Set to
eliminate duplicate outputs. We could also implement mrestrict without an
additional element in our prompt datatype, at the cost of some performance:

   mrestrict m = mplus mzero m

> mrestrict :: Ord a => SetM a -> SetM a
> mrestrict x = prompt (PRestrict x)

Finally we need an interpretation function to run the monad and extract a
set from it:

> runSetM :: Ord r => SetM r -> S.Set r
> runSetM = runPromptC ret prm . unRecPrompt where
>    -- ret :: r -> S.Set r
>    ret = S.singleton
>    -- prm :: forall a. OrdP SetM a -> (a -> S.Set r) -> S.Set r
>    prm PZero _ = S.empty
>    prm (PRestrict m) k = unionMap k (runSetM m)
>    prm (PPlus m1 m2) k = unionMap k (runSetM m1 `S.union` runSetM m2)

unionMap is the equivalent of concatMap for lists.

> unionMap :: Ord b => (a -> S.Set b) -> S.Set a -> S.Set b
> unionMap f = S.fold (\a r -> f a `S.union` r) S.empty

Oleg's test now works without modification:

> test1s_do () = do
>   x <- return "a"
>   return $ "b" ++ x

> olegtest :: S.Set String
> olegtest = runSetM $ test1s_do ()
> -- fromList ["ba"]

> settest :: S.Set Int
> settest = runSetM $ do
>   x <- mplus (mplus mzero (return 2)) (mplus (return 2) (return 3))
>   return (x+3)
> -- fromList [5,6]

What this does under the hood is treat the computation on each element of
the set separately, except at programmer-specified synchronization points
where the computation result is required to be a member of the Ord
typeclass. Synchronization points happen at every "mplus" & "mrestrict";
these correspond to a gathering of the computation results up to that
point into a Set and then dispatching the remainder of the computation
from that Set.

  -- ryan
Agenda
<JeniT> ScribeNick: darobin
<scribe> Scribe: Robin Berjon
Date: 15 Mar 2012
NM: regrets next week?
RB: yes, I'm in China
NM: probable cancellation on the 29th, but we'll see later
<noah>
RESOLUTION: minutes 20120308 approved
NM summarises agenda
NM: we'll look at the f2f agenda and revisit if needed
ACTION-641?
<trackbot> ACTION-641 -- Noah Mendelsohn to try and find list of review issues relating to HTML5 from earlier discussions -- due 2012-03-01 -- PENDINGREVIEW
<trackbot>
NM: Larry suggested finding priority list, in the meantime we more or less decided not to do HTML.next
... Robin, how confident about API Privacy?
RB: 10 days earlier?
NM: not hard, but earlier is useful
RB: I'll have it done
NM: logistics?
... agenda still rough, will take shape based on advice today
... looking for guidance on http+aes
LM: i think things have moved on and that it isn't we shouldn't bother talking about it
... -1
JT: didn't we discuss this last week and say yes we should talk about it
NM: won't go into details, but question is: should we schedule a session on this?
JT: what's happened, Larry?
LM: there's an encrypted media task force
NM: JAR talking about are there different classes of URIs, public and private, HT sems to be on side of thinking this is a bad idea
HT: willing to be educated!
NM: so, that's a maybe
JT: the minutes last week say that NM would schedule a discussion this week
LM: there's an encrypted media TF
NM: when do they report?
LM: I don't know; there was a serious proposal from MS, Google, and Cable Labs about this
NM: IETF?
LM: no, this is in HTML WG
HT: suggest you contact the co-chairs and ask what they expect in terms of timing
<scribe> ACTION: Noah to contact HTML WG co-chairs to ask about timing for output of Encrypted Media TF
<trackbot> Created ACTION-677 - Contact HTML WG co-chairs to ask about timing for output of Encrypted Media TF [on Noah Mendelsohn - due 2012-03-22].
<noah> . ACTION: Noah to contact HTML WG chairs to get advice on likely trajectory of work on encryted media
<noah> ACTION-677 Due 2012-03-24
<trackbot> ACTION-677 Contact HTML WG co-chairs to ask about timing for output of Encrypted Media TF due date now 2012-03-24
NM: any other admin?
[nope]
NM: need for further work?
YL: Larry will be around, Robin can be around as needed
... will contact the AD of app area for a meeting
... I will do this shortly
... expect a meeting at the beginning of the week to discuss what we're doing
LM: another thing that's relevant for TAG is HTTP WG, SPDY, HTTP2
YL: that's on Thu; there's also Mon for HTTPbis
LM: IRI working group will meet, there's been a lot of progress
... Websec meets, of course, and is center of work on web security, CORS, etc.
YL: there are other things around privacy that may be of interest to people, plus general discussion, coordination
LM: applications area directorate (HT is member) usually meets
YL: PLH and TLR will be there, and MarkN
LM: apps area working group will take up MIME registry update
YL: lots of opportunities for impromptu meetings, just need to find slots to make interesting to everybody
...
LM: worthwhile reviewing document on MIME registry
... lots of progress on IRI
... involvement from I18N WG reviewing the document
... Web Security group meets, things to do there
YL: apps-area, websec (monday)
LM: as well of course as RTCWeb
YL: good summary of the most interesting stuff for us
LM: MarkN has written an ID capturing the work on discussing registries in general, fixing IANA
... there should be good discussion of that
YL: I don't think there's anything schedule for that
LM: yes, MarkN said he was trying to schedule an impromptu on this
<Zakim> jar, you wanted to raise his very annoying HTTP question
JAR: want to get a volunteer for my question to the HTTP WG
<Yves>
JAR: not discuss now, just looking for volunteer
<noah> JAR: Question is whether "representation" is used as a term of art, or more informally
JAR: YL are you volunteering to understand my question?
YL: I'll try to convey, understanding is another problem :)
<Larry>
<jar> I sent the question on Feb 7 to httpbis
<jar> will dig up pointer
YL: given a pointer, I can more easily raise it
JAR: will dig it up
HT: I would encourage you to come up with an example where one usage applies and another doesn't
... to help them understand what the problem is, why does this make a difference?
... sort of a minimal pair approach
<jar>
JAR: this might only matter to RDF and they might not care about RDF
<Larry> " Happy IANA: Making Large-Scale Registries More User-Friendly"
HT: I don't think that's true
... but you can try something on me before exposing it more widely
NM: okay for IETF
<jar> more detail in blog post pointed from
NM: gathered ideas from last week
<noah>
NM: look at "Chair's working list of likely agenda topics"
NM reads from agenda
NM: we might want to get into some of the IETF topics and dig in, but we'll see based on report
... Chris is coming
... want to look into whether FragID and MIME is still a priority
... extensive time on Publishing and Linking, maybe careful walk through
JT: it could be useful if Rigo were there
YL: Rigo is not in Sophia anymore
HT: any communication with Rigo should make clear how long we're spending on this
<Larry> i don't think meeting with Rigo should be in the critical path of us making progress on this document
<Larry> if he can meet with us, fine, but let's get on with it
JT: if he happens to be around that would be good, but if not it doesn't make sense
NM: JAR you're still waiting for feedback on httpRange-14 change proposals
JAR: I don't think we should focus we should focus on the baseline document
... but we should try to figure out where this is going
NM: see if we can propose a way forward
JAR: even without change, we should figure out what to do with this
... even if it's say it's not our job
<Ashok> +1 to Jar
NM: I welcome it if you want to lead and sketch out what the agenda should be for your session
JAR: I'll think hard about how to best make use of the time
<Larry> I'd like a clear option to drop this, I don't think things are converging
NM: Robin will have a draft on API Privacy
... AM on local storage?
AM: I've written a draft, it's different from the usual stuff, sent for review to LM and RB to see if it was sensible
<jar> larry, noted
AM: depending on the response, we'll see what discussion there ought to be
NM: so if people like it, we can make it a major topic, if not it'll be brief
AM: correct
NM: will be looking at priorities
... would prefer not to burn a lot of time on this
... but have some nervousness that we only have three priorities
... but we should make sure we're well engaged
... HT mentioned PhiloWeb at WWW2012
... the TAG formally lists issues, are we tracking them, working on them, etc.
... we should look at the relationship between products and issues
... ideally, we should have shepherds for issues
... if any of you want to go through issues, see which can be closed, which are being forgotten about, etc. it would be useful
... will take suggestions now or offline
JT: Robin mentioned getting Dom to talk to us about mobile stuff
NM: he lives close?
RB: yes, but we should make sure that we have something useful to discuss with him
<scribe> ACTION: Robin to ask Dom if there's something he think would be worth discussing with the TAG
<trackbot> Created ACTION-678 - Ask Dom if there's something he think would be worth discussing with the TAG [on Robin Berjon - due 2012-03-22].
<noah> RB: Can Chris Lilley brief us on SVG change to HTML syntax
NM: we can always change the agenda of course
... please send me the reading list quickly
<noah> ACTION-641?
<trackbot> ACTION-641 -- Noah Mendelsohn to try and find list of review issues relating to HTML5 from earlier discussions -- due 2012-03-01 -- PENDINGREVIEW
<trackbot>
<noah>
NM: matrix of issues that we thought we might want to look at
... tentatively agreed not to look into this now
<Larry> at most we should forward the list to the HTML working group...
NM: question: look through this list, and if anything in this list makes you feel like we should then raise it
LM: can we ask the list?
NM: the hesitancy is that some of these things might look like different things to different people
... are you saying someone should send this to the HTML WG?
LM: just a pointer
... looking for ways not to spend too much time
NM: that's fine by me
<Larry> that's ok with me not to spend time on this
NM: not hearing a lot of interest in picking this up
... we can close the action
<noah> close ACTION-641
<trackbot> ACTION-641 Try and find list of review issues relating to HTML5 from earlier discussions closed
NM: AM circulated a draft, people will feed back
AM: yes
<noah>
<noah> ACTION-602?
<trackbot> ACTION-602 -- Noah Mendelsohn to work with IETF liaisons to propose possible TAG participation in IETF Paris -- due 2012-03-13 -- PENDINGREVIEW
<trackbot>
<noah> close ACTION-602
<trackbot> ACTION-602 Work with IETF liaisons to propose possible TAG participation in IETF Paris closed
<noah> ACTION-662>
<noah> ACTION-662?
<trackbot> ACTION-662 -- Robin Berjon to redraft proposed product page on API Minimization () -- due 2012-01-31 -- PENDINGREVIEW
<trackbot>
<noah> ACTION-656
<noah> ACTION-656?
<trackbot> ACTION-656 -- Noah Mendelsohn to schedule discussion of possibly getting W3C to invest in technologies for liberal XML processing (e.g. XML5) -- due 2012-01-31 -- PENDINGREVIEW
<trackbot>
NM: I believe this came out of the discussion with NDW and HTML/XML TF
... this has already been taken by the XML-ER CG
<noah>
NM: they're doing it. LM are you happy with where this is?
LM: have there been any comments?
NM: there's a mailing list with active discussion
... active discussion and the start of a spec from AnneVK
<Larry> Can we get a summary of the discussion so far?
<Larry> is the SVG in HTML syntax related to this discussion?
NM: some discussion on requirements and use cases, some browser-related stuff, some HTML-at-front-of-pipeline, some unification, some database error correction, etc.
... give and take on use cases, though Anne has spec and pieces of an implementation
<noah> RB: Anne is not disinclined to change if spec really needs changing
<noah> NM: I meant more that he mostly wants to focus on user-facing use cases, and is less inclined to change for, e.g. database back ends
<noah> RB: Right
NM: Anne targeting UCs that he perceives as high priority, might disagree with some of the other UCs
<noah> RB: And others agree
<ht> I predict TBL will push back on "not punishing users". . .
<Zakim> ht, you wanted to request discussion about goals and use cases at F2F
HT: I think it's worth talking about this, if there's room
... we should think about whether we're happy with the goals of this TF and if we should try to influence them
LM: would you make a proposal?
HT: no, I want to have some time to think about this
<noah> Adding this note to ACTION-656:
<noah> Per HT request on 15 March 2012, I think we should discuss whether we think XML-ER is adopting the right goals and use cases. At F2F if it fits.
HT: if we have any love in our heart of Tim's suggestion to encourage better markup, I'd like to know
<Larry> I'd like to hear some strawman proposals so that we don't go in circles on "Yes, it's a hard problem"
<noah> LM: Encourage Henry to come up with "strawman" proposals. Even something to knock down.
NM: HT, where are we standing on polyglot?
HT: issue about BOMs, and another issue XML Core WG is still working on
NM: wondering if mixing with XML ER is a good idea
HT: probably not
<Larry> I had another category that wasn't "polyglot" I was calling "Multi-view": single content stream that can be viewed as two different types, but mean DIFFERENT but related things
<noah>
<noah> ACTION-669?
<trackbot> ACTION-669 -- Henry Thompson to review and see whether TAG needs to do more on references to evolving specs Due: 2012-08-01 -- due 2012-02-09 -- OPEN
<trackbot>
<noah> Henry to bump dates on all actions.
<noah> ACTION-572?
<trackbot> ACTION-572 -- Yves Lafon to look at appcache in HTML5 -- due 2012-03-06 -- OPEN
<trackbot>
<JeniT> ScribeNick: JeniT
NM: Do we need to talk about this at F2F? Is it combined with web storage?
YL: I think my action is to see whether appcache fits with AM's work on web storage
<darobin> ScribeNick: darobin
<noah> close ACTION-572
<trackbot> ACTION-572 Look at appcache in HTML5 closed
<JeniT> ScribeNick: darobin
YL: check if appcache feeds into AM's document on local storage
<noah> ACTION Yves to check how Appcache should fit into Ashok's storage document - Due 2012-04-02
<trackbot> Created ACTION-679 - check how Appcache should fit into Ashok's storage document [on Yves Lafon - due 2012-04-02].
<noah> ACTION-566?
<trackbot> ACTION-566 -- Yves Lafon to investigate possible liaison or meeting with IETF (maybe IAB) on privacy -- due 2012-01-24 -- OPEN
<trackbot>
NM: seems relevant given Paris
YL: I know that Wendy(?) will be there, working on privacy don't know if we want to track that
NM: we can close, depends on if the group wants to work on this
YL: liaising is a good thing, not sure if it should be only about privacy
<noah> close ACTION-566
<trackbot> ACTION-566 Investigate possible liaison or meeting with IETF (maybe IAB) on privacy closed
<noah> ACTION-638
<noah> ACTION-638?
<trackbot> ACTION-638 -- Yves Lafon to help Noah figure out best ways, if at all, for TAG to participate in IETF paris -- due 2012-02-15 -- OPEN
<trackbot>
<noah> ACTION-638 Due 2012-03-26
<trackbot> ACTION-638 Help Noah figure out best ways, if at all, for TAG to participate in IETF paris due date now 2012-03-26
<noah> ACTION-594?
<trackbot> ACTION-594 -- Yves Lafon to with Peter and Henry produce partial revision of fragment id finding -- due 2012-03-13 -- OPEN
<trackbot>
NM: seems not to be moving. Thoughts?
<Larry> suggest reviewing
<noah> Suggest we discuss priority of fragids at F2F
<Larry> "Media Type Specifications and Registration Procedures"
NM: we'll settle that at the f2f
LM: (XXX?) document being revised
NM: better path forward?
LM: possibly
NM: we can discuss the priority at the f2f
<noah> ACTION-606?
<trackbot> ACTION-606 -- Peter Linss to invite I18N and other concerned groups to provide written technical input as prep to discussion with the TAG regarding unicode normalization -- due 2012-03-06 -- OPEN
<trackbot>
PL: still interest in doing this
<noah> ACTION-606 Due 2012-05-01
<trackbot> ACTION-606 Invite I18N and other concerned groups to provide written technical input as prep to discussion with the TAG regarding unicode normalization due date now 2012-05-01
<noah> ACTION-611?
<trackbot> ACTION-611 -- Larry Masinter to draft initial cut at -- due 2012-02-17 -- OPEN
<trackbot>
NM: nothing says relaxing on your vacation like drafting TAG pages!
<noah> ACTION-611 Due 2012-05
<trackbot> ACTION-611 Draft initial cut at due date now 2012-05
<noah> Actually, I said: Nothing makes a relaxing vacation like drafting architecture pages
LM: rethinking the priorities on these
... any opinions? might drop it
NM: it's a good thing for the TAG to contribute to these pages, but only so far I want to push people. Jeni did a great job, but if other people are not doing actions we should close them
<noah> close ACTION-611
<trackbot> ACTION-611 Draft initial cut at closed
<noah> ACTION-670?
<trackbot> ACTION-670 -- Noah Mendelsohn to update product priority list to mark MIMEWeb completed after final product page available -- due 2012-03-01 -- OPEN
<trackbot>
<Larry> if Ian wants us to help out with these, he can ask us to re-open
NM: note the closing of these efforts in the product page
<noah> ACTION-670 Due 2012-05-01
<trackbot> ACTION-670 Update product priority list to mark MIMEWeb completed after final product page available due date now 2012-05-01
<Larry> mnot's draft and media type drafts are new since we last talked about this
<noah> ACTION-657?
<trackbot> ACTION-657 -- Noah Mendelsohn to schedule telcon discussion of possible XML/HTML Unification next steps -- due 2012-03-06 -- OPEN
<trackbot>
NM: asked NDW if he had reactions
<noah> ACTION-657 Due 2012-04-10
<trackbot> ACTION-657 Schedule telcon discussion of possible XML/HTML Unification next steps due date now 2012-04-10
<noah> ACTION-664?
<trackbot> ACTION-664 -- Noah Mendelsohn to announce completion of TAG work on Microdata/RDFa as recorded in and to finalize the product page and associated links -- due 2012-01-26 -- OPEN
<trackbot>
<noah> ACTION-664 Due 2012-04-17
<trackbot> ACTION-664 Announce completion of TAG work on Microdata/RDFa as recorded in and to finalize the product page and associated links due date now 2012-04-17
JT: feedback is good, will need to come back to it once we have RECs
NM: from TAG POV we can say our work is done
<noah> ACTION-617?
<trackbot> ACTION-617 -- Noah Mendelsohn to work with Yves to take off the Rec track -- due 2012-01-31 -- OPEN
<trackbot>
YL: you mean publishing a note?
... what do you want to do with the content?
NM: leave it as a draft finding, or move it to note
YL: it can't be more than a draft finding, it was a WD
... we can keep as draft and decide if we drop it later
<Zakim> ht, you wanted to say we will _have_ to come back to this
HT: we'll have to come back to this,
... there's been some rethinking around schema namespace documents
... this interacts heavily with fragIDs and media types
... so we'll have to revisit
<noah> Adding a note to the action document saying: Henry notes on 15 March 2012 that we will >have< to come back to this in the context of discussing XML Schema namespace document,
NM: should we do the clerical work to take the old thing off the Rec track in the meantime
YL: I don't think that's necessary
<noah> ACTION-617?
<trackbot> ACTION-617 -- Yves Lafon to work with Noah to take off the Rec track -- due 2012-06-01 -- OPEN
<trackbot>
<noah> ACTION-610?
<trackbot> ACTION-610 -- Jeni Tennison to draft initial cut at -- due 2012-03-13 -- OPEN
<trackbot>
JT: bit from JAR, bit from HT, will put together so can put a draft together
... don't think we need to discuss at f2f
<Larry> ship it
JAR: why not have JT send it off and be done with it
+1 ship it
<Larry> close the action
<Ashok> Ship it!
<noah> ACTION-610 Due 2012-03-27
<trackbot> ACTION-610 Draft initial cut at due date now 2012-03-27
<noah> ACTION-541?
<trackbot> ACTION-541 -- Jeni Tennison to helped by DKA to produce draft on technical issues relating to copyright/linking -- due 2012-02-21 -- OPEN
<trackbot>
<noah> ACTION-541 Due 2012-04-06
<trackbot> ACTION-541 Helped by DKA to produce draft on technical issues relating to copyright/linking due date now 2012-04-06
NM: we've reached the end!
<noah> ACTION-636?
<trackbot> ACTION-636 -- Larry Masinter to update product page for Mime and the Web -- due 2012-02-11 -- OPEN
<trackbot>
NM: want me to pick this up from you?
LM: yup
<noah> ACTION-636?
<trackbot> ACTION-636 -- Noah Mendelsohn to update product page for Mime and the Web -- due 2012-02-11 -- OPEN
<trackbot>
NM: will do!
<noah> ACTION-636?
<trackbot> ACTION-636 -- Noah Mendelsohn to update product page for Mime and the Web -- due 2012-03-27 -- OPEN
<trackbot>
[ADJOURNED] | http://www.w3.org/2001/tag/2012/03/15-minutes.html | CC-MAIN-2016-30 | refinedweb | 3,490 | 59.98 |
view raw
Python and
wc
with open("commedia.pfc", "w") as f:
t = ''.join(chr(int(b, base=2)) for b in chunks(compressed, 8))
print(len(t))
f.write(t)
Output : 318885
$> wc commedia.pfc
2181 12282 461491 commedia.pfc
"""
Implementation of prefix-free compression and decompression.
"""
import doctest
from itertools import islice
from collections import Counter
import random
import json
def binary_strings(s):
"""
Given an initial list of binary strings `s`,
yield all binary strings ending in one of `s` strings.
>>> take(9, binary_strings(["010", "111"]))
['010', '111', '0010', '1010', '0111', '1111', '00010', '10010', '01010']
"""
yield from s
while True:
s = [b + x for x in s for b in "01"]
yield from s
def take(n, iterable):
"""
Return first n items of the iterable as a list.
"""
return list(islice(iterable, n))
def chunks(xs, n, pad='0'):
"""
Yield successive n-sized chunks from xs.
"""
for i in range(0, len(xs), n):
yield xs[i:i + n]
def reverse_dict(dictionary):
"""
>>> sorted(reverse_dict({1:"a",2:"b"}).items())
[('a', 1), ('b', 2)]
"""
return {value : key for key, value in dictionary.items()}
def prefix_free(generator):
"""
Given a `generator`, yield all the items from it
that do not start with any preceding element.
>>> take(6, prefix_free(binary_strings(["00", "01"])))
['00', '01', '100', '101', '1100', '1101']
"""
seen = []
for x in generator:
if not any(x.startswith(i) for i in seen):
yield x
seen.append(x)
def build_translation_dict(text, starting_binary_codes=["000", "100","111"]):
"""
Builds a dict for `prefix_free_compression` where
More common char -> More short binary strings
This is compression as the shorter binary strings will be seen more times than
the long ones.
Univocity in decoding is given by the binary_strings being prefix free.
>>> sorted(build_translation_dict("aaaaa bbbb ccc dd e", ["01", "11"]).items())
[(' ', '001'), ('a', '01'), ('b', '11'), ('c', '101'), ('d', '0001'), ('e', '1001')]
"""
binaries = sorted(list(take(len(set(text)), prefix_free(binary_strings(starting_binary_codes)))), key=len)
frequencies = Counter(text)
# char value tiebreaker to avoid non-determinism v
alphabet = sorted(list(set(text)), key=(lambda ch: (frequencies[ch], ch)), reverse=True)
return dict(zip(alphabet, binaries))
def prefix_free_compression(text, starting_binary_codes=["000", "100","111"]):
"""
Implements `prefix_free_compression`, simply uses the dict
made with `build_translation_dict`.
Returns a tuple (compressed_message, tranlation_dict) as the dict is needed
for decompression.
>>> prefix_free_compression("aaaaa bbbb ccc dd e", ["01", "11"])[0]
'010101010100111111111001101101101001000100010011001'
"""
translate = build_translation_dict(text, starting_binary_codes)
# print(translate)
return ''.join(translate[i] for i in text), translate
def prefix_free_decompression(compressed, translation_dict):
"""
Decompresses a prefix free `compressed` message in the form of a string
composed only of '0' and '1'.
Being the binary codes prefix free,
the decompression is allowed to take the earliest match it finds.
>>> message, d = prefix_free_compression("aaaaa bbbb ccc dd e", ["01", "11"])
'010101010100111111111001101101101001000100010011001'
>>> sorted(d.items())
[(' ', '001'), ('a', '01'), ('b', '11'), ('c', '101'), ('d', '0001'), ('e', '1001')]
>>> ''.join(prefix_free_decompression(message, d))
'aaaaa bbbb ccc dd e'
"""
decoding_translate = reverse_dict(translation_dict)
# print(decoding_translate)
word = ''
for bit in compressed:
# print(word, "-", bit)
if word in decoding_translate:
yield decoding_translate[word]
word = ''
word += bit
yield decoding_translate[word]
if __name__ == "__main__":
doctest.testmod()
with open("commedia.txt") as f:
text = f.read()
compressed, d = prefix_free_compression(text)
with open("commedia.pfc", "w") as f:
t = ''.join(chr(int(b, base=2)) for b in chunks(compressed, 8))
print(len(t))
f.write(t)
with open("commedia.pfcd", "w") as f:
f.write(json.dumps(d))
# dividing by 8 goes from bit length to byte length
print("Compressed / uncompressed ratio is {}".format((len(compressed)//8) / len(text)))
original = ''.join(prefix_free_decompression(compressed, d))
assert original == text
commedia.txt
You are using Python3 and an
str object - that means the count you see in
len(t) is the number of characters in the string. Now, characters are not bytes - and it is so since the 90's .
Since you did not declare an explicit text encoding, the file writing is encoding your text using the system default encoding - which on Linux or Mac OS X will be utf-8 - an encoding in which any character that falls out of the ASCII range (ord(ch) > 127) uses more than one byte on disk.
So, your program is basically wrong. First, define if you are dealing with text or bytes . If you are dealign with bytes, open the file for writting in binary mode (
wb, not
w) and change this line:
t = ''.join(chr(int(b, base=2)) for b in chunks(compressed, 8))
to
t = bytes((int(b, base=2) for b in chunks(compressed, 8))
That way it is clear that you are working with the bytes themselves, and not mangling characters and bytes.
Of course there is an ugly workaround to do a "transparent encoding" of the text you had to a bytes object - (if your original list would have all character codepoints in the 0-256 range, that is): You could encode your previous
t with
latin1 encoding before writing it to a file. But that would have been just wrong semantically.
You can also experiment with Python's little known "bytearray" object: it gives one the ability to deal with elements that are 8bit numbers, and have the convenience of being mutable and extendable (just as a C "string" that would have enough memory space pre allocated) | https://codedump.io/share/YPzkNNZwaLkq/1/why-do-python-and-wc-disagree-on-byte-count | CC-MAIN-2017-22 | refinedweb | 866 | 54.93 |
calling sp in controller
calling sp in controller
This not so much a datatables question other than it is in a asp.net mvc project that has datatables in it. I am having the user import a text file into a datatable:
After it is imported and they have reviewed the data, I need the user to click a button to call a stored procedure. The stored procedure is parsing the data and putting into another table, which is the datasource for another datatable. But the stored procedure itself does not return any data. From what I am researching, since datatables uses MVC i need to put that call in a controller.
public class ParseImportDataController: ApiController { [HttpGet] [HttpPost] public IHttpActionResult cleanAndImport() { var request = HttpContext.Current.Request; var settings = Properties.Settings.Default; string AsOfCookie = request.Cookies.Get("AsOfDate").Value; string strCon = settings.DbConnection; SqlConnection DbConnection = new SqlConnection(strCon); DbConnection.Open(); SqlCommand command = new SqlCommand("sp_ImportFTE", DbConnection); command.CommandType = System.Data.CommandType.StoredProcedure; command.Parameters.Add(new SqlParameter("@EffectiveDate", AsOfCookie)); command.ExecuteNonQuery(); DbConnection.Close(); return Ok(1); //no idea what to return } }
I can't find out how I have the button click call this code in the controller. Any help would be greatly appreciated.
I have no idea if the code in the controller is correct, but I figure that will be the next struggle.
Replies
Can I use Buttons to add a custom button that will call the cleanAndImport function in the controller?
I just realized that I had this as a new discussion, not a question. I have re-posted it as a question.
You can - but I'm not certain that is the way forward here. How are you going to give the data to this controller that it needs to import? Are you going to upload it directly to this controller, or has it already been uploaded somewhere and this just needs to be called to process it?
Could you list the interactions the user will be taking so I can understand the aim a little bit more?
Thanks,
Allan
yes, step one is done. They upload a csv file into a 'raw data' SQL table. I used this code to accomplish that:.
After they make sure the data came in fine, I want them to push a button that will run a stored procedure. That stored procedure will take the data from the SQL table in step one and append it to another SQL table. I just can't figure out how to run the stored procedure. I put the call (code in my original post) into a controller, not sure if that is where it belongs. if it can go there, how do i run it? I see how you can create a custom button that has a function. But what is the line of code I need in the function that says "go run cleanAndImport()"?
Okay - perfect - thanks for the clarifications. In that case, a custom button is the way to go, and you'd have it call an end point on the server (via Ajax) that will do your
cleanAndImport()function - e.g. you might use this for the button:
you might want
successhandlers, and to send data as well, that really depends upon your requirements, but that's the key part from the client-side. The other part you need is the controller / end point on the server, which will be implemented in whatever server-side environment you are using.
Allan
that was it!!! awesome. From my controller code posted above I had to add:
then your code worked perfectly. | https://www.datatables.net/forums/discussion/comment/165797/ | CC-MAIN-2020-24 | refinedweb | 598 | 65.22 |
# Tips and tricks from my Telegram-channel @pythonetc, January 2019

It is new selection of tips and tricks about Python and programming from my Telegram-channel @pythonetc.
Previous publications:
* [December 2018](https://habr.com/ru/company/mailru/blog/436324/)
Two implicit class methods
--------------------------
To create a class method, you should use the `@classmethod` decorator. This method can be called from the class directly, not from its instances, and accepts the class as a first argument (usually called `cls`, not `self`).
However, there are two implicit class methods in Python data model: `__new__` and `__init_subclass__`. They work exactly as though they are decorated with `@classmethod` except they aren't. (`__new__` creates new instances of a class, `__init_subclass__` is a hook that is called when a derived class is created.)
```
class Foo:
def __new__(cls, *args, **kwargs):
print(cls)
return super().__new__(
cls, *args, **kwargs
)
Foo() #
```
Asynchronous context managers
-----------------------------
If you want a context manager to suspend coroutine on entering or exiting context, you should use asynchronous context managers. Instead of calling `m.__enter__()` and `m.__exit__()` Python does await `m.__aenter__()` and await `m.__aexit__()` respectively.
Asynchronous context managers should be used with async with syntax:
```
import asyncio
class Slow:
def __init__(self, delay):
self._delay = delay
async def __aenter__(self):
await asyncio.sleep(self._delay / 2)
async def __aexit__(self, *exception):
await asyncio.sleep(self._delay / 2)
async def main():
async with Slow(1):
print('slow')
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
```
Defining asynchronous context manager
-------------------------------------
Since Python 3.7, `contextlib` provides the `asynccontextmanager` decorator which allow you to define asynchronous context manager in the exact same manner as `contextmanager` does:
```
import asyncio
from contextlib import asynccontextmanager
@asynccontextmanager
async def slow(delay):
half = delay / 2
await asyncio.sleep(half)
yield
await asyncio.sleep(half)
async def main():
async with slow(1):
print('slow')
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
```
For older versions, you could use `@asyncio_extras.async_contextmanager`.
Unary plus operator
-------------------
There is no `++` operator in Python, `x += 1` is used instead. However, even `++x` is still a valid syntax (but `x++` is not).
The catch is Python has unary plus operator, and `++x` is actually `x.__pos__().__pos__()`. We can abuse this fact and make ++ work as increment (not recommended though):
```
class Number:
def __init__(self, value):
self._value = value
def __pos__(self):
return self._Incrementer(self)
def inc(self):
self._value += 1
def __str__(self):
return str(self._value)
class _Incrementer:
def __init__(self, number):
self._number = number
def __pos__(self):
self._number.inc()
x = Number(4)
print(x) # 4
++x
print(x) # 5
```
The MagicMock object
--------------------
The `MagicMock` object allows you to get any attribute from it or call any method. New mock will be returned upon such access. What is more, you get the same mock object if access the same attribute (or call the same method):
```
>>> from unittest.mock import MagicMock
>>> m = MagicMock()
>>> a = m.a
>>> b = m.b
>>> a is m.a
True
>>> m.x() is m.x()
True
>>> m.x()
```
This obviously will work with sequential attribute access of any deep. Method arguments are ignored though:
```
>>> m.a.b.c.d
>>> m.a.b.c.d
>>> m.x().y().z()
>>> m.x(1).y(1).z(1)
```
Once you set a value for any attribute, it doesn't return mock anymore:
```
>>> m.a.b.c.d = 42
>>> m.a.b.c.d
42
>>> m.x.return_value.y.return_value = 13
>>> m.x().y()
13
```
However, it doesn't work with `m[1][2]`. The reason is, the item access is not treated specially by `MagicMock`, it's merely a method call:
```
>>> m[1][2] = 3
>>> m[1][2]
>>> m.\_\_getitem\_\_.return\_value.\_\_getitem\_\_.return\_value = 50
>>> m[1][2]
50
``` | https://habr.com/ru/post/438776/ | null | null | 634 | 61.43 |
Patch releases
In response to feedback from the community asking us to fix more bugs, faster, we provide patch releases. Each patch release ships as a complete editor together with all runtimes, and each release fixes a large number of bugs.
Unity 2017.2.1
- iOS: Expose APIs that allow changing home button hiding and system gesture deferral properties at runtime.
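A minimal sketch of driving these properties from a script. It assumes they are exposed as UnityEngine.iOS.Device.hideHomeButton and UnityEngine.iOS.Device.deferSystemGesturesMode (with the SystemGestureDeferMode flags enum); check the Scripting Reference for your exact patch version.

```csharp
using UnityEngine;
using UnityEngine.iOS;

// Hedged sketch: property names assume the UnityEngine.iOS.Device API
// exposed around the 2017.2 patch cycle.
public class HomeButtonSetup : MonoBehaviour
{
    void Start()
    {
        // Keep the home indicator visible (set to true to hide it in-game).
        Device.hideHomeButton = false;

        // Defer system gestures on the bottom edge so a single swipe
        // during gameplay is not consumed by the OS.
        Device.deferSystemGesturesMode = SystemGestureDeferMode.BottomEdge;
    }
}
```

Both properties can be toggled at any time, for example hiding the indicator only during gameplay and restoring it in menus.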
Fixes
- (941945) - Animation: Improved fix for creating an ongoing transition.
- (967382) - Animation: Fixed an issue that was causing the Editor to throw an exception when selecting animator transitions in an undocked preview window.
- (973917) - Asset import: Fixed bug with import of FBX with custom framerate generating animationClips with an incorrect framerate.
- (975345) - Audio: Fixed a crash in AudioLowPassFilter on Nintendo Switch.
- (959444) - Build Pipeline: Improved player and asset bundle build performance for large builds.
- (953940) - Cache Server: Fixed an issue which caused the Unity client to hang indefinitely.
- (975920) - Editor: Fixed lightingdata.asset files getting re-opened in text mode in certain situations.
- (946550) - GI: Fixed editor crash when deleting a prefab which is used as a tree in Terrain inspector.
- (957651) - IL2CPP: Fixed a crash on iOS which can occur when a device is awakened during a blocking socket call with a SIGPIPE signal.
- (962771) - IL2CPP: Work around for a C++ compiler bug in the Android r13b NDK that could cause the NullCheck method to be incorrectly removed from the resulting binary.
- (980362) - iOS: Fixed iOS 11 crash when the application is launched from a URL and AirPlay screen mirroring is enabled.
- (980303) - iOS: Allowed landscape startup on iOS 11.
- (956318) (57809) - iOS: Fixed trampoline calling UI methods ([UIApplication delegate]) from a background thread.
- (913856) (57503) - iOS: Fixed locked orientation app getting rotations from portrait to landscape when sharing to another app on iOS 8 & 9.
- (979005) (58588) - iOS: Fixed problem with missing keyboard Done/Cancel buttons for iPhone X.
- (None) - Package Manager: Fixed editor not starting because of custom proxy configuration on host machine.
- (966306) - Particles: Fixed emission properties being incorrectly upgraded from previous Unity versions.
- (930358) - Scripting: Fixed crash when using GitHub for Unity.
- (962764) - Terrain: Fixed default smoothness for terrain without a texture.
- (966790) - Universal Windows Platform: Fixed NotSupportedException being thrown on UWP builds with .NET scripting backend enabled when using Timeline.
- (955086) - Universal Windows Platform: Fixed NavMeshObstacles being ignored on 64-bit master builds.
- (949806) (934783) - Universal Windows Platform: Fixed debugging scripts on .NET scripting backend.
- (953086) - Windows: Fixed game window not getting minimized when using exclusive fullscreen mode on Windows key press.
- (964052) - Windows: Fixed executable not having Unity version info.
- (None) - XR: Fixed a random crash in Windows Mixed Reality.
- (969944) - XR: Fixed tracking loss never recovers in Windows Mixed Reality.
- (963320) - XR: Fixed issue with grabpass when using single-pass stereo rendering.
- (970906) - XR: Fixed ScreenCapture.CaptureScreenshot() capturing stereo instancing screenshots incorrectly.
- (971944) - XR: Fixed shaders targeting XR platforms being included in builds where XR support is disabled.
- (None) - XR: Fixed a regression that would cause BEV cameras not to track properly.
Revision: 1dc514532f08
Unity 2017.2.1
- Video: RGB to YUV conversion that happens during video transcoding and recording is now 2-3 times faster.
Fixes
- (970718) - AI: Fixed NavMeshAgents swapping to different NavMeshes when they are disabled and re-enabled right on an edge.
- (971571) - Android: Fixed a rare static splash screen crash.
- (960595) - Asset Import: Fixed crash when setting the property ModelImporter.clipAnimations, when the clip avatar mask is set to "Create From This Model".
- (969114 (945000)) - Editor: Fixed being able to start multiple PlayMode and EditMode test runs at the same time from the test runner ui.
- (969114) - Editor: Fixed bootstrap scene sometimes being left behind if running PlayMode and EditMode tests from script.
- (975427 (956872)) - Facebook: Fixed unhandled BadImageFormatException for deleted SDK .dll.
- (961692, 964998) - Graphics: Fixed asserts and potential memory leaks when Skinned Mesh Renderers with the "Update When Offscreen" property enabled are not visible.
- (968591) - Graphics: Fixed crash during a visibility callback when a GameObject which was not visible is set inactive followed by setting a visible GameObject inactive.
- (951975) - Graphics: Fixed crash in CommandBuffer.DrawMeshInstanced when called with a null property block.
- (973686) - Graphics: Fixed crash in Canvas node extraction, usually triggering a crash on save.
- (930408) - Graphics: Fixed a crash in the job system related to light culling.
- (841236) - Graphics: Fixed changing the projectors render queue in script not having an effect.
- (956919 (891894)) - Graphics: Fixed issue where DrawMeshInstanced calls will render with inverse normals if the previous draw call used negative scaling.
- (922979) - IL2CPP: Fixed issue where Android builds would crash on launch with the 4.5 runtime when the byte code stripping option was selected.
- (954593) - iOS: Fixed videos started with Handheld.PlayFullScreenMovie not resuming after returning to the app in some circumstances.
- (877423 (972989)) - License: Removed misleading "This should not be called in batch mode" message when returning license.
- (940084) - License: Fixed issue where command line activation would fail when the cached refresh token & access token had expired.
- (960527) (964537) - Particles: Fixed Rate over Distance emission issues.
- (955697) - Physics: Fixed potential crash in Rigidbody::GetVelocity() when opening certain scenes.
- (951789) - Physics: Fixed crashes when loading new scene and activating cloth gameobject in coroutine.
- (948201) - Physics: Fixed issue where Physics Debugger's layer mask did not filter GameObjects correctly.
- (962711) - Scripting: Fixed TypeLoadException for types and array initializers larger than 1 MB.
- (952631) - Scripting: Fixed DllImport when library name is specified with path.
- (957072) - Scripting: Improved TypeLoadException messages.
- (827984) - Scripting: Fixed deadlocks and pauses when using System.Threading.Monitor.
- (945353) - Scripting: Fixed InternalsVisibleToAttribute.
- (907918) - Scripting: Fixed URI processing on OSX.
- (967206) - Scripting: Fixed random crash due to memory corruption on domain reload.
- (None) - Shaders: Fixed some edge cases where shaders would be compiled using cached include file data which was out of date.
- (964302) - Terrain: Removed Editor error about non read/write textures.
- (965091) - Timeline: Fixed preview mode for properties animated by Control Tracks.
- (967026) - Timeline: Fixed console errors when using Default Playables package.
- (963979 (976826)) - Video: Fixed videos being transcoded when switching platform even when they are cached in Cache Server.
- (962118) - Video: Fixed meta files being updated for videos transcoded during a platform switch, which resulted in litter for VCS.
- (None) - Web: Fixed DownloadHandlerFile not truncating file on overwrite.
- (965165) - Web: UnityWebRequest: improve performance for DownloadHandlerScript.
- (963947) - Web: Fixed hang when busy waiting on a redirecting request.
- (968877) - Web: Fixed crash when checking AssetBundle download progress, that gets aborted.
- (961465) - WebGL: Fixed UnityWebRequest with relative URL when running with custom port.
- (965094) - WebGL: Fixed crash for relative URL in UnityWebRequest when exceptions are disabled.
- (None) - Windows: Fixed possible crash on Windows platforms when using UnityWebRequest with a custom download handler script.
- (972924 (950056)) - Windows Standalone: Fixed a regression where a fullscreen application would be restored to the native resolution of the attached display after losing focus. The expected outcome was that it should be restored to the original resolution that it was launched on.
- (970844) - XR: Fixed inconsistency between generic code path and WSA-namespace-specific path for whether select is pressed.
- (963315) - XR: Fixed UnityEngine.Experimental.XR.Boundary.TryGetGeometry throwing an internal exception and always failing.
- (948931) - XR: Fixed Windows Mixed Reality controllers not being properly detected in Editor Play mode.
- (883630) - XR: Fixed Oculus not reporting values for XRStats properties.
Revision: edf5bdf50eb0
Unity 2017.2.0p
- Scripting: Added command line option "overrideMonoSearchPath" for desktop standalone players (OSX, Windows).
- XR: Added PlayerSettings Get/SetVirtualRealitySupported, Get/SetVirtualRealitySDKs, and GetAvailableVirtualRealitySDKs for acquiring and setting the Virtual Reality Supported toggle and the Virtual Reality SDKs list in XR Settings.
- XR: Enabled single-pass stereo rendering for Windows MR.
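As a sketch of how the new PlayerSettings methods might be used from an editor script. The method names are those listed above; the BuildTargetGroup-based signatures are assumptions inferred from the release note and should be verified against the UnityEditor Scripting Reference.

```csharp
using UnityEditor;

public static class VRSetup
{
    [MenuItem("Tools/Enable VR For Standalone")]
    static void EnableVR()
    {
        var group = BuildTargetGroup.Standalone;

        // Turn on the "Virtual Reality Supported" toggle for the target group.
        PlayerSettings.SetVirtualRealitySupported(group, true);

        // Inspect what the current platform can offer before picking SDKs.
        string[] available = PlayerSettings.GetAvailableVirtualRealitySDKs(group);
        UnityEngine.Debug.Log("Available VR SDKs: " + string.Join(", ", available));

        // Prefer Oculus, fall back to OpenVR (order matters at runtime).
        PlayerSettings.SetVirtualRealitySDKs(group, new[] { "Oculus", "OpenVR" });
    }
}
```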
Fixes
- (968721) - 2D: Ensured that switching platform and building the project without making any changes does not cause the Sprite Atlas asset to change.
- (961094) - Android: Fixed crash on some Adreno devices.
- (963291) - Android: Fixed plugin importer architecture selection.
- (959908) - Android: Fixed loading player data for very specific file sizes/content.
- (964073) - DirectX 12: Fixed a crash when running on a Windows 7 machine with the first API in the list being DX12 and Graphics Jobs enabled.
- (947024) - Editor: Fixed incorrect wrap icon button in particle curve editor window.
- (953161) - Editor: Fixed player settings not using the default icon if non are specified.
- (962721) - Editor: Fixed incorrect tool placement when pivot mode was set to Center.
- (968535) - Editor: Fixed editor restart prompt when selecting the same Active Input Handling option.
- (956577) - GI: Fixed 'Show Lightmap Resolution' checkbox not working.
- (962696) - Graphics: Fixed batching with unused stencil bit in G-Buffer pass.
- (None) - Multiplayer: Fixed reliable message being delivered twice.
- (964731) - OSX: Fixed incorrect mouse position when running in full screen and non-native aspect ratio.
- (944450) - Package Manager: Fixed initialisation failure when host file is missing or empty.
- (957000) - Package Manager: Fixed creation of empty folder named 'etc'.
- (957436) - Package Manager: Fixed error/crash if there is a comma symbol in the project name.
- (965605) - Physics: Ensured that loading a scene with a Rigidbody2D with simulation off allows interpolation when simulation is subsequently turned on.
- (963200) - Physics: Ensured that manual transform sync correctly updates the Rigidbody2D pose correctly.
- (967740) - Physics: Ensured that 'Collision2D.GetContacts()' returns a single contact when passing a single element array.
- (956316) - Playables: Fixed crash when setting an invalid source playable.
- (969932) - UI: Fixed RectMask2D not masking when it's child has a child with Mask and Image components inside the RectMask2d area.
- (964652) - Video: Fixed clip from asset bundle not played.
- (969297) - Video: Fixed muting game view does not mute video audio.
- (969298) - Video: Fixed video not playing in build when building for another platform.
- (969299) (937173) - Video: Fixed inspector preview not cropping title.
- (966690) - Video: Fixed erroneous stride crash on Windows.
- (973008) - Video: Fixed crashing RemoteWebCamTexture (when using the Unity Remote helper app) when marked DontDestroy.
- (973005) - Video: Fixed editor crash when previewing/playing video on older OSX version.
- (962204) - Video: Fixed video decoding errors due to bad file I/O for high res/bitrate video.
- (973009) - Video: Fixed Video Player component rendering video with artifacts on OSX.
- (900105) - WebGL: Fixed missing logo/progress bar during loading screen.
- (966173) - XR: Fixed Daydream applications hanging before quitting to Android home when calling Application.Quit.
- (None) - XR: Fixed forcing LandscapeLeft default Orientation on all mobile VR applications.
- (953314) - XR: Fixed Windows Mixed Reality XAML apps not able to get back to exclusive mode after blooming to the shell.
- (954629) - XR: Fixed Windows Mixed Reality XAML apps launching to a black screen.
- (922492) - XR: Fixed Windows Mixed Reality XAML apps not rendering on desktop.
- (963878) - XR: Fixed Windows Mixed Reality apps rendering with incorrect eye offsets.
Revision: 0c3a6a294e34
Unity 2017.2.0
- Android: Proguard is no longer enabled by default for gradle release builds.
- Android: Release gradle builds now signed with a debug key instead of failing to build.
- GI: Various lightmap seam stitching improvements for Progressive Lightmapper.
- IL2CPP: Improved incremental build performance on OSX.
- Playables: Updated the Playable API documentation.
- Playables: Updated Playable.ConnectInput to take an optional weight parameter.
Fixes
- (934841) - Android: Fixed Android video player playback starting to lag after activating an input field.
- (931038) - Android: Fixed Android video player stuttering and dropping frames.
- (942625) - Android: Fixed symbols.zip not including symbols file.
- (951350) - Collab: Fixed project manifest not being tracked in Collab.
- (957525) - Editor: Fixed incorrect framing of GameObject hierarchies in Scene View.
- (953301) - Editor: Fixed SceneView bounds calculations not taking multiple colliders and renderers into account when calculating the framing and centre point.
- (947024) - Editor: Fixed curve WrapModeIcon drawing when the Curve window is embedded into another window.
- (943051) - Editor: Fixed Scene view picking sometimes not selecting the topmost object.
- (961428 925381) - Editor: Fixed issue where performing a drag and drop operation to a GameObject that is being edited in the Preview window of Timeline would apply changes to its associated Prefab that cannot be reverted.
- (935149) - GI: Fixed MaterialPropertyBlock values for Meta pass when used on a terrain mesh with Realtime GI.
- (929875) - GI: Fixed the UI in inspector not correctly showing light mode when multiple lights are selected.
- (851817) - GI: Fixed crash when deleting Speedtree asset files from project folder when in use by prefab.
- (930221) - GI: Fixed shadows when shadow prepare job is not run.
- (950907) - Graphics: Fixed potential hang with DirectX11 or DirectX12 when using different sized render targets and multiple cameras.
- (954828) - Graphics: Fixed culling of projectors not matching Editor scene cameras.
- (840098) - Graphics: Fixed incorrect calculation of the Umbra occlusion culling near plane from the camera settings.
- (956877) - Graphics: MeshRenderers with disabled "Dynamic Occluded" property were not being frustum culled.
- (952043) - IL2CPP: Fixed crash when calling Socket.GetSocketOption using latest scripting runtime.
- (943671) - IL2CPP: Fixed ArgumentException when accessing Socket.LocalEndPoint.
- (966623) - IL2CPP: Fixed crash in thread pool during shutdown.
- (966830) - iOS: Fixed an issue where the development team ID was not written to the Xcode project in manual signing mode.
- (962793 960914) - iOS: Fixed the Screen.dpi() method in the Trampoline code not returning the correct number of DPI in iPhone 8, iPhone 8+ and iPhone X.
- (963865) - OSX: Fixed Editor crash when using GLCore on High Sierra with Intel 6xxx series GPU.
- (957044) - Physics: Fixed Collider2D crashing when disabled by an animation.
- (960530) - Physics: Fixed previously ignored collision not being ignored when recreating 2D physics contacts.
- (960775) - Prefabs: Fixed issue where resetting SerializedProperty.prefabOverride for one property could incorrectly reset other properties.
- (None) - Shaders: Fixed importing a shader include file only clearing the include cache on a single shader compiler process, leaving all the other processes with outdated include files in the cache.
- (944334) - Graphics: Fixed issue where some user shader keywords were handled incorrectly leading to compilation errors when building for standalone.
- (951036) - UWP: Fixed issue where ComputeBuffer.SetData() and ComputeBuffer.GetData() returned empty result when running on .NET scripting backend and the array types weren't referenced from any parameters.
- (None) - Video: Fixed crash in VideoPlayer when stopping and callbacks were pending.
- (None) - VR: Fixed incorrect stereo eye offsets in Windows Mixed Reality.
- (910488) - WWW: Fixed reading of local files not working in UWP with UnityWebRequest.
- (949418) - WWW: Fixed WWW class regressions related to throwing NullReferenceException.
- (None) - XR: Fixed Windows Mixed Reality applications not reporting an updated boundary if it has been reconfigured while the app was running.
Known Issues
- (971024) - Editor: Focusing on Canvas focuses on child object when pivot is in center mode.
- (971895) - Editor: Center pivot point of UI objects is not consistent.
Revision: 40117ac43b95
Unity 2017.2.0 -
Unity 2017.2.0 - Exposed methods to set and retrieve Physics Shape from a Sprite.
- 2D - Sprites created by importing a texture now have a default Physics Shape generated.
- Graphics - Added APIs to retrieve areas safe for UI rendering. Currently supported on iOS and tvOS only.
- iOS - Added identification enums for iPhone 8, 8+ and X.
- Shaders - Concatenated matrix macros (e.g. UNITY_MATRIX_MVP) are now changed to static variables to avoid repeated calculations.
- tvOS - Implemented support for 4K AppleTV icons and splashscreens.
- XR - Improved performance on Windows Mixed Reality by removing a potential thread stall that would occur whenever beginning a new frame while the previous frame had not completed presenting.
Fixes
- (947462) - 2D: Fixed updating an active Tilemap palette prefab not exposing it into the SceneView.
- (951514) - 2D: Fixed TilemapRenderer showing tiles when Tilemap.ClearAllTiles() is called.
- (952556) - 2D: Fixed ReflectionTypeLoadException from TilePalette when TilePalette is opened with 4.6 .Net and a user assembly cannot be loaded.
- (930830)(959526) - AI: Fixed unwanted gap in the NavMesh produced by a concave edge crossing a tile boundary.
- (945953) - Android: Fixed shader compile error on devices not supporting GL_FRAGMENT_PRECISION_HIGH.
- (944091) - Android: Fixed setting multiple response headers with same name in UnityWebRequest.
- (924891) - Android: Disable GPU fences for two Android 6 devices which have been found to have compatibility issues causing performance loss: HTC 10 and LG G5 SE.
- (945292) - Animation: Fixed case where sprite and material reference were not animatable at the same time in the SpriteRenderer.
- (945035) - Animation: Fixed case where transition between animations makes GetIKRotation and GetIKPosition returned incorrect value.
- (952170) - Animation: Fixed CurveField not updating animation curve when reference changes from render to render.
- (941945) - Animation: Fixed being unable to set the transition time in Animator.CrossFade().
- (948768 947491) - Animation: Fixed bool property not properly restored to initial value when exiting animation window.
- (931359 931267) - Asset Import: Fixed psd import issue where a psd looked different from a png.
- (931944) - AssetDatabase: Fixed an issue where AssetDatabase.GetSubFolders() didn't return any results, and updated the manual to reflect that this method only accepts relative paths.
- (942296) - Build: Exceptions in OnPreProcessBuild will now halt the build process correctly.
- (941192) (958237) - Build Pipeline: Fixed a crash in BuildReporting::BuildReport::BeginBuildStep caused when BuildAssetBundles was being called from an OnPreprocessBuild callback.
- (905397 918819) - Editor: Fixed importing a cubemap with invalid metafile crashing.
- (948326 930624) - Editor: Fixed Plugin Inspector showing only one option in Framework Dependencies when switching to iOS platform and .NET 4.6
- (950172) - Editor: Fixed crash when dragging component without managed instance to hierarchy.
- (942923) - Graphics: Fixed atlased ETC1 textures with split alpha rendering in editor outside of play mode.
- (939897) - Graphics: Fixed an issue where an off-screen SkinnedMeshRenderer with 'update when off-screen' enabled was not being skinned.
- (950215) - Graphics: Fixed asserts when animations disable newly visible renderers.
- (None) - Graphics: Fixed GL_INVALID_ENUM error with OpenGL ES when using point primitives.
- (946068) - Graphics: Fixed not being able to set any Mesh on a Skinned Mesh Renderer.
- (942563) - Graphics: Fixed crash when certain variables of CustomRenderTexture are used in script.
- (941334) - Graphics: Fixed Tree shadows being culled when zoomed in on Occlusion Visualization mode.
- (944223) - Graphics: Fixed object with "Dynamic Occludee" property being disabled rendering in Preview window when selecting a different object.
- (912323) - Graphics: Added error message for graphics APIs that do not support texture wrap mode "mirror once" (Android Vulkan, Android GLES3 and WebGL).
- (947342) - Graphics: Emit error messages instead of assert when the screen position is out of view frustum.
- (None) - Graphics: Fix Vulkan validation layer errors associated with image barriers.
- (941149) - Graphics: Fix Vulkan validation layer errors (on Windows) when switching to fullscreen.
- (948053) - Graphics: Fixed specific case where not all requested shader variants ending up in an asset bundle.
- (932940) - Graphics: Fixed D3D11 driver assert message and potential crash "Invalid mask passed to GetVertexDeclaration() when using post-effect".
- (935463) - Graphics: Fixed updating of bounding boxes for SkinnedMeshRenderers with 'Update When Offscreen' set.
- (942401) - iOS: Fixed screen not always automatically rotating correctly after disabling and enabling auto-rotation.
- (949032) - iOS: Fixed SystemInfo.supportedRenderTargetCount not correctly returning 8 for devices that support it.
- (949361) - iOS: Fixed crash in Handheld.PlayFullScreenMovie when playback ends.
- (847499) - Lighting: Fixed maximum lightmap import size.
- (954747) - OSX: Fixed High Sierra OS freeze while using Local Cache Server.
- (941076) - Particles: Fixed Birth SubEmitter not always firing when using random between two constants lifetime.
- (950833) - Physics: Fixed PlatformEffector2D not ignoring contacts involving trigger colliders.
- (941024) - Physics: Fixed RigidBody2Ds being woken when set to "StartAsleep" sleep mode.
- (953653) - Physics: Fixed Collider2D material changes not being propagated to existing contacts.
- (932044) - Physics: Ensure that we correctly match enter/exit collision/trigger callbacks when a single simulation step causes a contact to stop then start again.
- (946307) - Physics: Fixes GameObject which has a disabled cloth component not following parent's transform.
- (953068) - Scripting: Fixed Awake containing the wrong transform values when instantiated.
- (958250 955089) - Scripting: Fixed startup-crash on macOS 10.13 when using multiple monitors.
- (951875 899729) - Shaders: Fixed shadow precision for mobile platforms.
- (935126)(941827) - Shaders: Disable instancing support when performing surface shader analysis.
- (927339) - Shaders: Fixed incorrect translation to GLSL of compute shaders using bfi instructions with mask operators.
- (943340) - Shaders: Fixed incorrect translation of shaders using resinfo with mask operators.
- (None) - Shaders: Fixed HLSLcc shader conversion not handling F32TO16 and F16TO32 opcodes.
- (951780) - Terrain: Fixed crash when loading a non read/write enabled texture from an asset bundle. Texture will not be shown unless it is marked as read/write.
- (None) - UI: Fixed many bugs/performance problems caused by driven properties in uGUI by reverting to the 2017.1 driven property system.
- (None) - Video: Fixed VideoPlayer CameraNear/FarPlane RenderModes when used in conjunction with VR.
- (946124) - Web: Fixed POST key/value dictionary containing very long values in UnityWebRequest.
- (949038) - Web: Fixed returning null.
- (943241) - WebGL: Fixed MS Edge detection.
- (946393) - WebGL: Fixed divide by zero when AudioSource.pitch is zero.
- (949858) - WebGL: Fixed Timeline crash on missing DSPConnection::setMix().
- (931829) - Windows: Fixed ProcessMouseInWindow causing CPU spikes up to 4ms on Standalone builds.
- (899209) - Windows: Fixed Windows touch input events being out of sync from positioning events.
- (860330) - Windows: Fixed loading animation on cursor continuing to play after the game is loaded.
- (946829) - XR: Fixed landscape left being forced when landscape right is disabled.
- (931397) - XR: Fixed black screen on startup on Cardboard when GLES2 or GLES3 is used.
- (927404) - XR: Fixed incorrect culling when using multiple cameras with Windows Mixed Reality.
- (950519) - XR: Fixed Assert when playing Mixed Reality applications in Editor without Mixed Reality Portal running.
- (952039) - XR: Fixed Holographic Simulation not working in Editor.
- (943109) - XR: Eliminated errors and warnings showing per frame in console during Holographic Emulation.
- (948814) - XR: Fixed crash in Editor when toggling play mode aftering blooming to shell on Windows Mixed Reality.
- (956693) - XR: Fixed issue with "Unsupported texture format .." warnings appearing when XR is enabled.
- (None) - XR: Fixed stretched background image for ARCore apps running on Samsung S8.
- (909869) - XR: InputTracking.Recenter is now hooked up properly on Windows Mixed Reality.
- (942154) - XR: Tracking loss screen no longer appears on WindowsMR headsets, now mimics the behavior of other platforms during tracking loss.
- (945163) - XR: Tracking space type now falls back to Stationary when boundary hasn't been configured.
- (946714) - XR: Fixed issue with being able to set tracking space type to Stationary.
- (949193) - XR: Camera transform changes during and after tracking loss now mimics other platforms' behavior.
Known Issues
- (963224 - Graphics: Graphical glitches on certain materials when creating a build for Android devices with Vulkan API using Standard shader.
- (966036) - Editor: Input field stays highlighted after Enter key is pressed.
Revision: 24fd82ce573a
Unity QA からの最新情報
私たちは単にツールを提供しているだけではなく、情報も発信しています。ご興味がございましたら、Unity QAのブログ記事もぜひお読みください。 | https://unity3d.com/jp/unity/qa/patch-releases?version=2017.2&page=2 | CC-MAIN-2019-51 | refinedweb | 3,645 | 50.23 |
Hello,
I want to test Network Bandwidth. I've seen that CommVaut provide a Workflow for this purpose.
So I've download "CvNetworkTestTool Gui" version 11 workflow.
When I run this workflow.
I had this error message:
Error Code: [19:857] Description: com.microsoft.sqlserver.jdbc.SQLServerException: Incorrect syntax near the keyword 'with'. If this statement is a common table expression, an xmlnamespaces clause or a change tracking context clause, the previous statement must be terminated with a semicolon. Source: <commserve>, Process: Workflow
Could you help me ?
Regards.
Sam
I noticed there are a couple query activities in the workflow that are not properly terminated with semicolons. This needs to be fixed officially, but in the meantime I will private message you a fixed version.
I will then submit the official fix internally.
I checked further into this issue and found that it is officially fixed in the next release of the workflow.
I've retrieve and install your new Workflow.
Now I've the following error message:
"Error Code: [19:857] Description: 'rm' is not recognized as an internal or external command, operable program or batch file. Source: <commserve>, Process: Workflow"
The workflow is not correctly detecting the OS type, therefore the OS decision activity directs to the Unix based commands wherein RM is not a windows command.
This is also fixed in the next version. Let me check when this will be available.
Do you have check when this workflow will be fixed ?
Thanks.
We are actively working on a fix for this issue. We hope to have the fix uploaded to the store on or before friday. Thanks for your patience.
Hi
Please download the updated CvNetworkTestTool Gui workflow.
See store link below.
Let us know if you need further help on this.
Leo
Hi Leo,
I can now run the Workflow.
But I got a error not linked to the Workflow directly since I got the same error with CVNetworkTestTool.exe.
I'll open a new post for that.
Regards,
Sam. | http://sso.forum.commvault.com/forums/thread/52890.aspx | CC-MAIN-2019-47 | refinedweb | 335 | 67.76 |
---
---
# From 5.1 to 6.0
*Make sure you view this [upgrade guide from the `master` branch]() for the most up to date instructions.*
## Warning
GitLab 6.0 is affected by critical security vulnerabilities CVE-2013-4490 and CVE-2013-4489.
## Deprecations
### Global projects
The root (global) namespace for projects is deprecated.
So you need to move all your global projects under groups or users manually before update or they will be automatically moved to the project owner namespace during the update. When a project is moved all its members will receive an email with instructions how to update their git remote URL. Please make sure you disable sending email when you do a test of the upgrade.
### Teams.
## 0. Backup & prepare for update
It's useful to make a backup just in case things go south:
(With MySQL, this may require granting "LOCK TABLES" privileges to the GitLab user on the database version)
```bash
cd /home/git/gitlab
sudo -u git -H bundle exec rake gitlab:backup:create RAILS_ENV=production
```
The migrations in this update are very sensitive to incomplete or inconsistent data. If you have a long-running GitLab installation and some of the previous upgrades did not work out 100% correct this may bite you now. The following can help you have a more smooth upgrade.
### Find projects with invalid project names
#### MySQL
Login to MySQL:
mysql -u root -p
Find projects with invalid names:
```bash
mysql> use gitlabhq_production;
# find projects with invalid first char, projects must start with letter
mysql> select name from projects where name REGEXP '^[^A-Za-z]';
# find projects with other invalid chars
## names must only contain alphanumeric chars, underscores, spaces, periods, and dashes
mysql> select name from projects where name REGEXP '[^a-zA-Z0-9_ .-]+';
```
If any projects have invalid names try correcting them from the web interface before starting the upgrade.
If correcting them from the web interface fails you can correct them using MySQL:
```bash
# e.g. replace invalid / with allowed _
mysql> update projects set name = REPLACE(name,'/','_');
# repeat for all invalid chars found in project names
```
#### PostgreSQL
Make sure all project names start with a letter and only contain alphanumeric chars, underscores, spaces, periods, and dashes (a-zA-Z0-9_ .-).
### Find other common errors
```
cd /home/git/gitlab
# Start rails console
sudo -u git -H bin/rails console production
# Make sure none of the following rails commands return results
# All project owners should have an owner:
Project.all.select { |project| project.owner.blank? }
# Every user should have a namespace:
User.all.select { |u| u.namespace.blank? }
# Projects in the global namespace should not conflict with projects in the owner namespace:
Project.where(namespace_id: nil).select { |p| Project.where(path: p.path, namespace_id: p.owner.try(:namespace).try(:id)).present? }
```
If any of the above rails commands returned results other than `=> []` try correcting the issue from the web interface.
If you find projects without an owner (first rails command above), correct it. For MySQL setups:
```bash
# get your user id
mysql> select id, name from users order by name;
# set yourself as owner of project
# replace your_user_id with your user id and bad_project_id with the project id from the rails command
mysql> update projects set creator_id=your_user_id where id=bad_project_id;
```
## 1. Stop server
sudo service gitlab stop
## 2. Get latest code
```bash
cd /home/git/gitlab
sudo -u git -H git fetch
sudo -u git -H git checkout 6-0-stable
```
## 3. Update gitlab-shell
```bash
cd /home/git/gitlab-shell
sudo -u git -H git fetch
sudo -u git -H git checkout v1.7.9
```
## 4. Install additional packages
```bash
# For reStructuredText markup language support install required package:
sudo apt-get install python-docutils
```
## 5. Install libs, migrations, etc.
```bash migrate_groups RAILS_ENV=production
sudo -u git -H bundle exec rake migrate_global_projects RAILS_ENV=production
sudo -u git -H bundle exec rake migrate_keys RAILS_ENV=production
sudo -u git -H bundle exec rake migrate_inline_notes RAILS_ENV=production
sudo -u git -H bundle exec rake gitlab:satellites:create RAILS_ENV=production
# Clear redis cache
sudo -u git -H bundle exec rake cache:clear RAILS_ENV=production
# Clear and precompile assets
sudo -u git -H bundle exec rake assets:clean RAILS_ENV=production
sudo -u git -H bundle exec rake assets:precompile RAILS_ENV=production
#Add dealing with newlines for editor
sudo -u git -H git config --global core.autocrlf input
## 6. Update config files
Note: We switched from Puma in GitLab 5.x to unicorn in GitLab 6.0.
- Make `/home/git/gitlab/config/gitlab.yml` the same as <> but with your settings.
- Make `/home/git/gitlab/config/unicorn.rb` the same as <> but with your settings.
## 7. Update Init script
```bash
cd /home/git/gitlab
sudo rm /etc/init.d/gitlab
sudo cp lib/support/init.d/gitlab /etc/init.d/gitlab
sudo chmod +x /etc/init.d/gitlab
```
## 8. Create uploads directory
```bash
cd /home/git/gitlab
sudo -u git -H mkdir -p public/uploads
sudo chmod -R u+rwX public/uploads
```
## (5.1)
### 1. Revert the code to the previous version
Follow the [upgrade guide from 5.0 to 5.1](5.0-to-5.1.md), except for the database migration (the backup is already migrated to the previous version).
### 2. Restore from the backup:
```bash
cd /home/git/gitlab
sudo -u git -H bundle exec rake gitlab:backup:restore RAILS_ENV=production
``` | https://gitlab.com/gitlab-org/gitlab-ce/blame/11-8-stable/doc/update/5.1-to-6.0.md | CC-MAIN-2019-30 | refinedweb | 890 | 53.71 |
@jupyterlab/launcher
- Version 3.3.2
- Published
- 30.9 kB
- 10 dependencies
- BSD-3-Clause license
Install
npm i @jupyterlab/launcher
yarn add @jupyterlab/launcher
pnpm add @jupyterlab/launcher
Overview
launcher
Index
Variables
Classes
Interfaces
Namespaces
Variables
Classes
class Launcher
class Launcher extends VDomRenderer<LauncherModel> {}
A virtual-DOM-based widget for the Launcher.
constructor
constructor(options: ILauncher.IOptions);
Construct a new launcher widget.
property cwd
cwd: string;
The cwd of the launcher.
property pending
pending: boolean;
Whether there is a pending item being launched.
property translator
protected translator: ITranslator;
method render
protected render: () => React.ReactElement<any> | null;
Render the launcher to virtual DOM nodes.
class LauncherModel
class LauncherModel extends VDomModel implements ILauncher {}
LauncherModel keeps track of the path to working directory and has a list of LauncherItems, which the Launcher will render..
method items
items: () => IIterator<ILauncher.IItemOptions>;
Return an iterator of launcher items.
Interfaces
interface ILauncher
interface ILauncher {}
The launcher interface..
Namespaces
namespace ILauncher
namespace ILauncher {}
The namespace for
ILauncherclass statics.
interface IItemOptions
interface IItemOptions {}
The options used to create a launcher item.
property args
args?: ReadonlyJSONObject;
The arguments given to the command for creating the launcher item.
### Notes The launcher will also add the current working directory of the filebrowser in the
cwdfield of the args, which a command may use to create the activity with respect to the right directory.
property category
category?: string;
The category for the launcher item.
The default value is an empty string.
property command
command: string;
The command ID for the launcher item.
#### Notes If the command's
executemethod returns a
Widgetor a promise that resolves with a
Widget, then that widget will replace the launcher in the same location of the application shell. If the
executemethod does something else (i.e., create a modal dialog), then the launcher will not be disposed.
property kernelIconUrl
kernelIconUrl?: string;
For items that have a kernel associated with them, the URL of the kernel icon.
This is not a CSS class, but the URL that points to the icon in the kernel spec.
property metadata
metadata?: ReadonlyJSONObject;
Metadata about the item. This can be used by the launcher to affect how the item is displayed.
property rank
rank?: number;
The rank for the launcher item.
The rank is used when ordering launcher items for display. After grouping into categories, items are sorted in the following order: 1. Rank (lower is better) 3. Display Name (locale order)
The default rank is
Infinity.
interface IOptions
interface IOptions {}
The options used to create a Launcher.
property callback
callback: (widget: Widget) => void;
The callback used when an item is launched.
property commands
commands: CommandRegistry;
The command registry used by the launcher.
property cwd
cwd: string;
The cwd of the launcher.
property model
model: LauncherModel;
The model of the launcher.
property translator
translator?: ITranslator;
The application language translation.
Package Files (1)
Dependencies (10)
Dev Dependencies (4) 4070 ms.
- Missing or incorrect documentation? Open an issue for this package. | https://www.jsdocs.io/package/@jupyterlab/launcher | CC-MAIN-2022-21 | refinedweb | 488 | 52.87 |
Ex. Badges 0 3 5 (for the team name, wins, losses, and ties, respectively)
- into a string line, and from that string line, convert the string into four different arrays (using structures).
For example, my beginning is
#include <iostream> #include <iomanip> #include <fstream> #include <cstring> using namespace std; const int MAX_TEAMS = 12; const int SIZE = 81; struct team { char name[SIZE]; int wins[SIZE]; int losses[SIZE]; int ties[SIZE]; }; int main() { cout << "******************************************" << endl; cout << "Hello! This program will allow you to put \nin a file name, open said"; cout << " file, calculate\nthe teams' statistics, and then read the\nteams' data"; cout << " from that file onto the screen. \nThe data will include a team's name,"; cout << " wins, \nlosses, and ties." <<endl; cout << "******************************************" << endl;
I know how to read in things from files, generally, but I don't know how I would read in an entire string of data versus just one piece at a time. The code I'm familiar with stops at every null space instead of every newline. For example, I've written
ifstream inFile; char name[SIZE]; char fileName[SIZE]; cout << "Enter the name of the file: "; cin >> fileName; inFile.open(fileName); inFile >> name; cout << name << endl; inFile >> name; cout << "\t" << name << endl; inFile.close();
while using the practice txt document with it saying
hello user
hello
Could someone help push me in the right direction for using strings with an explanation or pseudocode? I've yet to find anything in my textbook that describes the situation I'm currently describing. | http://www.dreamincode.net/forums/topic/129947-ifstreams-strings-and-arrays/ | CC-MAIN-2016-50 | refinedweb | 255 | 64.24 |
Original article was published on Deep Learning on Medium
Including Assets
Next, we need to include an audio file in
index.html and create a video element in order to access the webcam. Lastly, we need to add an
index.js script file into our
index.html file that contains all the JavaScript code. After including our
index.html file, our source code should look like the code snippet below:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<title>Hand tracking for prevent covid-19</title>
</head>
<body>
<audio src="./warning-sound.mp3" id="audio"></audio>
<video id="video"></video>
<script src="src/index.js"></script>
</body>
</html>
The next step is to add the handtrack.js library to our project. In order to do that, we need to run the following command in our project command prompt:
yarn add handtrackjs
Then, we need to open our
src/index.js file and import the handtrack plugin, as shown below:
import * as handTrack from "handtrackjs";
Now, we need to initialize handtrack library plugin with default parameters named as
modelParams. The
modelParams constant holds the object with the handtrack plugin configurations.
By using the
load method provided by the
handTrack module, we’re going to load the parameters into the plugin and get the results, which we’re going to assign to a variable object called
model. All the coding implementation for this is provided in the code snippet below:
const modelParams = {
flipHorizontal: true, // flip e.g for video
imageScaleFactor: 0.7, // reduce input image size for gains in speed.
maxNumBoxes: 20, // maximum number of boxes to detect
iouThreshold: 0.5, // ioU threshold for non-max suppression
scoreThreshold: 0.79, // confidence threshold for predictions.
}let model;handTrack.load(modelParams).then((lmodel) => {
model = lmodel;
});
Our initialization of a
handTrack instance is now complete. We’re now going to detect the hand in the webcam screen and fetch the data from the plugin.
Fetching the Webcam Stream Data
The process of fetching the webcam stream data is simple and easy. All we have to do is to make use of browser API named
MediaDevices.getUserMedia().
First, we need to get the video and audio element using the
querySelector method, as shown in the code snippet below:
const video = document.querySelector("#video");
const audio = document.querySelector("#audio");
Then, we integrate the
handTrack object with the video source.
As a reminder, the process is to detect hands using the handtrack model by adding the video object to the detecting function. Then, we’ll get the prediction data.
Next, we need to run the function every second in order to get the data for each moment. The function is implemented as a
getUsermedia function.
As a result, the data length will be zero if the hand doesn’t appear on the screen. And if a hand does appears on the screen, then the length of the data will be more than zero, as shown in the console result below:
By using the simple condition based on the data length, we can implement the audio function to trigger the sound when the hand appears on the webcam screen.
Hence, we have successfully completed our simple hand detection app.
Conclusion
In this post, we used the power of TensorFlow technology in the web JavaScript environment for the detection of hand through the webcam. We learned how to detect hand movement with Handtrack.js. The aim of this project was to detect the hand before it touches the face where we use a webcam for sending visual data to the system. The system with Handtrack.js and TensorFlow technology detects the hand and notifies the user with data. The project is just the start-up for what we can do using machine learning technology like TensorFlow. There are many other technologies that you can use and make this project better.
The full source code is available in this GitHub repo: | https://mc.ai/introduction-to-hand-detection-in-the-browser-with-handtrack-js-and-tensorflow/ | CC-MAIN-2020-34 | refinedweb | 652 | 57.16 |
SUMMARY
NOTE: These are general modem commands. Certain commands may not work with all modems. Consult the documentation for your modem if you experience difficulties, or contact your modem manufacturer's technical support department.
All commands except two must begin with the characters AT. The two exceptions are the escape sequence (+++) and the repeat command (A/). The command line prefix (the letters AT) and the command sequences that follow can be typed in uppercase or lowercase, but on many modems (especially older ones) the case must not be mixed. More than one command can be typed on one line; you may separate them with spaces for easier reading. Spaces, carriage return, and any line feed characters are ignored by the modem's command interpreter: they do not go into the buffer, and they do not count against the character limitation. With most modems, the command line buffer accepts up to 39 characters, including the A and T characters. Some modems have line length limitations as low as 24 characters; others may have a larger buffer. Refer to modem documentation for specifics about your particular modem. If more than 39 characters are entered, or a syntax error is found anywhere in the command line, the modem returns an ERROR result code, and the command input is ignored.
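The buffer rule above can be sketched in a few lines of Python (the function name and the simple prefix check are illustrative only; real modems vary in buffer size and in how strictly they reject mixed case):

```python
def check_command_line(line: str, buffer_size: int = 39) -> bool:
    """Check a command line against the buffer rule described above.

    Spaces, carriage returns, and line feeds are ignored and do not
    count against the limit; all other characters, including the AT
    prefix, do. buffer_size is 39 on many modems, as low as 24 on some.
    """
    # Case may not be mixed on many modems, so accept only AT or at.
    if not (line.startswith("AT") or line.startswith("at")):
        return False
    counted = [c for c in line if c not in " \r\n"]
    return len(counted) <= buffer_size

print(check_command_line("ATDT 9 W 1 2155551212"))  # True: 17 counted characters
print(check_command_line("AT" + "D" * 40))          # False: 42 exceeds the limit
```

A line that fails this check would, per the text above, be answered with an ERROR result code and ignored.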
MORE INFORMATION
Basic Commands
IMPORTANT: You must be in the Command mode of your communication software to use the AT commands. Refer to the documentation that came with your communications software for information on entering the Command mode.
AT : This prefix begins all but two of the commands you issue to the modem locally; it tells the modem, in effect, "ATtention! Commands follow."
D : Dial. Use the D command to dial a telephone number from the command line. The format of the command is as follows:
ATD [string]
The string parameter can contain up to 45 characters, including the telephone number and dial modifiers. The dial modifiers instruct the modem how to place the call. Do not use any added characters, such as parentheses or hyphens, in the phone number.
+++ -- Escape Character Sequence. After you have connected to another modem, you may need to return to command mode to adjust the modem configuration or, more commonly, to hang up. To do this, leave your keyboard idle (press no keys) for at least one second, and then press the plus sign (+) three times. This is one of the two commands that does not use the AT prefix or require a carriage return to enter. After a moment, the modem will respond with OK, indicating you have been returned to command mode.
P : Pulse dialing. Also known as rotary dialing, this dial modifier follows the D command and precedes the telephone number to tell the modem to dial the number using pulse service. For example, to dial the number 123-4567 on a pulse phone line, type "ATDP 1234567".
T : Tone dialing. This modifier selects the tone method of dialing using DTMF tones. Note: Tone and pulse dialing can also be combined in a dial command line when both dialing methods are required. For example, to dial the number 123-4567 on a touch-tone phone line, type "ATDT 1234567".
Dial Command Modifiers
Syntax: ATD{dial modifier} 1234567 [Enter]
; : Resume command mode after dialing. If you need to dial a number that is too long to be contained in the command buffer (45 characters for the D command), use the semicolon (;) modifier to separate the dial string into multiple dial commands. All but the last command must end with the ; modifier.
, : Pause While Dialing. The comma (,) dial modifier causes the modem to pause while dialing. The modem will pause the number of seconds specified in S-Register S8 and then continue dialing. If a pause longer than the value in S-Register S8 is needed, the delay can be increased either by inserting more than one comma (,) in the dial command line or by changing the value of S-Register S8. In the following example, the command accesses the outside (public) telephone line with the 9 dial modifier. Because the comma (,) dial modifier is present, the modem delays before dialing the telephone number 5551212.
Example: ATD 9, 5551212 [Enter]
! : Using the Hook Flash. The exclamation point (!) dial modifier causes the modem to go on-hook (hang up) for one-half second and is equivalent to holding down the switch-hook on your telephone for one-half second. This feature is useful when transferring calls.
W : Wait for a Subsequent Dial Tone. The W dial modifier causes a modem to wait for an additional dial tone before dialing the numbers that follow the W. The length of time the modem waits depends on the value in S-Register S7. The modem can be instructed to dial through Private Branch Exchanges (PBXs) or long-distance calling services that require delays during dialing. This can be done with the W command to wait for a secondary dial tone, or with a comma (,) command to pause for a fixed time and then dial.
Example: ATDT 9 W 1 2155551212 [Enter]
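Putting the modifiers above together, here is a small Python sketch (the helper names are invented for illustration) that assembles a dial command and estimates the fixed delay contributed by comma pauses, assuming each comma pauses for the S-Register S8 interval (commonly 2 seconds by default):

```python
def build_dial_command(number: str, tone: bool = True,
                       outside_line: str = "", pauses: int = 0,
                       wait_for_dial_tone: bool = False) -> str:
    """Assemble an ATD command line from the modifiers described above."""
    parts = ["ATD", "T" if tone else "P"]
    if outside_line:
        parts.append(outside_line)
        if wait_for_dial_tone:
            parts.append("W")       # wait for a secondary dial tone (S7 timeout)
        parts.append("," * pauses)  # each comma pauses for the S8 interval
    parts.append(number)
    return " ".join(p for p in parts if p)

def pause_seconds(command: str, s8: int = 2) -> int:
    """Total fixed delay added by comma modifiers (S8 seconds each)."""
    return command.count(",") * s8

cmd = build_dial_command("5551212", tone=True, outside_line="9", pauses=1)
print(cmd)                 # ATD T 9 , 5551212
print(pause_seconds(cmd))  # 2
```

Because spaces are ignored by the command interpreter, the generated line is equivalent to the ATD 9, 5551212 example shown earlier.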
A/ -- Repeat. This command does not use the AT prefix, nor does it require a carriage return to enter. Typing this command causes the modem to repeat the last command line entered, and is most useful for redialing telephone numbers that are busy.
&Fn : Factory Defaults. This command (in which n=0 or 1) returns all parameters to the selected set of factory defaults if the modem has factory defaults; not all modems do.
H : Hang Up. This command tells the modem to go "on-hook," or disconnect, the telephone line.
O : Online. This command returns the modem to the on-line mode and is usually used after the escape sequence (+++) to resume communication.
Zn : Reset Modem. This command (in which n=0 or 1) resets the modem to the configuration profile stored in non-volatile memory location 0 or 1.
Making a Call
ATD 1234567 The modem dials the telephone number 1234567 and then waits for a carrier from a distant, or remote, modem. If no carrier is detected within a given time (as defined by the initial settings in S-Register 6), the modem automatically releases the line and sends a NO CARRIER result code. If a carrier is detected, the modem gives a CONNECT result code and goes online, allowing communications with the remote modem. The connection between the two modems ends when any of the following occurs causing the modem to hang up, return to command mode, and send the NO CARRIER response:
The local modem loses the carrier signal from the remote modem.
The Hang Up command (H) is sent.
The DTR interface signal is dropped between the local DTE and modem when the &D2 or &D3 command is in effect.
NOTE: All ampersand (&) based commands are totally dependent on implementation. They are extensions from the original HAYES command set. AT &F1DT9,P5551234 [Enter]
This command restores the factory default settings, dials, using tones, a 9 to access an outside line, pauses briefly, then pulse dials the number 555-1234. see above.
Manually Dialing with the Telephone
Lift the receiver of the telephone and dial the number you wish to call.
Type
ATH1
and press ENTER to connect the modem and then hang up the receiver.
Type
AT0
and press ENTER to tell the modem to go online.
Manual Answer
Online State:
To transmit or receive data, the modem must be in the online state. When placing a call, the modem is put online with the dial command. At the remote end, the modem goes online when it is instructed to answer a call automatically by setting the S-Register S0 equal to any number greater than 0 and less than 255. You can also perform this function with the ATA command.
Escape to Command Mode:
After a connection has been established with a remote modem, you can return to command mode without breaking that connection by typing the escape sequence. The escape sequence consists of three plus signs (+++).
It is not necessary to begin this sequence with the attention code, or by pressing ENTER after typing the escape code.
Return to Online State:
The modem can be returned to the online state after the escape sequence has been sent. To return the modem to the online state, send the ATO command. To return the modem online and then initiate an equalizer retrain sequence, use the ATO1 command.
Repeating the Last Command:
Each command sent to the modem remains in the command buffer until the next command is sent, or until power to the modem is turned off. To repeat the last command sent, type the command A/. The A/ command is not preceded by the AT characters or followed by pressing ENTER. See the command reference for more information.
Omitting a Parameter:
Some commands require a parameter to completely define them. If a parameter is omitted from a command that requires one, the command will use the default value.
Result Codes:
A response is displayed on your screen by the modem, indicating the result after it processes, or tries to process, a command. These result codes display the status of the modem, or the progress of a call sequence, and can take the form of either words or digits. The default consists of word responses that are defined by the ATV1 command. To receive digit responses rather than words, use the ATV0 command. To disable responses entirely, use the ATQ1 command.
Command Syntax:
All control commands to the modem are prefixed with either AT or at and terminated by a carriage return (Enter). Mixed case set (At or aT) is not allowed. The AT sequence is called the Attention command. The Attention command precedes all other commands except re-execute (A/) and the escape (+++) code.
Several commands that are preceded by AT can be entered in a single line followed by the carriage return character. Spaces can be inserted between commands to increase readability, but will not be stored in the command buffer, the size of which is 255 characters. The backspace character can be used to erase mistakes but is not saved as part of the contents of the command buffer in terminal applications. Unsupported commands will be logged and an OK or ERROR will be returned.
Commands will only be accepted by the modem after the previous command has fully executed. A command line may be canceled at any time by entering CTRL+X. The AT sequence may be followed by any number of commands in sequence, except for commands such as Z, D, or A. Commands following Z, D, or A on the same command line will be ignored. The maximum number of characters on any command line is 56 (including A and T).
Additional information may be found at the Hayes Web site and the US Robotics Web site. Also, your modem manufacturer may have additional information about commands that your modem supports.
Back to the top
|
Give Feedback
Properties
Article ID: 164659 - Last Review: December 16, 2004 - Revision: 3.1 Advanced Server 3.1
Microsoft Windows NT Server 3.5
Microsoft Windows NT Server 3.51
Microsoft Windows NT Server 4.0 Standard Edition
Microsoft Windows 95
Microsoft Windows 98 Standard Edition
Microsoft LAN Manager to Windows NT Advanced Server Upgrade
Microsoft LAN Manager 2.2c
Microsoft TCP/IP for Windows for Workgroups 3.11
Microsoft TCP/IP for Windows for Workgroups 3.11a
Microsoft TCP/IP for Windows for Workgroups 3.11b
Microsoft Windows for Workgroups 3.11
Microsoft Internet Information Server 1.0
Keywords:
kbinfo KB164659 | http://support.microsoft.com/kb/164659 | CC-MAIN-2014-42 | refinedweb | 1,945 | 64.2 |
A utility class that makes it easier to parse the command line. More...
#include <iutil/cmdline.h>
Detailed Description
A utility class that makes it easier to parse the command line.
Main creators of instances implementing this interface:
Main ways to get pointers to this interface:
Definition at line 41 of file cmdline.h.
Member Function Documentation
Add a command-line name to the command-line names array.
Add a command-line option to the command-line option array.
Returns the directory in which the application executable resides; or the directory in which the Cocoa application wrapper resides on MacOS/X.
Returns the full path to the application executable.
Check for a -[no]option toggle.
The difference to using GetOption() to check for the two possibilities is that this function respects the argument order.
Example: the result of evaluating the arguments
-option -nooption would depend on if you either check for "option" or "nooption" using GetOption(), while GetBoolOption() returns false because it looks for the last toggle argument.
- Parameters:
-
Query filename specified on commandline (that is, without leading '-').
- Parameters:
-
- Returns:
- Pointer to the filename or 0 if the index is out of bound.
Query specific commandline option (you can query second etc. such option).
Query specific commandline option by index.
Query the name of the Nth command line option.
Returns the directory in which the application's resources resides.
On many platforms, this may be the same as the directory returned by GetAppDir(); however, on MacOS/X, it is the "Resources" directory within the Cocoa application wrapper.
Initialize for the given command line.
Options from command line are added to any options already present --- i.e. those added via AddName() or AddOption().
Replace the Nth command-line name with a new value.
Replace the Nth command-line option with a new value.
Clear all options and names.
The documentation for this struct was generated from the following file:
Generated for Crystal Space 1.4.1 by doxygen 1.7.1 | http://www.crystalspace3d.org/docs/online/api-1.4/structiCommandLineParser.html | CC-MAIN-2015-27 | refinedweb | 330 | 50.33 |
code problem - Hibernate
.....How can i solve this problem.
Am i need to Kept any other jars in path. ... session.flush();
i am using hibernate 3.3.1 and I kept hibernate3.jar... of Hibernate material what u have given.
I have written contact.java-HQL subquery - Hibernate
Hibernate-HQL subquery Hi,
I need to fetch latest 5 records from...) where ROWNUM <=5;
--------------------------
I need an equivalent query in HQL which should return list of "Status" type beans.Is it possible?If yes I want to learn how to use hibernate framework in web application . for storing database in our application
Hibernate
Hibernate I downloaded the zip file given in the tutorial of Hibernate. I followed all th steps as given in the tutorial, but a build error... not be resolved. How can i rectify that error?
Hi,
The error comes when
Facing - Ajax
Facing Hello All, i m using ajax in my application i m fetching... then how can i fetch data from xml response child nodes. Hi,
If ur data... the problem.
once check u r xml response format.
xml response should be like code problem - Hibernate
this
lngInsuranceId='1'.
but i want to search not only the value 1 but the value...hibernate code problem String SQL_QUERY =" from Insurance...: "
+ insurance. getInsuranceName());
}
in the above code
hibernate how to write join quaries using hql Hi satish
I am sending links to you where u can find the solution regarding your query:
hibernate
hibernate please give me the link where i can freely download hibernate software(with dependencies)
Learn hibernate, if you want to learn hibernate, please visit the following link:
Hibernate Tutorials
:( I am not getting Problem (RMI)
I am not getting Problem (RMI) When i am excuting RMI EXAMPLE 3,2
I am getting error daying nested exception and Connect Exception
hibernate - Hibernate
hibernate hi i am new to hibernate.
i wan to run a select query... Deepak Kumar
*
*
* Select HQL Example
*/
public... hibernate for use
SessionFactory sessionFactory = new Configuration
delete query problem - Hibernate
= sess. beginTransaction();
String hql = "delete from STUDENT where name = 'mitha'";
Query query = sess.createQuery(hql)
query.executeUpate();
// int row=query.executeUpdate();
tx.commit();
i coded the above query but the IDEA 6.0
how can i add hibernate plugin to eclipse?
how can i add hibernate plugin to eclipse? how can i add hibernate plugin to eclipse eclipse 3.1 - Hibernate
hibernate configuration with eclipse 3.1 Dear Sir,
i got your mail... project.its not running.
so i want to about the whole process.
i have... by step process.
i have that folder in
d:/hibernate
i thing
Webservices(Axis2 + tomcat ) Consuming Hibernate Giving Problem - Hibernate
Please tell me how can i integrate hibernate in axis2 for my webservice??
I tried in some ways..but its not working..Main problem is hibernate session is not creating..I am getting error like
HIBERNATE CODE - Hibernate
HIBERNATE CODE Dear sir,
I am working with MyEclipse IDE.I want to connect with MYSQL using Hibernate.
What is the Driver Template and URL of MYSQL toconnect using Hibernate
hibernate
composinte primary key in hibernate
composinte primary key in hibernate I am facing following problem... from tha table using hql query. Whenever i call the list() on the Query object... of getting pojo class object. Please help me in resolving this problem.
I
hibernate...............
hibernate............... goodevining. I am using hibernate on eclipse, while connection to database Orcale10g geting error .........driver
ARNING: SQL Error: 0, SQLState: null
31 May, 2012 8:18:01 PM Hello,
I wanted to learn hibernate.I am not understanding which version to learn.please let me know
Hibernate - Hibernate
Hibernate What is a lazy loading in hibernate?i want one example of source code?plz reply
HQL order by.
HQL order by. HQL Order by query giving problem.
Order... employee name into ascending order.
If you want to print employee name into descending order you can follow the following HQL ?
String hql="Select emp FROM - Hibernate
hibernate hi friends i had one doubt how to do struts with hibernate in myeclipse ide
its urgent
update. Firstly I am facing with problem to add or reading link or parameter, than I have to update user but I am not sure how should I select user and finally... about integration
of Hibernate and JSF and Spring, and I need to add some more
HQL
HQL What is the HQL in hibernate ? Explain the use of HQL.
Hi Friend,
Please visit on this link.......
Hibernate Query - Hibernate
Hibernate code example code of how insert data in the parent table and child tabel coloums at a time in hibernate Hi friend,
I am...:
Thanks
hibernate annotations
hibernate annotations I am facing following problem, I have created 2 tables (student and address) with foreign key in address table. I am using hibernate annotations to insert records into these tables. But it is trying I have written the following program
package Hibernate;
import org.hibernate.Session;
import org.hibernate.*;
import...();
session =sessionFactory.openSession();
//Create Select Clause HQL
String SQL
HQL Sample
technical stuff very easily.
The HQL or Hibernate Query language is very... is then translated into SQL by the
ORM component of Hibernate.
HQL is very powerful... at
HQL index page
hibernate - Hibernate
hibernate Hai,This is jagadhish
I have a problem while developing... the application I got an exception that it antlr..... Exception.Tell me the answer...,
As per your problem exception is related to ant.Plz check the lib for Ant
Hi Radhika,
i think, you hibernate configuration...hibernate I have written the following program
package Hibernate;
import org.hibernate.Session;
import org.hibernate.*;
import ORDER BY Clause
Here I am giving a simple example which will demonstrate you how a HQL order
by
clause can be used. To achieve the solution of such problem at first I...Hibernate ORDER BY Clause
In this tutorial you will learn about HQL ORDER
Hibernate GROUP BY Clause
Hibernate GROUP BY Clause
In this tutorial you will learn how to use HQL GROUP... the
results that may be grouped in one or more columns.
Example
Here I am giving a simple example which will demonstrate you how a HQL group
by
clause can
how I do select from select in hql
how I do select from select in hql select RN from (
select rownum... 'a%' order by globalid ) )
where db_id = 259;
this is the sql
The below hql... link:
HQL Select Example
Hibernate WHERE Clause
Hibernate WHERE Clause
In this tutorial you will learn how to use HQL where... criteria
Example :
Here I am giving a simple example which will demonstrate you how a HQL where
clause can be used. To achieve the solution of such problem
Hibernate SELECT Clause
Hibernate SELECT Clause
In this tutorial you will learn how to use HQL select... for selecting the data from the database.
Example :
Here I am giving a simple example which will demonstrate you how a HQL select
clause can be used. To achieve
Hibernate FROM Clause
Hibernate FROM Clause
In this tutorial you will learn how to use HQL from... that
is available to the other clause such as SELECT, DELETE.
Example :
Here I am giving a simple example which will demonstrate you how a HQL from
clause can be used... place. Journaling can solve this problem. But i am not allowed to use
Hibernate code problem - Hibernate
Hibernate code problem how to write hibernate Left outer join program
Hibernate How to create Dynamic array of Objects during HQL execution
Hibernate - Hibernate
Hibernate Hai this is jagadhish, while executing a program in Hibernate in Tomcat i got an error like this
HTTP Status 500....
Hopefully this will solve your problem.
Regards
Deepak Kumar
Eclipse hibernate problem
Eclipse hibernate problem hie..i've just started a basic hibernate... and created a pojo class.
I have created HibernateUtil class but I'm getting...;Hibernate Eclipse Integration
Hibernate Tutorials
Struts Hibernate Spring - Login and User Registration - Hibernate
/hibernate-spring/index. shtml) but the problem is that Registration doesn't register...Struts Hibernate Spring - Login and User Registration Hi All,
I... is running, caa you send me code and explain in detail. I am sending you a link. I hope validate javascriptcode n i am attaching file give validations
how to validate javascriptcode n i am attaching file give validations <%@page import="java.sql.SQLException"%>
<%@page import... : 173px;
top : 97px;
width : 270px;
height: 177px;
z-index:
Facing Problem to insert Multiple Array values in database - JSP-Servlet
of Problem.
Thanks.
dear friend
i want to insert...Facing Problem to insert Multiple Array values in database Hai... facing the problem while inserting the data in database.
iam using the MsAccess.
Ask Questions?
If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for.
Ask your questions, our development team will try to give answers to your questions. | http://www.roseindia.net/tutorialhelp/comment/8070 | CC-MAIN-2013-20 | refinedweb | 1,484 | 58.79 |
Re: WSH and XML Parser
From: name (nospam_at_user.com)
Date: 05/12/04
- ]
Date: Wed, 12 May 2004 01:52:10 -0400
Your question resembles a sitting duck for 101 computer,
spam, and from trunk of car sales pitches.
No, seriously ?
=========
Do you script the receptor interface, the one that puts the
values into e.g. windows server process ?
Do you just want to see what's there or do you want to set it ?
===========================================
Basically what you want to keep in mind is that under wsh
an error will be an error, while under hta or ie it will show 'object'
even when there are no xml data, a non existing attribute, or 'null'.
Once you pass that hurdle it is routine selectNodes or selectSingleNode.
And don't forget that you will need a password for later.
===========================================
If you use any browsers, any OS, from any location,
set a global namespace so you can access via w3.org dom.
document.getElementsByTagName("drive")[0].text
document.getElementsByName("NetworkShare")
document.getElementById("...")
attributes.getNamedItem("name")
If you use an xml parser use xPath.
And suppose your documentation gets stolen ?
Whatever scripting language you prefer, if you do not learn how to write
an xml file there is a fat chance that you will never learn how to read one.
For those who manage to set their server shares via script, in whatever
manner,
you will not see a post (that could be reciprecated OR NOT).
There are just not enough rivers, mountains, and other resources
in Redmond, Washington, Windows Land, to live and let die for prematurely.
"Thomas Bosscher" <thomasabosscher@eaton.com> wrote in message
news:c7r76k$7j1@interserv.etn.com...
> Does anyone have an example of how to retrieve data from an XML file using
> VBScript?
>
> I would like to map a set of network drives in which the data fro the
drive
> mappings are located in an xml file like the one below:
>
> <MapDrives>
> <DriveMapping name="NetworkShare">
> <Description>Network Share</Description>
> <Drive>K:</Drive>
> <Share>\\SERVER\Share\</Share>
> <Group>Group_All</Group>
> </DriveMapping>
> <DriveMapping name="AppsShare">
> <Description>Network Share</Description>
> <Drive>L:</Drive>
> <Share>\\Server\Apps\</Share>
> <Group>Group_All</Group>
> </DriveMapping>
> </MapDrives>
>
> Thanks for any direction in advance. I have been searching for an example
> online with no luck.
>
> Tom
>
>
- ] | http://www.tech-archive.net/Archive/Scripting/microsoft.public.scripting.wsh/2004-05/0246.html | crawl-002 | refinedweb | 379 | 62.07 |
ASP.NET Core updates in .NET Core 3.0 Preview 5
ASP.NET Core updates in .NET Core 3.0 Preview 5
.NET Core 3.0 Preview 5 is now available. This iteration was brief for the team and primarily includes bug fixes and improvements to the more significant updates in Preview 4. This post summarizes the important points in this release.
Please see the release notes for additional details and known issues.
To get started with ASP.NET Core in .NET Core 3.0 Preview 5 install the .NET Core 3.0 Preview 5 SDK. If you’re on Windows using Visual Studio, you also need to install the latest preview of Visual Studio.
Upgrade an existing project
To upgrade an existing an ASP.NET Core app (including Blazor apps) to .NET Core 3.0 Preview 5, follow the migrations steps in the ASP.NET Core docs. Please also see the full list of breaking changes in ASP.NET Core 3.0.
To upgrade an existing ASP.NET Core 3.0 Preview 4 project to Preview 5:
- Update Microsoft.AspNetCore.* package references to 3.0.0-preview5-19227-01
- Update Microsoft.Extensions.* package references to 3.0.0-preview5.19227.01
That’s it! You should be good to go with this latest preview release.
New JSON Serialization
In 3.0-preview5, ASP.NET Core MVC adds supports for reading and writing JSON using System.Text.Json. The System.Text.Json serializer can read and write JSON asynchronously, and is optimized for UTF-8 text making it ideal for REST APIs and backend applications.
This is available for you to try out in Preview 5, but is not yet the default in the templates. You can use the new serializer by removing the call to add Newtonsoft.Json formatters:
public void ConfigureServices(IServiceCollection services) { ... services.AddControllers() .AddNewtonsoftJson() ... }
In the future this will be default for all new ASP.NET Core applications. We hope that you will try it in these earlier previews and log any issues you find here.
We used this WeatherForecast model when we profiled JSON read/writer performance using Newtonsoft.Json, our previous serializer.
public class WeatherForecast { public DateTime Date { get; set; } public int TemperatureC { get; set; } public string Summary { get; set; } }
JSON deserialization (input)
JSON serialization (output)
For the most common payload sizes, System.Text.Json offers about 20% throughput increase during input and output formatting with a smaller memory footprint.
Options for the serializer can be configured using
MvcOptions:
services.AddControllers(options => options.SerializerOptions.WriteIndented = true)
Integration with SignalR
System.Text.Json is now the default Hub Protocol used by SignalR clients and servers starting in ASP.NET Core 3.0-preview5. Please try it out and file issues if you find anything not working as expected.
Switching back to Newtonsoft.Json
If you would:
new HubConnectionBuilder() .WithUrl("/chatHub") .AddNewtonsoftJsonProtocol() .Build();
On the server add
.AddNewtonsoftJsonProtocol()to the
AddSignalR()call:
services.AddSignalR() .AddNewtonsoftJsonProtocol();
Give feedback
We hope you enjoy the new features in this preview release of ASP.NET Core! Please let us know what you think by filing issues on Github. | https://devblogs.microsoft.com/aspnet/asp-net-core-updates-in-net-core-3-0-preview-5/ | CC-MAIN-2019-22 | refinedweb | 518 | 54.9 |
1.15 anton 1: \ A powerful locals implementation 2: 1.50 ! pazsan 3: \ Copyright (C) 1995-2003 Free Software Foundation, Inc. 1.15 311: 312: forth definitions 313: 314: \ the following gymnastics are for declaring locals without type specifier. 315: \ we exploit a feature of our dictionary: every wordlist 316: \ has it's own methods for finding words etc. 317: \ So we create a vocabulary new-locals, that creates a 'w:' local named x 318: \ when it is asked if it contains x. 319: 320: also locals-types 321: 322: : new-locals-find ( caddr u w -- nfa ) 323: \ this is the find method of the new-locals vocabulary 324: \ make a new local with name caddr u; w is ignored 325: \ the returned nfa denotes a word that produces what W: produces 326: \ !! do the whole thing without nextname 380: 381: forth definitions 382: 383: \ A few thoughts on automatic scopes for locals and how they can be 384: \ implemented: 385: 386: \ We have to combine locals with the control structures. My basic idea 387: \ was to start the life of a local at the declaration point. The life 388: \ would end at any control flow join (THEN, BEGIN etc.) where the local 389: \ is lot live on both input flows (note that the local can still live in 390: \ other, later parts of the control flow). This would make a local live 391: \ as long as you expected and sometimes longer (e.g. a local declared in 392: \ a BEGIN..UNTIL loop would still live after the UNTIL). 393: 394: \ The following example illustrates the problems of this approach: 395: 396: \ { z } 397: \ if 398: \ { x } 399: \ begin 400: \ { y } 401: \ [ 1 cs-roll ] then 402: \ ... 403: \ until 404: 405: \ x lives only until the BEGIN, but the compiler does not know this 406: \ until it compiles the UNTIL (it can deduce it at the THEN, because at 407: \ that point x lives in no thread, but that does not help much). This is 408: \ solved by optimistically assuming at the BEGIN that x lives, but 409: \ warning at the UNTIL that it does not. 
The user is then responsible 410: \ for checking that x is only used where it lives. 411: 412: \ The produced code might look like this (leaving out alignment code): 413: 414: \ >l ( z ) 415: \ ?branch <then> 416: \ >l ( x ) 417: \ <begin>: 418: \ >l ( y ) 419: \ lp+!# 8 ( RIP: x,y ) 420: \ <then>: 421: \ ... 422: \ lp+!# -4 ( adjust lp to <begin> state ) 423: \ ?branch <begin> 424: \ lp+!# 4 ( undo adjust ) 425: 426: \ The BEGIN problem also has another incarnation: 427: 428: \ AHEAD 429: \ BEGIN 430: \ x 431: \ [ 1 CS-ROLL ] THEN 432: \ { x } 433: \ ... 434: \ UNTIL 435: 436: \ should be legal: The BEGIN is not a control flow join in this case, 437: \ since it cannot be entered from the top; therefore the definition of x 438: \ dominates the use. But the compiler processes the use first, and since 439: \ it does not look ahead to notice the definition, it will complain 440: \ about it. Here's another variation of this problem: 441: 442: \ IF 443: \ { x } 444: \ ELSE 445: \ ... 446: \ AHEAD 447: \ BEGIN 448: \ x 449: \ [ 2 CS-ROLL ] THEN 450: \ ... 451: \ UNTIL 452: 453: \ In this case x is defined before the use, and the definition dominates 454: \ the use, but the compiler does not know this until it processes the 455: \ UNTIL. So what should the compiler assume does live at the BEGIN, if 456: \ the BEGIN is not a control flow join? The safest assumption would be 457: \ the intersection of all locals lists on the control flow 458: \ stack. However, our compiler assumes that the same variables are live 459: \ as on the top of the control flow stack. This covers the following case: 460: 461: \ { x } 462: \ AHEAD 463: \ BEGIN 464: \ x 465: \ [ 1 CS-ROLL ] THEN 466: \ ... 467: \ UNTIL 468: 469: \ If this assumption is too optimistic, the compiler will warn the user. 470: 507: \ THEN (another control flow from before joins the current one): 508: \ The new locals-list is the intersection of the current locals-list and 509: \ the orig-local-list. 
The new locals-size is the (alignment-adjusted) 510: \ size of the new locals-list. The following code is generated: 511: \ lp+!# (current-locals-size - orig-locals-size) 512: \ <then>: 513: \ lp+!# (orig-locals-size - new-locals-size) 514: 515: \ Of course "lp+!# 0" is not generated. Still this is admittedly a bit 516: \ inefficient, e.g. if there is a locals declaration between IF and 517: \ ELSE. However, if ELSE generates an appropriate "lp+!#" before the 518: \ branch, there will be none after the target <then>. 5 532: then 533: then ; 534: 535: : (begin-like) ( -- ) 536: dead-code @ if 537: \ set up an assumption of the locals visible here. if the 538: \ users want something to be visible, they have to declare 539: \ that using ASSUME-LIVE 540: backedge-locals @ set-locals-size-list 541: then 542: dead-code off ; 543: 544: \ AGAIN (the current control flow joins another, earlier one): 545: \ If the dest-locals-list is not a subset of the current locals-list, 546: \ issue a warning (see below). The following code is generated: 547: \ lp+!# (current-local-size - dest-locals-size) 548: \ branch <begin> 549: 550: : (again-like) ( dest -- addr ) 551: over list-size adjust-locals-size 552: swap check-begin POSTPONE unreachable ; 553: 554: \ UNTIL (the current control flow may join an earlier one or continue): 555: \ Similar to AGAIN. The new locals-list and locals-size are the current 556: \ ones. 
The following code is generated: 557: \ ?branch-lp+!# <begin> (current-local-size - dest-locals-size) 558: 559: : (until-like) ( list addr xt1 xt2 -- ) 560: \ list and addr are a fragment of a cs-item 561: \ xt1 is the conditional branch without lp adjustment, xt2 is with 562: >r >r 563: locals-size @ 2 pick list-size - dup if ( list dest-addr adjustment ) 564: r> drop r> compile, 565: swap <resolve ( list adjustment ) , 566: else ( list dest-addr adjustment ) 567: drop 568: r> compile, <resolve 569: r> drop 570: then ( list ) 571: check-begin ; 572: 573: : (exit-like) ( -- ) 574: 0 adjust-locals-size ; 575: 584: 585: \ The words in the locals dictionary space are not deleted until the end 586: \ of the current word. This is a bit too conservative, but very simple. 587: 588: \ There are a few cases to consider: (see above) 589: 590: \ after AGAIN, AHEAD, EXIT (the current control flow is dead): 591: \ We have to special-case the above cases against that. In this case the 592: \ things above are not control flow joins. Everything should be taken 593: \ over from the live flow. No lp+!# is generated. 594: 595: \ About warning against uses of dead locals. There are several options: 596: 597: \ 1) Do not complain (After all, this is Forth;-) 598: 599: \ 2) Additional restrictions can be imposed so that the situation cannot 600: \ arise; the programmer would have to introduce explicit scoping 601: \ declarations in cases like the above one. I.e., complain if there are 602: \ locals that are live before the BEGIN but not before the corresponding 603: \ AGAIN (replace DO etc. for BEGIN and UNTIL etc. for AGAIN). 604: 605: \ 3) The real thing: i.e. complain, iff a local lives at a BEGIN, is 606: \ used on a path starting at the BEGIN, and does not live at the 607: \ corresponding AGAIN. This is somewhat hard to implement. 
a) How does 608: \ the compiler know when it is working on a path starting at a BEGIN 609: \ (consider "{ x } if begin [ 1 cs-roll ] else x endif again")? b) How 610: \ is the usage info stored? 611: 612: \ For now I'll resort to alternative 2. When it produces warnings they 613: \ will often be spurious, but warnings should be rare. And better 614: \ spurious warnings now and then than days of bug-searching. 615: 616: \ Explicit scoping of locals is implemented by cs-pushing the current 617: \ locals-list and -size (and an unused cell, to make the size equal to 618: \ the other entries) at the start of the scope, and restoring them at 619: \ the end of the scope to the intersection, like THEN does. 620: 621: 622: \ And here's finally the ANS standard stuff 623: 636: \G @var{Definer} is a unique identifier for the way the @var{xt} 637: \G was defined. Words defined with different @code{does>}-codes 638: \G have different definers. The definer can be used for 639: \G comparison and in @code{definer!}. | https://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/glocals.fs?annotate=1.50;f=h;only_with_tag=MAIN;ln=1 | CC-MAIN-2022-27 | refinedweb | 1,438 | 68.81 |
I'm trying to pass a string argument to a target function in a process. Somehow, the string is interpreted as a list of as many arguments as there are characters.
This is the code:
import multiprocessing
def write(s):
print s
write('hello')
p = multiprocessing.Process(target=write, args=('hello'))
p.start()
hello
Process Process-1:
Traceback (most recent call last):
>>> File "/usr/local/lib/python2.5/site-packages/multiprocessing/process.py", line 237, in _bootstrap
self.run()
File "/usr/local/lib/python2.5/site-packages/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
TypeError: write() takes exactly 1 argument (5 given)
>>>
This is a common gotcha in Python - if you want to have a tuple with only one element, you need to specify that it's actually a tuple (and not just something with brackets around it) - this is done by adding a comma after the element.
To fix this, just put a comma after the string, inside the brackets:
p = multiprocessing.Process(target=write, args=('hello',))
That way, Python will recognise it as a tuple with a single element, as intended. Currently, Python is interpreting your code as just a string. However, it's failing in this particular way because a string is effectively list of characters. So Python is thinking that you want to pass ('h', 'e', 'l', 'l', 'o'). That's why it's saying "you gave me 5 parameters". | https://codedump.io/share/nOUADpDjZcKT/1/string-arguments-in-python-multiprocessing | CC-MAIN-2017-39 | refinedweb | 240 | 57.98 |
Hi there.
I am facing issues with Gstreamer.
When setting property "timeout" of "multisocketsink" plugin, the cpu run 100%.
I am using NXP SABRE Board with i.MX6Q. Also using karnel and rootfs built with yocto-L4.14.98_2.0.0_ga(Sumo).
I try to experiment with the original SD system, however, the same thing happens.
I debug against Gstreamer and I found issues.
1. gst-plugins-base/gst/tcp/gstmultisocketsink.c:gst_multi_socket_sink_thread() sets timeout and timeout callback to main_context.
2. But does not block on subsequent calls to g_main_context_iteration(). Doing the same thing in Ubuntu will block here.
3. As a result, the while statement of gst_multi_socket_sink_thread() looks like an infinite loop.
Is there anyone in the same situation?
Best regard,
After all, this issue has fixed updating GLib version from 2.54.3 to 2.56.4.
Is there a bug in GLib?
Thank you for your help.
Best regard,
Hi igor. Thank you for your immidiate reply.
However, I think that this phenomenon I encountered is a sort of malfunction of GLib on i.MX6(ARM) and Yocto because the timeout of GSource doesn't seem to work correctly.
Additionally, curiously, The same phenomenon occurs in different environments(NXP, ARM), but it was not happend Ubuntu(x86). Plus, It happen under NXP original SD Card Linux image.I use the same version of Gstreamer on each Linux.
It will easily happen in the following command.
# gst-launch-1.0 videotestsrc ! multisocketsink : OK, CPU usage is low according to top command.
# gst-launch-1.0 videotestsrc ! multisocketsink timeout=1000000000 : NG, CPU usage is 100% according to top command.
It would be nice if you could try and see if the same thing happens, or any other mentions about debugging if you think I miss something wrong.
Best regard,
I tried the following code snippet.
-----------------------------------------------------------------
#include <stdio.h>
#include <unistd.h>
#include <glib.h>
static gboolean
g_source_func(gpointer user_data) {
(void)user_data;
printf("%s\n", __func__);
return FALSE;
}
static gpointer
timeout_test_thread(gpointer data) {
int cnt = 0;
GMainLoop *loop = data;
GMainContext *context = g_main_loop_get_context(loop);
while(1) {
printf("%s:%d:%d\n", __func__, __LINE__, cnt);
GSource *timeout = g_timeout_source_new(2000);
g_source_set_callback(timeout, g_source_func, NULL, NULL);
g_source_attach(timeout, context);
g_main_context_iteration(context, TRUE);
g_source_destroy(timeout);
g_source_unref(timeout);
cnt += 1;
}
return NULL;
}
int main(void) {
GMainLoop *loop = g_main_loop_new(NULL, FALSE);
GThread *thread = g_thread_new("timeout_test_thread", timeout_test_thread, loop);
(void)thread;
g_main_loop_run(loop);
g_main_loop_unref(loop);
return 0;
}
-----------------------------------------------------------------
make and run, "g_source_func" printed after 2 seconds once. It is OK.
In NXP environment, I guess the current context is unnecessarily asserted.
Hi Taro
one can try to debug it using:
GStreamer Debugging | GStreamer Element Debugging | Gst debug
Best regards
igor
-----------------------------------------------------------------------------------------------------------------------
Note: If this post answers your question, please click the Correct Answer button. Thank you!
----------------------------------------------------------------------------------------------------------------------- | https://community.nxp.com/t5/i-MX-Processors/timeout-property-of-plugin-multisocketsink-of-Gstreamer-does-not/m-p/1060891 | CC-MAIN-2022-27 | refinedweb | 455 | 50.73 |
I am trying to use python to get text between two dollar signs ($), but the dollar signs should not start with a backslash i.e. \$ (this is for a LaTeX rendering program). So if this is given
$\$x + \$y = 5$ and $3$
['\$x + \$y = 5', ' and ', '3']
def parse_latex(text):
return re.findall(r'(^|[^\\])\$.*?[^\\]\$', text)
print(parse_latex(r'$\$x + \$y = 5$ and $3$'))
['', ' ']
You can use this lookaround based regex that excluded escaped characters:
>>> text = r'$\$x + \$y = 5$ and $3$' >>> re.findall(r'(?<=\$)([^$\\]*(?:\\.[^$\\]*)*)(?=\$)', text) ['\\$x + \\$y = 5', ' and ', '3']
RegEx Breakup:
(?<=\$) # Lookbehind to assert previous character is $ ( # start capture group [^$\\]* # match 0 or more characters that are not $ and \ (?: # start non-capturing group \\. # match \ followed any escaped character [^$\\]* # match 0 or more characters that are not $ and \ )* # non-capturing group, match 0 or more of this non-capturing group ) # end capture group (?=\$) # Lookahead to assert next character is $ | https://codedump.io/share/3Sso4EW2ctx0/1/python---regex-for-matching-text-between-two-characters-while-ignoring-backslashed-characters | CC-MAIN-2017-30 | refinedweb | 149 | 73.07 |
A brief note on recent updates to the WebKit WebGL implementation shared by Safari and Chromium, and one on Chromium in particular. - WebGL shader validation and translation via the ANGLE project is now enabled by default in Chromium continuous builds. This means that all shaders loaded into WebGL must conform to the WebGL shading language requirements, which are a superset of the OpenGL ES 2.0 shading language specification. From a practical standpoint, you should only have to add the following line to your fragment shader to make it compliant: precision highp float; or precision mediump float; If you need to make your shader temporarily work both on compliant and non-compliant WebGL implementations, you can add the following: #ifdef GL_ES precision highp float; #endif If you encounter any problems with this change, please post to the list. You can temporarily disable the shader translator if necessary by passing the command line argument --disable-glsl-translator . The shader translator should be enabled imminently in WebKit nightly builds as well. - Support for the obsolete texImage2D and texSubImage2D variants, which did not include the format, internal format or type arguments and which accepted premultiplyAlpha and flipY as optional arguments, has been removed. - The obsolete WebGLArray type names, as well as WebGLArrayBuffer, have been removed. The current WebGL and TypedArray draft specifications describe the supported names. All of these changes are either present in the Chromium continuous builds and WebKit nightly builds, or will show up in the next one. Please post if you run into any problems. Thanks, -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl@khronos.org. 
To unsubscribe, send an email to majordomo@khronos.org with the following command in the body of your email: | https://www.khronos.org/webgl/public-mailing-list/public_webgl/1008/msg00086.php | CC-MAIN-2021-17 | refinedweb | 284 | 52.8 |
Red Hat Bugzilla – Bug 208169
Review Request: python-twisted-core - An asynchronous networking framework written in Python
Last modified: 2008-02-20 15:30:04 EST
Spec URL:
SRPM URL:.
First pass comments:
1. The files list is very long and results in lots of "File listed twice"
warnings from rpmbuild. These could be fixed by removing these lines from the
%files list:
%{python_sitearch}/twisted/manhole/ui/*.py*
%{python_sitearch}/twisted/manhole/ui/*.glade
%{python_sitearch}/twisted/manhole/ui/gtkrc
%{python_sitearch}/twisted/persisted/journal/
However, the whole %{python_sitearch} %files tree could be simplified down to:
%dir %{python_sitearch}/twisted/
%{python_sitearch}/twisted/*.py*
%{python_sitearch}/twisted/application/
%{python_sitearch}/twisted/cred/
%{python_sitearch}/twisted/enterprise/
%{python_sitearch}/twisted/internet/
%{python_sitearch}/twisted/manhole/
%{python_sitearch}/twisted/persisted/
%dir %{python_sitearch}/twisted/plugins/
%{python_sitearch}/twisted/plugins/*.py*
%ghost %{python_sitearch}/twisted/plugins/dropin.cache
%{python_sitearch}/twisted/protocols/
%{python_sitearch}/twisted/python/
%{python_sitearch}/twisted/scripts/
%{python_sitearch}/twisted/spread/
%{python_sitearch}/twisted/tap/
%{python_sitearch}/twisted/test/
%{python_sitearch}/twisted/trial/
2. There is lots of unpackaged documentation in the doc/ directory. How about a
separate -doc subpackage?
3. rpmlint output:
E: python-twisted-core non-executable-script
/usr/lib64/python2.4/site-packages/twisted/internet/glib2reactor.py 0644
W: python-twisted-core devel-file-in-non-devel-package
/usr/lib64/python2.4/site-packages/twisted/spread/cBanana.c
W: python-twisted-core devel-file-in-non-devel-package
/usr/lib64/python2.4/site-packages/twisted/protocols/_c_urlarg.c
E: python-twisted-core non-executable-script
/usr/lib64/python2.4/site-packages/twisted/trial/test/scripttest.py 0644
E: python-twisted-core script-without-shebang
/usr/share/zsh/site-functions/_twisted_zsh_stub
The non-executable-script errors could be fixed by quick couple of seds in %prep:
sed -i -e '/^#! *\/usr\/bin\/python/d' twisted/internet/glib2reactor.py
sed -i -e '/^#!\/bin\/python/d' twisted/trial/test/scripttest.py
The script-without-shebang error could be fixed by installing
/usr/share/zsh/site-functions/_twisted_zsh_stub with mode 644
Not sure about the devel-file-in-non-devel-package warning; are these devel
files or are they needed at runtime for something? Are they needed at all?
4. Strictly speaking the package should have a dependency on zsh for the
ownership of the %{_datadir}/zsh/site-functions directory. I guess the "right"
think to do would be to break out a separate -zsh subpackage for it, but that
seems rather like overkill for one tiny file. Thoughts?
5. I think the URL for this package should be, with reserved for the python-twisted metapackage.
Paul: Are you reviewing this package? If so, this should block FE-REVIEW.
I am going to change it to do so. If you aren't reviewing, change it back to FE-
NEW and reassign back to nobody@fedoraproject.org
The python-zope-interface package has been approved (at least conditionally).
It would be nice to move this review forward too. Paul, are you able to review
this package soon? If not I'd be willing to do the review. I'd like to have
the new twisted packages available so that I can play with Elisa...
I was hoping that Thomas would respond to the comments I made in Comment #1
before doing a full review. Please feel free to take over the review if you
want; you may well be more familiar with the package than I am (I just use
Twisted Core and Web for the current bittorrent package).
Created attachment 139771 [details]
Fixes for issues in comment #1
I took some of Jeffrey's changes (with credit) but not all.
- I want to keep the %{__python} macros, I adapted the spec from pyvault and I'd
prefer to keep differences reasonable.
- I want to keep the %{origname} macro
- the Spec already uses $RPM_BUILD_ROOT so I don't want to mix with %{buildroot}
- I prefer the manifest as it is because this allows me to notice new files when
I update for new source releases, and that allows me to fix the problems that
are similar to the current ones mentioned (packaging .c files, wrong execution, ...)
These updated packages do not generate any rpmlint warnings for me anymore.
A few more quick comments before I do a more detailed review (hopefully later
today):
1. I think the Group for the -doc subpackage should be Documentation
2. I think the Group for the -zsh subpackage should be System Environment/Shells
(à la bash-completion in Extras)
3. Is %{python_sitearch}/twisted/python/_twisted_zsh_stub needed, or can it be
removed since we have %{_datadir}/zsh/site-functions/_twisted_zsh_stub?
4. The docs in the -zsh subpackage are duplicates of the same docs in the main
package. How about as an alternative:
%prep
...
# Generate a brief README.zsh
awk '/^Zsh Notes:/,/^Have fun!/' twisted/python/zshcomp.py > README.zsh
%files zsh
...
%doc README.zsh
5. TwistedCore contains an extensive test suite. After installing the package, I
can run it OK using "trial twisted", but it seems to fail some tests when I try
to run it in the buildroot in %check; any thoughts on what to do about testing?
Review:
- package and spec naming OK
- package meets guidelines
- license is MIT, matches spec, text included
- spec file written in English and is legible
- sources match upstream
- package builds ok in mock for rawhide, fc5, and fc6 (i386 and x86_64)
- BR's OK
- no locales or libraries to worry about
- not relocatable
- no directory ownership or permissions problems
- %clean section present and correct
- macro usage is consistent enough
- code, not content
- large doc directory properly split off into -doc subpackage
- docs don't affect runtime
- no pkgconfig files or libtool archives to worry about
- not a GUI app, no desktop file needed
- package appears to work OK
- scriptlet is sane
- subpackages have proper dependencies
Nits:
rpmlint output:
W: python-twisted-core strange-permission twisted-dropin-cache 0775
W: python-twisted-core mixed-use-of-spaces-and-tabs (spaces: line 7, tab: line 39)
Both trivially fixed; there's no need to have twisted-dropin-cache executable
in the SRPM as it's installed with the correct mode anyway.
Also see Comment #7.
Once these are addressed, I'll be happy to approve.
Thanks for the comments. New version:
I don't know what the strange-permission warning is about. I changed to 0755
and it did the same. Pretty crappy warning in any case, how should one know
what to do about it ?
Anyway, let me know if this version satisfies you to the point of approval :)
Thanks
(In reply to comment #9)
>
> I don't know what the strange-permission warning is about. I changed to 0755
> and it did the same. Pretty crappy warning in any case, how should one know
> what to do about it ?
I think that it wants the source files to be 0644.
What ? That's silly! It's a script, it should be executable, I want to be able
to test and execute it. Is this an important thing to fix ?
(In reply to comment #11)
> What ? That's silly! It's a script, it should be executable, I want to be able
> to test and execute it. Is this an important thing to fix ?
That's rpmlint for ya... Anyway, the permission of the file in the SRPM doesn't
matter since you use "install -m 755" to install it during the build.
OK, I'm happy with this now.
As Jeffrey says in Comment #12, there's no need for the script to be executable
in the SRPM, but that's not a blocker.
If you can figure out at some point how to get the test suite to run from the
buildroot in %check, it would be good to add that.
Approved.
Will you be submitting the other Twisted components for review now?
Thomas, ping! I'm sure you must be busy but is there a chance that you could
import this soon?
Well, it's imported and I'm currently up to step 10 in the process.
However.
(In reply to comment #15)
>.
The bugzilla number is the one for the review ticket, which would be 208169 in
this case; no need to create a new one.
(In reply to comment #15)
>
> Please let me know so I can do what I need to do and clarify the wiki.
>
I took this opportunity to make my first "contribution" to the wiki... feel free
to rephrase the comment if you still feel it is not clear.
ok, that helps. Did that step.
so, poll time. Should I already request builds for FC6 and devel, or should we
first move on with the rest of the twisted stack until we have a
feature-complete replacement for the FC-5 complete python-twisted package ?
Creating tickets for the next packages
I think we should review, import, and build all of the packages in a bottom-up
dependency order. So the python-twisted metapackage would be the last one, and
that is the only one that has been present in previous releases, so that's the
only one that should cause an issue for anyone, so long as nobody builds any
other package requiring one of the new python-twisted-* packages before the
top-level metapackage is available.
Not having python-twisted-core pushed out (at least to devel) will make testing
builds with mock harder, as you'll have to set up a local repo with
python-twisted-core in it and configure mock to look at it..
My 2 cents here: if some breakage should happen because of missing pieces, we
should start building _only_ to -devel, where breakage is at least tolerated
(if not expected ;) ).
When the chain is complete, start builds for FC-6
There shouldn't be any breakage until the python-twisted metapackage is
released; I really don't see there being an issue with the other packages.
The one issue with this package not being built for fc6 at least is that if it
was there the flumotion package could be built, which would fix a E-V-R problem
with that package. (flumotion in fc5 is newer than fc6, so people who upgrade
don't get an updated package).
In any case I am trying to move forward all the other python-twisted-* reviews.
Are there any that are not submitted yet? Or can we get this in once the other
pending ones are finished?
Hi, I need this for poker-network bug #219972
Hi, if this package is dependent on other packages can you please add those to
the depend list. If not can you please go ahead and push this out for devel and
FC6? Thanks.
I have placed all the dependencies in bug #171543 so now this one can be closed
and pushed for devel, then when all sub packages are done you can push to fc6
and close 171543. Does this make sense? I need python-twisted-core and
python-twisted-web as soon as possible for my package review. Please let me
know if I have missed any dependencies for bug #171543. Thanks.
pushed to devel, keeping open for an FC-6 push
Thomas: I'm interested in EL-5 branch. Would you maintain it yourself or you
won't mind if I maintained: python-twisted-core
New Branches: EL-5
Cvsextras Commits: yes
cvs done. | https://bugzilla.redhat.com/show_bug.cgi?id=208169 | CC-MAIN-2017-17 | refinedweb | 1,886 | 63.19 |
Hey guys, in this short post lets see how we can download youtube videos using Python.
To make this possible, we use a package called pytube
Their official docs says,
pytube is a very serious, lightweight, dependency-free Python library (and command-line utility) for downloading YouTube Videos.
Run the below command to install the
pytube package.
pip install pytube
Now,
pytube is successfully installed on your system.
To download a video, simply copy the URL of that video and run the below code
from pytube import YouTube url = "" YouTube(url).streams.first().download()
Thats it folks :)
To learn more about
pytubevisit their docs | https://manitej.hashnode.dev/download-youtube-videos-using-python | CC-MAIN-2021-49 | refinedweb | 104 | 63.39 |
|
My husband and I are married for 8 years. I am 32, he is 42.
Customer Question
My husband and I are married for 8 years. I am 32, he is 42. I love him, but don't enjoy sex with him, tried to explain him, but didn't get much response. During last 4 years we had very little sex. I cheated on him only once with cybersex and he found out. I tried to explain that this could have never happened if we would have good sex together. Recently, I found Viagra in his bag. Since we don't have intimate relations, crazy thoughts started to attack me. I confronted him, he did look nervous, but denied any relations with women. He said he has problems with erection and one doctor advised him to try Viagra, but that also doesnt help him. So his doctor told him to first fix his marriage life and may be that will help.
But he still doesn't want to have sex with me, because he can't forget my Internet affair. He wants to try little by little.
I never betrayed him physically in real life, should I be concerned about him and this accident with Viagra?
Submitted:
1 year ago.
Category:
Relationship
Expert:
Karin Samms
replied 1 year ago.
Hi there, welcome to Just Answer. I will try and help you with your question.
I am very sorry to hear that your husband is finding it difficult to be physical with you and shows little affection. There does seem to be an emotional block there for him and the cybersex on your part probably has contributed to his distance. He now has become used to not opening up nor conveying his real feelings and his erection problem has becomes a part of this too.
His doctor is right for suggesting sorting out the marriage, once emotions and feelings for each other return (one hopes very much that they would), can you then both begin to work on the physical/sexual side of things. If the feelings for each have disappeared then this is what needs to be tackled first. The Viagra is probably not a real threat, he may be wanting to feel the process of being able to experience an erection but you've both it seems, done things you're not happy about, so perhaps it is now the question of whether you move forward and try and salvage your marriage and therefore work on the sexual side of things gradually alongside it too - this CAN be achieved with an experienced sex therapist or maybe a couple counselor to start with, how would this feel for you both?
Here are some links that might help you to start thinking about this:
USA therapists website:
Another website where you can search for counselors:
For sex therapy in particular, here's a link that could help:
All of the above links will take you to websites that have a directory of counselors/ therapists, you need to add your zip code and it will search for one local to you.
Some things that you may want to consider are that when he hears that you "don't enjoy sex with him" this can psychologically also create a wall which then blocks him from trying harder too.. He has said that he's willing to try "little by little" so if you're willing too, to try and make a go of things, perhaps there can be some much needed light at the end if this tunnel for you both.
You have asked him directly if he's been unfaithful and he's said no. So, maybe now it's about working on the core of your issues - together with the same goal in mind. The Viagra may be around to make him feel better about himself and allows him to feel more confident about himself - it doesn't necessarily mean he's having relations with other women.
My very best to you, I truly.
Hi there,
Thank you for rating my service positively, it is very much appreciated.
Please do let me know if I can be of further help in the future, if you have new questions and would like to return to me, please open a new page and ask your question, please ensure you add "For Karin" at the start of your question and I will do my very best to help and support you.
Take care, my best wishes to you and I truly hope you both can move forward together.
Karin. | http://www.justanswer.com/relationship/7vxzk-husband-married-years-32-42.html | CC-MAIN-2014-35 | refinedweb | 765 | 75.74 |
Flush a context, given a context and a set of flags.
#include <screen/screen.h>
int screen_flush_context(screen_context_t ctx, int flags)
The connection to the composition manager that is to be flushed. This context must have been created with screen_create_context().
The flag to indicate whether or not to wait until contents of all displays have been updated or to execute immediately.
Function type: Apply Execution
This function flushes any delayed command and causes the contents of displays to be updated, when applicable. If SCREEN_WAIT_IDLE is specified, the function will not return until the contents of all affected displays have been updated. Passing no flags causes the function to return immediately.
If debugging, you can call this function after all delayed function calls as a way to determine the exact function call which may have caused an error.
0 if the context was flushed, or -1 if an error occurred (errno is set). The error could have been caused by any delayed function that just got flushed. | http://www.qnx.com/developers/docs/qnxcar2/topic/com.qnx.doc.qnxcar2.screen/topic/screen_flush_context.html | CC-MAIN-2022-05 | refinedweb | 166 | 54.93 |
Hi All,
I would like to share this tool that I´ve created.
This tool generate reports with information from Series 7 namespace.
It´s a excel workbook with macros that uses access manager automation .
You need to enable macros to run these reports, and Access Manager Administrator version 7 installed.
Looking for some ideas to improve this tool.
Enjoy.
Topic
Pinned topic Access Manager Reporting Tool
2009-07-25T18:53:41Z |
Updated on 2009-09-17T15:42:21Z at 2009-09-17T15:42:21Z by SystemAdmin
Re: Access Manager Reporting Tool2009-07-26T21:32:44Z
This is the accepted answer. This is the accepted answer.now comes the attachment
Attachments
Re: Access Manager Reporting Tool2009-09-17T15:42:21Z
This is the accepted answer. This is the accepted answer.Excellent work pal. Cheers Sameer | https://www.ibm.com/developerworks/community/forums/html/topic?id=77777777-0000-0000-0000-000014278340 | CC-MAIN-2017-13 | refinedweb | 135 | 51.75 |
Package are used in Java, in-order to avoid name conflicts and to control access of class, interface and enumeration etc. A package can be defined as a group of similar types of classes, interface, enumeration or sub-package. Using package it becomes easier to locate the related classes and it also provides a good structure for projects with hundreds of classes and other files.
java.lang,
java.utiletc.
Creating a package in java is quite easy. Simply include a package command followed by name of the package as the first statement in java source file.
package mypack; public class employee { statement; }
The above statement will create a package woth name mypack in the project directory.
Java uses file system directories to store packages. For example the
.java file for any class you define to be part of mypack package must be stored in a directory called mypack.
Additional points about package:
//save as FirstProgram.java package learnjava; public class FirstProgram{ public static void main(String args[]) { System.out.println("Welcome to package"); } }
This is just like compiling a normal java program. If you are not using any IDE, you need to follow the steps given below to successfully compile your packages:
javac -d directory javafilename
Example:
javac -d . FirstProgram.java
The
-d switch specifies the destination where to put the generated class file. You can use any directory name like d:/abc (in case of windows) etc. If you want to keep the package within the same directory, you can use
. (dot).
You need to use fully qualified name e.g.
learnjava.FirstProgram etc to run the class.
To Compile:
javac -d . FirstProgram.java
To Run:
java learnjava.FirstProgram
Output: Welcome to package
importkeyword
import keyword is used to import built-in and user-defined packages into your java source file so that your class can refer to a class that is in another package by directly using its name.
There are 3 different ways to refer to any class that is present in a different package:
If you use fully qualified name to import any class into your program, then only that particular class of the package will be accessible in your program, other classes in the same package will not be accessible. For this approach, there is no need to use the
import statement. But you will have to use the fully qualified name every time you are accessing the class or the interface, which can look a little untidy if the package name is long.
This is generally used when two packages have classes with same names. For example:
java.util and
java.sql packages contain
Date class.
Example :
//save by A.java package pack; public class A { public void msg() { System.out.println("Hello"); } } //save by B.java package mypack; class B { public static void main(String args[]) { pack.A obj = new pack.A(); //using fully qualified name obj.msg(); } }
Output:
Hello
If you import
packagename.classname then only the class with name classname in the package with name packagename will be available for use.
Example :
//save by A.java package pack; public class A { public void msg() { System.out.println("Hello"); } } //save by B.java package mypack; import pack.A; class B { public static void main(String args[]) { A obj = new A(); obj.msg(); } }
Output:
Hello
If you use
packagename.*, then all the classes and interfaces of this package will be accessible but the classes and interface inside the subpackages will not be available for use.
The
import keyword is used to make the classes and interface of another package accessible to the current package.
Example :
//save by First.java package learnjava; public class First{ public void msg() { System.out.println("Hello"); } } //save by Second.java package Java; import learnjava.*; class Second { public static void main(String args[]) { First obj = new First(); obj.msg(); } }
Output:
Hello
// not allowed import package p1.*; package p3;
Below code is correct, while the code mentioned above is incorrect.
//correct syntax package p3; import package p1.*; | https://www.studytonight.com/java/package-in-java.php | CC-MAIN-2020-05 | refinedweb | 667 | 59.3 |
Hmm... Yes it does now appear to be working. I'll let you know if I run
into any problems.
FYI...
I've disabled the touch, get and mail tests.
The get test does work fine but downloads about a meg of information
which is a bit slow and causes my firewall to report that a modified
version of nant.exe is trying to access the internet.
The mail test/task just doesn't appear work on my machine.
I think I'll be making two test suites. One that gets run for each
build. The other, more "intensive" one, is designed to be run as
needed, i.e., before cvs commits & releases.
The touch test/task is failing.
This is a shame since Jay is the only one that has submitted unit tests
:(
If we get a cvs task than nant will be self sufficient to rebuild itself
given a binary distribution and the build file. That would be pretty
cool.
My main priority is to get a stable version ready and be in a position
to make another release when the final .NET SDK becomes available.
PS, I've also cleaned up how xml errors are reported in the build file.
If you capture the output with a text editor you should now be able to
double-click on the error line to jump to the error, i.e., it's in the
same format as all the other build errors.
> -----Original Message-----
> From: Ian MacLean [mailto:ianm@...]
> Sent: December 19, 2001 3:16 PM
> To: Gerry Shaw
> Subject: Re: nunit status
>
>
>
>
> Gerry Shaw wrote:
>
> >I've been out of the nant loop for a bit due to final exams and some
> >school but would like to get back into development. The
> biggest issue
> >that I have right now is with Nunit. Now I'm pretty sure it's not
> >because of what I've checked in over the last couple of days
> but when I
> >do:
> >nant test
> >
> >It bombs with a caught thrown exception in the test runner. Is this
> >because we have not found a solution to the nunit problem?
> >
> Where exactly is the exception happening ? I found a problem
> where the
> code I checked in had a line commented out that shouldn't
> have ( to do
> with loading the correct nAnt.Core.dll ). Now that that's
> fixed it seems
> to be working for me. I nuked my nant directory - did a clean
> and build. It successfully loads both NAnt.core.dll's and
> runs the tests
> against the correct one. I ensured my build Nant did not have
> the touch
> and get tasks.
>
> If I take the modified Nant.code and put it in the build directory the
> test fails because it knows that it's not the right NAnt. I.e. I
> get this
> warning "WRN: Comparing the assembly name resulted in the mismatch:
> Revision Number" which is correct behaviour. The test should
> only work
> against a version of an assembly it was built against.
>
> The other difference is that I'm in an RC build. Maybe that's
> an issue. If
> you're still seeing the same problem and I can't repro should we make
> the decision to move to the RC version in cvs ? I have the cds if you
> need them,
>
> Ian
>
> >
> >
> >
>
For people not using CVS you can now easily grab the latest cvs contents
(as of 3am of the current day) at:
There is a link from the main web site.
Thanks to Jason @ NDoc for sending me the scripts.
I was planning on writing this - based on your library. I might as well
start now
Ian
Mike Krüger wrote:
>Hi
>
>What about a cvs task ? I've a CVS communication library capable of doing
>and other stuff (look at NCvs it uses this library to communicate with the
>CVS server). It would
>be platform independent too ...
>
>cya
>
>
>_______________________________________________
>Nant-developers mailing list
>Nant-developers@...
>
>
>
Hi
What about a cvs task ? I've a CVS communication library capable of doing
and other stuff (look at NCvs it uses this library to communicate with the
CVS server). It would
be platform independent too ...
cya
> Do we want to look at having NAnt scan a directory looking for valid
> task assemblies rather than having to use taskdef every time theres a
> new one ?
I think so but until that's implemented this is the way to have it work
now. The change is pretty easy to make, just call Project.AddTasks()
with the names of assemblies in a specified subfolder. Make sure you
catch exceptions so that when you try to load a standard C .dll instead
of a .NET assembly it doesn't blow up (this happened to me once already
:)
I would think that if we looked in a subfolder off of nant.exe called
Tasks and any subfolders inside of that one we should use any tasks that
we find.
Gerry Shaw wrote:
Do we want to look at having NAnt scan a directory looking for valid
task assemblies rather than having to use taskdef every time theres a
new one ?
Ian
There is a new nant.exe and nant.core.dll in cvs that has the zip task
built in. The nant build file now uses this built-in task rather than
calling the external zip.exe program.
See the UserTask example for more detail.
The reasoning for this is that I want to keep nant totally portable so
that when Mono starts running under linux nant will easily move to that
platform.
I'd still like to include the code in the distribution so I've made a
subfolder under src called Extras where I plan to add these sorts of
tasks. I don't have source safe on my machine so if you could do a get
from cvs and see if this all works I'd appreciate it. If you don't have
cvs access let me know and I'll email you a zip of the project.
My next task is to get nightly .zip's of the cvs tree generated...
Re: underscores, as soon as a large body publishes a coding convention
that isn't too wacked I'll happily adopt it. I find the _ character is
simpler than m_ and helps distinguish between class fields and local
variables better than having to use 'this.' type of syntax.
Gerry
> -----Original Message-----
> From: Jason Reimer [mailto:jpreimer1@...]
> Sent: December 16, 2001 4:04 PM
> To: Gerry Shaw
> Subject: RE: NAnt project
>
>
> Hi,
>
> I got some free time and I updated the source code to
> conform with the projects conventions. If I may be so
> bold, drop the _ before the variable names. Always
> have hated 'em. I did however put them in the code to
> be consistent.
>
> I also regenerated a new interop dll with the name
> SourceSafe.Interop.dll.
>
> If you have any questions, please let me know.
>
> Jason
>
>
>
> --- Gerry Shaw <gerry_shaw@...> wrote:
> > Great, I don't have SourceSafe so I can't test it
> > but I was wondering if
> > you could make the following small changes:
> >
> > 1. Rename the interop dll to something a bit
> > smaller, say
> > Interop.SourceSafe.dll, whatever makes sense but
> > something short and
> > clear.
> >
> > 2. Remove the VSS prefix from everything and place
> > all the class in a
> > new namespace called
> > SourceForge.NAnt.Tasks.SourceSafe. In general
> > avoid abbreviations and use SourceSafe instead of
> > VSS. So
> > SourceForge.NAnt.VSSBase would become
> SourceForge.NAnt.SourceSafe.Base
> > (or BaseTask).
> >
> > If you don't think you'll get that done within say a
> > week let me know
> > and I'll post the changes as is and make them myself
> > in the future.
> >
> > This is a great contribution.
> > Thanks!
> >
> >
> > > -----Original Message-----
> > > From: Jason Reimer [mailto:jpreimer1@...]
> > > Sent: December 10, 2001 3:59 PM
> > > To: Gerry Shaw
> > > Subject: Re: NAnt project
> > >
> > >
> > > Hi again,
> > >
> > > I've finished a set of 4 tasks for use with Visual
> > > Source Safe. These are vssget, vsslabel,
> > vsscheckin,
> > > vsscheckout. I attached a zip file that contains
> > these 4
> > > plus a base abstract class and a COM interop dll
> > to the
> > > Source Safe interface. They are not a direct port
> > of the
> > > Java tasks, either from a interface or
> > implementation
> > > perspectives, but they contain most if not more
> > than the
> > > functionality the Ant tasks. I have documented
> > the tasks and
> > > attributes fairly well using the doc comments. I
> > may still
> > > add additional attributes in the future, and if I
> > do so I
> > > will send you those updates. I've done a fair
> > amount of
> > > testing on these also, and I believe they are
> > stable.
> > > Honestly, you and the others did a great job with
> > the
> > > foundation, and it was very simple to build these
> > tasks, so
> > > consequently are not that complex to debug.
> > > I was thinking of going back and adding a
> > > StringValidatorAttribute class (for string length
> > > validation) for some of this stuff, but that's
> > just a
> > > bell and whistle.
> > >
> > > I hadn't looked at your coding standards
> > completely
> > > before I started coding, so some of the variable
> > > naming conventions vary from the existing tasks.
> > If I
> > > get some time, I will try and go back and change
> > this
> > > to be more uniform.
> > >
> > > If you have any questions, please let me know.
> > >
> > > Thanks,
> > >
> > > Jason
> > >
> > >
> > > --- Gerry Shaw <gerry_shaw@...> wrote:
> > > > I believe somebody is working on some cvs tasks
> > but
> > > > AFIK nobody has done the VSS tasks. If you
> > could
> > > > write a working task for VSS I'd love to include
> > it
> > > > in
> > > > the distribution. I'm sure many others would
> > find
> > > > it
> > > > quite useful.
> > > >
> > > > --- Jason Reimer <jpreimer1@...> wrote:
> > > > > Hi,
> > > > >
> > > > > I got your email from the source forge web
> > site,
> > > > > after
> > > > > looking at NAnt. I have noticed (or at least
> > I
> > > > > can't
> > > > > find any) that you do not have any tasks for
> > > > source
> > > > > control systems yet. I am most interested in
> > > > doing
> > > > > automated builds, while integrating with
> > Visual
> > > > > Source
> > > > > Safe. I was going to port the Ant VSS tasks
> > to C#
> > > > > for
> > > > > this purpose, and was wondering if you would
> > like
> > > > me
> > > > > to send you the code, or contribute to the
> > project
> > > > > in
> > > > > some way.
> > > > >
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Jason Reimer
> > > > >
> > > > >
> > __________________________________________________
> > > > > Do You Yahoo!?
> > > > > Find the one for you at Yahoo! Personals
> > >
> > > >
> > > >
> > > >
> > > __________________________________________________
> > > > Do You Yahoo!?
> > > > Yahoo! GeoCities - quick and easy web site
> > hosting,
> > > > just $8.95/month.
> > > >
> > >
> > >
> > > =====
> > > Jason P. Reimer
> > > jpreimer1@...
> > >
> > > __________________________________________________
> > > Do You Yahoo!?
> > > Send your FREE holiday greetings online!
> >
> > >
> >
>
>
> =====
> Jason P. Reimer
> jpreimer1@...
>
> __________________________________________________
> Do You Yahoo!?
> Check out Yahoo! Shopping and Yahoo! Auctions for all of
> your unique holiday gifts! Buy at
> or bid at
> | http://sourceforge.net/p/nant/mailman/nant-developers/?viewmonth=200112&viewday=19 | CC-MAIN-2015-48 | refinedweb | 1,774 | 82.75 |
I have checked all other posts I could find on this forum and none of the solutions has worked for me.
I have confirmed permissions on the collection and the datasets
I have the .Onready statement in my code
I have synced the sandbox to the live table
My site is
I have separate input and output pages and datasets for each. The code on my page is listed below.
import wixdata from 'wix-data';

function getData() {
    let query = wixdata.query("#prayersread");
    return query.limit(1000).find().then(results => {
        console.log('getdata', results);
        return results.items;
    });
}

export function button5_click(event) {
    $w("#prayerswrite").save();
}

$w.onReady(() => {
    $w("#prayerswrite").onAfterSave(() => {
        getData().then((items) => {
            $w("#prayerlist").data = items;
        });
    });
});
Any help is appreciated!!!
First off, do you actually need the button5 click handler in your code, assuming that this is the 'Submit Request' button on the prayer submit form?
This can easily just be setup with the button being connected directly to the dataset itself to save it and then you won't need to use the save function in your code.
Then in theory your code could be shuffled and the onAfterSave can be placed at the top for the code to run after the form has been submitted and saved into your dataset.
Something simple like this...
Thanks for the suggestion, but it still does not show the live data. It works perfectly in Editor. I'm not sure what I'm missing. | https://www.wix.com/corvid/forum/community-discussion/live-data-not-showing-in-repeater-on-published-site-although-it-works-perfectly-in-editor | CC-MAIN-2020-24 | refinedweb | 241 | 66.13 |
Opened 4 years ago
Closed 2 years ago
Last modified 2 years ago
#20734 closed Cleanup/optimization (fixed)
URL namespacing documentation should be clearer
Description
There are a couple of places in the documentation that refer to "application instances":
<>
<>
However, it's not really clear what an application instance is or why I might need to deploy multiple instances of one.
I think that's worth explaining in the documentation, especially since I have found a number of questions and discussions on the subject elsewhere.
I'd suggest that the explanation be incorporated in the URLs documentation.
Change History (10)
comment:1 Changed 4 years ago by
comment:2 Changed 4 years ago by
And in, instead of
namespace="polls", should we not be advising to use
app_name="polls"?
comment:3 Changed 4 years ago by
comment:4 Changed 4 years ago by
Yes, there's some overlap. However, this one's a bit more specific, and also once I start to get the answers clear in my head I will do some work on a patch to see if I can improve the documentation. How about a link from that one to this one?
comment:5 Changed 4 years ago by
I have a file with notes from a discussion I had with Malcolm at DjangoCon US last year. I plan (at some point) to clarify the use case of url namespaces, and maybe to relax some constraints on their use. I want to do it because it's part of Malcolm's legacy but it hasn't reached the top of my TODO list for Django yet.
comment:6 Changed 4 years ago by
comment:7 Changed 3 years ago by
Mailing-list discussion:
I copied my notes in #21927.
Also: | https://code.djangoproject.com/ticket/20734 | CC-MAIN-2017-04 | refinedweb | 294 | 63.02 |
Python is a general-purpose, interpreted, object-oriented, and high-level programming language with dynamic semantics. It has efficient high-level data structures and a simple but effective approach to object-oriented programming. It is one of the most preferred programming languages by software developers due to its interpreted nature and its elegant syntax.
The success of Python lies in its simple and easy-to-learn syntax and the support of a wide variety of modules and packages that encourage program modularity and code reuse. Because Python is an interpreted language, there is no compilation step, which makes the edit-test-debug cycle incredibly fast, paving the way to Rapid Application Development, the need of the hour. The support of object-oriented features and high-level data structures, such as generators and list comprehensions, makes Python a superior language for everything from small scripting programs to more advanced game programming.
This book assumes that you have been acquainted with Python and want to test its capability in creating GUI applications. However, Python is easy to learn in just a week. If you already know programming, then learning Python will be like walking in the park for you. There are many resources available online and offline covering a wide range of topics. Being an open source language, Python is also supported by many programmers around the globe in the IRC system under the tag #python .
Tip
Python is named after the BBC show Monty Python's Flying Circus and has nothing to do with reptiles. Thus, making references to Monty Python skits in documentation is practiced and encouraged.
The Python newsgroup,
comp.lang.python, and mailing list python-list at will help you learn and explore Python.
Many of the modern programming languages are backed up by a set of libraries (commonly referred to as toolkits) to create GUI applications, such as Qt, Tcl/Tk, and so on. PySide is a Python binding of the cross-platform GUI toolkit Qt, and it runs on all platforms that are supported by Qt, including Windows, Mac OS X, and Linux. It is one of the alternatives to toolkits such as Tkinter for GUI programming in Python.
PySide combines the advantages of Qt and Python. A PySide programmer has all the power of Qt, but it is able to exploit it with the simplicity of Python. PySide is licensed under the LGPL version 2.1 license, allowing both Free/Open Source software and proprietary software development. PySide is evolving continuously, like any other open source product, and you are free to contribute to its development. Some of the applications, such as matplotlib, PhotoGrabber, QBitTorrent, Lucas Chess, Fminer and so on, certify the wide spread usage of PySide in the software industry.
PySide has also become an enabler of mobile development. Qt Mobility is a project that is creating a new suite of Qt APIs for mobile device functionality. The project Pyside Mobility is a set of bindings that allows Python to access the Qt Mobility API. The Qt Mobility API enables the developer to access the bread and butter of services provided by the underlying operating system that are essential for any mobile application. Learning PySide, you learn this for free. Without further ado, let's get hacking!
In computing terms, GUI (pronounced as gooey, or Graphical User Interface) is used to denote a set of interfaces with computing systems that involves user-friendly images rather than boring text commands. GUI comes to the rescue of the numerous command-line interfaces that have always been coupled with a steep learning curve because learning and mastering commands requires a lot of effort due to their nonintuitive nature. Moreover, GUI layers make it easy for the end users to fulfill their needs without knowing much about the underlying implementation, which is unnecessary for them.
Every other application in the modern world is designed with interactive graphics to attract the end users. Simplicity and usability are the two main ingredients for a successful GUI system. The demanding feature of a GUI is to allow the user to concentrate on the task at hand. To achieve this, it must serve the interaction between the human and the computer, and make it no less than seamless and flowing. Therefore, learning to create GUIs will not only make you a successful developer, but it will also help in getting some revenue for yourself.
At a very basic level, a GUI is seen as a window (visibly noticeable or not) consisting of the following parts: controls, menu, layout, and interaction. A GUI is represented as a window on the screen and contains a number of different controls, as follows:
Controls: These can, for example, be labels, buttons or text boxes.
Menu: This is usually situated under the top frame of the GUI window and presents to the users some choices to control the application. The top frame can also have buttons to hide, resize, or destroy the windows, which are, again, controls.
Layout: This is the way that the controls are positioned, which is very important in good GUI design.
Interaction: This happens in the way of I/O devices, such as a mouse and keyboard.
Development of a GUI application revolves around defining and controlling these components, and designing the area of interaction is the most challenging part of all. The correct exploitation of events, listeners, and handlers will help in developing better GUI applications. Many frameworks have been developed to support GUI development, such as the Model-View-Controller framework that is used in many web-based applications. Using some of these frameworks can make the GUI programming easier and will come in handy for future implementations. A good user-interface design relates to the user, not to the system architecture.
This is your first step in this series of learning. PySide is compatible with Python 2.6 or later and Qt 4.6 or better. So, before getting to install PySide, we must make sure that minimum version compatibility is achieved. This section will teach you two ways of installing PySide. One, being the most common and easiest way, is using simple point and click installers and package managers. This will install the most stable version of PySide on your system, which you can comfortably use without worrying too much about the stability. However, if you are an advanced programmer, you may prefer to build PySide from scratch from the latest builds that are available when you are reading this book. Both these methods are explained here for Windows, Mac OS X, and Linux systems, and you are free to choose your own setup style.
Installation of PySide on Windows is pretty much easy with the help of an installer. Perform the following steps for setup:
Get the latest stable package matching your Operating System architecture and the Python version installed from the releases page at
Run the downloaded installer executable, which will automatically detect the Python installation from your system
You are given an option to install PySide on the default path or at the path of your choice
On clicking Next in the subsequent windows, and finally clicking Finish, PySide is installed successfully on your system
The binaries for the Mac OS X installers of PySide are available at:
Download the latest version that is compatible with your system and perform a similar installation as explained in the previous section.
You can also choose to install PySide from the command line with the help of Homebrew or using MacPorts. The commands, respectively, are as follows:
brew install pyside
port-install pyXX-pyside
Replace
XX with your Python version.
Installing PySide on a Debian-based system is much easier with the synaptic package manager. Issuing the following command will fetch and install the latest stable version available in the aptitude distribution:
sudo apt-get install python-pyside
On an RPM-based system, you can use the RPM-based distribution, yum, as follows:
yum install python-pyside pyside-tools
If you want to make sure that PySide is installed properly on your system, issue the following commands in the Python shell environment, as shown in Figure 1. The
import PySide command should not return any errors.
PySide.__version__ should output something similar to
1.1.2:
Figure 1
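The same check can be scripted. The try/except guard below is only for illustration so that the snippet also runs where PySide is absent; on a correctly installed system the import simply succeeds:

```python
try:
    import PySide
    version = PySide.__version__     # e.g. '1.1.2'
except ImportError:
    version = None                   # PySide is not installed

print("PySide version:", version)
```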
Let's move on to see how we can build PySide from scratch.
Before starting to build PySide on Windows, ensure that the following prerequisites are installed:
Visual Studio Express 2008 (Python 2.6, 2.7, or 3.2) / Visual Studio Express 2010 (Python 3.3)
Qt 4.8 libraries for Windows
CMake
Git
Python 2.6, 2.7, 3.2, or 3.3
OpenSSL (Optional)
Make sure that the Git and cmake executables are set in your system path. Now, perform the following steps to start building PySide:
Git Clone the PySide repository from GitHub, as follows:
c:/> git clone pyside-setup
Change your working directory to
pyside-setup, as follows:
c:/> cd pyside-setup
Build the installer:
c:\> c:\Python27\python.exe setup.py bdist_wininst --msvc-version=9.0 --make=c:\Qt\4.8.4\bin\qmake.exe --openssl=c:\OpenSSL32bit\bin
Upon successful installation, the binaries can be found in the
dist sub-folder:
c:\pyside-setup\dist
On completion of these steps, PySide should have been successfully built on your system.
The following are the prerequisites to build PySide in Linux:
CMake version 2.6.0 or higher
Qt libraries and development headers version 4.6 or higher
libxml2 and development headers version 2.6.32 or higher
libxslt and development headers version 1.1.19 or higher
Python libraries and development headers version 2.5 or higher
PySide is a collection of four interdependent packages, namely API Extractor, Generator Runner, Shiboken Generator, and Pyside Qt bindings. In order to build PySide, you have to download and install these packages in that order:
API Extractor: This is a set of libraries that is used by the binding generator to parse the header and type system files to create an internal representation of the API.
Generator Runner: This is the program that controls the bindings generation process according to the rules given by the user through headers, type system files, and generator frontends. It is dependent on the API Extractor.
Shiboken Generator: This is the plugin that creates the PySide bindings source files from Qt headers and auxiliary files (type systems, global.h, and glue files). It is dependent on Generator Runner and API Extractor.
PySide Qt Bindings: This is a set of type system definitions and glue codes that allows generation of Python Qt binding modules using the PySide tool chain. It is dependent on Shiboken and Generator Runner.
Always make sure that you have downloaded and built these packages in this order because each of these packages is interdependent. The build steps for each of these are:
Unzip the downloaded packages and change into the package directory:
tar -xvf <package_name>
cd <package_directory>
Create a build directory under the package directory and enter that directory:
mkdir build && cd build
Make the build using cmake:
cmake .. && make
On a successful make, build and install the package:
sudo make install
Please note that you require sudo permissions to install the packages.
To update the runtime linker cache, issue the following command:
sudo ldconfig
Once you complete these steps in this order for each of these packages, PySide should be successfully built on your system.
Building PySide on a Mac system follows the same procedure as the Linux system except that Mac needs the Xcode Developer Tools to be installed as a prerequisite.
Congratulations on setting up PySide successfully on your system. Now, it's time to do some real work using PySide. We have set up PySide and now we want to use it in our application. To do this, you have to import the PySide modules in your program to access the PySide data and functions. Here, let's learn some basics of importing modules in your Python program.
There are basically two ways that are widely followed when importing modules in Python. The first is to use a direct
import <module> statement. This statement will import the module and creates a reference to the module in the current namespace. If you have to refer to entities (functions and data) that are defined in module, you can use
module.function. The second is to use
from module import*. This statement will import all of the entities that the module provides and set up references in the current namespace to all the public objects defined by that module. In this case, referencing an object within the module will boil down to simply stating its literal name in code.
Therefore, in order to use PySide functions and data in your program, you have to import it by saying either
import PySide or
from PySide import*. In the former case, if you have to refer to some function from PySide you have to prefix it with PySide, such as
PySide.<function_name>. In the latter, you can simply call the function by
<function_name>. Also, please note that in the latter statement,
* can be replaced by specific functions or objects. The use of
* denotes that we are trying to import all the available functions from that module. Throughout this book, I would prefer to use the latter format as I do not have to prefix the module name every time when I have to refer to something inside that module.
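The two styles behave the same way for any module; here is a quick, runnable illustration using the standard math module (chosen only because it ships with Python):

```python
# Style 1: import the module itself; members need the module prefix
import math
print(math.sqrt(16))     # 4.0

# Style 2: pull names straight into the current namespace
from math import sqrt, pi
print(sqrt(16))          # 4.0 -- no prefix required
print(pi)                # 3.141592653589793
```

In the same way, from PySide.QtGui import * lets the examples in this chapter write QLabel() rather than PySide.QtGui.QLabel().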
It's time to roll up our sleeves and get our hands dirty with some real coding now. We are going to learn how to create our first and the traditional
Hello World application. Have a look at the code first, and we will dissect the program line by line for a complete explanation of what it does. The code may look a little strange to you at first but you will gain understanding as we move through:
# Import the necessary modules required
import sys
from PySide.QtCore import *
from PySide.QtGui import *

# Main Function
if __name__ == '__main__':
    # Create the main application
    myApp = QApplication(sys.argv)

    # Create a Label and set its properties
    appLabel = QLabel()
    appLabel.setText("Hello, World!")
    appLabel.setAlignment(Qt.AlignCenter)
    appLabel.setWindowTitle("My First Application")
    appLabel.setGeometry(300, 300, 250, 175)

    # Show the Label
    appLabel.show()

    # Enter the main event loop and exit when done
    myApp.exec_()
    sys.exit()
On interpretation, you will get an output window, as shown in the figure:
Now, let's get into the working of the code. We start with importing the necessary objects into the program.
Lines 1, 2, and 3 import the necessary modules that are required for the program. Python is supported with a library of standard modules that are built into the interpreter and provide access to operations that are not a part of the core language. One such standard module is
sys, which provides access to some variables and functions that are used closely by the interpreter. In the preceding program, we need the
sys module to pass command-line arguments
sys.argv as a parameter to the
QApplication class. It contains the list of command-line arguments that are passed to a Python script. Any basic GUI application that uses PySide should have two classes imported for basic functionality. They are
QtCore and
QtGui. The
QtCore module contains functions that handle signals and slots and overall control of the application, whereas
QtGui contains methods to create and modify various GUI window components and widgets.
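The sys.argv list that we pass to QApplication below is ordinary Python data: a list of strings holding the script name followed by any command-line arguments, and it can be inspected on its own:

```python
import sys

# sys.argv: the script name followed by any command-line arguments
print(sys.argv)

# Everything after the script name; safe even when no arguments are given
args = sys.argv[1:]
print("extra arguments:", args)
```

QApplication hands this same list to Qt, which strips out any Qt-specific options it recognizes before your code sees the rest.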
In the main program, we are creating an instance of the
QApplication class.
QApplication creates the main event loop, where all events from the window system and other sources are processed and dispatched. This class is responsible for an application's initialization, finalization, and session management. It also handles the events and sets the application's look and feel. It parses the command-line arguments (
sys.argv) and sets its internal state, accordingly. There should be only one
QApplication object in the whole application even though the application creates one or many windows at any point in time.
Tip
The
QApplication object must be created before the creation of any other objects as this handles system-wide and application-wide settings for your application. It is also advised to create it before any modification of command-line arguments is received.
Once the main application instance is created, we move on by creating a
QLabel instance that will display the required message on the screen. This class is used to display a text or an image. The appearance of the text or image can be controlled in many ways by the functions provided by this class. The next two lines that follow the instantiation of this class set the text to be displayed and align it in a way that is centered on the application window.
As Python is an object-oriented programming language, we take the advantage of many object-oriented features, such as polymorphism, inheritance, object initialization, and so on. The complete Qt modules are designed in an object-oriented paradigm that supports these features.
QLabel is a base class that is inherited from the
QFrame super class whose parent class is
QWidget (the details will be covered in forthcoming chapters). So, the functions that are available in
QWidget and
QFrame are inherited to
QLabel. The two functions,
setWindowTitle and
setGeometry, are functions of
QWidget, which are inherited by the
QLabel class. These are used to set the title of the window and position it on the screen.
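This reuse is plain Python inheritance. The toy classes below are hypothetical stand-ins (not the real Qt classes) that mirror how a method defined on QWidget becomes callable on a QLabel two levels down the chain:

```python
class Widget:                        # stands in for QWidget
    def set_window_title(self, title):
        self.title = title

class Frame(Widget):                 # stands in for QFrame
    pass

class Label(Frame):                  # stands in for QLabel
    def set_text(self, text):
        self.text = text

label = Label()
label.set_text("Hello, World!")
label.set_window_title("My First Application")   # inherited from Widget
print(label.title)                   # My First Application
```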
Now that all the instantiation and setup is done, we are calling the show function of the
QLabel object to present the label on the screen. At this point only, the label becomes visible to the user and they are able to view it on the screen. Finally, we call the
exec_() function of the
QApplication object, which will enter the Qt main loop and start executing the Qt code. In reality, this is where the label will be shown to the user but the details can be safely ignored as of now. Finally, we exit the program by calling
sys.exit().
It is not always possible to foresee all the errors in your programs and deal with them. Python comes with an excellent feature called exception handling to deal with all runtime errors. The aim of the book is not to explain this feature in detail but to give you some basic ideas so that you can implement it in the code that you write.
In general, the exceptions that are captured while executing a program are handled by saving the current state of the execution in a predefined place and switching the execution to a specific subroutine known as exception handler. Once they are handled successfully, the program takes the normal execution flow using the saved information. Sometimes, the normal flow may be hindered due to some exceptions that could not be resolved transparently. In any case, exception handling provides a mechanism for smooth flow of the program altogether.
In Python, the exception handling is carried out in a set of try and except statements. The try statements consist of a set of suspicious code that we think may cause an exception. On hitting an exception, the statement control is transferred to the except block where we can have a set of statements that handles the exception and resolves it for a normal execution of a program. The syntax for the same is as follows:
try:
    suite
except exception <, target>:
    suite
except:
    suite
Here, suite is an indented block of statements. We can also have a nested
try, except block inside a try suite. The former except statement provides a specific exception class that can be matched with the exception that is raised. The latter except statement is a general clause that is used to handle a catch-all version. It is always advisable to wrap our code in this
exception encapsulation.
In the previous example, consider that we have missed instantiating the
appLabel object. This might raise an exception belonging to a class of exceptions called
NameError. If we did not encapsulate our code within the try block, this raises a runtime error. However, if we had put our code in a try block, an exception can be raised and handled separately, which will not cause any hindrance to the normal execution of the program. The following set of code explains this with the possible output:
# Import the necessary modules required
import sys
from PySide.QtCore import *
from PySide.QtGui import *

# Main Function
if __name__ == '__main__':
    # Create the main application
    myApp = QApplication(sys.argv)

    # Create a Label and set its properties
    # (appLabel is deliberately never instantiated, so a NameError is raised)
    try:
        appLabel.setText("Hello, World!")
        appLabel.setAlignment(Qt.AlignCenter)
        appLabel.setWindowTitle("My First Application")
        appLabel.setGeometry(300, 300, 250, 175)
        appLabel.show()
    except NameError:
        print("Name Error:", sys.exc_info()[1])
        pass

    myApp.exec_()
    sys.exit()
In the preceding program, if we did not handle the exceptions, the output would be as shown in the figure:
Conversely, if we execute the preceding code, we will not run into any of the errors shown in the preceding figure. Instead, we will have captured the exception and given some information about it to the user, as follows:
Hence, it is always advised to implement exception handling as a good practice in your code.
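The same pattern can be tried without PySide or a display; here is a minimal, self-contained sketch (the appLabel name simply mirrors the uninstantiated object from the text):

```python
import sys

try:
    # appLabel was never instantiated, so referencing it raises NameError
    appLabel.setText("Hello, World")
except NameError:
    # sys.exc_info()[1] is the exception instance that was just raised
    msg = "Name Error: %s" % sys.exc_info()[1]

print(msg)
print("The program continues normally after the handler runs.")
```

Running it prints the error description instead of aborting, which is exactly the behaviour the try block buys us.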
The combination of Qt with Python gives developers the flexibility of Qt, lets them develop GUI programs in a more robust language, and provides a rapid application development platform available on all major operating systems. We introduced the basics of PySide and its installation procedure on Windows, Linux, and Mac systems. We went on to create our first application, which introduced the main components of a GUI application and the event loop. We concluded this chapter with guidance on implementing exception handling as a best practice. Moving on, we are set to create some real-time applications in PySide.
LMDE Update Pack 7
Introduction
How to upgrade
1. Make sure your mirror is up-to-date
It is not safe to upgrade any package unless your mirror points to UP7
Here is the current status of the LMDE mirrors:
- UP7:
- UP6:
To select another mirror:
- Make sure “mint-debian-mirrors” is installed
- Open a terminal and run the following command: mint-choose-debian-mirror
2. Check your APT sources
- In the Update Manager, click on the “Update Pack Info” button
- Make sure “Your system configuration” shows up as green and doesn’t show any warnings or errors.
- If you see a warning or an error, follow the instructions given and repeat the process until they’re gone.
September 23rd, 2013 at 5:59 am
will update pack 8 ISO have UEFI support?
September 23rd, 2013 at 6:02 am
Finally!
I’ve checked for updates with Update Manager and see that UP7 is released before this topic was created.
September 23rd, 2013 at 6:19 am
mirror.rts-informatique.fr just stepped into UP7, let it rock \o/
Good job, Mint team
September 23rd, 2013 at 7:46 am
Just did the update, after a restart i get the usual desktop but with no bottom menu. When i select a shortcut on the desktop, screen goes black.
Any suggestions?
September 23rd, 2013 at 8:32 am
Skype’s audio is broken after the upgrade. ldd does not show any missing dependency, and the i386 packages that used to make it work with UP6 don’t help any more with UP7. I’ve also tried to upgrade skype from 4.1 to 4.2, but to no avail.
Is there any change upstream I should be aware of to make it work? Any specific information I can provide to help you help me?
Thanks
September 23rd, 2013 at 9:50 am
Hi Clem,
I see there’s no in the list, but it’s still listed by mint-choose-debian-mirror. There’s a problem with that mirror – yields a 403 error ( is still there, but its subdirectories show 403 as well). Maybe it should be removed from mint-choose-debian-mirror?
September 23rd, 2013 at 9:56 am
Desktop goes black after update and restart (no icons etc, only black screen with bottom menu). Nautilus (somehow after the update Nemo became Nautilus) and terminal do not work. I have a lot of important info on the desktop and in my home folder. Please help.
September 23rd, 2013 at 10:05 am
BTW, if you open the UP info in mintupdate-debian and click on the link to the upstream bug (yeah, that one with the authentication warning), it will open in the UP info window with no means to get back. I mean, no right click, no backspace to get back, nothing.
You can only close the window.
Should the links be opened in a new browser window?
This probably affects the main edition’s mintupdate too.
September 23rd, 2013 at 10:21 am
very cool
now upgrading, thanks!
September 23rd, 2013 at 10:25 am
tnx !!!
September 23rd, 2013 at 10:26 am
Everything went according to plan! Thank you very much!
A dedicated Linux Mint DEBIAN user.
September 23rd, 2013 at 10:29 am
Hi, opening the software manager, the system almost freezes; is it updating the software (internal DB)? The hard disk light stays on as if the system is busy writing data. Any hints? I’ve been waiting more than 5 mins now.
September 23rd, 2013 at 10:39 am
Did another restart and i get a warning that cinnamon is in fallback mode. A restart does not work.
Entering cinnamon –replace gives a warning about libgl.so.1 not being available.
September 23rd, 2013 at 11:13 am
Hi everyone,
Please don’t hesitate to connect to the chatroom if you have questions or problems. It’s much easier to troubleshoot there and to help you with the update.
Good luck everyone.
September 23rd, 2013 at 11:19 am
I guess we need not only the usual feedback threads on the forum, but also some kind of formal testing for UPs, like it’s always done for ISOs.
Of course all the software can’t be covered by a checklist of, say, 100 items, but at least the most used apps can be.
Since some bugs may still slip in, it would be good to have the out-of-band updates to the frozen UP base, like it’s described in
I know, that’s an old idea, but I think it’s still relevant.
September 23rd, 2013 at 11:57 am
libc 2.13 to 2.17.. Finally!
Now we can use the new versions of:
Intel graphics drivers, google music manager, etc..YEAH!
September 23rd, 2013 at 2:48 pm
Upgrade in progress…
Thanks a lot, LMDE team!!
September 23rd, 2013 at 4:23 pm
I found 3 bugs:
1. Caja always uses 30 to 40% of CPU; if I start caja a second time it uses 88% CPU
2. Preview of pictures and other documents broken
3. Problem with skype sound
September 23rd, 2013 at 4:40 pm
You need to disable all thumbnails in Caja preferences (“Preview” tab) in order to solve the 40% CPU problem – I had the same issue with LMDE MATE 64bits.
(It is listed as a Caja bug but doesn’t apply to everybody apparently ?)
September 23rd, 2013 at 4:52 pm
Upgrade completed with no problems encountered so far.
Thanks a lot for all your efforts.
September 23rd, 2013 at 5:59 pm
all went well until towards the end. it keeps running ..
creating /etc/mail/database…
validating configuration
waited 10 minutes but the screen keeps flashing with the message. killed it and restarted. ran sudo dpkg --configure -a. same message. any help will be highly appreciated.
Edit by Clem: Somebody on the chat mentioned this. It’s a bug upstream in Debian. Remove sendmail and restart the upgrade.
September 23rd, 2013 at 6:32 pm
Can’t run some GTK apps, e.g. audacious or synaptic.
Example of running audacious from CLI:
Also generated kernel 3.10.2 misses some features. For example I can’t find i2c-devices to control my LCD brightness. Running old 3.2.0 kernel I get those devices working well. Why did you do such differences configuring new kernel?
Edit by Clem: Some GTK 3.x themes segfault with GTK 3.y, the GNOME guys are still working on its themeing engine and so artists are expected to update their themes. Mint-X was updated in LMDE to work with GTK 3.8. If your theme isn’t up to date yet it can make any GTK 3 application segfault.
September 23rd, 2013 at 7:16 pm
don’t upgrade to update pack 7 if you’re using the xfce desktop!
big problem with the xfce desktop. it gets stuck at gconf and after that there’s nothing you can do.
i’m going to do a new install, a fresh system without update pack 7.
if you are on xfce, update pack 7 will destroy your existing system; you probably must be on mate or cinnamon to work with update pack 7.
clem, what’s goin’ on with xfce? please, i need an answer
Edit by Clem: Hi Pejton, I need more info to understand what the problem is and what is causing it. What do you mean by “it’s stuck by gconf”?
September 23rd, 2013 at 8:35 pm
September 23rd, 2013 at 10:24 pm
The update went super smooth. The system is faster than ever. Very pleased! Thank you to all for the hard work!
September 24th, 2013 at 1:19 am
@14: No, it is still mate 1.6 in UP7. But before with UP6 no problem with high cpu load and caja! And yes, if you diasble preview, the CPU goes down to 0%!
Mate 64Bit.
September 24th, 2013 at 1:33 am
Thank you so much, what a smooth update.
People with Skype trouble check the forum,
f. e. this:
LMDE and Clem and his team are brilliant!
September 24th, 2013 at 3:03 am
Can confirm problems:
1. Skype sound not working. As per ‘Bronto’ @ 5, I also tried upgrading to 4.1 & 4.2 but no good.
2. File preview icons not visible for documents & music file types as per ‘Raven’ @ 18 & 25
3 Not experienced the high CPU load either before or after UP7
The icon preview disappearing problem occurred after minor update that came out some weeks prior to UP7 i.e. when Caja and MATE were updated.
September 24th, 2013 at 3:38 am
Update works well for me on a HP8770w. The only niggle is that when I boot, text messages don’t render properly on the plymouth splash screen. Instead of a letter, you just see a black rectangle with a border in the font color. Text does show when I shutdown. I’m guessing that there’s some font missing from the boot image, but not really sure how to fix it. As I run LMDE on an encrypted LVM disk, you need to know the boot sequence in order to understand what messages are being shown.
September 24th, 2013 at 4:06 am
Updated to 7 on 2 machines and everything seems going well. Thanks.
Only a little issue: in software manager if I want edit the software sources it doesn’t work. It works from the synaptic.
September 24th, 2013 at 5:03 am
@clem
i don’t know where is exactly problem. upgrading it was fine till gconf. on this moment system try to start synaptic i think. someone else got same problem. i was on #linuxmint-debian irc and try to find out what is going on.
some outputs from chanel:
3130 E: gconf2: subprocess installed post-installation script returned error exit status 127
2121 that is where I started the new installation, pejton
3130 chattr, here stay in informations: Ready to launch synaptic
3130 /usr/sbin/synaptic: error while loading shared libraries: libffi.so.6: cannot open shared object file: No such file or directory
3130 ++ Return code:127
chattr>21 pejton: package libffi6 supplies /usr/lib/i386-linux-gnu/libffi.so.6 is that package installed?
3130 i don’t know
2121 pejton: does ‘ apt-cache policy libffi6 ‘ in terminal tell you?
3130 i can’t up terminal
3130 or synaptic
clem, thank you for all what you doing, linux mint is great os. i think this isn’t my mistake or big problem.
i hope you can resolve what going on in xfce during update pack 7.
clem, thank’s a lot again i’m linux mint fan and user few years. you doing great job. keep goin!
September 24th, 2013 at 5:24 am
oh yes, one more thing clem.
before is upgrade gone mad
it was a warning about xscreensaver. i kill xscreensaver, i know this is normal.
after that in short come asking dialog to confirm restarting libraries or something like that, i can’t remember, lib686 or something like that…
maybe is nothing, maybe can help you.
Edit by Clem: I’ll try to catch you on the IRC and hopefully we can troubleshoot this together.
September 24th, 2013 at 6:06 am
Clem,
Many thanks to you and your team!
Updating such a big project as LMDE always triggers some issues because of nature of any linux system. Happily, due to lot of forum threads on web, all the issues can be quickly (less or more) solved.
LMDE system updated to UP7 seems to be more responsive and less buggy
Thank you all one more time for your hard work! Good luck and best wishes!
September 24th, 2013 at 6:15 am
Yes, @dbzix it seems now nvidia beeing more responsive and smooth. The desktop reacts more responsive too. Nice
September 24th, 2013 at 6:27 am
Nope, only happened after the upgrade, I’ve got more time today to look into it, I’ll pop into IRC if I can find a fix or narrow down the reason.
September 24th, 2013 at 6:55 am
One issue during boot: got the following warning:
platform microcode: firmware: agent aborted loading intel-ucode/06-25-05 (not found?)
Fix is to install the intel-microcode package
September 24th, 2013 at 10:28 am
Well, at least it fixes the things that UP6 broke like support for my printer and video card. BTW, is there supposed to be some application in Mate for adding, configuring and deleting printers? I installed hplip-gui for my HP printer, but what about other brands of printers.
And, it would be really sweet if LMDE had a Net Install CD like Debian so you wouldn’t have to run all the updates on a new install.
September 24th, 2013 at 10:48 am
So I installed the update for LMDE and now during boot it stays black.
when i go into recovery mode it says that there is the grphic driver missing or something. maybe it has something to with that i installed the ati linux driver from the ati.com page …
September 24th, 2013 at 12:54 pm
Mostly smooth upgrade on my AMD X2 machine (although had to uninstall Virtualbox to fix a complete system crash), but it’s completely buggered my Intel Core2Duo laptop – install hung early on so never completed, system no longer boots.
Install with EXTREME caution (in other words, take a full system image – like with Clonezilla – first).
September 24th, 2013 at 12:56 pm
Skype-Problem seems fixed with the “work a round” from post 27!
Edit file “/etc/pulse/default.pa” and put “tsched=0″ behind the line with “load-module module-udev-detect”. Should look like this: “load-module module-udev-detect tsched=0″
After reboot it works.
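For the record, the edit above can also be scripted; a hedged sketch, rehearsed here on a local stand-in file rather than the real /etc/pulse/default.pa (which needs root to edit), and assuming GNU sed:

```shell
# Create a stand-in copy containing the one line we care about,
# then append tsched=0 to it, exactly as described in the workaround.
printf 'load-module module-udev-detect\n' > default.pa.demo
sed -i 's/^load-module module-udev-detect$/load-module module-udev-detect tsched=0/' default.pa.demo
cat default.pa.demo
```

Pointing the same sed command at the real file (with sudo, after backing it up) should produce the line quoted above.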
Bug with icon-preview and high CPU-Load is the same. Hope someone find a fix for this.
And i see a strange message in terminal when i start “pluma”:
Fontconfig warning: “/etc/fonts/conf.d/53-monospace-lcd-filter.conf”, line 10: Having multiple values in isn’t supported and may not work as expected
September 24th, 2013 at 1:22 pm
@laofzu: try system-config-printer
September 24th, 2013 at 1:42 pm
I updated to Pack 7 from Pack 6.
Now I cannot see the icons of pictures or videos.
I cannot use my printer HP connected via usb.
On the positive side skype’s audio works well and there’s no problem with caja and high CPU usage anymore.
September 24th, 2013 at 3:51 pm
Hello!
I was running the “big update” on my machine.
After downloading over 700mb, the installation process started but failed for some reasons after a few minutes.
I rebooted the system but… I can’t use LMDE anymore!
I can just access the prompt mode and check the X error: it seems that there are conflicts between different versions of nvidia drivers/libraries.
What can I do now? :\
PLEASE HELP ME!
thanks,
alessio
September 24th, 2013 at 4:41 pm
@Monsta: Thanks, that is what I install in Debian to fix the problem.
September 24th, 2013 at 4:42 pm
@palimmo
if the upgrade process is not finished you can try
sudo apt-get update
apt-get upgrade -f
Good luck
September 24th, 2013 at 4:50 pm
Upgraded my acer laptop 64 all fine but the thumbnails preview in caja.
Searching on the web it seems an issue with glib.
September 24th, 2013 at 5:12 pm
sudo apt-get -f install saved me!
thanks guys! now everything it’s ok!
September 24th, 2013 at 6:36 pm
I updated my 64bit to UP7….after reboot, had a pop up saying I was in software rendering mode…I re-installed my nvidia driver via smxi and all is well….thxs
September 24th, 2013 at 6:42 pm
My LMDE updated 1321 packages last night, which I think is a lot. Maybe you should release a version 201309.
However, the operation in the first place was good, although a bit late to start loading the new kernel.
Greetings from Cali – Colombia.
September 24th, 2013 at 8:18 pm
Update smooth and easy. Thanks mint team!
The only problem I had: After the update, the display was slightly fuzzy, hard on the eyes.
Messed around with drivers, hosed my xorg.conf, couldn’t start X, ha ha! Turns out it something minor.
Fixed by changing the monitor refresh rate from 60hz to 75hz in the monitor preferences dialog.
September 24th, 2013 at 10:14 pm
I’ll try using aptitude safe-upgrade
September 24th, 2013 at 11:20 pm
You Legends…
September 24th, 2013 at 11:49 pm
Ran into a strange bluetooth behavior when updating my EeePC 1000HA netbook. After the update pack 7 installation the Update Manager quit but there was constant disk activity for a couple of hours. The system had run low on root disk space during the update and I had set Synaptic previously to delete package files after installation so I thought that was working itself out. However, I noticed the root filesystem had stopped shrinking and /home was constantly going up and down by a very small amount. Finally I installed iotop to find out what was hammering the disk, and it indicated bluetoothd. However, nothing was connected via BT and BT was greyed out in the toolbar. When I selected the “Bluetooth – Configure Bluetooth Settings” menu entry, it showed the “Make This Computer Visible” checkbox toggling constantly. I did a sudo pkill bluetoothd and the disk thrashing stopped immediately. Bluetooth was fine before the UP7 install. I’m off to reboot and see what happens. My desktop machine was updated earlier today with no similar issue.
September 25th, 2013 at 12:24 am
After reboot the system comes up with working bluetooth and no disk thrashing but within a MATE session which looks and acts different from how it was set up before the update. Instead of my single bottom auto-hide panel there’s now a top and bottom panel; the theme is very different (looks very old school basic gnome with a Debian swirl background image), touchpad options not set, etc. However, the right programs are there and the wireless works, etc. Argh! Any ideas how to get back to how things were set up before, short of redoing it all?
September 25th, 2013 at 12:43 am
opengl kde nvidia crash!!!
September 25th, 2013 at 1:34 am
I’ve upgraded to UP7. Now:
– Caja doesn’t show thumbnails for .pdf, photos, video files;
– Caja crashes immediately when mouse over mp3 file;
– Caja utilizes my CPU to 100%
Does anyone know how to fix this shit? Otherwise, I am going to backup, reinstall and apply all my specific settings set over the years. I already feel the pain in the ass.
September 25th, 2013 at 2:38 am
My broadcom wifi didnt make it trough the dkms thing.
And my touchpad didnt transform to normal as i hoped (still cant select text with it). Its a Dell mini 10v.
September 25th, 2013 at 3:15 am
Hi,
got a little trouble with the broadcom-wl (sta) wlan driver (as predicted during the installation).
After restart and login i got a black screen and the login process stops. Login on terminal also failed.
If you have the same troubles with the installed driver (in my case for the BCM43225 wlan module) =>
* start the OS from the kernel before upgrade
* install the brcm80211 driver instead
()
This fixed my problem.
Everything else worked perfectly, good job guys.
cheers
September 25th, 2013 at 3:41 am
guess from google says it was a buggy debian package for wifi.
Installed broadcom-sta version 6 from unstable, now it works.
September 25th, 2013 at 4:27 am
Bluetooth is not working for me after update.
Although update manager says my system is up to date,
apt-get reports:
The following packages have been kept back:
gir1.2-gnomebluetooth-1.0 gir1.2-peas-1.0 gjs gnome-bluetooth gnome-shell
gnome-shell-common libpeas-1.0-0
This is due to a problem discussed in the UPDATE PACK 7 FEEDBACK thread. These packages depend on libgjs0c, but libgjs0b is required for cinnamon and the two libgjs packages conflict. See:
Does anyone else have bluetooth working?
In my case…
Bluetooth Devices connect for about 7 seconds, then disconnect.
For a headset, last messages from hcidump are:
> ACL data: handle 38 flags 0x02 dlen 41
L2CAP(d): cid 0x0040 len 37 [psm 1]
SDP SSA Rsp: tid 0xc len 0x20
count 29
record #0
aid 0x0000 (SrvRecHndl)
uint 0x10000
aid 0x0001 (SrvClassIDList)
aid 0x0005 (BrwGrpList)
aid 0x0200 (VersionNumList)
uint 0x103
aid 0x0201 (SrvDBState)
uint 0x55
aid 0x0202 (unknown)
uint 0x22
aid 0x0203 (unknown)
uint 0x26
aid 0x0204 (unknown)
bool 0x1
aid 0x0205 (unknown)
uint 0x1
aid 0x000a (DocURL)
url “”
cont 00
> 242
HCI Event: Command Complete (0x0e) plen 7
Read Transmit Power Level (0x03|0x002d) ncmd 1
status 0x00 handle 38 level 0 236
HCI Event: Command Complete (0x0e) plen 7
Read Transmit Power Level (0x03|0x002d) ncmd 1
status 0x00 handle 38 level -4
ACL data: handle 38 flags 0x02 dlen 12
L2CAP(s): Disconn rsp: dcid 0x04c1 scid 0x0040
> 228
HCI Event: Command Complete (0x0e) plen 7
Read Transmit Power Level (0x03|0x002d) ncmd 1
status 0x00 handle 38 level -8 224
HCI Event: Command Complete (0x0e) plen 7
Read Transmit Power Level (0x03|0x002d) ncmd 1
status 0x00 handle 38 level -12
HCI Event: Command Status (0x0f) plen 4
Disconnect (0x01|0x0006) status 0x00 ncmd 1
> HCI Event: Disconn Complete (0x05) plen 4
status 0x00 handle 38 reason 0x16
Reason: Connection Terminated by Local Host
September 25th, 2013 at 6:30 am
So, I tried to use update manager to update new UP, haven’t seen anything about new update patch. I changed mirror already.
But there’s one problem:
– My disk was full after downloading all the upgradeable packages. However, the update manager still kept installing stuff; even though package installs were failing for lack of space, it still continued …
I think this application should stop upgrading as soon as the disk is full.
Btw, my update manager is mintUpdate.py, and I could not find “Update package info” button like before. Is there anything wrong with it?
September 25th, 2013 at 7:00 am
I encountered the same issue that “palimmo” met with. I was really worried, but… I also just could survive with a “sudo apt-get -f install”
September 25th, 2013 at 7:07 am
Clem, looks like the UP notes should be updated to reflect that GEdit is now incompatible with Cinnamon because of the known libgjs0b/libgjs0c problem:
September 25th, 2013 at 7:29 am
I’m unable to update due to the following issue:
W: GPG error: debian Release: The following signatures were invalid: BADSIG 3EE67F3D0FF405B2 Clement Lefebvre (Linux Mint Package Repository v1)
could someone help?
September 25th, 2013 at 8:47 am
i upgraded to update pack 7 on lmde mate 64bit..
the process is stuck on the grub page – i chose to install grub to the /boot partition i was already using for lmde and i see:
“writing GRUB to boot device failed – continue?”
when i press ‘help’ i see:
“GRUB failed to install to the following devices:
dev/sdc5
do you want to continue anyway? if you do your computer may not start up properly.”
dev/sdc5 is the correct boot partition on this pc.
anyone got any advice here?
presently i can choose to tick the box for “writing GRUB to boot device failed – continue?” and then also either click ‘forward’ or ‘cancel’..
the logic here is not clear.
thanks
September 25th, 2013 at 10:02 am
@everyone seeing the warning about 53-monospace-lcd-filter.conf: look at this post for a fix –
September 25th, 2013 at 10:26 am
And I’m experiencing the caja thumbnails issue too, CPU utilization of ~40% and frequent disk writes until I shut off thumbnails.
September 25th, 2013 at 12:23 pm
GLX is not using direct rendering in INTEL (& possibly others)
glxgears is very slow and jerky before I added this symlink:
# cd /usr/lib
# ln -s i386-linux-gnu/dri dri
September 25th, 2013 at 12:31 pm
In reply to post 46:
gedit works fine for me although I see the libgjs0b/libgjs0c problem here.
In reply to post 47:
Yes, I saw the Fontconfig warning: “/etc/fonts/conf.d/53-monospace-lcd-filter.conf before following the instructions in the post to fix it.
September 25th, 2013 at 12:52 pm
Does anyone have bluetooth working?
September 25th, 2013 at 2:10 pm
Smooth update on my “Samsung NC10 Plus” . Excellent work! Congratulations.
There are some minor problems with some Mate-menus, due to the small 10″-screen. I do not see the “OK” buttons sometimes.
September 25th, 2013 at 2:44 pm
After this massive update it killed a good running system, thanks, what can you do, it could not load the NVIDA drivers, so goodbye system ,
beeping you know what , not happy, who came out with this update , needs to rethink what they are doing if you ask me
September 25th, 2013 at 2:46 pm
@Brian.
I had the same problem.
I solved running:
sudo apt-get -f install
September 25th, 2013 at 4:08 pm
@palimmo
No cannot even do that, I am tried to go into recovery, place the root password, I then get at the prompt with computer name, then I place the commands you have mention, and it stated not found, I believe this update has completely and utterly beeped up the system
So for example my system name is Karma that is on the screen I am in root mode, then I place those commands, nothing , not found etc etc
so now I will have to redo this system which I cannot be bloody arsed to be honest,
September 25th, 2013 at 4:56 pm
For me it was a disaster! After finishing the upgrade a lot of libs were broken and cinnamon doesn’t start…
September 25th, 2013 at 5:07 pm
Deja-Dup crashes.
September 25th, 2013 at 7:03 pm
Thanks for the update: Mate seems to have survived but apache 2.4 has resulted in my WSGI virtual hosts just returning 404s
gutted as I do all my wsgi development on LMDE
Anyone had this issue?
September 25th, 2013 at 11:57 pm
Thank you guys for the UP7 update. Just want to share my experience with caja’s thumbnailer problem, where it uses considerable cpu time and doesn’t show thumbnails. I was able to solve it by following one of the suggestions from the mate forums. Here is the workaround. Create a symlink in the ~/.cache dir called thumbnails pointing at ~/.thumbnails so that
lrwxrwxrwx 1 gene gene 15 Sep 25 21:55 .cache/thumbnails -> ../.thumbnails/
E.g., you can do it like this:
cd ~/.cache; ln -s ../.thumbnails thumbnails
Then the caja process should get back to normal and thumbnails will be shown again. Hope it will help others as well
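That symlink fix can be rehearsed in a scratch directory first; a sketch using stand-in paths (the real ones are ~/.thumbnails and ~/.cache — note the relative target, which keeps the link valid if the home directory ever moves):

```shell
# Create stand-ins for ~/.thumbnails and ~/.cache,
# then link .cache/thumbnails -> ../.thumbnails as in the workaround above.
mkdir -p scratch-home/.thumbnails scratch-home/.cache
ln -s ../.thumbnails scratch-home/.cache/thumbnails
ls -l scratch-home/.cache/thumbnails
```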
September 26th, 2013 at 12:51 am
@Brian
Can you boot into your last stable kernel? Can you see the list of kernels and choose between them in the grub menu? Did you use the proprietary nVidia driver before the update? This update must have switched you to nouveau, I guess.
September 26th, 2013 at 4:01 am
So the Mint-Z GTK and GNOME Shell themes are gone? I’ll miss them…
It’s good practice to get rid of fglrx before upgrading! Then I selected the fglrx packages to upgrade; the first upgrade attempt was only partially successful (maybe it just needed the linux-headers for the 3.10.2 kernel to build the dkms modules right). gdm3 (?) went mad and kept restarting, complaining about some video card IRQ problems and refusing to let me log in on the console (because it switched to its own tty right after each restart). I rebooted in recovery mode and executed: mv -f /usr/sbin/gdm3 /usr/sbin/gdm3.off The graphical login hung, but at least I was able to log in in console mode with my networks connected. Then I replaced gdm3 with mdm, and went on from there.
And with some other minor magic, I’m happy LMDE user again
September 26th, 2013 at 5:42 am
I tried the upgrade on several different machines and on none of them the upgrade went without problems.
When initram-fs complained about a missing file (something with “/pango/”) i needed to uninstall plymouth*
A machine with an ATI XPress 200M only shows a blank screen on boot and even the TTY are not accessible. With the old kernel (3.2) the TTYs are available but tons of graphic card related error messages flood the screen.
And several problems with dpkg not being able to successfully configure packages, which I needed to resolve with “apt-get -f install” and restart the mintupdate procedure.
For technically inexperienced users I would not recommend this upgrade.
September 26th, 2013 at 9:47 am
i had some issues, i solved it, thanks god. i’m not satisfied with upgrading pack 7 problems but is worth.
finally it’s over. i have fast lmde, work fine for now.
clem, thank you very much for your time & support over #linuxmint-debian irc, much appreciated your advices.
keep going, i wish all best in future work.
September 26th, 2013 at 10:42 am
LMDE update pack 7. Problems I have noticed:
1. Rhythmbox and Banshee freeze up or crash; the playlists cause problems. Removing the Banshee cache in ~/.cache solves this problem. To be fully sure, back up the banshee1 folder in ~/.config first, then delete it. Afterwards you can use your backup to restore the old playlists.
2. No thumbnails (graphical previews) anymore when opening a folder of photos; the computer keeps loading.
3. Banshee and Rhythmbox see a normal USB drive as a media player, so a popup appears.
All tested on 3 different types of computer: i3, Core 2 Duo and AMD.
September 26th, 2013 at 10:49 am
I use LMDE with LXDE;
(almost) everything was fine but after installing UP7 I experience some issues, in particular I can’t run some GTK apps due to GTK warnings.
I lost the possibility to run Gnote (which is meant to be a Mono-free Tomboy alternative) and quite frankly I think this is not acceptable.
However logging in a Cinnamnon session Gnote is working…so what?
September 26th, 2013 at 12:03 pm
The updated kernel 3.10-2-amd64 seems to be buggy for the i915 card. It froze and stack-dumped on one of my machines already. It’s pretty old, dating back to Aug. 8. I’d recommend going with a more recent 3.10.12 version or any other stable one (using 3.11.1-custom+ #2 SMP Sun Sep 15 at the moment)
September 26th, 2013 at 1:54 pm
After this update the automatic mount of memory cards stopped working and I can not seem to get it to work.
Other than that…. Smooth as silk
September 26th, 2013 at 3:48 pm
After updating to UP7 Deluge
crashes with python error if you try to remove a torrent. Any ideas?
September 26th, 2013 at 7:09 pm
Sorted: only two issues after upgrade apache2.4 and cinnamon not working. For anyone out there: apache2.4 you need to change your virtual host files form [name] to [name].conf and they work again
as regards cinny not working: some other time: I moved to Mate
Thanks Clem and co for LMDE, btw
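The rename-to-.conf step described above can be rehearsed outside /etc/apache2 first; a hedged sketch on a scratch copy of the layout ("mysite" is a made-up site name, and on the real system the move needs root, followed by an a2ensite and a reload):

```shell
# Apache 2.4 only picks up sites-available files ending in .conf,
# so demonstrate the rename on a scratch copy of the directory layout.
mkdir -p demo-sites-available
touch demo-sites-available/mysite
mv demo-sites-available/mysite demo-sites-available/mysite.conf
ls demo-sites-available
```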
September 26th, 2013 at 7:33 pm
i’ve got mate. i also ran sudo apt-get -f install and upgraded again, but using the terminal. Now almost everything’s fine,
but:
– after upgrade compiz doesn’t work anymore, but it’s installed, neither works compiz fusion icon.
– mintupdate says i’m still pack 6, but i’ve installed more than 700 mb…
September 26th, 2013 at 10:55 pm
So, I’ve got 6 computers running LMDE, I’ve updated 4 of them so far, and none of the updates went properly. 2 of them I don’t mind, they were updating from UP4 or so, but the other two were brand new installs. My brother had similar issues, libpango caused his; removing it let him finish the update. I ended up with several broken packages, I don’t remember exactly what I had to do to fix it.
September 27th, 2013 at 4:43 am
now my system is broken
very sad… losing a huge amount of precious time…
my eclipse crash every minute… i had to chroot with a livecd to install lightdm… lots of problems here…
September 27th, 2013 at 6:15 am
After adding kernel 3.10 to Linux lmde64 3.2.0-4-amd64 #1 SMP Debian 3.2.32-1 x86_64 GNU/Linux
on a hp635 with amd e450 board
bluetooth does not work any more, neither in 3.2 nor 3.10 kernel. Before adding 3.10 bluetooth did work.
Also the fan is much more active with kernel 3.10 than with 3.2. Anyone any idea how to solve these problems?
September 27th, 2013 at 6:33 am
Hi all, thanks to Clem for the enormous job he does daily.
Unfortunately I am rather unsatisfied with UP7. I installed LMDE soon after its appearance on my laptop (Asus k52f) and almost everything goes plain (out of screen hangs from time to time because of weird i915 driver). I had cinnamon and cairo-dock with glx running fine. After the upgrade, cinnamon was removed and I was left with the choice of other DE’s. Given the big trouble in going through gnome 2 to 3 upgrade I decided to perform a fresh install, maybe there was some inheritage from those troubles. As a result I’m writing from MATE, because cinnamon has been removed as well. And, moreover, I can’t use cairo-dock, since there is no compiz anymore in repositories.
Every other thing works correctly in my laptop, but user interface is fundamental. Moreover, why Linux Mint, the home of Cinnamon, forces me to remove Cinnamon?
What’s going on?
Cheers
Edit by Clem: Hi, Cinnamon 1.8 works fine in LMDE UP7 as long as your sources are correct and you're pointing to the right cinnamon (1.8.8+lmde from packages.linuxmint.com) and gjs (1.32 from packages.linuxmint.com). If you're pointing to the Debian packages from debian.linuxmint.com, there's something wrong with your sources. You can check this with "apt policy gjs" and "apt policy cinnamon"; versions 1.32 and 1.8.8+lmde should have a priority of 700.
September 27th, 2013 at 8:34 am
@jasper
Said: September 23rd, 2013 at 7:46 am
Just did the update; after a restart i get the usual desktop but with no bottom menu...
I found this,
being an XFCE(64bit) user.
Just go CLI (using CTRL-ALT-F3)
and login as root, cd to your /home/user directory and then delete the
~/.config/xfce4/panel directory.
reboot
bottom panel comes back, skies clear, sun shines again,
and Lassie came home.
cheers
September 27th, 2013 at 10:35 am
@lessons of life
Please see my comment #79. This should fix it.
September 27th, 2013 at 11:46 am
Update 7 has several broken packages even after a fresh Mate rebuild install:
1) Caja thumbnail system is broken and consumes 80% of the CPU until thumbnails are disabled.
2) Cinnamon 1.8 has MANY conflicts and broken dependencies, too numerous to list. Cinnamon 1.8 is now broken and dysfunctional.
3) There is no Nvidia driver for the 3.10 kernel. Attempting to remove it and replace it with Nouveau is not possible without completely hosing X and being left with just the Vesa driver.
It is painfully clear no basic testing occurred for Update 7.
Edit by Clem: It is painfully clear you didn't take part in its testing when it hit incoming. A few people did, and many issues were solved before UP7 got to you. I'm quite sorry to hear issues filtered through, but I don't understand how you can blame those who volunteered to test it when you yourself didn't. With that said, the caja thumbnail issue is now fixed today. The situation with gjs (I assume that's what you referred to in point 2) is detailed in the UP notes.
September 27th, 2013 at 11:48 am
Guys, if you experience serious problems with UP7 (like in the last 5-10 posts), don't hesitate to post at the forum. You'll get more replies than here and possibly a solution for your problem.
September 27th, 2013 at 12:34 pm
Any suggestions? Please help!
E: /var/cache/apt/archives/libc6_2.17-92_amd64.deb: subprocess new pre-installation script returned error exit status 1
Preconfiguring packages ...
(Reading database ... 168143 files and directories currently installed.)
Preparing to replace libc6:amd64 2.13-37 (using .../libc6_2.17-92_amd64.deb) ...
Checking for services that may need to be restarted...
Checking init scripts…
A copy of the C library was found in an unexpected directory:
'/lib/x86_64-linux-gnu/ld-2.17.so'
It is not safe to upgrade the C library in this situation;
please remove that copy of the C library or get it out of
'/lib/x86_64-linux-gnu' and try again.
dpkg: error processing /var/cache/apt/archives/libc6_2.17-92_amd64.deb (--unpack):
subprocess new pre-installation script returned error exit status 1
Errors were encountered while processing:
/var/cache/apt/archives/libc6_2.17-92_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
A package failed to install. Trying to recover:
dpkg: dependency problems prevent configuration of libc6-dev:amd64:
libc6-dev:amd64 depends on libc6 (= 2.17-92); however:
Version of libc6:amd64 on system is 2.13-37.
dpkg: error processing libc6-dev:amd64 (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libc-dev-bin:
libc-dev-bin depends on libc6 (>> 2.17); however:
Version of libc6:amd64 on system is 2.13-37.
dpkg: error processing libc-dev-bin (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of locales:
locales depends on glibc-2.17-1; however:
Package glibc-2.17-1 is not installed.
dpkg: error processing locales (--configure):
dependency problems - leaving unconfigured
Errors were encountered while processing:
libc6-dev:amd64
libc-dev-bin
locales
September 27th, 2013 at 4:35 pm
Same story here. Was trying to migrate from Windows to Linux. Never had such a bug-fest in a decade+ of using M$ products.
September 27th, 2013 at 4:48 pm
UP7 downloaded and installed without incident on my Dell Inspiron N7010 Laptop. LMDE is an amazing hybrid! Kudos to those who make such an OS possible.
September 27th, 2013 at 5:08 pm
I updated to UP7 and now my computer just shows the cursor after loading the initial Linux Mint logo after the BIOS runs.
I can't do anything now. Completely stuck.
September 27th, 2013 at 9:01 pm
(Update to #97)
Clem: Sincere apologies for my uncalled-for flame. I have the highest respect for all the hard work of the volunteers, and if my comments offended even one, I most humbly apologize. My personal frustrations are no reason and no excuse.
The fix for the thumbnail problem did indeed roll out, and it's a testament to all those involved that it happened so quickly. Can you imagine anything so responsive coming out of the stale halls of Micro$oft, Apple or 0racle?
The problems with the Mint-Cinnamon incompatibility break will, I'm sure, be resolved just as quickly.
As for the Nvidia 3.10 kernel issue, that has nothing to do with LMDE and everything to do with the problems of closed proprietary binaries.
Keep up the fantastic work. I will make an effort to do my own testing and try to contribute back to the community.
September 28th, 2013 at 2:01 am
The upgrade, though huge, went really smooth. Great Job!
September 28th, 2013 at 9:33 am
Hello!
Sadly, I'm also having issues with UP7.
It seems that the thumbnails one has already been fixed (didn't check it yet), and the one with Skype sound is fixed as well.
But there is yet another one: I have an HP 4710s and it doesn't wake up from suspend mode. The HDD indicator doesn't show any activity (usually it does) and the screen remains black.
Anyway, thank you for all your efforts, guys.
September 28th, 2013 at 10:20 am
some bugs should be mentioned for fixing:
skype has sound problems, already reported in this forum
sometimes pulldown menu windows are black
cannot transfer files with nemo to my sony ebook reader, the window does not open
the long update process (cinnamon) took much time but worked without problems on 2 different computers
UP 6 was better
September 28th, 2013 at 10:32 am
Edit the file "/etc/pulse/default.pa" and put "tsched=0"
solved the skype problem for me
Greetings from Hessen, W.-Germany
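For anyone applying this fix: the tsched=0 option belongs on the udev detection module line of /etc/pulse/default.pa. A sketch of the relevant line (the surrounding contents of your file may differ):

```
# /etc/pulse/default.pa -- excerpt
# Disable timer-based audio scheduling, which glitches with some drivers:
load-module module-udev-detect tsched=0
```

A restart of PulseAudio (or a logout/login) is needed for the change to take effect.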
September 28th, 2013 at 10:48 am
Hi, I did an upgrade to 7 on a system with 6. I now have no burning capability. Brasero does not see the disk and K3b gives an error about the disk being mounted by another user. I am the only user. Any insight out in Mintdom? Thanks
September 28th, 2013 at 10:50 am
Hello,
Caja problems with icons are solved, many thanks Clem!
Some other minor problems I noticed:
1. DVDs won't mount automatically anymore. Loading Disk Utility solves this problem, but after every restart you have to do this again.
2. Digital camera (Fujifilm) isn't recognised anymore. It is seen as a music player. Removing the SD card and putting it in a USB card adapter will solve this. Inserting a USB stick has the same result; it is also seen as a music player.
3. Skype sound doesn't work; typing and things are ok.
Tested on 3 computers
September 28th, 2013 at 11:41 am
The Caja/thumbnail bug is fixed, great! (you guys are fast!)
September 28th, 2013 at 11:43 am
after installing usbmount my sony ebookreader is recognized
September 28th, 2013 at 4:46 pm
There are a lot of self-entitled, helpless babies in here.
How are a bunch of linux users so averse to doing things for themselves? Don't you know that running gnu/linux is a hands-on experience? That when stuff breaks you can fix it? That it's in extremely poor taste to complain that something didn't go like you expected without even trying to track down the problem? That a huge amount of work goes into making a distro? That Clem and everyone else who works on Mint deserve nothing but gratitude?!
Edit by Clem: Thanks, I appreciate this, and it's true, I do expect novice users to run the mainstream edition and I do expect a certain level of APT knowledge from LMDE users. The main reason we don't recommend upgrading from one release to the next via APT is precisely because people don't always know enough about their package manager, how to troubleshoot and how to get out of situations which are fine for experienced users but might push novices into going for a fresh reinstall. Now, with that said, I think it's important to welcome people, even if they're getting into something that might be a little too technical for them. We've all done this at some stage, most of us failed of course and that's how we learnt and got experienced. At the time it was nice to be helped, encouraged and supported and I definitely don't want people to be told that something isn't for them, that they should rtfm and stop asking questions. "Helpless babies" here is completely inappropriate for instance. There are warnings when it comes to LMDE, everybody knows it's a bit harder to maintain than the main editions, but once people decide to go and use it, our role is to help them to do so. We can encourage them to get more involved with learning their system but it's important to be positive and helpful rather than deterring when we do so... so now with all that said, YES if you're an LMDE user, go and learn about DPKG if you haven't already done so, but NO don't worry if you are new to this and certainly do not shy away from asking for help.
September 28th, 2013 at 5:07 pm
Clem, and the Linux Mint and Debian teams: I just would like to reiterate this to you once again. Great job, you guys! Thank you so much for LMDE!
@ Dan, #100, "Never had such a bug-fest in a decade+ of using M$ products." Lucky you. What about the recent patches MS had so many issues with? And btw, every GNU/Linux distro deals with so much more software than MS could ever dream of supporting. AMOF, I had only one bug, which I was able to kill within a few minutes of google search (and it was later very promptly fixed by Clem and the team).
September 28th, 2013 at 6:52 pm
For the record, I solved the problem with Skype. See
September 28th, 2013 at 8:01 pm
my experience:
messages during update:
missing folder, something to do with fonts/icons, GTK broken.
hardware drivers weren't available, no sound.
incompatible nvidia versions, hosed X, command line only on boot.
looks like a fresh install for me, so much quicker and more reliable.
Edit by Clem: The new GTK3 segfaults with any unico-based theme that isn't updated. That means, if you're using an old theme, no GTK3 apps will run. GNOME is still working on their rendering engine, developers are still blindly migrating to GTK3... and there's no sign of this stopping until GTK4 comes around. "mint-themes" was updated, make sure you have the latest version.
September 28th, 2013 at 8:03 pm
just noticed, no ISO for this update, so no fresh install for this.
September 29th, 2013 at 10:55 am
After reading the UP notes I realized I should not touch my beloved Cinnamon 64-bit, but I did anyway; I just could not resist the temptation. After 10 hours of neuron workout I found that I have a lot more to learn (a novice here). The result: I had to install MATE to get pack 7, which went flawlessly and is still flawless today. I must admit I miss cinnamon; it has such a nice feel.
@ Clem, thank you for your efforts and tenacity. Mints are rock solid. It is very uplifting to see the way you deal with others' frustration; I think you should change your user name to smooth Clem.
Cheers
Edit by Clem: Cinnamon is challenging in this UP, because of the mismatch with gjs. This is solved in Cinnamon 2.0, thankfully. In the meantime, do learn more about dpkg/apt; Cinnamon can work fine in UP7, you just need to make sure the proper packages are installed with the proper versions.
September 29th, 2013 at 12:38 pm
Thankfully, my LMDE XFCE box picked up and applied UP7 without any problem, except that it changed the system default graphical environment to Cinnamon (to be expected, I suppose), and scrambled some of my XFCE settings.
I'm glad that I'm not completely left out in the cold as a diehard LMDE XFCE user.
September 29th, 2013 at 2:48 pm
Hi, i also tried to update the system, but it ended in a console after restart. Almost everything was broken and lots of packages were missing. I have managed to get X to work. I could find those missing packages in reinstalled synaptic (filter: not installed (residual config)). There are still some things which are not completely working.
: gedit is obviously not compatible with cinnamon due to a libpeas and gir1.2-peas version problem. Is that correct or am i missing something else?
It is a pity that there is no possibility to reinstall all default core packages. Some kind of list or filter for synaptic would be useful.
Edit by Clem: gedit is fine with Cinnamon. I'm running version 3.8.3-3+b1 without problems here.
September 30th, 2013 at 12:50 am
Hi, I install LMDE with UEFI bios on P8H61-M LX3 R2.0 without troubles
just define the /boot/efi partition
September 30th, 2013 at 6:27 am
Thanks for the update. On two freshly installed LMDE machines I experience the following issues:
1) After installation of all packages, the following packages are kept back:
gir1.2-peas-1.0
libpeas-1.0.0
libpeas-1.0-0 seems to depend on the uninstallable libgjs0c (>= 1.36.1)
2) Configuration of mint-debian-mirrors fails.
In the initial installation it reports:
mint-debian-mirrors (2013.09.23) wird eingerichtet ...
Traceback (most recent call last):
File "/var/lib/dpkg/info/mint-debian-mirrors.postinst", line 4, in <module>
import pycurl
ImportError: /usr/lib/pymodules/python2.7/pycurl.so: undefined symbol: _PyTrash_thread_destroy_chain
dpkg: Fehler beim Bearbeiten von mint-debian-mirrors (--configure):
Unterprozess installiertes post-installation-Skript gab den Fehlerwert 1 zurück
In the second run it reports:
E: Internal Error, No file name for mint-debian-mirrors:amd64
Am I the only one experiencing this?
September 30th, 2013 at 11:10 am
Have updated both PC and netbook with very little issue.
The skype sound issue is a long-standing one for me and requires you to run alsamixer and disable one side of the microphone input; it then works fine.
Love Mint and love the Debian version. Thanks to all the developers, and keep up the hard work. Upgrade pack 8, here we come.
September 30th, 2013 at 11:34 am
Hi. I did the UP7 upgrade on Saturday the 28th: download OK, install OK, and after a restart my laptop didn't work again. No messages, no terminal, no keyboard, nothing. I had to reinstall using SolydK because the LMDE 201303 .iso is obsolete right now. Maybe with a new LMDE iso I'll reinstall LMDE.
September 30th, 2013 at 12:58 pm
I encountered problems with caja when i transfer files from a samba share after a successful update to LMDE Update Pack 7.
(I use LMDE MATE)
The file transfer freezes.
Also i have problems with PDF rendering via cups-pdf.
I had none of these problems before the update.
September 30th, 2013 at 2:13 pm
I installed up7 with no issues whatsoever. Great work Clem and team!
September 30th, 2013 at 4:13 pm
I assume that LMDE still lacks full disk encryption as an option during install?
September 30th, 2013 at 4:19 pm
NVIDIA broke, required some fiddling in the console to get the GUI running again.
October 1st, 2013 at 2:32 am
@Ifx: GEdit seems to be compatible with Cinnamon after all, it's just tricky to get libpeas if you didn't have it installed before UP7 (or removed it).
See for more info and leave a comment there if something goes wrong.
October 1st, 2013 at 2:34 am
when is cinnamon 2.0 coming to lmde?? anyone know?
October 1st, 2013 at 5:48 am
A helpful hint to avoid LMDE upgrade pain from Monsta:
This is especially recommended for people, like me, who are doing a fresh LMDE install from the UP6/201303 ISO. My experience was that without the preparation listed in the thread, the update will bomb out when it gets to the Cinnamon-related section, and that without preinstalling debian-system-adjustments (with sudo apt-get install debian-system-adjustments), Cinnamon either becomes unrecoverably broken or must be manually removed with the steps listed in the notes.
In fact, if it isn't noted somewhere already, Monsta's suggestion should be added to the standard notes for UP7 (since there are going to be a lot of frustrated new LMDE users between now and UP8's respins).
October 1st, 2013 at 6:52 am
I broke a fresh LMDE installation two times on upgrading. On the third run I did the following and the upgrade went fine:
1) updated mint-debian-mirrors beforehand
apt-get install mint-debian-mirrors
2) executed mint-choose-debian-mirror
3) deinstalled gedit and totem* (otherwise upgrade would not work)
4) used and only used the Mint Update manager.
In this case the upgrade worked for me.
October 1st, 2013 at 3:46 pm
Hello, I have 3 PCs with LMDE and gnome 3, running for 3 years. One PC updated but it broke: black screen. Another PC upgraded but did not install the new kernel 3.10, among other things; it has an AMD FX 8120. Those two PCs are desktops and have nvidia GTX 650 and 650 Ti cards. The third is an Acer 4720z laptop, but I have not upgraded my Mint on it and don't know if it works fine.
I love mint. I tried Ubuntu, Fedora, Mandriva, but those distros change every six months.
I work with gnome 3, and I am reviewing LXDE for use with blender rendering for fast work.
Any solution? Thanks; LMDE is very good...
October 1st, 2013 at 3:52 pm
John, #129 - Cinnamon 2.0 hasn't come out yet. Probably it will arrive when LM 16 comes out.
October 2nd, 2013 at 6:07 am
Hello,
I installed LMDE update pack 7 on one Intel i3, two Intel Core Duos and one AMD. On all 4 computers DVDs don't mount, meaning they don't appear on the desktop. After starting Disk Utility the DVDs mount and "appear" on the desktop. I was thinking of reinstalling pmount and mount, but don't know if that is wise to do. Any suggestions?
Also, after the recent Caja update, sound preview crashes and keeps playing the music. I use LMDE Mate.
In all other respects the update pack seems to work fine.
October 2nd, 2013 at 7:54 am
Concerning nvidia driver problems: new nvidia drivers were released on 1.10.2013 and they add Linux kernel 3.10 support. Hence, you should now be able to install them. See
October 2nd, 2013 at 3:04 pm
Since UP7, my Android mobile phone and its internal microSD cannot be connected and recognized anymore.
I connect it to the laptop and accept the connection from my mobile phone, but I cannot see it listed anywhere as a new device (and even in /media, where it was visible in the past, there's no trace)...
Any help? Thanks!
October 2nd, 2013 at 3:43 pm
seems related to the gvfs packages
October 2nd, 2013 at 10:32 pm
After UP7 my display does not work correctly. Fonts are displayed wrong and look strange. VirtualBox does not start the guest system (Win XP), Google Earth crashes, and several times the system has crashed to a green display. The computer locks up fully and a restart cannot be done. I have to do a hard switch-off, which has an impact on the memory and hard disk. I need urgent help to solve the display driver issue. Who can help? Thanks
October 3rd, 2013 at 8:14 am
Note that these comments will be closed in approximately a week (as usually happens here), so it'll become useless to ask here for help with any serious problems. Not to mention that it's hard to quote and track the posts. In other words, use the forum please.
October 3rd, 2013 at 8:38 am
This may be a little late, but I hope one of the devs reads this:
In the latest firefox package (24.0~linuxmint1+lmde) you forgot the icon (/opt/firefox/icons/mozicon128.png). Or maybe you removed it on purpose. Anyway for users not using the Mint icon theme, the icon for firefox is completely missing. This also affects plain Debian users that imported the lmde firefox repo.
Workaround: Get a firefox icon from somewhere and copy it to /usr/share/pixmaps/firefox.png
October 3rd, 2013 at 8:46 am
I returned to the fold. After running Ubuntu 12.04 with Cinnamon installed, I finally decided that Mint 13 with the backports installed integrated far better, so I'm back running Mint and there I'll stay (far better to run the native version). Why mention this in this thread? It's my hope that Mint Debian with Update Pack 8 slipstreamed will be smooth-edged enough to replace Ubuntu-based Mint as my permanent desktop running all my multimedia stuff (GIMP, Audacity, MKVmerge, DeaDBeeF, Puddletag, etc.). It will also have to be compatible with GNOME-mousewheel-zoom by Tobias Quinn so I can actually see what I'm doing.
A rolling distro with continuous updates not requiring me to reinstall when a new version comes out is where I ultimately want to be.
Here's wishing and hoping...
October 3rd, 2013 at 3:31 pm
After the update I suffer from some screen artifacts.
NVidia 8600GT using the proprietary driver (installed from the repo, not from nvidia).
Artifacts appear in the top left corner. I took a screenshot; you can look at it at (http)napisz.se/artifacts.png
October 3rd, 2013 at 3:35 pm
Forgot to mention: NVidia driver version is 304.88.
It would be nice to choose between different versions of proprietary driver.
October 4th, 2013 at 10:47 am
I had LMDE Xfce with full KDE on top of it. I tried the UP7 upgrade and it didn't work. Well, it didn't give any errors, but it got into a loop trying to install sendmail, which I don't even use. So, I aborted.
I would have had to do that image restore to start over and then find the solution. Instead I took the lazy way out and installed openSUSE 12.3. It worked perfectly. Sorry that I didn't work this bug out.
I'm also sorry to leave LM. I've used it since 6, I think. LM has been very good to me and I still love it. However, it's no longer the best fit for me. I love the rolling release of LMDE and I love KDE. That divorce makes the future very uncertain. openSUSE (with Tumbleweed) seems like a more supported future for me.
Oh, I'll still recommend LM for many people and probably will install it for a lot of friends too. For a lot of situations it's still the best.
So long and thanks for all the fish,
Clyde
October 4th, 2013 at 12:29 pm
@Jamo: After the update I also ended up with vhosts dead (403/404 errors), and it is related to the Apache 2.2 to 2.4 update issue (read more at the following link):
What helped me was editing the sites-enabled conf with (2.4 version):
Require all granted
instead of (2.2 version):
Order allow,deny
Allow from all
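For context, here is roughly where that directive sits in a vhost file; the paths below are placeholders for illustration, not taken from the comment above:

```
# Illustrative Apache 2.4 sites-enabled excerpt (paths are placeholders)
<VirtualHost *:80>
    DocumentRoot /var/www/example
    <Directory /var/www/example>
        # Apache 2.4 access control, replacing the 2.2
        # "Order allow,deny / Allow from all" pair:
        Require all granted
    </Directory>
</VirtualHost>
```

After editing, the configuration needs to be reloaded for the change to take effect.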
October 4th, 2013 at 7:33 pm
Had some issues with gvfs after updating to UP7. With some help by the great people in the forums, I downgraded gvfs and related packages, completely solving the problems, which prevented me from properly mounting/viewing/ejecting SD cards, whether directly attached or through Android devices. No other new problems so far.
Thanks, as always, Clem & Crew!
October 6th, 2013 at 8:35 am
no more wifi connection on my hp
October 7th, 2013 at 12:30 am
Good to know LMDE is still being updated.
October 7th, 2013 at 3:55 am
Is it necessary to update to UP7? Or can I continue with the previous version (UP6), and what would the disadvantage of that be? I have recently installed LMDE from a DVD iso, 201303.
2013-04-01
Counting Frequencies from Zotero Items
Recommended for Beginning Users
Lesson Goals
In Counting Frequencies you learned how to count the frequency of specific words in a list using python. In this lesson, we will expand on that topic by showing you how to get information from Zotero HTML items, save the content from those items, and count the frequencies of words. It may be beneficial to look over the previous lesson before we begin.
Files Needed For This Lesson
obo.py
If you do not have these files, you can download programming-historian-3, a (zip) file from the previous lesson.
Modifying the obo.py Module
Before we begin, we need to adjust obo.py in order to use this module to interact with different html files. The stripTags function in the obo.py module must be updated to the following, because it was previously designed for Old Bailey Online content only. First, we need to remove the line that instructs the program to begin at the end of the header, then we will tell it where to begin. Open the obo.py file in your text editor and follow the instructions below:
def stripTags(pageContents):
    #remove the following line
    #startLoc = pageContents.find("<hr/><h2>")
    #modify the following line
    #pageContents = pageContents[startLoc:]
    #so that it looks like this
    pageContents = pageContents[0:]

    inside = 0
    text = ''

    for char in pageContents:
        if char == '<':
            inside = 1
        elif (inside == 1 and char == '>'):
            inside = 0
        elif inside == 1:
            continue
        else:
            text += char

    return text
Remember to save your changes before we continue.
Get Items from Zotero and Save Local Copy
After we have modified the obo.py file, we can create a program designed to request the top two items from a collection within a Zotero library, retrieve their associated URLs, read the web pages, and save the content to a local copy. This particular program will only work on webpage-type items with html content (for instance, entering the URLs of JSTOR or Google Books pages will not result in an analysis of the actual content).
First, create a new .py file and save it in your programming historian directory. Make sure your copy of the obo.py file is in the same location. Once you have saved your file, we can begin by importing the libraries and program data we will need to run this program:
#Get urls from Zotero items, create local copy, count frequencies
import obo
from libZotero import zotero
import urllib2
Next, we need to tell our program where to find the items we will be using in our analysis. Using the sample Zotero library from which we retrieved items in the lesson on the Zotero API, or using your personal library, we will pull the first two top-level items from either the library or from a specific collection within the library. (To find your collection key, mouse over the RSS button on that collection's page and use the second alpha-numeric sequence in the URL. If you are trying to connect to an individual user library, you must change the word group to the word user, replace the six-digit number with your user ID, and insert your own API key.)
#links to Zotero library
zlib = zotero.Library('group', '155975', '<null>', 'f4Bfk3OTYb7bukNwfcKXKNLG')

#specifies subcollection - leave blank to use whole library
collectionKey = 'I253KRDT'

#retrieves top two items from library
items = zlib.fetchItemsTop({'limit': 2, 'collectionKey': collectionKey, 'content': 'json,bib,coins'})
Now we can instruct our program to retrieve the URL from each of our items, create a filename using that URL, and save a copy of the html on the page.
#retrieves url from each item, creates a filename from the url, saves a local copy
for item in items:
    url = item.get('url')
    filename = url.split('/')[-1] + '.html' #splits url at last /
    filename = filename.split('=')[-1] #splits url at last =
    filename = filename.replace('.html.html', '.html') #removes double .html
    print 'Saving local copy of ' + filename
    response = urllib2.urlopen(url)
    webContent = response.read()
    f = open(filename, 'w')
    f.write(webContent)
    f.close()
Running this portion of the program will result in the following:
Saving local copy of PastsFutures.html
Saving local copy of 29.html
Get Item URLs from Zotero and Count Frequencies
Now that we’ve retrieved our items and created local html files, we can use the next portion of our program to retrieve the URLs, read the web pages, create a list of words, count their frequencies, and display them. Most of this should be familiar to you from the Counting Frequencies lesson.
#retrieves url from each item, creates a filename from the url
for item in items:
    itemTitle = item.get('title')
    url = item.get('url')
    filename = url.split('/')[-1] + '.html' #splits url at last /
    filename = filename.split('=')[-1] #splits url at last =
    filename = filename.replace('.html.html', '.html') #removes double .html
    print '\n' + itemTitle + '\nFilename: ' + filename + '\nWord Frequencies\n'
    response = urllib2.urlopen(url)
    html = response.read()
This section of code grabs the URL from our items, removes the unnecessary portions, and creates and prints a filename. For the items in our sample collection, the output looks something like this:
The Pasts and Futures of Digital History
Filename: PastsFutures.html
Word Frequencies

History and the Web, From the Illustrated Newspaper to Cyberspace: Visual Technologies and Interaction in the Nineteenth and Twenty-First Centuries
Filename: 29.html
Word Frequencies
Now we can go ahead and create our list of words and their frequencies. Enter the following:
    #strips HTML tags, strips nonAlpha characters, removes stopwords
    text = obo.stripTags(html).lower()
    fullwordlist = obo.stripNonAlphaNum(text)
    wordlist = obo.removeStopwords(fullwordlist, obo.stopwords)

    #counts frequencies
    dictionary = obo.wordListToFreqDict(wordlist)
    sorteddict = obo.sortFreqDict(dictionary)

    #displays list of words and frequencies
    for s in sorteddict:
        print str(s)
Your final output will include a long list of words accompanied by their frequency within the html file:
Saving local copy of PastsFutures.html
Saving local copy of 29.html

The Pasts and Futures of Digital History
Filename: PastsFutures.html
Word Frequencies

(51, 'history')
(43, 'new')
(31, '9')
(27, 'historians')
(24, 'digital')
(23, 'social')
(21, 'narrative')
(16, 'media')
(15, 'time')
(13, 'possibilities')
(13, 'past')
(12, 'science')
...

History and the Web, From the Illustrated Newspaper to Cyberspace: Visual Technologies and Interaction in the Nineteenth and Twenty-First Centuries
Filename: 29.html
Word Frequencies

(52, 'new')
(49, 'history')
(46, 'media')
(44, 'ndash')
(34, 'figure')
(34, 'digital')
(24, 'visual')
(24, 'museum')
(24, 'http')
(23, 'edu')
(22, 'web')
(22, 'text')
(22, 'barnum')
(21, 'users')
(21, 'information')
...
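For reference, the two obo.py helpers that produce those sorted pairs, wordListToFreqDict and sortFreqDict, come from the Counting Frequencies lesson. Assuming your copy of obo.py matches that lesson, they look roughly like this:

```python
# Sketch of the obo.py helpers used above, as built in the
# Counting Frequencies lesson (verify against your own obo.py).

def wordListToFreqDict(wordlist):
    # map each word to the number of times it appears in the list
    wordfreq = [wordlist.count(w) for w in wordlist]
    return dict(zip(wordlist, wordfreq))

def sortFreqDict(freqdict):
    # flip each entry into a (frequency, word) tuple, then sort
    # and reverse so the most frequent words come first
    aux = [(freqdict[key], key) for key in freqdict]
    aux.sort()
    aux.reverse()
    return aux
```

Flipping each entry into a (frequency, word) tuple is what lets a plain sort-then-reverse order the output by descending frequency, exactly the shape shown in the listing above.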
Suggested Citation
Spencer Roberts, "Counting Frequencies from Zotero Items," Programming Historian, (2013-04-01),
At OpenShift we have long provided command line tools for the creation, management, monitoring and deployment of your application code to the OpenShift cloud. This is great if you live and die on the command line. What about the developers who want a nice integrated experience with their IDE? We have a solution for that: JBoss Tools now provides OpenShift integration from inside of Eclipse.
Let's get started on how to install, configure, and use this new toolset to make your next cloud deployment a few mouse clicks away.
Step 1: Install Eclipse
Point your browser at the eclipse download page and select the installation package for your needs. For this blog post, I will be using the Eclipse IDE for Java Developers package.
After downloading eclipse, extract the archive and you will have an eclipse directory. Inside of this directory, you will have an Eclipse executable that will start the IDE.
Step 2: Install JBoss Tools
Once you have Eclipse up and running, the next step is to install the JBoss Tools package that provides the OpenShift integration. You can install JBoss Tools from the Eclipse Marketplace. Make sure that the version of JBoss Tools that you are using matches the version of Eclipse that you are using (Helios, Indigo, Juno, etc.).
Expand JBoss Tools and select JBoss OpenShift Tools.
Once the JBoss OpenShift tools have been installed, you will need to restart Eclipse.
Step 3: Create an OpenShift Project
Now that we have the toolset installed, it’s time to create our cloud project and begin development. Select File -> New -> Other -> Expand OpenShift -> OpenShift Application and click Next.
Before you can start creating and deploying your projects, you will need to authenticate with the OpenShift web services via the dialog provided.
If you do not already have an OpenShift account, click the sign up button or head on over to the OpenShift web site and create a new account.
After creating your account, you will also need to specify a domain that you want to use for your applications. The domain is used to create the unique URL for your running application with the following template: applicationName-domainName.rhcloud.com. The domain you select will be used for all of your deployed applications on the Red Hat Cloud.
Once you have successfully authenticated to the OpenShift web services, you will be presented with the Setup OpenShift Application Wizard. The dialog provides the following functionality:
- Use an existing application that is already deployed on your OpenShift account
- View details for existing application
- Create new application
- Add cartridges to your application
- Specify your gear size
- Enable scaling for your application
Note: Although in this blog post, we will be creating a new application, if you want to continue development on an existing application, select use an existing application and click browse. This will create a new Eclipse project and perform a git clone on your existing git repository.
In order to create a new application, give your application a name and select which cartridge or application server that your deployment will target. At the time of this writing, OpenShift provides support for JBoss AS 7, PHP, perl, ruby, python, node.js, and diy.
Step 4: Configure your ssh keys
In order to communicate via git to your repository, we need to configure our ssh keys. If this is the first time you have used OpenShift, and go through the domain creation wizard, a key will be created and automatically uploaded for you.
If you are an existing OpenShift user, we will need to use your existing key or create a new one. Fortunately, we provide a UI to accomplish this task as well. Click the SSH2 Preferences link provided in order to create your keys.
If you don’t already have a ssh key created on your machine, select the Key Management tab -> Generate RSA Key. Provide a passphrase if you desire and then click Save Private Key. You will also want to copy your public key that is displayed so that we can add it to OpenShift.
Now that you have a key pair generated, we need to tell OpenShift what our public key is. Login to OpenShift website, click Manage your Apps, click on My Account, and select to Add a new key.
Give your new key a name of JBossTools and paste the key into the textfield provided.
Step 5: Clone your repo
After creating and publishing your public ssh key to OpenShift, we are ready to clone our repo and begin development. Switch back over to your Eclipse IDE and click the finish button. This will create your project and clone the private git repository that lives on the OpenShift servers.
Step 6: Code
This is what you love to do.
Step 7: Update maven configuration
You may notice an error marker next your project after you have cloned your git repository. This is easily fixed by updating your maven project configuration. Right click on your project, select maven, and then update project configuration.
Step 8: Deploy your code to the cloud
Once you have modified some of your source files, click the server tab at the bottom of the screen. You should see your OpenShift application listed. In order to push your changes live, right click on your application and select publish.
Once the push process is complete, point your browser to applicatonName-namespace.rhcloud.com and you should see your brand new application deployed on the cloud.
Step 9: View the log files for your application
An important step in developing software is the ability to view your server log files. In order to view the server log files, select the server tab, right click on your application, select OpenShift, and then tail files.
This will open up the console view tab and allow you to monitor your logs files on the system.
Step 10: Use port forwarding to connect to remote databases.
When developing database driven applications, it is essential that you have access to connect to your database in both your development and production environments. Doing this without opening yourself up to security concerns is often a place where developers that are not intimately familiar with system administration and firewall configuration may stumble.
With OpenShift port forwarding, developers are now able to connect to their remote services while using local client tools without having to worry about the details of configuring complicated firewall rules.
In order to use port forwarding from inside of Eclipse, select the server tab, right click on your application, select OpenShift and then port forwarding.
Note:
All of the functionality exposed via the eclipse plugin is also available for you to use in your own projects. We have provided a java client that communicates with the OpenShift web services on github. If you have interest it communicating directly to OpenShift via the API(s), check it out.
Next Steps:
- Learn more about how OpenShift works
- Experiment with your own DevOps workflow with a free OpenShift Online account
Automatic Updates
Stay informed and learn more about OpenShift by receiving email updates. | https://blog.openshift.com/getting-started-with-eclipse-paas-integration/ | CC-MAIN-2017-30 | refinedweb | 1,182 | 61.06 |
C# Corner
Interfaces can make your applications easier to test and maintain.
Interfaces help define a contract, or agreement, between your application and other objects. This agreement indicates what sort of methods, properties and events are exposed by an object. But interfaces can be utilized for much more. Here, we'll look at how interfaces can help make your applications and components easier to test.
Using Interfaces in Your DesignInterfaces are often used when you have different types of objects that you want to perform a similar operation on. The System.IComparable interface in the .NET Framework is a good example. Object comparison is common in programming. Microsoft can't know every way you might compare objects, so it defined an interface for this purpose. Using interfaces, we can write code that works against the interface implemented by different objects--but we shouldn't reserve interfaces just for defining common attributes or behavior.
Suppose we're writing an application that lets users register for a conference. Once they've entered their personal information, there are a number of steps we'll need to take to get them registered:
If we look at a visual diagram to show how this all fits together, it might look something like Figure 1. Look at an extremely simple implementation of step 1, the fee calculator:
public class FeeCalculator
{
private readonly DateTime conferenceDate = new
DateTime(2010, 2, 1);
public decimal CalculateFee()
{
if ((conferenceDate - DateTime.Now).Days >= 30)
{
return 100;
}
return 160;
}
}
If the user signs up at least 30 days before the conference, it will only cost them $100. If it's less than 30 days before the conference, it will cost them $160. Register early.
We have a single class that implements our fee calculator. We don't need multiple fee calculators. Why define an interface just for calculating fees? We're also only going to be saving to a single type of database-SQL Server. How can this example be used to show the benefits of interfaces? The answer: Testability.
Using Interfaces to Isolate TestingWe need to write unit tests for our registration service. When we test it, we want to make sure we test just the registration service and not the fee calculator, payment processor and so on. If we use the real implementations of those other objects, our RegistrationService constructor may look something like this:
public class RegistrationService
{
public RegistrationService(
FeeCalculator feeCalculator,
PaymentProcessor
paymentProcessor,
SQLRepository sqlRepository,
{
}
}
During testing, we'd call our real FeeCalculator, and we'd hit a real database and other resources. This makes testing more difficult. Problems in the fee calculator service will impact our registration service testing. If we're hitting a real database, we'll have to initialize the database to a known state; make sure SQL Server is installed; and worry about timeouts and details unrelated to our registration service. The coupling of the service with the components is too tight. Interfaces can isolate components we'd like to test.
Let's start with our FeeCalculator. All it needs to do is contain a method called CalculateFee that returns a decimal value. We'll create an interface to define how a FeeCalculator works:
public interface IFeeCalculator
{
decimal CalculateFee();
}
Our earlier FeeCalculator can be updated to state that it implements this interface:
public class FeeCalculator : IFeeCalculator
{
...
}
Now we define interfaces for all our other dependant objects. Then we'll update the RegistrationService constructor so it'll accept the interfaces instead of the actual implementation objects:
public class RegistrationService
{
public RegistrationService(
IFeeCalculator feeCalculator,
IPaymentProcessor paymentProcessor,
ISQLRepsoitory sqlRepository,
IEmailGenerator emailGenerator,
IEmailSender emailSender)
{
}
}
Now you're asking, "If we don't want to use the real implementations of those objects, what are we supposed to use during testing?" The answer is mock objects.
Using Mock Objects in Your Unit TestsA mock object, sometimes referred to as a "fake" or "stub," is simply an object that mimics the behavior of a real object. The advantage we get with mock objects, or "mocks," is that they allow us to swap out the real implementation on an object for an implementation that we can control to produce specific results.
If we go back to our FeeCalculator example, the fee returned by the CalculateFee method is dependent on the current date. If we were to use the real object during testing, the result of CalculateFee could change depending on the date we run the test. This doesn't make for a very robust unit test. Instead, a mock IFeeCalculator can return what we want:
public class EarlyBirdFeeCalculator : IFeeCalculator
{
public decimal CalculateFee()
{
return 100;
}
}
public class ProcrastinatorFeeCalculator :
IFeeCalculator
{
public decimal CalculateFee()
{
return 200;
}
}
These two classes are part of our unit test assembly. They define fee calculators that return the two types of fees we need to use in our registration service. Whenever we want to test some code that will involve the user registering 30 days before the conference date, we'll use the EarlyBirdFeeCalculator. Likewise, any tests that need to assess those who register within 30 days of the conference will use the ProcrastinatorFeeCalculator. Notice that the procrastinator fee is different from the real FeeCalculator. The value doesn't matter here because it's just a mock. What's important is that we'll know-during our testing-that when we use the ProcrastinatorFeeCalculator, the CalculateFee method will always return 200.
Mock objects can be used for more than just a canned response. We can use them to ensure certain calls have been made. For example, let's create a Mock IEmailSender to make sure that the Send method on IEmailSender was called. Assuming the following interface:
public interface IEmailSender
{
void SendMail(MailMessage message);
}
We can create our mock as:
public class ExpectedSendEmailSender : IEmailSender
{
private bool sendCalled = false;
public void SendMail(MailMessage message)
{
sendCalled = true;
}
public bool SendCalled
{
get { return sendCalled; }
}
}
In unit tests, we can create an instance of ExpectedSendEmailSender and pass it into our RegistrationService. After we've called our method to register a user, we can make sure the SendCalled property is true. If someone were to edit the code that handles registering a user-and accidentally delete the line of code that calls IEmailSender.Send-our unit test would fail since SendCalled would be false.
This pattern of creating mock objects can be used for all the other interfaces as well:
The benefits we get are huge. Here are just a few:
When we consider our original diagram, we can see how we'll isolate the testing of our registration service by employing mocks. The downside of this approach is writing and maintaining all of that mock code. This is where a mocking framework comes in handy.
Using Rhino.MocksI'll use Rhino.Mocks as my mock framework in these examples. It's free, open source and can automate the creation of our mocks and stubs. Other frameworks exist and behave similarly, so the concepts shown here will work with any of the frameworks.
From an academic standpoint, there isn't much difference between a mock and a stub, but Rhino.Mocks makes a small distinction.
Stubs are used for returning a canned response. The example we used earlier with our EarlyBirdFeeCalculator and ProcrastinatorFeeCalculator is a perfect example of a stub. Mocks can provide canned responses as well, but they can also have expectations set. The example shown earlier of making sure the SendMail method of IEmailSender was called is an example of where you'd use a mock. You would tell Rhino.Mocks that you expect the SendMail method to be called. Later, after the test is run, you'd ask Rhino.Mocks to verify that your expectation was met.
In a nutshell, stubs and mocks are almost identical. Both allow you to return a specific response from method calls. The key difference is that you can only set expectations on a mock.
Let's get back to our unit tests of our RegistrationService. When we want to use our EarlyBirdFeeCalculator or our ProcrastinatorFeeCalculator, we'd simply create a new instance of them in our unit test code. The FeeCalculator we created would be passed to the RegistrationService constructor, and we'd write our test code knowing that the call to CalculateFee will return a known value. Let's use Rhino.Mocks to eliminate the need for a separate class.We'll remove the EarlyBirdFeeCalculator class and instead create a stub:
IFeeCalculator feeCalculator =
MockRepository.GenerateStub<IFeeCalculator>();
feeCalculator.Stub(f =>
f.CalculateFee()).Return(100);
What we're doing here is asking Rhino.Mocks to create a stubbed implementation of our IFeeCalculator interface. Then, using the stub extension method, we use a lambda expression to indicate exactly what we want the CalculateFee method on our IFeeCalculator to return. Our seven lines of code to define the EarlyBirdFeeCalculator have been reduced to two. And we don't need a separate class for the two different types of fees.
In addition to maintaining less code, we can see in our unit test code what the value returned by CalculateFee is. We don't have to jump over to the other class definition to see what we'd defined for the value of CalculateFee-it's right in front of us. This makes the readability and maintainability of our unit tests much easier.
What about the case of setting expectations? For that, we need to use a mock. The code for creating a mock in Rhino.Mocks is simple:
IEmailSender emailSender =
MockRepository.GenerateMock<IEmailSender>();
Now we can set an expectation that we expect the SendMail method to be called. Again, we'll use an extension method and a lambdA:
.IgnoreArguments();
When you express a method call in a lambda, you must provide arguments for the method call. In this case, we can't specify the exact MailMessage object that'll be passed to SendMail-the MailMessage object will be constructed somewhere inside our RegistrationService class. Therefore, we simply supply a null argument and tell Rhino.Mocks to ignore the arguments passed to the SendMail method. Without the IgnoreArguments call, Rhino.Mocks would be expecting a call to SendMail with a null MailMessage object. We don't want that, so we tell Rhino.Mocks to only look for a call to SendMail and ignore the differences in argument values between the lambda and the actual call made during the test.
The last thing to do with expectations is verify your expectations were met. This is called after executing the method you want to test:
The above code will throw an exception if any expectations defined via "Expect" calls on the mocked object were not met. For our example we set up a single expectation on SendMail. If that's never called, your unit test will fail because VerifyAllExpectations will throw an exception. What if you need to set both an expectation and a canned response? You can set a return value on an expectation just like you did on a stub. Here's an example of expecting a call on IPaymentProcess.Process and always returning true:
IPaymentProcessor pp = MockRepository.
GenerateMock<IPaymentProcessor>();
pp.Expect(p => p.Process()).Return(true);
This allows you to easily create a test to check both success and failure processing of your registration payments.
Exception HandlingEvery application has to deal with exceptions. Testing your exception handling code can be tricky since exceptions aren't supposed to happen. Fortunately, Rhino.Mocks makes testing your exception handling easier. Instead of telling Rhino.Mocks that you want a method to return a specific result, you can actually have it run specific code when the method is called. The "Do" extension method takes in a delegate that'll be executed whenever your stubbed method is called.
Suppose you want to test your exception handling that handles missing files. It's as easy as:
public delegate void ThrowExceptionDelegate();
IFileReader reader =
MockRepository.GenerateStub<IFileReader>();
reader.Stub(r => r.ReadFile()).Do(
new ThrowExceptionDelegate(delegate()
{ throw new FileNotFoundException(); }
));
When your test code calls the ReadFile method, a FileNotFoundException will be thrown, and you can ensure your exception handling code is working properly.
Interfaces can help you create a more loosely coupled architecture. That, along with a mocking framework, can make your unit tests more robust and maintainable.
Printable Format
I agree to this site's Privacy Policy. | http://visualstudiomagazine.com/Articles/2010/01/01/Interface-Based-Programming.aspx | CC-MAIN-2014-42 | refinedweb | 2,039 | 55.74 |
The
import directory contains tests imported from the SVG 1.1 Second Edition test suite, with tests renamed to contain
-manual in their name. These tests need review to verify that they are still correct for the latest version of SVG (which at the time of writing is SVG 2) and then need to be converted to reftests or testharness.js-based tests.
The SVG 1.1 test suite came with reference PNGs for each test, which, while not suitable as exact reftest reference files, at least give a rough indication of what the test should look like. For some tests, such as those involving filters, the test pass criteria are written with reference to the PNGs. When converting the tests to reftests or testharness.js-based tests, you might want to consult the reference PNG.
Tests should be placed in a directory named after the SVG 2 chapter name (for example in the
shapes/ directory for Basic Shapes chapter tests). Scripted tests should be placed under a
scripted/ subdirectory and reftests under a
reftests/ subdirectory, within the chapter directory. Filenames for tests of DOM methods and properties should start with InterfaceName.methodOrPropertyName, such as
types/scripted/SVGElement.ownerSVGElement-01.html.
Direct questions about the imported SVG 1.1 tests to Cameron McCormack. | https://chromium.googlesource.com/chromium/src/+/ade3b5e289c568820d7822f63d9456c289a72ae7/third_party/blink/web_tests/external/wpt/svg/ | CC-MAIN-2020-10 | refinedweb | 213 | 65.42 |
javax.servlet.jsp.JspException: Cannot find FacesContext - Java Server Faces Questions
javax.servlet.jsp.JspException: Cannot find FacesContext
exception
javax.servlet.ServletException: Cannot find FacesContext
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
root cause
javax.servlet.jsp.JspException: Cannot find FacesContext
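This error typically means a JSF page was requested directly (for example /login.jsp) instead of through the FacesServlet mapping, so no FacesContext was created for the request. A minimal web.xml sketch for the JSF 1.x FacesServlet (the servlet name and URL pattern shown are the usual conventions; adjust to your application):

```xml
<!-- Register the FacesServlet and map it; JSF pages must be requested
     through this mapping so a FacesContext exists for the request. -->
<servlet>
  <servlet-name>Faces Servlet</servlet-name>
  <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
  <servlet-name>Faces Servlet</servlet-name>
  <url-pattern>*.jsf</url-pattern>
</servlet-mapping>
```

With this mapping, a page stored as login.jsp is requested as login.jsf so the FacesServlet runs first.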
jsf jars - Java Server Faces Questions
: Cannot find FacesContext.
Please tell me the list of JSF jars.
Thanks in advance.
Hi friend,
jar files needed to run JSF applications (for the JSF 1.x reference implementation these are typically jsf-api.jar and jsf-impl.jar, plus their commons-* dependencies):
jsf... jsf jars  Hi friends, I have done some simple JSF pages and they are working fine
JSF-Error - Java Server Faces Questions
JSF-Error  Error in JSP:
when I run login.jsp I get the message below.
javax.servlet.jsp.JspException: Cannot find FacesContext.  Visit for more information:
JSF - Java Server Faces Questions
The FacesContext provides the per-request context environment in JSF, and values stored through it can be accessed on other pages. Follow these steps:
1. Create a form in a JSF page.
2. In the backing bean, write a navigation method such as:
String fwdPage()
{
    FacesContext context = FacesContext.getCurrentInstance();
    ...
}
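The String returned by a navigation method such as fwdPage() is matched against navigation rules in faces-config.xml. A hedged sketch (the view ids and the outcome value are illustrative):

```xml
<navigation-rule>
  <from-view-id>/form.jsp</from-view-id>
  <navigation-case>
    <!-- "success" is whatever String fwdPage() returns -->
    <from-outcome>success</from-outcome>
    <to-view-id>/success.jsp</to-view-id>
  </navigation-case>
</navigation-rule>
```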
Cannot find tag library descriptor
Cannot find tag library descriptor  I get "Cannot find tag library descriptor" for my taglib URI. How do I resolve this for Struts in Eclipse?
Java Server Faces (JSF) Tutorials
JSF Tutorials
Java Server Faces (JSF) is a Java-based web application framework.
At roseindia.net you will find the best JSF tutorials, for beginners as well as experts, covering the concepts of Java Server Faces (JSF).
For more Java Server Faces (JSF) related
business delegate with JSF - Java Server Faces Questions
business delegate with JSF  How do I call an EJB through a Business Delegate in JSF?  Hi,
I am sending links where you can read about JSF and EJB:
cannot find symbol - Java Beginners
cannot find symbol
public class Areatest {
    public static void main(String[] args) {
        Figure[] figures = { new Triangle(2.0, 3.0, 3.0), new Rectangle(4.0, 6.0), new Square(5.0) };
        for (int i = 0; i < figures.length; i++) {
error in code - Java Server Faces Questions
error in code  Hi friends, I have a problem executing JSF: it displays the error "Cannot find FacesContext" and I don't understand why. Please help...  In the servlet mapping, change
"/faces/*" to "*.jsf" for the
Faces Servlet.
cannot find symbol method nextchar()??
cannot find symbol method nextchar()??
import java.util.Scanner;
public class Calc5 {
    public static void main(String[] args) {
        Scanner obj = new Scanner(System.in);
        System.out.println("please enter
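The compiler reports "cannot find symbol" here because java.util.Scanner has no nextchar()/nextChar() method; the usual workaround is next().charAt(0). A runnable sketch (the class and helper names are mine, not from the question):

```java
import java.util.Scanner;

public class Main {
    // Scanner has no nextChar() method, which is what the compiler's
    // "cannot find symbol" message is pointing out. The usual workaround
    // is to read a whole token and take its first character.
    static char readChar(Scanner in) {
        return in.next().charAt(0);
    }

    public static void main(String[] args) {
        // A Scanner can wrap a String, which makes the technique easy to demo;
        // reading from System.in works the same way.
        Scanner in = new Scanner("+ 7");
        char op = readChar(in);
        int n = in.nextInt();
        System.out.println(op + " " + n); // prints "+ 7"
    }
}
```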
Integrating JSF, Spring and Hibernate
This tutorial will explain the process of
integrating Spring with JSF (Java Server Faces) and Hibernate
in the application, including
configuring the Spring context (WebApplicationContext).
non static variable cannot be referenced from static context
"non static variable cannot be referenced from static context"  How do I solve this compiler error?
public class Add {
    int foo = 7;
    int bar = 8;
    public int add(int x, int y
Jsf biggener
Jsf biggener  Hi friends,
I am new to JSF and I got an exception while running my page:
exception
org.apache.jasper.JasperException: java.lang.RuntimeException: Cannot find FacesContext
root cause
java.lang.RuntimeException: Cannot find FacesContext
non static variable cannot be referenced from static context
non static variable cannot be referenced from static context
public class Add {
    int foo = 7;
    int bar = 8;
    public int add(int x, int y) {
        int z = x + y;
        System.out.println(z);
        return z;
    } // add
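The error appears when a static method such as main calls the instance method add (or touches the fields foo/bar) without an object. A runnable sketch of the fix, creating an instance first (the nested class mirrors the Add class above):

```java
public class Main {
    // Instance fields and methods belong to an object, so static code
    // (like main) must create an instance before using them.
    static class Add {
        int foo = 7;
        int bar = 8;

        int add(int x, int y) {
            int z = x + y;
            return z;
        }
    }

    public static void main(String[] args) {
        // Writing add(2, 3) directly here would fail with
        // "non-static method cannot be referenced from a static context";
        // constructing an Add first resolves the error.
        Add adder = new Add();
        System.out.println(adder.add(2, 3)); // prints 5
    }
}
```

Alternatively, declaring the member static makes it callable without an instance.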
jsf error
jsf error  type Exception report
description  The server...
exception
org.apache.jasper.JasperException: java.lang.RuntimeException: Cannot find FacesContext
root cause
java.lang.RuntimeException: Cannot find FacesContext
JSF Tutorials: Easy steps to learn JSF
JavaServer Faces, or JSF, is a web framework. The latest version of JavaServer Faces (JSF) was released in 2013 and is called JSF 2.2,
and it is used to build user interfaces for web applications. Moreover, JSF simplifies
web application development.
“cannot find symbol”
The "cannot find symbol" error is reported when the compiler meets an identifier (a class, method, or variable name) that it cannot resolve in the current scope. In this example a class named Cannotfindsymbol demonstrates the error.
Cannotfindsymbol.java
import java.lang.*;
public
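A minimal illustration (the file and identifier names are mine): misspelling a declared name is the most common way to trigger the error, and correcting the spelling fixes it:

```java
public class Main {
    // If the return statement below said "totl" instead of "total",
    // javac would stop with: cannot find symbol - variable totl.
    // The error simply means an identifier is not declared in scope.
    static int sumTo(int n) {
        int total = 0;
        for (int i = 1; i <= n; i++) {
            total += i;
        }
        return total; // spelled correctly, so this compiles
    }

    public static void main(String[] args) {
        System.out.println(sumTo(4)); // prints 10
    }
}
```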
INTRODUCING JAVA SERVER FACES (JSF)
INTRODUCING JAVA SERVER FACES (JSF)
...;
</context-param>
<!-- Faces Servlet -->
... of 'faces'
as the extension, but we can also use *.jsf.
The Faces Servlet
JSF using Tomcat 6.0 - Java Server Faces Questions
JSF using Tomcat 6.0  Hi, I have a problem at deployment time.
Actually this is my first example.
I have already run one JSF application. The log shows:
...:42 PM org.apache.catalina.core.StandardContext start
SEVERE: Context
Roseindia JSF Tutorial
Tutorials for the practical application of Java Server Faces (JSF).
At roseindia.net, tutorials are available for learners willing to learn Java Server Faces (JSF); they will be beneficial for you in the implementation of JSF.
Below you can find a number of JSF
Java error cannot find symbol
Java error cannot find symbol
The Java error "cannot find symbol" occurs when the compiler cannot resolve a name. In this example, a class demonstrates
'cannot find symbol'. Inside the main method we have initialized
JSF Date picker error - Java Server Faces Questions
JSF Date picker error  Hi friend,
thanks for your reply. I already tried those codes, but they are not working and I don't know why. The error is:
The import org.apache.myfaces.renderkit.html cannot be resolved
but I
jsf - Java Server Faces Questions
jsf  I tried the first example of JSF and followed everything, but I get: The requested resource (/jsf Example/) is not available.
Please give me the solution...  Check the "faces-config.xml" file and "web.xml".
For more information:
jsf - Java Server Faces Questions
JSF facelets tags  What are Facelets tags in JSF?  Hi friend, now read more information:
jsf - Java Server Faces Questions
<context-param>
  <param-name>javax.faces.CONFIG_FILES</param-name>
  <param-value>/WEB-INF/faces-config.xml</param-value>
</context-param>
<servlet>
  <servlet-name>Faces Servlet</servlet-name>
  <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
  <servlet-name>Faces Servlet</servlet-name>
  <url-pattern>*.jsf</url-pattern>
</servlet-mapping>
Visit for more information....
context  What is a context, in general, in Java?
An association between a name and an object is called a binding, and a set of such bindings is called a context.
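This is the idea that naming services such as JNDI build on. A simplified, hypothetical illustration of a context as a map of name-to-object bindings (not the JNDI API):

```java
import java.util.HashMap;
import java.util.Map;

public class Main {
    // A minimal "context": each bind() records one name-to-object binding,
    // and lookup() resolves a name back to the bound object.
    static class SimpleContext {
        private final Map<String, Object> bindings = new HashMap<>();

        void bind(String name, Object obj) {
            bindings.put(name, obj);
        }

        Object lookup(String name) {
            return bindings.get(name);
        }
    }

    public static void main(String[] args) {
        SimpleContext ctx = new SimpleContext();
        ctx.bind("greeting", "hello");
        System.out.println(ctx.lookup("greeting")); // prints hello
    }
}
```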
JSF - Java Server Faces Questions
JSF  How do I upload an image to the server through JSF?
Could anybody help me?  Hi friend,
to upload an image in JSF visit:
JSF - Java Server Faces Questions
JSF Hibernate Spring Integration Example  Hi, can anyone share a JSF-Hibernate-Spring integration example?  Just check your JSF-related jars; the problem may come from a version mismatch.
JSF Books
We first heard about JavaServer Faces (JSF) at the 2002 JavaOne conference.
Experienced JSP developers will find that JavaServer Faces provides much of the plumbing needed to leverage the full potential of JavaServer Faces (JSF) and Ajax.
JSF - Java Server Faces Questions
JSF  How do I create a row using a CREATE button in JSF? An answer is urgently required.
jsf - Java Server Faces Questions
jsf  Which tag do we have to use for a link in JSF (i.e., like the anchor tag in HTML)?
The "commandLink" tag is used like the anchor tag in HTML:
h:commandLink
For more information on JSF visit to :
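A minimal sketch of h:commandLink in a JSP page (the bean name, action, and label are illustrative; the JSF HTML taglib must be declared):

```xml
<%@ taglib uri="http://java.sun.com/jsf/html" prefix="h" %>
<h:form>
  <!-- renders as an <a> element that submits the enclosing form -->
  <h:commandLink action="#{myBean.fwdPage}">
    <h:outputText value="Go"/>
  </h:commandLink>
</h:form>
```

Unlike a plain HTML anchor, the rendered link posts back the enclosing h:form so the action method can run.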
jsf - Java Server Faces Questions
JSF sorting datatable  I want to sort a dataTable by the selected column and its values.
jsf - Java Server Faces Questions
jsf  Hi,
how do I set up a database connection in JSF?
Tell me the steps.
thanks in advance
JBoss deployment problem for jsf - Java Server Faces Questions
].[localhost].[/RIFacelets]] (HDScanner) Exception sending context initialized event
JSF Form - Java Server Faces Questions
JSF Form  What is the JSF form action?
JSF Forms - Developing form based application
Complete Java Server Faces (JSF) Tutorial -
JSF Tutorials. JSF Tutorials at Rose India covers
jsf - Java Server Faces Questions
jsf  Hi friends,
I have a problem with the JSF commandLink: I need 2 links in my JSF page and I wrote code like...  Use the following code to solve the problem.
For more information on JSF visit:
jsf - Java Server Faces Questions
jsf  Hi,
In my project there are 3 JSP pages and one JSF page. The first JSP page forwards the request to the JSF page; the JSF page then decides the greeting page, meaning if it succeeds it goes to the success page, otherwise it goes to the failed
JSF - Java Server Faces Questions
JSF  Hello everyone,
I want to know about JSF. How does it work? What other terms relate to it? How do I start with "Hello World!"?
Kindly reply...
Hope that it will be helpful for you.
JSF - Java Server Faces Questions
JSF  Hi,
I am getting a ClassNotFoundException:
java.lang.ClassNotFoundException: org.apache.commons.collections.map.LazyMap
in JSF,
even though I added the commons-collections-3.2.1.jar file to my project.
JSF error - Java Server Faces Questions
"java.lang.NoClassDefFoundError: javax/faces/event/SystemEventListener"
I am using jsf-api & jsf...  JSF error  When I am trying to run my JSF application using...
Thanks
JSF - Java Server Faces Questions
information:
Thanks
Amardeep
Servlet Context
getContextPath(): This method is used to find out
the context path of the web application.  Servlet Context:
In this section we will read about the ServletContext. Initialization values can be supplied by providing them in the web.xml
file using the <context-param> element.
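A sketch of supplying such a value (the parameter name and value are hypothetical); in a servlet it would be read back with getServletContext().getInitParameter("adminEmail"):

```xml
<!-- A web-application-wide init parameter, visible to every servlet
     through the ServletContext. -->
<context-param>
  <param-name>adminEmail</param-name>
  <param-value>[email protected]</param-value>
</context-param>
```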
JSF - Java Server Faces Questions
on JSF.
I have one drop down box in my application which has values 1,2,3,4... and I want to run code when the selected value changes, using a valueChangeListener method like:
}
}
For more information on JSF valueChangeListener visit to :
Java server Faces Books
JavaServer Faces, or JSF, brings a component-based model to web application... a beginner or an expert, you'll find everything you need to know about JSF... server Faces
JavaServer Faces, or JSF, brings a component-based model
JSF Code - Java Server Faces Questions
JSF Code I'm having difficulty persisting or insert web application programmed with Java Server Faces into the database?
I need code on how to create a user account and store the variables using JSF technology.  javax.servlet.jsp.JspException: Cannot find bean org.apache.struts.taglib.html.BEAN
cannot find symbol class array queue--plzz somebody help..
cannot find symbol class array queue -- please somebody help..
import java.util.*;
public class Test {
    public static void main(String[] args)
    {
        ArrayQueue<String> q = new ArrayQueue<String>(6);
        System.out.println
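The compiler reports "cannot find symbol: class ArrayQueue" because java.util contains no class of that name; ArrayDeque is the standard array-backed queue. A runnable sketch (the makeQueue helper is mine, not from the question):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class Main {
    // java.util has no ArrayQueue class, hence "cannot find symbol";
    // ArrayDeque is the array-backed queue to use instead.
    static Queue<String> makeQueue() {
        return new ArrayDeque<String>(6); // 6 is just the initial capacity
    }

    public static void main(String[] args) {
        Queue<String> q = makeQueue();
        q.add("a");
        q.add("b");
        System.out.println(q.remove()); // prints a (FIFO order)
    }
}
```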
Spring Context - Writing Application Context
... by defining the Application
context definition file (applicationContext-hibernate.xml). This file is used by
the context loader servlet to initialize the Spring container.
JSF Internalization - Java Server Faces Questions
JSF Internalization Hi,
Please tell me the Text Change functionality in JSF.
Kind regards,
Prasanna
jsf configration - Java Server Faces Questions
jsf configration  Can I use JSF with the following configuration: JDK 1.4, Tomcat 4.1, and also JBoss/WAS (IBM's
JSF - Java Server Faces Tutorials
JSF - Java Server Faces Tutorials
Complete Java Server Faces (JSF... language, Messages etc...
Java Server Faces (JSF) is a great... effort brought a new technology named Java Server Faces
(JSF
cannot insert data into ms access database - Java Server Faces Questions
cannot insert data into ms access database
Hi Friend,
You can use the following steps for creating DSN connection:
1. Open Data Sources (Start -> Control Panel -> Administrative Tools -> Data Sources (ODBC)).
jsf - Java Server Faces Questions
Hibernate & JSF - Java Server Faces Questions
Hibernate & JSF  JSF and Spring integration: how do I get the available data from the database in a JSF project?  Hello, please check our Hibernate, JSF and Spring Integration tutorial at
JSF code Error - Java Server Faces Questions
JSF code Error  Hi Friends,
when I am working on JSF, what jars do I have to add to the application?
Thanks and Regards,
Sreedhar
JSF binding attribute - Java Server Faces Questions
JSF binding attribute  Hi friends,
can anyone please enlighten me about the binding attribute in JSF?
How can we use the binding attribute? Please give an example...  The binding attribute associates a backing-bean property with the component.
"login.jsp"
JSF Login Example By Using
JSF h:panelGrid - Java Server Faces Questions
JSF h:panelGrid Hi Sir,
When using h:panelGrid in JSF, is there an option...?  For more information on the JSF panelGrid tag visit:
jsf tomahawk - Java Server Faces Questions
jsf tomahawk  I am using JSF and Tomahawk.
The JavaScript in Tomahawk is not loaded in my page automatically.
Any suggestion?  Hi Friend,
Please visit the following link:
jsf command link - Java Server Faces Questions
jsf command link Hi deepak ji ,
i have created one jsf page...;
faces... problem on JSF visit to :
dropdown box in jsf - Java Server Faces Questions
in jsf.
2.if the dropdown box contains id values like 1,2,3,4..........,if i click... for above queries.
And one more how to create morethan one dropdown box in one jsf page.
please give me jsf code only.
Thanks in advance. Hi friend
jsf login page - Java Server Faces Questions
jsf login page how to create a login page and connect to database using abatis tool
JSF - Framework
Servlet
*.jsf
---------------------------------
faces...JSF HI,can any one send me a sample web.xml, faces-config.xml..._METHOD
server
Faces Servlet
Java Server Faces (JSF)
Java Server Faces (JSF)
JSF Training
Course Objectives
To understand the fundamentals of the JSF framework and how it can be Hibernate - Java Server Faces Questions
JSF Hibernate I want to save data ,which is from view into data base by using hibernate .
somebody had used with Spring. But i want to use just jsf and hibernate.Is it possible? If so can u able to provide me an example SESSION - Java Server Faces Questions
JSF SESSION i am facing problem to store and retrive textbox values in session variable in jsf, i want to store textbox value in bean as well as in session variable but when submitting the form it showing null.i have written command link navigation case - Java Server Faces Questions
in JSF :
"navigation case in faces-config.xml...jsf command link navigation case Hi,
in my jsf page i have 2 commandlink tags,then how to write navigation case in faces config.xml tell me as soon
jsf tags - Java Server Faces Questions
jsf tags hi how can i insert a currency symbol in the jsf field with precision of 2 digits
if i use the following tag
with these libs... is not working
can any one tell me how can use both space and currency tag in my jsf
need view for jsf - Java Server Faces Questions
need view for jsf Hi,
iam using eclipse 3.1 and jsf 1.1 in my application,while creating only we can see the view of jsf page ,it will appear.../jsf/
Thanks
JSF Interview Questions
;
Collection of JSF (Java Server Faces) Interview Questions.
JSF Interview Question
JavaServer Faces (JSF) is an application framework...;
JSF Interview
Question Page 1
JavaServer Faces (JSF
jsf and hibernate - Java Server Faces Questions
JSF and Hibernate - developing an enterprise application How to develop an enterprise application in Hibernate and JSF? Hi,JSF, Hibernate... how to use these technologies then visit
JSF Search - Java Server Faces Questions
JSF Search Hi,
Do you have any examples of how to do a Search programm using JSF in RAD.My Search button should carry the value of the input text... and the results should be displayed in the new page.
I tried to go through your JSF
JSF image - Java Server Faces Questions
JSF image Dear Sir
I want to display a java.awt.iamge... to save the image.Without saving i want to display the image in jsf page.
public... the image in server.
Here i want to display the image in jsf page through
jsf navigation rule - Java Server Faces Questions
jsf navigation rule hi, i have one jsf page admin.jsp in that 2 commandlinks are there,they are addusers,addfieldusers for this navigation in faces... morethan one navigation rules in faces-config.xml.
Give me the solution as soon
jsf navigation rule - Java Server Faces Questions
jsf navigation rule Hi sir,
here is the code:
give me solution...,
faces-config.xml:
StoreNameBean
PersonBean
request
/pages..." go to "Hello.jsp"
For more information on JSF visit to :
http
jsf form output - Java Server Faces Questions
jsf form output Hi sir,
see my code and please tell me mistake... setEmailid(String emailid) {
this.emailid = emailid;
}
}
faces-config.xml... information on JSF visit to :
Thanks
jsf form output - Java Server Faces Questions
jsf form output Hi sir,
see my code and please tell me mistake...;
}
}
faces-config.xml:
PersonBean
PersonBean
request
/pages... have faced using this code.
For more information on jsf visit to
http
jsf code in myeclipse - Java Server Faces Questions
jsf code in myeclipse send me the code for richTablescroller Application in myEclipse Hi Friend,
Try the following code:
1... = initList;
}
}
3) In faces-config:
ScrollerBean
JSF-fileupload-ajax - Java Server Faces Questions
JSF-fileupload-ajax hi
i am upload the file JSF with ajax
i am using . i create 4 panel tabs
in one panel tab i used .i want file... friend,
For read more information :
google map with jsf - Java Server Faces Questions
google map with jsf hi,
i am using jsf with googlemap.i load the script file with googlemap .its working fine.how can i load the database values(latlong) to googlemap from jsf bean values.i want to show mutiple markers FileUpload - Java Server Faces Questions
JSF FileUpload hi all,
i am beginner to JSF i want upload one file and save it into another location.normally the files are upload temperorily stored to browser. Actual thing is that, i want to store that uploaded file
Simple JSF Hello Application
. First line tells
where to find JSF html tags that defines html elements and second line tells where to
find JSF core tags. A page which contains JSF tags... pattern with the Faces
servlet. This is used here because we have used *.jsf
jsf forms - Java Server Faces Questions
jsf forms Hi,
I have some queries in jsf forms please solve it.
1.how to set the height of a table
2. how to set the image at right side.
3.how......
-----------------------------------------
Visit for more information.
dropdown box in jsf - Java Server Faces Questions
information on JSF visit to :
richfaces in jsf - Java Server Faces Questions
richfaces in jsf Hi friends.
i added jsf tags like this :
but it giving exception like this:
org.apache.jasper.JasperException: File "/WEB.../jsf/richfaces/index.shtml
Thanks | http://roseindia.net/tutorialhelp/comment/58553 | CC-MAIN-2014-42 | refinedweb | 2,960 | 57.27 |
In part 1, you learned about the concepts behind state machines, how to create a basic working machine using XState, and integrate that into a React component.
In the second part, we will make our traffic light run automatically and turn it ON and OFF.
This article will assume you already read part 1. So if you didn’t, please check it out.
Last time, our state machine modeled a traffic light that changed at the press of a button. Now it’s time to add more features.
Making the Traffic Light Automatic
For a more realistic implementation, the button needs to be replaced by a timer. There are several ways to implement it. For this example, the choice will be a Delayed Transition — a transition that occurs automatically after a delay.
export const trafficLightMachine = createMachine< undefined, TrafficLightEvent, TraffiLightState >({ id: "trafficLight", initial: "red", states: { green: { on: { NEXT: "yellow" }, after: { 3500: "yellow", }, }, yellow: { on: { NEXT: "red" }, after: { 1500: "red", }, }, red: { on: { NEXT: "green" }, after: { 5000: "green", }, }, }, });
These delayed transitions are defined in the after property, which maps the delay in milliseconds to the respective state it will transition to.
So, 3.5 seconds after entering the green state, it will automatically transition to the yellow state. Which, in turn, will transition to the red state after 1.5 seconds, and red will transition over to green after 5 seconds, completing the familiar traffic light loop.
After making this change, the lights will start changing automatically, just like a real traffic light.
You can still use the NEXT button, and it will change lights immediately. Let’s run our automatic traffic light machine through the visualizer again.
Our automatic traffic light state machine
If you do it yourself, you’ll notice that the automatic transitions also happen on the visualizer! This is incredibly helpful to visualize complex business logic and a great way for designers and product managers to visualize how the application works.
Hierarchical States
A new requirement just came in. We need to be able to turn the traffic light on and off. So it’s time to extend our machine definition.
The machine needs to be turned on and off, which required adding two more event types to our model.
type TrafficLightEvent = | { type: "NEXT" } | { type: "TURN_OFF" } | { type: "TURN_ON" };
Now to our states, when the machine is ON, the lights should cycle automatically, and when we are on the OFF state, the lights need to be all off. This is best represented by hierarchical states — the OFF state would be a simple state, and the ON state should also be able to house our existing states. Let’s see how that looks.
type TraffiLightState = | { value: { ON: "green" }; context: undefined } | { value: { ON: "yellow" }; context: undefined } | { value: { ON: "red" }; context: undefined } | { value: "OFF"; context: undefined }; export const trafficLightMachine = createMachine< undefined, TrafficLightEvent, TraffiLightState >({ id: "trafficLight", initial: "OFF", states: { ON: { on: { TURN_OFF: "OFF" }, initial: "red", states: { green: { on: { NEXT: "yellow" }, after: { 3500: "yellow", }, }, yellow: { on: { NEXT: "red" }, after: { 1500: "red", }, }, red: { on: { NEXT: "green" }, after: { 5000: "green", }, }, }, }, OFF: { on: { TURN_ON: "ON" }, }, }, });
Let’s break up the changes, starting with our Typestates definitions. We added a case for the OFF state and also modified our color states. They now have a parent ON state, represented by the object notation.
Now to the machine itself, we need to refactor our first level of states definition. We created an ON and an OFF state. The OFF state is a simple state in which we define a single transition. It should transition over to ON when the TURN_ON event is received.
We define states again on the new ON state, with our previous color states and transitions. The lights are now substates of the parent ON states. We also changed the initial state of the entire machine. It now starts with OFF.
As this might be a little intimidating, let’s visualize our machine again to avoid any confusion.
Hierarchical traffic light
React Integration
Now that we’ve made the necessary changes to our state machine, we need to update our react integration to handle the new states and events.
export const App = () => { const [current, send] = useMachine(trafficLightMachine); return ( <div className="container"> <div className="pole" /> <div className="traffic-light"> <input type="radio" readOnly className="light red" checked={current.matches({ ON: "red" })} /> <input type="radio" readOnly className="light yellow" checked={current.matches({ ON: "yellow" })} /> <input type="radio" readOnly className="light green" checked={current.matches({ ON: "green" })} /> <button onClick={() => send("NEXT")}>NEXT</button> <button onClick={() => send("TURN_ON")}>TURN ON</button> <button onClick={() => send("TURN_OFF")}>TURN OFF</button> </div> </div> ); };
As you can see, not much has changed; we’ve added two new buttons to turn ON and OFF the traffic light. But the most significant change is how our matches function changes to match the color that should be lit. We need to check if the machine matches the substate of ON, so current.matches({ ON: ‘green’}) will only be true if the machine is ON and substate green.
If we run our application in the browser, we can see that the traffic light starts with all lights off, and we can turn it on to the light cycle using the TURN ON button.
Wrapping up
We’re now starting to see the true power of state machines! We explored how Delayed Transitions can make automatic transitions possible without writing too much code and how the Hierarchical States can help us model more complex behaviors without losing confidence, type safety, and code readability.
You can review the sample code here. | https://moduscreate.com/blog/getting-started-with-xstate-react-and-typescript-part-2/ | CC-MAIN-2022-21 | refinedweb | 920 | 61.67 |
Devel::Constants - translates constants back to named symbols
# must precede use constant use Devel::Constants 'flag_to_names'; use constant A => 1; use constant B => 2; use constant C => 4; my $flag = A | B; print "Flag is: ", join(' and ', flag_to_names($flag) ), "\n";
Declaring constants is very convenient for writing programs, but as Perl often inlines them, retrieving their symbolic names can be tricky. This worse with lowlevel modules that use constants for bit-twiddling.
Devel::Constants makes this much more manageable.
It silently wraps around the constant module, intercepting all constant declarations. It builds a hash, associating the values to their names, from which you can retrieve their names as necessary.
Note that you must use Devel::Constants before
constant, or the magic will not work and you will be very disappointed. This is very important, and if you ignore this warning, the authors will feel free to laugh at you (at least a little.
By default, Devel::Constants only intercept constant declarations within the same package that used the module. Also by default, it stores the constants for a package within a private (read, otherwise inaccessible) variable. You can override both of these.
Passing the
package flag to Devel::Constants with a valid package name will make the module intercept all constants subsequently declared within that package. For example, in the main package you might say:
use Devel::Constants package => NetPacket::TCP; use NetPacket::TCP;
All of the TCP flags declared within NetPacket::TCP are now available.
It is also possible to pass in a hash reference in which to store the constant values and names:
my %constant_map; use Devel::Constants \%constant_map; use constant NAME => 1; use constant RANK => 2; use constant SERIAL => 4; print join(' ', values %constant_map), "\n";
By default, Devel::Constants exports no subroutines. You can import its two helper functions optionally by passing them on the use line:
use Devel::Constants qw( flag_to_names to_name ); use constant FOO => 1; use constant BAR => 2; print flag_to_names(2); print to_name(1);
You may also import these functions with different names, if necessary. Pass the alternate name after the function name. Beware that this is the most fragile of all options. If you do not pass a name, Devel::Constants may become confused:
# good use Devel::Constants flag_to_names => 'resolve', 'to_name'; # WILL WORK IN SPITE OF POOR FORM (the author thinks he's clever) use Devel::Constants 'to_name', flag_to_names => 'resolve'; # WILL PROBABLY BREAK, SO DO NOT USE use Devel::Constants 'to_name', package => WD::Kudra;
Passing the
import flag will import any requested functions into the named package. This is occasionally helpful, but it will overwrite any existing functions in the named package. Be a good neighbor:
use Devel::Constants import => 'my::other::namespace', 'flag_to_names', 'to_name';
Note that constant also exports subroutines, by design.
flag_to_names($flag, [ $package ])
This function resolves a flag into its component named bits. This is generally only useful for known bitwise flags that are combinations of named constants. It can be very handy though.
$flag is the flag to decompose. The function does not modify it. The
$package parameter is optional. If provided, it will use flags set in another package. In the NetPacket::TCP example above, you can use it to find the symbolic names of TCP packets, such as SYN or RST set on a NetPacket::TCP object.
to_name($value, [ $package ])
This function resolves a value into its constant name. This does not mean that the value necessarily comes from the constant, but merely that it has the same value as the constant. (For example, 2 could be the result of a mathematical operation, or it could be a sign to dump core and bail out.
to_name only guarantees the same value, not the same semantics. See PSI::ESP if this is not acceptable.) As with flag_to_names, the optional
$package parameter will look for constants declared in a package other than the current.
flag_to_names(inefficient algorithm)
constant
chromatic
chromatic at wgz dot org, with thanks to "Benedict" at Perlmonks.org for the germ of the idea ().
Thanks also to Tim Potter and Stephanie Wehner for NetPacket::TCP.
Version 1.01 released by Neil Bowers <neilb at cpan dot org>.
This is free software. You may use, modify, and distribute it under the same terms as Perl 5.8.x itself.
Provides the ability to define constants, a reverse mapping function, and more besides.
A review of all CPAN modules related to the definition and manipulation of constants and read-only variables. | http://search.cpan.org/~neilb/Devel-Constants/lib/Devel/Constants.pm | CC-MAIN-2015-40 | refinedweb | 743 | 54.02 |
May 2016
Part 6 of the ASF ARM Tutorial
Enable the internal oscillators of Atmel ARM Cortex microcontrollers to use external crystals for their timing in an ASF project.
The ATSAM4N16C microcontroller on the Atmel SAM4N Xplained Pro board has two external crystals connected to its oscillator pins. Default Atmel Studio ASF projects use the internal RC oscillators instead of the external crystals. This part of the ARM Cortex ASF tutorial shows how to enable and use the external crystals with the oscillators.
A 12MHz crystal and 32.768kHz crystal are used for the main clock and slow clock respectively. A 100MHz system clock is derived from the internal oscillator using the 12MHz crystal by feeding the 12MHz clock through the PLL. A 32.768kHz slow clock is derived from the internal oscillator using the 32.768kHz crystal.
Start by creating a new user board ASF project as done in previous parts of this tutorial. I am calling this project ext_xtal.
In Atmel Studio:
If the new project is compiled and loaded to the board it will not use the external 12MHz crystal, but rather the internal RC oscillator. This can be verified by probing the crystal pins with an oscilloscope.
The following steps show how to use the 12MHz external crystal instead of the internal RC oscillator for the main clock.
Before ASF functions can be used to set up the clock source for a microcontroller, the System Clock Control service ASF module must be added to the project.
The sysclk_init() function becomes available after adding the System Clock Control ASF module. Call sysclk_init() at the top of main().
main.c
#include <asf.h> int main (void) { sysclk_init(); board_init(); while(1) { } }
After adding the System Clock Control ASF module, conf_clock.h is added to the project in src\config\. Open this file for editing.
Current system clock configuration can be found in conf_clock.h and the current clock settings are shown in the comments at the bottom of the file. As can be seen in the image below, the system clock is set up to run at 100MHz using the internal 8MHz RC oscillator.
ASF System Clock Settings
We still want to get the clock source from the PLL and use a prescaler value of two, so leave the first two definitions in conf_clock.h at their default settings.
CONFIG_PLL0_SOURCE must change from PLL_SRC_MAINCK_8M_RC to OSC_MAINCK_XTAL to change from using the internal RC oscillator to using the crystal attached to the external oscillator pins.
On the SAM4N Xplained Pro, the main crystal is 12MHz. The values of the PLL multiplication and division factors must change in order to derive a 100MHz system clock from the 12MHz crystal. This is done by changing the division to 3 which divides 12MHz by 3 to get 4MHz. A multiplication factor or 50 is then used to change the PLL frequency to 4MHz × 50 = 200MHz. The prescaler division of 2 brings this frequency down to 100MHz.
The modified conf_clock.h file is shown below with the commented out options removed and the comments at the bottom of the file updated.
conf_clock.h
#ifndef CONF_CLOCK_H_INCLUDED #define CONF_CLOCK_H_INCLUDED #define CONFIG_SYSCLK_SOURCE SYSCLK_SRC_PLLACK // ===== System Clock (MCK) Prescaler Options (Fmck = Fsys / (SYSCLK_PRES)) #define CONFIG_SYSCLK_PRES SYSCLK_PRES_2 // ===== PLL0 (A) Options (Fpll = (Fclk * PLL_mul) / PLL_div) // Use mul and div effective values here. #define CONFIG_PLL0_SOURCE OSC_MAINCK_XTAL #define CONFIG_PLL0_MUL 50 #define CONFIG_PLL0_DIV 3 // ===== Target frequency (System clock) // - External Xtal frequency: 12MHz // - System clock source: PLLA // - System clock prescaler: 2 (divided by 2) // - PLLA source: 12MHz // - PLLA output: 12MHz * 50 / 3 // - System clock: 12MHz * 50 / 3 / 2 = 100MHz #endif /* CONF_CLOCK_H_INCLUDED */
Note: When writing the multiplication factor directly to the PLL configuration register, one would normally be subtracted from this value to make the multiplication work correctly. This is not necessary in the above settings because the function that uses this value subtracts one from it. The multiplication factor for this example can therefore be 50 and not 49 to get the PLL to multiply by 50.
Values for the crystals on the board must be defined in conf_board.h as we have done in previous projects in this tutorial series. These values were defined in previous projects to prevent compiler warnings rather than use the external crystals. All previous projects used the internal 8MHz RC oscillator as the clock source which was ramped up to 100MHz by the PLL.
conf_board.h
#ifndef CONF_BOARD_H #define CONF_BOARD_H project can now be built and loaded to the board. An oscilloscope can be used to check the pins of the external 12MHz crystal to see that it is being used by the oscillator. A 12MHz clock should be seen on the oscilloscope.
Two lines of code must be added to the project at the top of main() to enable the slow clock oscillator to use the external 32.768kHz crystal. The code is shown below.
main.c
#include <asf.h> int main (void) { pmc_switch_sclk_to_32kxtal(PMC_OSC_XTAL); // enable external 32.768kHz crystal while (!pmc_osc_is_ready_32kxtal()); // wait until oscillator is ready sysclk_init(); board_init(); while(1) { } }
The first line of code enables the slow clock to use the 32.768kHz external crystal. The second line of code waits for the slow clock oscillator to stabilize.
The project can be built again and loaded to the board. An oscilloscope can be used to check the pins of the 32.768kHz crystal which should show a 32.768kHz clock pulse.
Amazon.com
Amazon.co.uk | http://startingelectronics.org/software/atmel/asf-arm-tutorial/external-crystals/ | CC-MAIN-2017-43 | refinedweb | 900 | 64.1 |
Name | Synopsis | Interface Level | Parameters | Description | Return Values | Context | Attributes | See Also
#include <sys/neti.h> int net_inject(const net_data_t net, inject_t style, net_inject_t *packet);
Solaris DDI specific (Solaris DDI).
value returned from a successful call to net_protocol_lookup(9F).
method that determines how this packet is to be injected into the network or kernel.
details about the packet to be injected.
The net_inject() function provides an interface to allow delivery of network layer (layer 3) packets either into the kernel or onto the network. The method of delivery is determined by style.
If NI_QUEUE_IN is specified, the packet is scheduled for delivery up into the kernel, imitating its reception by a network interface. In this mode, packet->ni_addr is ignored and packet->ni_physical specifies the interface for which the packet is made to appear as if it arrived on.
If NI_QUEUE_OUT is specified, the packet is scheduled for delivery out of the kernel, as if it were being sent by a raw socket. In this mode, packet->ni_addr and packet->ni_physical are both ignored.
Neither NI_QUEUE_IN or NI_QUEUE_OUT cause the packet to be immediately processed by the kernel. Instead, the packet is added to a list and a timeout is scheduled (if there are none already pending) to deliver the packet. The call to net_inject() returns once the setup has been completed, and not after the packet has been processed. The packet processing is completed on a different thread and in a different context to that of the original packet. Thus, a packet queued up using net_inject() for either NI_QUEUE_IN or NI_QUEUE_OUT is presented to the packet event again. A packet received by a hook from NH_PHYSICAL_IN and then queued up with NI_QUEUE_IN is seen by the hook as another NH_PHYSICAL_IN packet. This also applies to both NH_PHYSICAL_OUT and NI_QUEUE_OUT packets.
If NI_DIRECT_OUT is specified, an attempt is made to send the packet out to a network interface immediately. No processing on the packet, aside from prepending any required layer 2 information, is made. In this instance, packet->ni_addr may be used to specify the next hop (for the purpose of link layer address resolution) and packet->ni_physical determines which interface the packet should be sent out.
For all three packets, packet->ni_packet must point to an mblk structure with the packet to be delivered.
See net_inject_t(9S) for more details on the structure net_inject_t.
The net_inject() function returns:
The network protocol does not support this function.
The packet is successfully queued or sent.
The packet could not be queued up or sent out immediately.
The net_inject() function may be called from user, kernel, or interrupt context.
See attributes(5) for descriptions of the following attributes:
net_protocol_lookup(9F), netinfo(9F), net_inject_t(9S)
Name | Synopsis | Interface Level | Parameters | Description | Return Values | Context | Attributes | See Also | http://docs.oracle.com/cd/E19082-01/819-2256/6n4icm0p4/index.html | CC-MAIN-2015-32 | refinedweb | 464 | 55.24 |
27 August 2012 05:22 [Source: ICIS news]
By Nurluqman Suratman and Dolly Wu
?xml:namespace>
SINGAPORE
Manufacturing activities in the country – which is
In its monthly survey of Chinese factories, UK banking group HSBC issued a lower flash purchasing managers’ index (PMI) for the country in August at 47.8 than July’s 49.5.
HSBC’s PMI readings for
New export orders sub-index captured in the HSBC survey fell to 44.7 in August – the worst showing since March 2009.
“The collapse in August's new export orders reading implies that sluggish exports growth will stay with
At 1%,
“Export pressure for
Dwindling demand from the debt-ridden eurozone has weighed heavily on Chinese exports and this, in turn, is having a negative impact on other Asian economies, including
Based on July data,
During the month,
Its imports of methanol also fell by 48% year on year to 259,237 tonnes, while those of purified terephthalic acid (PTA) and phenol declined by 19% and 29%, respectively.
But it took in significantly higher volumes of butadiene in July at 243,920 tonnes – representing an eightfold increase from the previous corresponding period.
Apart from its slowing economy,
“Despite tentative signs of stabilization in
It was able to ride through the global recession of 2009 largely unscathed. But this time around, a huge hit on exports – its main engine of growth – is inevitable as the eurozone, which is deep in mounting debt crisis, is a major market.
“
“The main drag in
“For the first several months of this year, demand from the
“However, the manufacturing cycle in the
The Chinese economy is forecast to post an 8% growth this year, a deceleration from a 9.2% pace recorded in 2011, according to global financial watchdog – the International Monetary Fund (IMF).
($1 = €0.80, $1 = Y78.6) | http://www.icis.com/Articles/2012/08/27/9589270/Slowing-China-drags-petrochemical-demand-in-Asia.html | CC-MAIN-2015-06 | refinedweb | 306 | 55.27 |
Hi
i did the simple part of my project and got it to run but now i have to add in prototypes. can someone please help
bool validRadius(double);
bool validLengthAndWidth(double, double);
bool validBaseAnd Height(double, double);
I will also create functions which will calculate the area of the various geometric objects. These functions will be called from within each of the valid data cases in your switch-case selection construct. In each of the different cases, validate the data first and call the function if the data is in fact valid. If called, each of the functions will then calculate their respective areas and then return the value of its calculation to be printed within the case ( or write another function to do the printing.).
Use the following prototypes:
double areaCircle(double);
double areaRectangle(double, double);
double areaTriangle( double, double);
my programs
#include <iostream> #include <iomanip> #include <cmath> // needed for pow function using namespace std; int main() { int choice; //menu choice double PI= 3.14159; double area, radius, base, num1, length, height, width; while (choice != 4) { cout << "\t\tGeometry Calculator Menu\n\n"; cout << "1. Calculate the Area of a Circle\n"; cout << "2. Calculate the Area of a Rectangle\n"; cout << "3. Calculate the Area of a Triangle\n"; cout << "4. Quit the Program\n\n"; cout << "Enter your choice (1-4): "; cin >> choice; switch (choice) { case 1: cout << "Enter the radius: "; cin >> radius; area = 3.14159 * pow(radius, 2.0); cout << "Area = " << area << "\tsquare units" << endl; break; case 2: do { cout << "Enter length: "; cin >> length; while (length < 0 ) { cout << "Please enter a positive length value: " << endl; cin >> length; } cout << "Enter width: "; cin >> width; while (width < 0 ) { cout << "Please enter a positive width value: " << endl; cin >> width; } } while ( length < 0 || width < 0 ); area = length * width; cout << "Area of rectangle is: " << area << endl; break; case 3: do { cout << "Enter base: "; cin >> base; while (base < 0 ) { cout << "Please enter a positive base value: " << endl; cin >> base; } cout << "Enter height: "; cin >> height; while (width < 0 ) { cout << "Please enter a positive height value: " << endl; cin >> height; } } while ( height < 0 || base < 0 ); area = base * height * .5; cout << "Area of triangle is: " << area << endl; break; case 4: void exit (int exitcode); break; default: cout << "**Error: Menu option must be 1, 2, 3 or 4." << endl; cout << "Please try again." << endl; break; } } system("pause"); return 0; } | https://www.daniweb.com/programming/software-development/threads/152708/need-help-added-additional-code-to-my-program | CC-MAIN-2018-43 | refinedweb | 392 | 66.17 |
Download the code files for this project here.
I love lights; specifically, I love LEDs - which have been described to me as "catnip for geeks". LEDs are low powered but bright, which means they can be embedded into all sorts of interesting places and, when coupled with a network, can be used for all sorts of ambient display purposes.
In this post, I'll show you how to build an "information radiator" with a bit of Python and some LEDs, which you can then adapt to your own personal needs.
// An information radiator light showing the forecast temperature in Melbourne.
An information radiator is so called because it radiates information outwards from (often) a fixed point so that it can be interpreted by an observer. More complex information can be encoded through the use of color, brightness, or the frequency of the lighting.
I'm going to show you how to build an ambient display that scrapes some data from a weather service and then display it using colored light to indicate the forecasted temperature.
This is quite a simple example, but by the end of this two-part post series, you will be able to change your information radiator to consider rain or multiple weather elements, or even point it at something else that is important to you.
Bill of materials
- An Arduino-compatible board with Ethernet
- An RGB LED light disc module
- Hookup wire
- A 7-20V DC power pack
- An Ethernet cable
Tools required
These common tools will come in handy:
- Soldering iron
- Wire strippers
Design
You don't want the light attached to the computer all the time - what's the point of a light if you can just look up the weather on Google? The device will connect to the network and exist somewhere visible, and then the processing can run on a mini server somewhere (such as a Raspberry Pi) and just send the device messages when needed.
So, the system design looks like this:
- The microcontroller looks after the LED and exposes a network interface.
- A Python script runs periodically on the server to check the weather forecast, get the data, and then send a message to the Arduino.
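Here is a minimal sketch of that server-side Python script. Everything network-related below is a placeholder — the real message protocol between the script and the Arduino is built later in the project — and `get_forecast_temp()` is a stub standing in for whichever weather service you scrape. It simply maps the forecast temperature to a colour name and sends it as a single line over TCP:

```python
import socket

ARDUINO_HOST = "192.168.1.50"   # placeholder: your Arduino's address
ARDUINO_PORT = 8888             # placeholder: whatever port the sketch listens on

def get_forecast_temp():
    """Stub: replace with a real call to your weather service."""
    return 31.5

def temp_to_colour(temp_c):
    """Map a forecast temperature (deg C) to a colour name; the bands are arbitrary."""
    if temp_c < 10:
        return "blue"
    if temp_c < 20:
        return "green"
    if temp_c < 30:
        return "yellow"
    return "red"

def send_colour(colour):
    """Send one colour-name line to the Arduino and close the connection."""
    with socket.create_connection((ARDUINO_HOST, ARDUINO_PORT), timeout=5) as sock:
        sock.sendall((colour + "\n").encode("ascii"))

# Run from cron every half hour or so:
# send_colour(temp_to_colour(get_forecast_temp()))
```

Scheduling it with cron (or a simple loop with a sleep) is what makes the display "ambient" — the light refreshes itself without anyone touching it.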
Building the light
The build of the light is quite straightforward. Cut four pieces of wire about 6 inches long (personal preference) and solder them to the four connections on the light disk.
// Light disc with wires soldered on.
Strip 5mm of wire from the other end and wire the light disk to the Arduino in the following way:
- R to pin 5
- G to pin 6
- B to pin 9
Depending on the version of the light disc you have, wire GND to GND or 5V to 5V. The specifics are labelled on the disc itself, and the newer discs are GND.
// Light disc wired into Arduino.
That's it! You're all done electronics-wise.
Plug in an Ethernet cable and ensure you have 7-20V power supplied from a power pack to the Arduino.
Programming the Arduino
If you have never programmed an Arduino before, I suggest this tutorial as an excellent starting point. I'm going to assume you have got the Arduino IDE installed on your computer and you can upload sketches.
First, you need to test your wiring. The following Arduino code will cycle through combinations of colors for about 1 second each. It will print the color to the serial console as well, so you can observe it with the serial monitor:
#define RED 5
#define GREEN 6
#define BLUE 9
#define MAX_COLOURS 8
#define GND true // change this to false if 5V type

char* colours[] = {"Off", "red", "green", "yellow", "blue", "magenta", "cyan", "white"};
uint8_t current_colour = 0;

void setup() {
    Serial.begin(9600);
    Serial.println("Testing lights");

    pinMode(RED, OUTPUT);
    pinMode(GREEN, OUTPUT);
    pinMode(BLUE, OUTPUT);

    if (GND) {
        digitalWrite(RED, LOW);
        digitalWrite(GREEN, LOW);
        digitalWrite(BLUE, LOW);
    } else {
        digitalWrite(RED, HIGH);
        digitalWrite(GREEN, HIGH);
        digitalWrite(BLUE, HIGH);
    }
}

void loop () {
    Serial.print("Current colour: ");
    Serial.println(colours[current_colour]);

    if (GND) {
        digitalWrite(RED, current_colour & 1);
        digitalWrite(GREEN, current_colour & 2);
        digitalWrite(BLUE, current_colour & 4);
    } else {
        digitalWrite(RED, !(bool)(current_colour & 1));
        digitalWrite(GREEN, !(bool)(current_colour & 2));
        digitalWrite(BLUE, !(bool)(current_colour & 4));
    }

    if ((++current_colour) >= MAX_COLOURS) current_colour = 0;
    delay(1000);
}
Notably, there is a flag to flip (#define GND true | false) depending on whether your light disc uses GND or 5V. All this does is reverse the bit-shifting logic (on the GND disc, the light goes on when the pin goes HIGH, but on the 5V disc, the light goes on when the pin goes LOW).
If the colors are muddled, you have probably just connected a wire to the wrong pin; just flip them over and it should be fine. If you aren't seeing any light, check your connections and ensure you are getting power to the light disk.
The next thing to do is write the sketch that will take messages from the network and update the light. To do this, we need to establish a protocol. There are many ways to define this, but for simplicity, a text protocol like JSON works sufficiently well.
Each message will look like this:
{r:val, g:val, b:val}
In each case, val is an unsigned byte, so will be in the range 0-255:
// Adapted from generic web server example as part of IDE created by David Mellis and Tom Igoe.
#include "Arduino.h"
#include <Ethernet.h>
#include <SPI.h>
#include <string.h>
#include <stdlib.h>

#define DEBUG false // <1> set to true for serial debugging (tested with #if, not #ifdef, since DEBUG is always defined)

// Enter a MAC address and IP address for your controller below.
// The IP address will be dependent on your local network:
byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xAE };
byte ip[] = { <PUT YOUR IP HERE AS COMMA BYTES> }; // eg 192,168,0,100
byte gateway[] = { <PUT YOUR GW HERE AS COMMA BYTES> }; // eg 192,168,0,1
byte subnet[] = { <PUT YOUR SUBNET HERE> }; // eg 255,255,255,0

// Initialize the Ethernet server library
// with the IP address and port you want to use (in this case telnet)
EthernetServer server(23);

#define BUFFERLENGTH 255

// these are the pins you wire each LED to.
#define RED 5
#define GREEN 6
#define BLUE 9
#define GND true // change this to false if 5V type, true if GND type light disc

void setup() {
    Ethernet.begin(mac, ip, gateway, subnet);
    server.begin();
#if DEBUG
    Serial.begin(9600);
    Serial.println("Awaiting connection");
#endif
}

void loop() {
    char buffer[BUFFERLENGTH];
    int index = 0;

    // Listen
    EthernetClient client = server.available();
    if (client) {
#if DEBUG
        Serial.println("Got a client");
#endif
        // reset the input buffer
        index = 0;
        while (client.connected()) {
            if (client.available()) {
                char c = client.read();

                // if it's not a new line, then add it to the buffer <2>
                if (c != '\n' && c != '\r') {
                    buffer[index] = c;
                    index++;
                    if (index >= BUFFERLENGTH) index = BUFFERLENGTH - 1; // >= keeps buffer[index] in bounds
                    continue;
                } else {
                    buffer[index] = '\0';
                }

                // get the message string for processing
                String msgstr = String(buffer);

                // get just the bits we want between the {}
                msgstr = msgstr.substring(msgstr.lastIndexOf('{') + 1,
                                          msgstr.indexOf('}', msgstr.lastIndexOf('{')));
                msgstr.replace(" ", "");
                msgstr.replace("\'", "");
#if DEBUG
                Serial.println("Message:");
                Serial.println(msgstr);
#endif
                // rebuild the buffer with just the URL
                msgstr.toCharArray(buffer, BUFFERLENGTH);

                // iterate over the tokens of the message - assumed flat. <3>
                char *p = buffer;
                char *str;
                while ((str = strtok_r(p, ",", &p)) != NULL) {
#if DEBUG
                    Serial.println(str);
#endif
                    char *tp = str;
                    char *key;
                    char *val;

                    // get the key
                    key = strtok_r(tp, ":", &tp);
                    val = strtok_r(NULL, ":", &tp);
#if DEBUG
                    Serial.print("Key: ");
                    Serial.println(key);
                    Serial.print("val: ");
                    Serial.println(val);
#endif
                    // <4>
                    if (GND) {
                        if (*key == 'r') analogWrite(RED, atoi(val));
                        if (*key == 'g') analogWrite(GREEN, atoi(val));
                        if (*key == 'b') analogWrite(BLUE, atoi(val));
                    } else {
                        if (*key == 'r') analogWrite(RED, 255 - atoi(val));
                        if (*key == 'g') analogWrite(GREEN, 255 - atoi(val));
                        if (*key == 'b') analogWrite(BLUE, 255 - atoi(val));
                    }
                }
                break;
            }
        }
        delay(10); // give client time to send any data back
        client.stop();
    }
}
The most notable parts of the code are as follows:
- You add your own network settings in here
- This text parser just adds text to a buffer until a \n arrives
- As this protocol is simple, I use a string tokenizer to break up the message into its constituent pieces as key-value pairs
- Use the RGB values to set the appropriate level on the PWM pins (noting polarity reversal for GND vs 5V light discs)
To test the code, upload the sketch, ensure your Ethernet cable is plugged in, and attempt to connect to the device:
telnet <ip> 23
This should return something like the following:
Trying 10.0.1.91... Connected to 10.0.1.91. Escape character is '^]'.
Now, enter:
{r:200,g:0, b:0} <enter>
If the light changes to red, then everything is working - time to get some data. If not, check your code and make sure the messages are being interpreted properly (plug in your computer to use the serial debugger to watch the messages).
Play around with changing the colors of your light by sending different values to the device. In the Part 2 post, I’ll explain how to scrape the weather data we want and use that to update the light periodically.
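The same messages can be sent from code instead of a telnet session. A minimal Python sketch (the device address in the comment is a placeholder for whatever IP you gave the Arduino):

```python
import socket

def colour_message(r, g, b):
    """Build one message in the {r:val,g:val,b:val} protocol."""
    return "{r:%d,g:%d,b:%d}\n" % (r, g, b)

def send_colour(host, r, g, b, port=23):
    """Open a telnet-style connection to the light, send one message, close."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(colour_message(r, g, b).encode("ascii"))

# e.g. send_colour("10.0.1.91", 200, 0, 0)  # same as typing {r:200,g:0,b:0}
```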
About the Author
Andrew Fisher is a creator (and destroyer) of things that combine mobile web, ubicomp, and lots of data. He is a programmer, interaction researcher, and CTO at JBA, a data consultancy in Melbourne, Australia. He can be found on Twitter at @ajfisher. | https://www.packtpub.com/books/content/building-information-radiator-part-1 | CC-MAIN-2017-39 | refinedweb | 1,550 | 59.94 |
Python Patterns - Implementing Graphs
Change notes: 2/22/98, 3/2/98, 12/4/00: This version of this essay fixes several bugs in the code.
Copyright (c) 1998, 2000, 2003 Python Software Foundation. All rights reserved. Licensed under the PSF license.
Graphs are networks consisting of nodes connected by edges or arcs. Typical graph problems include finding a path between two nodes, finding the shortest path between two nodes, determining cycles in the graph (a cycle is a non-empty path from a node to itself), finding a path that reaches all nodes (the famous "traveling salesman problem"), and so on. Sometimes the nodes or arcs of a graph have weights or costs associated with them, and we are interested in finding the cheapest path.
There's considerable literature on graph algorithms, which are an important part of discrete mathematics. Graphs also have much practical use in computer algorithms. Obvious examples can be found in the management of networks, but examples abound in many other areas. For instance, caller-callee relationships in a computer program can be seen as a graph (where cycles indicate recursion, and unreachable nodes represent dead code).
Few programming languages provide direct support for graphs as a data type, and Python is no exception. However, graphs are easily built out of lists and dictionaries. For instance, here's a simple graph (I can't use drawings in these columns, so I write down the graph's arcs):
A -> B
A -> C
B -> C
B -> D
C -> D
D -> C
E -> F
F -> C

This graph has six nodes (A-F) and eight arcs. It can be represented by the following Python data structure:
graph = {'A': ['B', 'C'],
         'B': ['C', 'D'],
         'C': ['D'],
         'D': ['C'],
         'E': ['F'],
         'F': ['C']}

This is a dictionary whose keys are the nodes of the graph. For each key, the corresponding value is a list containing the nodes that are connected by a direct arc from this node. This is about as simple as it gets (even simpler, the nodes could be represented by numbers instead of names, but names are more convenient and can easily be made to carry more information, such as city names).
Let's write a simple function to determine a path between two nodes. It takes a graph and the start and end nodes as arguments. It will return a list of nodes (including the start and end nodes) comprising the path. When no path can be found, it returns None. The same node will not occur more than once on the path returned (i.e. it won't contain cycles). The algorithm uses an important technique called backtracking: it tries each possibility in turn until it finds a solution.
def find_path(graph, start, end, path=[]):
    path = path + [start]
    if start == end:
        return path
    if not graph.has_key(start):
        return None
    for node in graph[start]:
        if node not in path:
            newpath = find_path(graph, node, end, path)
            if newpath: return newpath
    return None

A sample run (using the graph above):
>>> find_path(graph, 'A', 'D')
['A', 'B', 'C', 'D']
>>>

The second 'if' statement is necessary only in case there are nodes that are listed as end points for arcs but that don't have outgoing arcs themselves, and aren't listed in the graph at all. Such nodes could also be contained in the graph, with an empty list of outgoing arcs, but sometimes it is more convenient not to require this.
Note that while the user calls find_path() with three arguments, it calls itself with a fourth argument: the path that has already been traversed. The default value for this argument is the empty list, '[]', meaning no nodes have been traversed yet. This argument is used to avoid cycles (the first 'if' inside the 'for' loop). The 'path' argument is not modified: the assignment "path = path + [start]" creates a new list. If we had written "path.append(start)" instead, we would have modified the variable 'path' in the caller, with disastrous results. (Using tuples, we could have been sure this would not happen, at the cost of having to write "path = path + (start,)" since "(start)" isn't a singleton tuple -- it is just a parenthesized expression.)
It is simple to change this function to return a list of all paths (without cycles) instead of the first path it finds:
def find_all_paths(graph, start, end, path=[]):
    path = path + [start]
    if start == end:
        return [path]
    if not graph.has_key(start):
        return []
    paths = []
    for node in graph[start]:
        if node not in path:
            newpaths = find_all_paths(graph, node, end, path)
            for newpath in newpaths:
                paths.append(newpath)
    return paths

A sample run:
>>> find_all_paths(graph, 'A', 'D')
[['A', 'B', 'C', 'D'], ['A', 'B', 'D'], ['A', 'C', 'D']]
>>>

Another variant finds the shortest path:
def find_shortest_path(graph, start, end, path=[]):
    path = path + [start]
    if start == end:
        return path
    if not graph.has_key(start):
        return None
    shortest = None
    for node in graph[start]:
        if node not in path:
            newpath = find_shortest_path(graph, node, end, path)
            if newpath:
                if not shortest or len(newpath) < len(shortest):
                    shortest = newpath
    return shortest

Sample run:
>>> find_shortest_path(graph, 'A', 'D')
['A', 'C', 'D']
>>>

These functions are about as simple as they get. Yet, they are nearly optimal (for code written in Python). In another Python Patterns column, I will try to analyze their running speed and improve their performance, at the cost of more code.
Another variation would be to add more data abstraction: create a class to represent graphs, whose methods implement the various algorithms. While this appeals to the desire for structured programming, it doesn't make the code any more efficient (to the contrary). It does make it easier to add various labels to the nodes or arcs and to add algorithms that take those labels into account (e.g. to find the shortest route between two cities on a map). This, too, will be the subject of another column. | http://www.python.org/doc/essays/graphs/ | crawl-001 | refinedweb | 963 | 67.69 |
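As a sketch of that object-oriented variant (an illustration only, not the promised column, and written with modern dictionary methods rather than has_key): the same dictionary can be wrapped in a class whose methods are straightforward transcriptions of the functions above.

```python
class Graph:
    """Directed graph; adjacency stored as {node: [successors]}."""

    def __init__(self, arcs=None):
        self.adj = {}
        for a, b in (arcs or []):
            self.adj.setdefault(a, []).append(b)

    def find_path(self, start, end, path=[]):
        # Same backtracking algorithm as the function version above.
        path = path + [start]
        if start == end:
            return path
        for node in self.adj.get(start, []):
            if node not in path:
                newpath = self.find_path(node, end, path)
                if newpath:
                    return newpath
        return None
```

Labels or weights on nodes and arcs would then hang naturally off the instance.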
Plus, changing beneficiaries, non-citizen contributions, and Crummey questions.
College-savings expert Susan Bart answers advisors' questions on 529 plans and other education-planning matters. E-mail your questions to advisorquest@morningstar.com.
Question: In 2009, I made two gifts, one for each child of mine, before year end through electronic payment into their 529 accounts. The cash was taken out of my checking account on Dec. 29, 2009, but the transfer was not credited to the 529 accounts until 2010. Is this a 2009 gift? Does it matter?
Susan: The gift is complete when you no longer have the ability to take it back. So if your account was debited in 2009 for an electronic transfer, it's a 2009 gift even if it doesn't get credited to the 529 program until 2010. The significance of the timing of the gift is in determining how much in additional funds you can give your children or their 529 accounts in 2010 without exceeding the $13,000 per donee gift tax annual exclusion.
Question: My client opened a 529 account naming himself as the beneficiary many years ago. He is the account owner as well as the designated beneficiary. He was planning to go back to school but ultimately did not pursue it. There is still $33,000 in his 529 account. He wants to change the beneficiary from himself to a son of his sister, in other words, his nephew. If he changes the beneficiary on the account to his nephew, will this trigger a gift tax or cause him to have to file a gift tax return?
Susan: Your client's nephew is a member of the family and thus the change of beneficiary will not be treated as a nonqualified distribution that is subject to income tax. There are, however, gift tax consequences to the change of beneficiary. The change of beneficiary will be treated as a gift from your client to the nephew. Your client would be required to file a gift tax return (Form 709). If your client does not make the five-year election, then only $13,000 of the gift will be covered by the gift tax annual exclusion, and $20,000 of the gift will be treated as a taxable gift from your client to his nephew. The $20,000 taxable gift would either reduce your client's $1,000,000 aggregate lifetime exclusion or if your client has already used his full $1,000,000 lifetime exclusion, would be subject to gift tax.
Because your client needs to file a gift tax return in any event, your client might as well make the five-year election on his gift tax return to cause the gift to be treated as if it were made pro rata over five years. Thus your client would be treated as if he made a gift of $6,600 in the year the beneficiary was changed and each of the subsequent four years. Assuming that your client did not make other gifts to his nephew in excess of $6,400 ($13,000 minus $6,600) in any of these years, no gift tax would be due.
If your client is highly motivated to avoid filing a gift tax return, he could change the beneficiary over $13,000 of the account this year, change the beneficiary over an additional $13,000 of the account next year, and then change the beneficiary of the remainder of the account in 2013. Assuming that in any year his gifts to his nephew plus the amount of the 529 account over which he changes the beneficiary does not exceed the gift tax annual exclusion amount, he would not have to file a gift tax return reporting the gift.
Question: A 529 account is being established for a minor child of a deceased employee of a global corporation so that co-workers of the deceased employee can make contributions to the account. Can a nonresident alien or a U.S. citizen residing abroad make contributions to the 529 account?
Susan: If contributions from multiple donors are anticipated, the 529 account should be established under a program that permits non-account owners to make contributions. Assuming that this is the case, generally there is no prohibition against a nonresident alien or a U.S. citizen residing abroad making contributions to the 529 account of which someone else is the account owner.
Question: Are you aware of any prohibitions on a married couple establishing a Crummey trust for their grandchild, with their son as the trustee? All trust forms that I have come across refer to an individual, not a couple, establishing the trust.
Susan: It's customary to name a single individual as the grantor of a trust, but there is no legal prohibition on a married couple being named as the grantor. For gift and estate tax purposes, what is relevant is who makes contributions to the trust, not who is named as grantor in the trust document. However, assuming that only one of the married couple is named as grantor, typically there would be no prohibition on the other member of the couple making contributions to the trust.. | http://www.morningstar.com/advisor/t/42990335/end-of-year-529-timing.htm | CC-MAIN-2016-30 | refinedweb | 873 | 58.01 |
The following is a changelog for JavaScript from Netscape Navigator 3.0 to 4.0. The old Netscape documentation can be found on archive.org. Netscape Navigator 4.0 was released on June 11, 1997. Netscape Navigator 4.0 was the third major version of the browser with JavaScript support.
JavaScript versions
Netscape Navigator 4.0 executes JavaScript language versions up to 1.2. Note that Netscape Navigator 3.0 and earlier ignored scripts with the language attribute set to "JavaScript1.2" and higher.
<SCRIPT LANGUAGE="JavaScript1.1"> <!-- JavaScript for Navigator 3.0. --> <SCRIPT LANGUAGE="JavaScript1.2"> <!-- JavaScript for Navigator 4.0. -->
New features in JavaScript 1.2
New objects
New properties
New methods
New operators
- delete
- Equality operators (== and !=)
New statements
- Labeled statements
- switch
- do...while
- import
- export
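Of these, the labeled statement is the least familiar: it gives break and continue a target, so a nested loop can exit an enclosing loop, or skip ahead in it, in one step. A small sketch (using == and var, in keeping with the 1.2-era language):

```javascript
var visited = [];
outer:
for (var i = 0; i < 3; i++) {
  for (var j = 0; j < 3; j++) {
    if (j > i) continue outer; // skip straight to the next i
    if (i == 2) break outer;   // leave both loops at once
    visited.push(i + "," + j);
  }
}
// visited is ["0,0", "1,0", "1,1"]
```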
Other new features
Changed functionality in JavaScript 1.2
- You can now nest functions within functions.
- Number now converts a specified object to a number.
- Number now produces NaN rather than an error if x is a string that does not contain a well-formed numeric literal.
- String now converts a specified object to a string.
- Array.prototype.sort() now works on all platforms. It no longer converts undefined elements to null; instead, it sorts them to the high end of the array.
- String.prototype.split():
- It can take a regular expression argument, as well as a fixed string, by which to split the object string.
- It can take a limit count so that it won't include trailing empty elements in the resulting array.
- String.prototype.substring() no longer swaps index numbers when the first index is greater than the second.
- toString() now converts the object or array to a literal.
- The break and continue statements can now be used with the new labeled statement.
Store Report Style Sheets
- 3 minutes to read
Tip
Online Example: How to create and store report style sheets
This tutorial describes how to create and save report style sheets at design time within Visual Studio, and conditionally load them at runtime.
Create and Save Style Sheets
To create a table report in this tutorial, start with a report that is bound to the “Products” table of the sample Northwind database (the nwind.mdb file is included in the XtraReports installation). To learn more about binding a report to a data source, see Provide Data to Reports. In this tutorial, you will start with the following report layout.
To add a new style sheet, switch to the Report Explorer, right-click the Styles node or its sub-node and in the invoked context menu, select Add Style.
Then, right-click the newly added style and select Edit Styles… in the invoked context menu. In the invoked Styles Editor, specify the style name ("myOddStyle"), and save it as three separate style sheets (myStyle_Rose.repss, myStyle_Yellow.repss and myStyle_Blue.repss) in the Styles folder of the application's directory. Each style sheet has a different BackColor setting.
After saving the style sheets, click Close.
To assign a style to the XRTable in the Detail band, expand its XRControl.Styles property in the Properties window, and set the OddStyle property to myOddStyle.
Load Style Sheets
Add a parameter to the report. To do this, right-click the Parameters section in the Field List window, and choose Add Parameter in the invoked context menu.
In the invoked dialog, adjust the properties of the parameter as shown in the following image.
Click OK to exit the dialog.
Programmatically assign the style sheets to the report depending on the parameter value. To do this, handle the report’s XRControl.BeforePrint event in the following way.
using System.Windows.Forms;
using DevExpress.XtraReports.UI;
// ...

private void XtraReport1_BeforePrint(object sender, System.Drawing.Printing.PrintEventArgs e) {
    string sheetName = "";

    // Define a relative start-up path, so that styles are loaded
    // regardless of the location of the application.
    string path = Application.StartupPath + @"\..\..\Styles\";

    // Define the appropriate style sheet based on the selected parameter value.
    switch((int)OddStyle.Value) {
        case 0:
            sheetName = "myStyle_Rose.repss";
            break;
        case 1:
            sheetName = "myStyle_Yellow.repss";
            break;
        case 2:
            sheetName = "myStyle_Blue.repss";
            break;
    }

    // Set the report's StyleSheetPath property to specify the report's style sheet.
    ((XtraReport)this).StyleSheetPath = path + sheetName;
}
Tk::DateEntry - Drop down calendar widget for selecting dates.
$dateentry = $parent->DateEntry (<options>);
Tk::DateEntry is a drop down widget for selecting dates. It looks like the BrowseEntry widget with an Entry followed by an arrow button, but instead of displaying a Listbox the DateEntry displays a calendar with buttons for each date. The calendar contains buttons for browsing through the months.
When the drop down is opened, the widget will try to read the current content of the widget (the -textvariable), and display the month/year specified. If the variable is empty, or contains invalid data, then the current month is displayed. If one or two digit year is specified, the widget tries to guess the correct century by using a "100 year window".
The Entry widget has the following keyboard shortcuts:
Increase or decrease the date by one day.
Increase or decrease the date by one week.
Increase or decrease the date by one month. This would not work if the next or previous month has less days then the day currently selected.
Increase or decrease the date by one year. This would not work if the same month in the next or previous year has less days then the day currently selected.
Tk::DateEntry requires Time::Local and POSIX (strftime) (and basic Perl/Tk of course....). For using dates before 1970-01-01 either Date::Calc or Date::Pcalc is required.
For faster scanning between months the optional requirement Tk::FireButton is needed. For localized day and month names the following modules are needed:
Use alternative image for the arrow button.
Specify dateformat to use:
See also "DATE FORMATS" below.
-parsecmd

Instead of using one of the builtin dateformats, you can specify your own by supplying a subroutine for parsing (-parsecmd) and formatting (-formatcmd) of the date string. These options override -dateformat. See "DATE FORMATS" below.
-formatcmd

See -parsecmd above and "DATE FORMATS" below.
-background

Sets the background color for the Entry subwidget. Note that the dropdown calendar is not affected by this option. See also -boxbackground, -buttonbackground and -todaybackground.
-buttonbackground

Sets the background color for all buttons in the dropdown calendar.
-boxbackground

Sets the background color for the dropdown widget (not including the buttons).
-todaybackground

Sets the background color for the button representing the current date.
-font

Sets the font for all subwidgets.
-configcmd

Called for every day button in the calendar during month configuration. A hash with the keys -date, -widget, and -datewidget is passed to the callback. The -date parameter is an array reference containing day, month, and year. For empty buttons this parameter is undefined. The -widget parameter is a reference to the current Tk::DateEntry widget, and the -datewidget parameter is a reference to the current day button. A sample callback:
sub configcmd {
    my(%args) = @_;
    my($day,$month,$year) = @{$args{-date}};  # %args, not $args->: the hash was flattened from @_
    my $widget = $args{-widget};
    my $datewidget = $args{-datewidget};
    $datewidget->configure(...);
    ...
}
The callback is called after initial configuration of a day widget, that is, i.e. the label and the background color is already set. Note that day buttons keep their configuration while switching between months.
-daynames

Specifies the daynames which are used in the calendar heading. The default is [qw/S M Tu W Th F S/]. Note that the array MUST begin with the name of Sunday, even if -weekstart specifies something else than 0 (which is Sunday). See also "WEEKS" below.

It is also possible to use the special value locale to use the daynames from the current locale.
-weekstart

Use this if you don't want the weeks to start on Sundays. Specify a number between 0 (Sunday) and 6 (Saturday). See "WEEKS" below.
-headingfmt

Format for the month name heading. The month name heading is created by calling strftime(format,0,0,0,1,month,year). The default format is '%B %Y'. Note that only month and year will have sensible values; including day and/or time in the heading is possible, but it makes no sense.

If POSIX is not available then this option has no effect and the month name heading format will just be "%m/%Y".
-state

'normal', 'disabled' or 'readonly'. The latter forces the user to use the drop down; editing in the Entry subwidget is disabled.
-width

Width of the Entry subwidget, default is 10 (which fits the default date format MM/DD/YYYY).
All other options are handled by the Entry subwidget.
The default date format is MM/DD/YYYY. Since Tk::DateEntry has to parse the date to decide which month to display, you can't specify strftime formats directly (like "-dateformat => 'Date: %D. %B'").
The "builtin" date formats are:
Trailing fields that are missing will be replaced by the current date, if the year is specified by one or two digits, the widget will guess the century by using a "100 year window".
If you're not satisified with any of these formats, you might specify your own parse and format routine by using the -parsecmd and -formatcmd options.
The -parsecmd subroutine will be called whenever the pulldown is opened. The subroutine will be called with the current content of -textvariable as the only argument. It should return a three element list: (year, month, day). Any undefined elements will be replaced by default values.
The -formatcmd subroutine will be called whenever the user selects a date. It will be called with three arguments: (year, month, day). It should return a single string which will be assigned to the -textvariable.
See "EXAMPLES" below.
The default is to display the calendar the same way as the unix cal(1) command does: Weeks begin on Sunday, and the daynames are S, M, Tu, W, Th, F, and S.
However, some people prefer to start the weeks at Monday (saving both Saturday and Sunday to the weekEND...). This can be achieved by specifying -weekstart=>1.
-weekstart=>0 causes the week to start at Sunday, which is the default. If you have a very odd schedule, you could also start the week at Wednesday by specifying -weekstart=>3.....
If you don't like the "cal" headings, you might specify something else by using the -daynames option.
See "EXAMPLES" below.
$parent->DateEntry->pack;
If you want the "locale's abbreviated weekday name" you do it like this:
$parent->DateEntry(-daynames=>'locale')->pack;
which is short for:
use POSIX qw/strftime/;
my @daynames=();
foreach (0..6) {
    push @daynames, strftime("%a",0,0,0,1,1,1,$_);
}
$parent->DateEntry(-daynames=>\@daynames)->pack;
A Norwegian would probably do something like this:
my $dateentry=$parent->DateEntry
    (-weekstart=>1,
     -daynames=>[qw/Son Man Tir Ons Tor Fre Lor/],
     -parsecmd=>sub {
         my ($d,$m,$y) = ($_[0] =~ m/(\d*)\/(\d*)-(\d*)/);
         return ($y,$m,$d);
     },
     -formatcmd=>sub {
         sprintf ("%d/%d-%d",$_[2],$_[1],$_[0]);
     }
    )->pack;
Note that this -parsecmd will return (undef,undef,undef) even if one or two of the fields are present. A more sophisticated regex might be needed....
If neither Date::Calc nor Date::Pcalc are available, then Tk::DateEntry uses timelocal(), localtime() and strftime(). These functions are based on the standard unix time representation, which is the number of seconds since 1970-01-01. This means that in this case Tk::DateEntry doesn't support dates prior to 1970, and on a 32 bit computer it doesn't support dates after 2037-12-31.
Future perl versions (possibly beginning with 5.10.1) will have support for 64 bit times.
Use DateTime::Locale instead of POSIX for localized day and month names.
Tk::Entry, Tk::Button, Tk::ChooseDate.
Hans J. Helgesen <hans.helgesen@novit.no>, October 1999.
Current maintainer is Slaven Rezic <slaven@rezic.de>. | http://search.cpan.org/~srezic/Tk-DateEntry-1.42/DateEntry.pm | CC-MAIN-2016-44 | refinedweb | 1,259 | 66.44 |
24 November 2011 08:39 [Source: ICIS news]
By Felicia Loo and Bohan Loh
Ethylene prices in the region appeared to have bottomed out, rising by $10-20/tonne (€7.5-15/tonne) in the week ended 18 November to $1,020-1,050/tonne CFR (cost and freight) NE (northeast) Asia.
Downstream styrene butadiene rubber (SBR) prices in China have rebounded as well.
Average spot prices of non-oil grade 1502 SBR rebounded $50/tonne week on week to $2,725/tonne (€2,017/tonne) CIF China on 23 November, after shedding 45% since early August, according to ICIS.
“South Korean butadiene demand is getting better,” said a trader.
BD prices were assessed at $1,700-1,750/tonne CFR NE Asia in the week ended 23 November, up by 9.5% from the previous week. Until the rebound in prices this week, BD values had been falling for four months.
The rebound in BD prices led to expectations that SBR values will increase further, market sources said. BD is a major feedstock in the production of SBR, making up more than 70% of SBR’s composition and cost.
“There are mixed signals from the markets.”
However, economic growth in China, the world’s second biggest economy, is forecast to decelerate to 9.1% in 2011 from 10.4% last year, the World Bank said.
On polymers, it is atypical of the markets to keep inventory levels low at the end of the year, traders said.
“The (polymers) buyers are not stocking up and they only buy on a need-to basis,” said a polymers trader.
A major polymer importer based in Shanghai said it is cutting its prices despite squeezed margins to offload cargoes in view of weak demand and financial obligations ahead of the Lunar New Year holiday in January next year.
Traders in
In the xylenes market, demand for isomer-grade mixed xylenes (MX) is tapering off due to a shrinking spread between MX and paraxlyene (PX) at $215/tonne. The spread has narrowed from over $400/tonne margins two months ago but remains generally sufficient for PX makers to break even and cover operational costs of around $180-200/tonne.
Nonetheless, supply of MX is expected to remain tight because of limited plant expansions next year.
Demand for PX is similarly tapering off after months of tight supply due to squeezed PX-PTA margins. Numerous downstream PTA units are either shutting down or reducing operating rates in December because of negative margins.
An influx of deep-sea cargoes from EU of around 15,000 tonnes and the
Good news was that the bearishness seemed to have fizzled out as reflected in a few spot trades this week, they added.
“The direction is not clear at this stage. The crackers are also running at lower rates and spot buying is only from
($1 = €0.75)
Additional reporting by Helen Yan, Yu Guo and Peh Soo Hwee. | http://www.icis.com/Articles/2011/11/24/9511108/potential-for-rebound-in-asia-petchems-demand-still-uncertain.html | CC-MAIN-2013-20 | refinedweb | 493 | 59.74 |
QDialog malfunction in modality mode
Hi everyone!
I want to create a child widget with option to block parent widget.
Many advise to create QDialog with exec_ method and I tried that.
I even tried to set setWindowModality(QtCore.Qt.ApplicationModal)
As result the child widget is always on top of parent, but if I click on parent the child loses his focus. And the parent widget still can be moved on screen or even minimized in trey.
What's the point of such modality if it's not block parent absolutely?
Qt 4.8
@Buccaneer
Hi
When you create your dialog, do you give the parent in the constructor ?
Dialog mydia( Parent );
Also, did you call QDialog constructor from your class?
MyDialog::MyDialog(QWidget *parent) :..
QDialog(parent),
So the flags get set ?
@mrjj
I make my project in python with PyQt
No I didn't set parent.
When My QDialog class starts - parent is set to None
I just ported the project from ubuntu to windows and modality works perfectly (((
Looks like this is my problem in ubuntu + PyQt4
I'm so upset
@Buccaneer
Hi, sorry, though it was c++.
Never used Py so Have no idea.
I assume, your dialog is something like :
class DateDialog(QDialog):
def init(self, parent = None):
super(DateDialog, self).init(parent)
which as far as I understand will call QDialog Constructor ?
update: Ah, works in win.
@Buccaneer
Well seems to be the same with C++.
Looks Modal but can still minimize Parent...
So I guess it how Ubuntu does "modal"
Prevents access to any control on the parent but not minimize/maxi.
Strange.
Update:
Seems like other apps have this too. I guess its up to the windows
manager how it works. | https://forum.qt.io/topic/58295/qdialog-malfunction-in-modality-mode | CC-MAIN-2018-39 | refinedweb | 289 | 66.74 |
Anyone that has done a web search is familiar with autocomplete behavior. As you type characters, the search engine suggests several - potentially hilarious - complete queries. While autocomplete controls are common, actually developing them has a number of non-trivial technical challenges.
Where do you get the options from? Do you specify them on the client or load them server-side? Do you load all the options at once or as the user types?
Kendo UI's AutoComplete widget gives you the hooks to make all these implementations possible. In this article we'll build a few Java-backed AutoComplete widgets to explore the best way to structure the autocomplete controls you need to build. We'll start with a small hardcoded list, and scale all the way up a server filtered list with a million options.
Note: The finished examples are available in this GitHub repository if you'd like to follow along.
Let's start with the basics. The simplest Kendo UI AutoComplete you can build takes a hardcoded list of options in JavaScript directly.
<input id="autocomplete">
<script>
$( "#autocomplete" ).kendoAutoComplete({
dataSource: [ "One", "Two", "Three" ]
});
</script>
If you prefer the (excuse the pun) autocompletion and validation niceties that Kendo UI's JSP wrappers provide, you can use them to hardcode options as well.
<kendo:autoComplete
<kendo:dataSource</kendo:dataSource>
</kendo:autoComplete>
Note: When using the JSP wrappers, make sure you have Kendo UI's jar on your classpath and have the taglib imported correctly. If you're confused by this, checkout our getting started documentation that walks you through the setup.
The rest of this example will use the JSP wrapper syntax, but keep in mind that everything can be done in JavaScript directly if that's your preference. To see the JavaScript syntax to use, refer to AutoComplete's API documentation.
While hardcoding the options works well for very simple cases, it's not practical or maintainable for large lists.
To show a bigger and more realistic use case, let's build an autocomplete for selecting countries (United States, Canada, etc). Since there are ~200 countries, hardcoding the full list in a single JavaScript or JSP file would be a mess.
To implement this, we'll start by altering our data source. Instead of hardcoding options, we'll use the read option to specify a server-side endpoint to retrieve the data from.
read
<kendo:autoComplete
<kendo:dataSource>
<kendo:dataSource-transport
</kendo:dataSource>
</kendo:autoComplete>
Here, once the user types a single character, Kendo UI performs an AJAX request to /App/countries and expects a JSON formatted array of strings to be returned.
/App/countries
Therefore, the next step is adding something to listen at /App/countries. For simplicity, we'll implement this with a basic Servlet.
Note: You can absolutely implement this with a more robust MVC library such as Struts or Spring. I'm sticking with a Servlet here since it's universal.
First, we'll add the following to the app's deployment descriptor (web.xml).
<servlet>
<servlet-name>CountryServlet</servlet-name>
<servlet-class>com.countries.CountryServlet</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>CountryServlet</servlet-name>
<url-pattern>/countries</url-pattern>
</servlet-mapping>
This tell our app to invoke the Servlet class com.countries.CountryServlet when a request matching the URL pattern /countries is received. Next we have to create com.countries.CountryServlet, which is shown below.
com.countries.CountryServlet
/countries
package com.countries;
import com.google.gson.G CountryServlet extends HttpServlet {
public static final String[] countries = new String[]{
"Afghanistan",
"Albania",
...
"Zimbabwe"
};
@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
response.setContentType("application/json");
Gson gson = new Gson();
try (PrintWriter out = response.getWriter()) {
out.print(gson.toJson(countries));
}
}
}
A couple of things to note here:
application/json
String[]
/App
And that's it. With this approach the autocomplete does not load any data until the user starts typing. Once the user types a single character, the widget makes a GET AJAX request to retrieve all countries from /App/countries. When the AJAX call completes, the widget filters through the options client side as the user types. This workflow is shown below.
The key here is: although the data is returned by the server, the filtering is done on the client. Since there are only ~200 options here this isn't a problem - JavaScript can loop over 200 options quickly.
But what if we had more? Filtering through hundreds of options on the client is fine, but what if there were thousands, tens of thousands, or even - as we're about to see - MILLIONS.
For our next example we're going to go big by building an autocomplete for the top million sites on the web. Where are we going to get the data from? Alexa, a web metrics site, conveniently offers a CSV of the top million sites on the web that is updated daily.
We'll start by placing this CSV file in a data directory in our application's WebContent.
data
WebContent
Next, we'll setup the same structure we used for the domain autocomplete. First a kendo:autoComplete tag that invokes /App/sites.
kendo:autoComplete
/App/sites
<kendo:autoComplete
<kendo:dataSource
<kendo:dataSource-transport
</kendo:dataSource>
</kendo:autoComplete>
Then a Servlet to listen at /App/sites.
<servlet>
<servlet-name>SiteServlet</servlet-name>
<servlet-class>com.sites.SiteServlet</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>SiteServlet</servlet-name>
<url-pattern>/sites</url-pattern>
</servlet-mapping>
This Servlet is more complex than our country Servlet; it must load data from the CSV file instead of returning a hardcoded list. To clean up our code, we'll abstract the CSV parsing into a separate SiteService class. With this difference in mind, the Servlet looks pretty similar to our previous example. It returns a JSON serialized array of Strings to show in the autocomplete.
SiteService
package com.sites;
import com.google.gson.Gson;;
public class SiteServlet extends HttpServlet {
@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
List<String> sites = SiteService.load();
response.setContentType("application/json");
Gson gson = new Gson();
try (PrintWriter out = response.getWriter()) {
out.print(gson.toJson(sites));
}
}
@Override
public void init() throws ServletException {
String path = this.getServletContext().getRealPath("data/top-1m.csv");
SiteService.build(path);
}
}
The init() call gets a reference to our CSV file and calls SiteService.build() to parse it. The implementation of SiteService is shown below. Don't worry about the details here; we're concerned with building an autocomplete and not necessarily how to parse a CSV file in Java. Just know that at the end of build(), the sites variable contains a List full of one million sites as strings.
init()
SiteService.build()
build()
sites
List
package com.sites;
import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
public class SiteService {
public static List<String> sites = new ArrayList();
public static void build(String path) {
BufferedReader reader = null;
String line;
try {
reader = new BufferedReader(new FileReader(path));
while ((line = reader.readLine()) != null) {
String[] strings = line.split(",");
sites.add(strings[1] + " (Rank " + strings[0] + ")");
}
} catch (FileNotFoundException e) {
throw new RuntimeException("Could not find file", e);
} catch (IOException e) {
throw new RuntimeException("An IO error occurred reading the file.", e);
} finally {
if (reader != null) {
try {
reader.close();
} catch (IOException e) {
throw new RuntimeException("An I/O error occurred closing the file", e);
}
}
}
}
public static List<String> load() {
return sites;
}
}
This works, but there's one major problem with this example. While this Servlet does return a JSON encoded array of one million sites, it instantly seizes up the browser that requested it. As it turns out, despite its modern processing power, asking JavaScript to parse through one million options is still a bit much.
When the processing becomes this intensive, it makes sense to move the filtering from the client to the server. And Kendo UI makes it easy to do just that.
To move filtering to the server, first we must set the data source's serverFiltering option to true. This tells the widget to treat the data returned from the server as filtered.
serverFiltering
true
<kendo:autoComplete
<kendo:dataSource
<kendo:dataSource-transport
</kendo:dataSource-transport>
</kendo:dataSource>
</kendo:autoComplete>
Now, the widget will send a whole bunch of information to the server as the user types. For example the following request parameters are sent after the user types an "a".
These filters tell the server how to filter data - in this case, to get items that start with an "a" in a case insensitive manner. This information is great for more complex widgets - such as the a grid - but it's overkill for a simple widget like autocomplete. All the server needs to know is what the user typed.
Fortunately, we can configure how data is sent to the server using the data source's transport.parameterMap option. Simply put, the parameter map is a JavaScript function that converts Kendo UI's formats into a format more suitable to the server receiving the request. The parameter map we'll use is shown below.
<kendo:autoComplete
>
Note how options.filter.filters[ 0 ].value corresponds to the query string parameter that contained "a" in our previous example.
options.filter.filters[ 0 ].value
This function tells Kendo UI: instead of sending several complex filters, just send a filter parameter set to the string that the user typed.
filter
Now we need to change our Servlet to retrieve this parameter and pass it on to our service. We'll by adding the code below to the Servlet.
String filter = request.getParameter("filter");
List<String> sites = SiteService.load(filter);
Then, we'll change the service's load() method to actually do the filtering.
load()
public static List<String> load(String startswith) {
List<String> matches = new ArrayList();
startswith = startswith.toLowerCase();
for (String country : sites) {
if (country.toLowerCase().startsWith(startswith)) {
matches.add(country);
}
}
return matches;
}
Note: While this example gets its data from a CSV file, you can easily adapt the code to work with any database or ORM setup. It's easy to build a WHERE clause when you only have one filter string to use.
We're almost there. This example now loads in modern browsers, but it's extremely sluggish and still crashes older browsers. Why?
A single character can still match tens of thousands of results with such a large dataset. And even though JavaScript is no longer filtering these requests, it still has to loop over them and add an <li> for each into the autocomplete's menu.
<li>
Therefore we'll make one final adjustment: setting the AutoComplete widget's minLength option to 3.
minLength
3
<kendo:autoComplete
>
This requires the user to type three characters before seeing results. Because the maximum number of matches is now in the hundreds rather than the tens of thousands, our autocomplete functions fine in all browsers.
While forcing the user to type three characters before getting feedback is not ideal, it's a necessary optimization when dealing with enormous data sets such as this.
As another option, you can consider returning a subset of the data for shorter queries. For example consider Google. The search engine has an enormous number of options it could show you when you type "a", but it limits the results to the top four or five.
Let's add this behavior to our example. The following alternation to the load() method returns the first 100 matches if the filter is less than three characters.
public static List<String> load(String startswith) {
List<String> matches = new ArrayList();
startswith = startswith.toLowerCase();
for (String country : sites) {
if (country.toLowerCase().startsWith(startswith)) {
matches.add(country);
}
if (startswith.length() < 3 && matches.size() == 100) {
return matches.subList(0, 100);
}
}
return matches;
}
This approach works well if you have some means of ranking the options. In our case, since the sites are ranked from 1 to one million, this is perfect; the user is more likely to type a site towards the top of the rankings anyways.
After all of this, we finally have a functioning autocomplete with our epic data set.
You can checkout the full source of these examples at. The code runs surprisingly quickly for such a large data set. Even if you're not interested in the implementation, an autocomplete of the top million sites is a lot of fun to filter through.
In this article we built several increasingly large autocomplete controls using Kendo UI's AutoComplete widget. We started with a simple hardcoded list, moved onto a server driven list, then finally switched the filtering itself to the server.
We also saw how easy it was to integrate Kendo UI's JSP wrappers and a Java backend into the autocomplete process.
Do you have any other Java + Kendo UI integrations you'd like us to discuss? Let us know in the comments and we'll see what we can do.
TJ VanToll is a frontend developer, author, and a Principal Developer Advocate for Progress. TJ has over a decade of web development experience, including a few years working on the jQuery and NativeScript teams. Nowadays he helps web developers build awesome UIs with KendoRe | https://www.telerik.com/blogs/building-a-full-stack-autocomplete-widget-with-java-and-kendo-ui | CC-MAIN-2020-24 | refinedweb | 2,196 | 57.37 |
This is a mobile version, full one is here.
Yegor Bugayenko
13 May 2014
Why NULL is Bad?
A simple example of
NULL usage in Java:
public Employee getByName(String name) { int id = database.find(name); if (id == 0) { return null; } return new Employee(id); }
What is wrong with this method?
It may return
NULL instead of an object—that's what is wrong.
NULL is a terrible practice in an object-oriented paradigm and should be avoided at
all costs.
There have been a number of opinions about this published already, including
Null References, The Billion Dollar Mistake
presentation by Tony Hoare and the entire
Object Thinking
book by David West.
Here, I'll try to summarize all the arguments and show examples of
how
NULL usage can be avoided and replaced with proper object-oriented constructs.
Basically, there are two possible alternatives to
NULL.
The first one is Null Object design pattern (the best way is to make it a constant):
public Employee getByName(String name) { int id = database.find(name); if (id == 0) { return Employee.NOBODY; } return Employee(id); }
The second possible alternative is to fail fast by throwing an Exception when you can't return an object:
public Employee getByName(String name) { int id = database.find(name); if (id == 0) { throw new EmployeeNotFoundException(name); } return Employee(id); }
Now, let's see the arguments against
NULL.
Besides Tony Hoare's presentation and David West's book mentioned above, I read these publications before writing this post: Clean Code by Robert Martin, Code Complete by Steve McConnell, Say "No" to "Null" by John Sonmez, Is returning null bad design? discussion at StackOverflow.
Ad-hoc Error Handling
Every time you get an object as an input you must
check whether it is
NULL or a valid object reference.
If you forget to check, a
NullPointerException (NPE)
may break execution in runtime. Thus, your logic becomes
polluted with multiple checks and if/then/else forks:
// this is a terrible design, don't reuse Employee employee = dept.getByName("Jeffrey"); if (employee == null) { System.out.println("can't find an employee"); System.exit(-1); } else { employee.transferTo(dept2); }
This is how exceptional situations are supposed to be handled in C and other imperative procedural languages. OOP introduced exception handling primarily to get rid of these ad-hoc error handling blocks. In OOP, we let exceptions bubble up until they reach an application-wide error handler and our code becomes much cleaner and shorter:
dept.getByName("Jeffrey").transferTo(dept2);
Consider
NULL references an inheritance of procedural programming,
and use 1) Null Objects or 2) Exceptions instead.
Ambiguous Semantic
In order to explicitly convey its meaning, the function
getByName() has to be named
getByNameOrNullIfNotFound().
The same should happen with every function that returns an
object or
NULL. Otherwise, ambiguity is inevitable for a code reader.
Thus, to keep semantic unambiguous, you should give longer names to functions.
To get rid of this ambiguity, always return a real object, a null object or throw an exception.
Some may argue that we sometimes have to return
NULL,
for the sake of performance. For example, method
get() of
interface
Map
in Java returns
NULL when there is no such item in the map:
Employee employee = employees.get("Jeffrey"); if (employee == null) { throw new EmployeeNotFoundException(); } return employee;
This code searches the map only once due to the usage of
NULL in
Map.
If we would refactor
Map so that its method
get() will throw
an exception if nothing is found, our code will look like this:
if (!employees.containsKey("Jeffrey")) { // first search throw new EmployeeNotFoundException(); } return employees.get("Jeffrey"); // second search
Obviously, this is method is twice as slow as the first one. What to do?
The
Map interface (no offense to its authors) has a design flaw.
Its method
get() should have been returning an
Iterator
so that our code would look like:
Iterator found = Map.search("Jeffrey"); if (!found.hasNext()) { throw new EmployeeNotFoundException(); } return found.next();
BTW, that is exactly how C++ STL map::find() method is designed.
Computer Thinking vs. Object Thinking
Statement
if (employee == null) is understood by someone who
knows that an object in Java is a pointer to a data structure and
that
NULL is a pointer to nothing (
0x00000000, in Intel x86 processors).
However, if you start thinking as an object, this statement makes much less sense. This is how our code looks from an object point of view:
- Hello, is it a software department? - Yes. - Let me talk to your employee "Jeffrey" please. - Hold the line please... - Hello. - Are you NULL?
The last question in this conversation sounds weird, doesn't it?
Instead, if they hang up the phone after our request to speak to Jeffrey, that causes a problem for us (Exception). At that point, we try to call again or inform our supervisor that we can't reach Jeffrey and complete a bigger transaction.
Alternatively, they may let us speak to another person, who is not Jeffrey, but who can help with most of our questions or refuse to help if we need something "Jeffrey specific" (Null Object).
Slow Failing
Instead of failing fast, the code above attempts to die slowly, killing others on its way. Instead of letting everyone know that something went wrong and that an exception handling should start immediately, it is hiding this failure from its client.
This argument is close to the "ad-hoc error handling" discussed above.
It is a good practice to make your code as fragile as possible, letting it break when necessary.
Make your methods extremely demanding as to the data they manipulate. Let them complain by throwing exceptions, if the provided data provided is not sufficient or simply doesn't fit with the main usage scenario of the method.
Otherwise, return a Null Object, that exposes some common behavior and throws exceptions on all other calls:
public Employee getByName(String name) { int id = database.find(name); Employee employee; if (id == 0) { employee = new Employee() { @Override public String name() { return "anonymous"; } @Override public void transferTo(Department dept) { throw new AnonymousEmployeeException( "I can't be transferred, I'm anonymous" ); } }; } else { employee = Employee(id); } return employee; }
Mutable and Incomplete Objects
In general, it is highly recommended to design objects with immutability in mind. This means that an object gets all necessary knowledge during its instantiating and never changes its state during the entire life-cycle.
Very often,
NULL values are used in lazy loading,
to make objects incomplete and mutable. For example:
public class Department { private Employee found = null; public synchronized Employee manager() { if (this.found == null) { this.found = new Employee("Jeffrey"); } return this.found; } }
This technology, although widely used, is an anti-pattern in
OOP.
Mostly because it makes an object responsible for performance problems
of the computational platform, which is something an
Employee object should not be aware of.
Instead of managing a state and exposing its business-relevant behavior, an object has to take care of the caching of its own results—this is what lazy loading is about.
Caching is not something an employee does in the office, does he?
The solution? Don't use lazy loading in such a primitive way, as in the example above. Instead, move this caching problem to another layer of your application.
For example, in Java, you can use aspect-oriented programming aspects.
For example, jcabi-aspects has
@Cacheable
annotation that caches the value returned by a method:
import com.jcabi.aspects.Cacheable; public class Department { @Cacheable(forever = true) public Employee manager() { return new Employee("Jacky Brown"); } }
I hope this analysis was convincing enough that you will
stop
NULL-ing your code :) | http://www.yegor256.com/2014/05/13/why-null-is-bad.amp.html | CC-MAIN-2018-09 | refinedweb | 1,271 | 55.13 |
We had a recent inquiry on the mailing list about LingPipe’s complete-link clustering implementation. For the same inputs, it was producing what looked like different results. Well, it actually was producing different results. Different results that were structurally equivalent, but not Java equal.
It turns out that the culprit was our old friends
Object.equals(Object),
Object.hashCode() and collection implementations inside
java.util.
Perceived Non-Determinism Sample
Here’s the simplest code I could think of that illustrates what appears to be a problem with non-deterministic behavior:
import java.util.HashSet; public class HashOrderDemo { public static void main(String[] args) { for (int i = 0; i < 10; ++i) { HashSet<Foo> s = new HashSet<Foo>(); for (int k = 0; k < 10; ++k) s.add(new Foo(k)); System.out.println(s); } } static class Foo extends Object { final int mK; Foo(int k) { mK = k; } public String toString() { return Integer.toString(mK); } } }
First note that the static nested class
Foo extends
java.lang.Object and hence inherits
Object‘s implementation of
equals(Object) and
hashCode() [links are to the JDK 1.5 Javadoc]. As a result, two
Foo objects holding the same integer are not equal unless they are the same object.
The
main() runs a loop adding 10 instances of
Foo to a
java.util.HashSet then printing them out. The result is different orderings:
c:\carp\temp>javac HashOrderDemo.java c:\carp\temp>java HashOrderDemo [5, 4, 9, 6, 8, 7, 0, 2, 1, 3] [9, 7, 6, 5, 3, 0, 8, 2, 1, 4] [4, 6, 1, 0, 9, 2, 5, 8, 7, 3] [0, 2, 1, 7, 8, 3, 6, 5, 9, 4] [0, 2, 1, 4, 7, 8, 6, 9, 5, 3] [2, 1, 3, 9, 4, 6, 7, 0, 8, 5] [8, 7, 4, 0, 2, 9, 5, 6, 1, 3] [6, 1, 9, 7, 2, 4, 3, 0, 8, 5] [5, 2, 0, 9, 6, 4, 7, 3, 1, 8] [9, 2, 0, 3, 8, 7, 5, 4, 6, 1]
Of course, given our definition of
Foo, the sets aren’t actually equal, because the members aren’t actually equal. The members just appear to be equal because they’re structurally equivalent in terms of their content.
If you want instances of Java classes things to behave as if their equal, you have to override the default implementations
equals(Object) and
hashCode() accordingly.
But what if the Objects are Equal?
So what if we look at a case where the objects and hence sets really are equal? Alas, we can still get what looks to be non-equal outputs.
Consider the following program:
import java.util.HashSet; public class HashOrderDemo2 { public static void main(String[] args) { for (int i = 1; i < 200; i *= 3) { HashSet<String> s = new HashSet<String>(i); for (int k = 0; k < 10; ++k) s.add(Integer.toString(k)); System.out.println(s); } } }
which if we compile and run, gives us
c:\carp\temp>javac HashOrderDemo2.java c:\carp\temp>java HashOrderDemo2 [3, 5, 7, 2, 0, 9, 4, 8, 1, 6] [3, 5, 7, 2, 0, 9, 4, 8, 1, 6] [3, 5, 7, 2, 0, 9, 4, 8, 6, 1] [3, 7, 2, 0, 6, 1, 5, 9, 4, 8] [0, 9, 3, 6, 1, 5, 4, 7, 2, 8]
Here the print order is different because the iteration order is different in different hash sets. With different sizes, the underlying array of buckets is a different size, hence after modular arithmetic, elements wind up ordered differently.
But the sets produced here are actually equal in the Java sense that
set1.equals(set2) would return
true. They just don’t look equal because the print and iteration order is different.
Bug or Feature?
Let’s just call it a quirk. The doc for hash sets and other collections is pretty explicit. That doesn’t mean it’s not confusing in many applications.
May 26, 2010 at 6:28 pm |
Any implementation of set is not expected to return elements in any particular order [1]. Its neither a bug or a feature, just the contract of the interface.
Depending upon implementation specific feature would lead to unnecessary bugs. It’s best to avoid such things.
[1]
May 27, 2010 at 2:17 pm |
That’s what I meant by it not being a bug. It’s well documented what it’s supposed to do. It’s just that it continues to trip people up.
May 26, 2010 at 6:30 pm |
Forgot to add,
OTOH, you have LinkedHashSet [1], which is more in the line of what you might be looking for.
[1]
May 27, 2010 at 2:19 pm |
That’ll use a linked list to return the results in the order they were added. The problem here for clustering as I point out in the next post is that there’s an underlying sorted priority queue that needs to pay attention to more than just the scores to ensure determinism.
May 27, 2010 at 7:47 am |
Actually, rather than use HashSet, use TreeSet. This data-structure implicitly sorts your elements, provided that you implement the Comparable interface.
May 27, 2010 at 2:22 pm |
Right! That’s what I’ve been suggesting. As is, my priority queues are based on tree sets. As is, they’re sorting pairs of clusters pending merging based on their distances. Two clusters can have the same distance, so something needs to be done to impose a sort order on pairs of clusters at the same distance (like recency of creation). I could roll that into the ordering and create a deterministic version of the algorithm.
Given that the algorithm’s not deterministic, I’m just not that worried about doing this for our hierarchical clusterers.
May 27, 2010 at 11:14 am |
I hit the same problem with the Stanford parser libraries one time. Took me about a day to confirm I wasn’t going mad (and rule out things like file system corruption), another day to confirm my own code was fine, and then another day to hazard a guess about where the problem was (which was right).
I’ve used it as a case study in teaching ever since… | https://lingpipe-blog.com/2010/05/25/non-determinism-java-util-hashset-print-and-iterato-order/ | CC-MAIN-2019-30 | refinedweb | 1,043 | 71.14 |
Redux Reducers Explained - How to Use Them Correctly
In this video you will take a deeper look at reducers in Redux, how to write better code in them, and how to combine them. Let's jump right into it.
As you can see, we have a reducer with a single action inside.

The first thing that I want to do is improve the structure of our reducer. Right now our whole reducer state is just an array of users. That may be fine, but normally we want to store more than one property inside a reducer. As an example, we can move the username that we are typing into the reducer.
```javascript
const initialState = {
  users: [],
  username: '',
};

const reducer = (state = initialState, action) => {};
```
As you can see, instead of setting the state to an empty array, we created an initialState object. It serves two purposes: first of all, it is our default state, and secondly, we can see at a single glance all the properties that are planned in our reducer.

Now let's move the username from the component into Redux.
```javascript
handleUser = (e) => {
  this.props.dispatch({ type: "CHANGE_USERNAME", payload: e.target.value });
};
```
Now let's look at our actions. In addition to ADD_USER, we want to handle the change-username action.
```javascript
const reducer = (state = initialState, action) => {
  if (action.type === 'ADD_USER') {
    return {
      ...state,
      username: '',
      users: [...state.users, state.username],
    };
  } else if (action.type === 'CHANGE_USERNAME') {
    return { ...state, username: action.payload };
  }
  return state;
};
```
The interesting part here is that the username now always lives inside the reducer, so we don't need to pass it in the ADD_USER action anymore. We also need to add username to our mapStateToProps.
There is also a super important point: every reducer update must return a new object. We can't modify the state directly; we must always return a new object. This is because of how Redux compares the previous and current state to see if something changed. We must also write reducers as pure functions, which means there should be no side effects like fetching data, accessing DOM properties or anything else: only state updates based on the previous state and the action.
const mapStateToProps = (state) => {
  return {
    users: state.users,
    username: state.username,
  };
};
As you can see, everything works as before, but now we have two actions in the reducer and a bigger state.
Now we can improve our reducers even more. As you can see, if/else conditions are quite verbose, and we normally write a switch statement instead.
const reducer = (state = initialState, action) => {
  switch (action.type) {
    case "ADD_USER":
      return {
        ...state,
        username: "",
        users: [...state.users, state.username],
      };
    case "CHANGE_USERNAME":
      return {
        ...state,
        username: action.payload,
      };
    default:
      return state;
  }
};
As you can see, this code is much easier to read.
Now we are coming to the point where we actually want more than one reducer. Just imagine that we have Redux state for all our pages, for example login, register, feed, single post and much more. As you can imagine, it would be a mess if we wrote all those actions and properties in a single reducer.
The point is that createStore allows only a single reducer, but there is a way to combine our reducers. Let's say that we want to create one more page with posts. We can create one more reducer and pack all the properties of that page there, and rename our reducers.js to users.js, for example.
src/store/reducers/posts.js
const initialState = {
  posts: [],
};

const reducer = (state = initialState, action) => {
  console.log("posts reducer", state, action);
  return state;
};

export default reducer;
As you can see, we created an empty reducer with a posts property. We can of course change the state here depending on the action. We also moved all the files into an additional reducers folder.
But the question now is how we can use several reducers simultaneously. For this we need the combineReducers function.
src/store/reducers/index.js
import { combineReducers } from "redux";
import posts from "./posts";
import users from "./users";

export default combineReducers({
  posts,
  users,
});
src/index.js
import reducer from "./store/reducers";
...
const store = createStore(reducer, composeWithDevTools());
As you can see, in the main reducer that we want to inject into createStore, we use the combineReducers function from Redux. It simply builds a state object whose properties are named after the keys we provide.
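To demystify what combineReducers does, here is a minimal, simplified reimplementation (a sketch for illustration only; the real Redux version adds warnings and sanity checks):

```javascript
// Minimal sketch of combineReducers: the root reducer calls every
// slice reducer with its own slice of state and the same action,
// then reassembles the results under the provided keys.
function combineReducers(reducers) {
  return function rootReducer(state = {}, action) {
    const nextState = {};
    for (const key of Object.keys(reducers)) {
      nextState[key] = reducers[key](state[key], action);
    }
    return nextState;
  };
}

// Two tiny slice reducers, mirroring users/posts from the article.
const users = (state = { users: [], username: "" }, action) =>
  action.type === "CHANGE_USERNAME"
    ? { ...state, username: action.payload }
    : state;
const posts = (state = { posts: [] }, action) => state;

const rootReducer = combineReducers({ users, posts });
const state = rootReducer(undefined, {
  type: "CHANGE_USERNAME",
  payload: "jack",
});
console.log(state.users.username); // "jack"
```

Note how each slice reducer only ever sees its own slice of the state, which is why the mapStateToProps paths below gain an extra level (state.users.username).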
If we check in the browser, our page is broken, but we can see that our state now looks correct. We have two different namespaces: users and posts.
Now let's adjust our mapStateToProps because we changed our keys again.
const mapStateToProps = (state) => {
  return {
    users: state.users.users,
    username: state.users.username,
  };
};
As you can see now everything is working as expected.
But here is one more important thing to remember. As you can see, we wrote a console.log inside the posts reducer, and we can see in the browser that we get this console log on every action that is dispatched. This is correct and happens because all our actions are global. It doesn't matter that we created an additional reducer: all actions are global, and any of them can change the state of any reducer. This is important to remember.
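A tiny runnable sketch of that behavior (a hand-rolled dispatch for illustration, not the real store):

```javascript
// Every dispatched action is passed to every slice reducer,
// even the ones that ignore it.
const calls = [];

const users = (state = { username: "" }, action) => {
  calls.push("users:" + action.type);
  return action.type === "CHANGE_USERNAME"
    ? { ...state, username: action.payload }
    : state;
};

const posts = (state = { posts: [] }, action) => {
  calls.push("posts:" + action.type);
  return state; // ignores the action, but is still invoked
};

// What the store's dispatch does, in essence:
let state = {
  users: users(undefined, { type: "@@INIT" }),
  posts: posts(undefined, { type: "@@INIT" }),
};
const action = { type: "CHANGE_USERNAME", payload: "jack" };
state = {
  users: users(state.users, action),
  posts: posts(state.posts, action),
};

console.log(calls.includes("posts:CHANGE_USERNAME")); // true
```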
Call to action
So here are important points to remember.
- Defining initialState in a reducer always brings clarity.
- A switch in reducers looks much better than if/else.
- combineReducers is an amazing way to scale your project indefinitely.
Let Wing run a script when the user closes/exits it.
Hi,
How can I have Wing run a script when the user closes/exits it?
Regards.
You can write an extension script as described in Wing's scripting documentation and connect to a signal that is emitted when Wing quits. This isn't currently exposed in the API (I'll see if we can add this, and thus am marking this ticket as a feature-request) but you can reach through the API to do it, something like this:
import wingapi

def _quitting():
    # Do something here
    pass

wingapi.gApplication.fSingletons.fGuiMgr.connect('about-to-quit', _quitting)
There is also a signal called 'quit' that can be used instead. The one above is emitted when the mainloop is still running while 'quit' is emitted after the mainloop exits. In your case either is probably fine.
Thanks a lot. Based on your instructions above, I wrote the following script in ~/.wingpro7/scripts, named preferences-strip.py, and it does the trick:
import wingapi

def _quitting():
    # Do something here
    import os
    import re

    flag = False
    preferences = os.environ['HOME'] + '/.wingpro7/preferences'
    lines = []
    with open(preferences, 'r') as f:
        for line in f:
            line = line.strip('\n')
            if re.match(r'^main[.](update-history|last-prefs-page)', line):
                flag = True
            else:
                if flag and not re.match(r'[ ]', line):
                    flag = False
                if not flag:
                    lines.append(line)
    with open(preferences, "w") as f:
        f.write('\n'.join(str(item) for item in lines))
        f.write('\n')
    pass

wingapi.gApplication.fSingletons.fGuiMgr.connect('about-to-quit', _quitting)
But I still cannot figure out why you use the pass command in the example code. Any hints?
Regards.
"pass" does nothing. It's used in Python as a place-holder so you can write something that's syntactically correct but doesn't have actual code in it yet. You can remove it from your code above...
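A generic illustration (not Wing-specific): pass satisfies Python's requirement that a block contain at least one statement, so you can stub out bodies you plan to fill in later:

```python
# "pass" is a no-op placeholder statement.
def not_implemented_yet():
    pass  # body to be written later

class Placeholder:
    pass  # empty class for now

not_implemented_yet()  # runs fine and does nothing
print(not_implemented_yet())  # a function with no return gives None
```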
If you are talking about the Debugger, it should work to use the Debug > Processes > Detach from Process menu item and then the debug process will continue and not be terminated when Wing quits.
No, I mean the general case. When I exit/close wing, this event can trigger some script to do a post-processing job for me.
Asked: 2020-06-02 18:08:39 -0500
Seen: 166 times
Last updated: Jun 03 '20
The Silverlight ListBox is much more than you may think. When most people think of a ListBox they typically think of just a simple control that displays rows of text. However, the Silverlight ListBox is one of the most flexible controls you will find. I tend to think of the ListBox as similar to the ASP.NET Repeater control as it too is very flexible. In this article, I will show you six different ways to display data in a Silverlight ListBox.
For all the tips in this article I will use a simple Product class (Listing 1) with six properties: ProductId, ProductName, ProductType, Price, Image and IsOnSpecial. This class holds the information about a specific product, such as its product type, its price, an image for the product and whether or not that product is on special.
As you could guess, you will most likely have more than one product to display in your ListBox. So, you will need to create a Product collection class (Listing 2) to hold many Product objects. In this article you will see how to create a mock set of product objects and fill in a property called DataCollection in the Products class with this mock set of products.
Create an Instance of Products in Your XAML
Now that you have some data in your Products class, you need to create an instance of this class so you can display that data in a ListBox. You do not need to write code to create an instance of this class; you can have your XAML create the instance. You do this by first adding a namespace to your <UserControl> element that references the assembly in which your Products class is located.
xmlns:data="clr-namespace: Silverlight_ListBox"
Next, you add a <UserControl.Resources> section and create the instance of the products class from the “Silverlight_ListBox” namespace referenced by the local XAML namespace that you called “data”.
<UserControl.Resources> <data:Products x: </UserControl.Resources>
The XAML above creates an instance of the Products class and assigns it the key name of “productCollection”. To use this resource as the source of data to a ListBox, you simply set the ItemsSource as shown in the snippet below:
<ListBox x:
By binding to the static resource productCollection you can access the property called DataCollection which you remember holds the collection of our mock product objects. Ok, now that you know what the sample data looks like and how to bind it to a ListBox, let’s now take a look at various methods of displaying data using the ListBox control in Silverlight.
Tip 1: Display More Than One Column
A XAML ListBox can display two (or more) columns just as easily as one column. In the example shown in Figure 1 you see two columns; one for the product name and one for the product price. This type of ListBox is very easy to create with just a little bit of XAML code shown in Listing 3.
As described previously, you have already created a resources element that references the products collection of your user control. Now you can create the template that will be used to display each product in the ListBox control. This is done by adding the ListBox.ItemTemplate element to the XAML for the ListBox control, and then you add a DataTemplate element nested within the ListBox control ItemTemplate.
The DataTemplate is the most important element in a ListBox as it tells Silverlight how to present each row of data. This template starts with a StackPanel that “stacks” child elements horizontally by using the Orientation=”Horizontal” attribute. The StackPanel contains two child elements, a TextBlock for the product name and another TextBlock for the product price. Note that the TextBlock elements define the width of the “columns” for the ListBox by using the Width attribute.
Each TextBlock control is bound to a corresponding property of the Product using XAML data-binding notation. The first TextBlock has its “Text” property set to “{Binding Path=ProductName}”. This notation tells XAML to get the value for the ProductName property for the current row of data being displayed and places that value into the Text property of this TextBlock.
The same binding notation is used for the Price column, setting the Path to the Price property of the Product object. In addition, the binding notation allows you to specify a .NET format string for value types. In this case the “c” format is used to take the decimal price value and format it as a currency.
When the user control is loaded, Silverlight creates an instance of the Products class, the DataCollection property is initialized and Silverlight binds the items in the collection and the result is displayed as shown in Figure 1.
Tip 2: Spice Up the ListBox with Images
Since the main example uses a graphical user interface, i.e., a browser, it makes sense that you want to display images in addition to just text. After all, a picture is worth a thousand words, as the old saying goes. Using images next to product information can make it a little clearer which product you are actually purchasing. In this next sample (Figure 2) you will see how to use a ListBox to display images in addition to textual data.
In this example, you will see how to use a Silverlight ListBox control to display product images and product information in multiple columns, one column for the product image and two columns for the product name and price.
Again, you’ll use a DataTemplate to define the look and feel for each row of the ListBox. Listing 4 shows the XAML code for the ListBox illustrated in Figure 2.
The DataTemplate in Figure 2 is almost identical to the ListBox described in Tip 1. The only difference is the addition of an Image control prior to the two TextBlock controls. The Image control adds both Width and Height attributes to describe the total area that the image will take up in this row of data. The Source property of an Image control will be a valid URI that points to where the actual image is located in relationship to the Silverlight control. Assuming that the images are located in an Images folder than the “Image” property of the Product class would hold a path such as “Images/Framework.jpg”.
Tip 3: Split Data Across Multiple Lines
Since you are allowed to use almost any XAML within the DataTemplate for a ListBox, you are not limited to just a single stack panel and text block controls. Think of each row of the ListBox as being just another user control into which you can place other XAML controls. If you look at Figure 3 you can see that you now have a border around each row of data. You also have the image on the left and you have the name of the Product above the price of the product. Listing 5 shows the XAML code for the ListBox shown in Figure 3.
Let’s now break down each piece of XAML contained within the DataTemplate of this ListBox. The first piece is the Border control that wraps up each row of data. Next is a StackPanel that stacks the elements of the row horizontally due to the Orientation attribute being set to Horizontal. You now list the items that you want to stack horizontally; they are the Image control and another StackPanel. The second stack panel is a normal vertical StackPanel, so inside of this is where you place the TextBlock control that displays the product name and another TextBlock control that displays the price below the product name.
Tip 4: Display Your Data Horizontally
Let’s now take the ListBox we just looked at and turn it on its side. That’s right, a ListBox in XAML does not just have to display data vertically - you can change the orientation of the ListBox so it will display the data horizontally (see Figure 4).
To make a ListBox display its data horizontally you simply need to add a <ListBox.ItemsPanel> element to the ListBox Listing 6). In the <ItemsPanelTemplate> element you just need to specify that you wish to use a StackPanel with its Orientation attribute set to “Horizontal”. Many XAML controls have an ItemsPanel template. This template controls the orientation and layout for the control as a whole, not the individual items within the control. So, in this case, the ItemsPanel is telling the whole ListBox to display horizontally. Each row in the ListBox is still laid out according to its DataTemplate.
The ItemsPanel element controls the overall look of a ListBox and thus can be used to display a ListBox horizontally.
Tip 5: Change the Look and Feel at Runtime
Silverlight has the flexibility to modify the look of your controls dynamically at runtime with just a few lines of code. Take a look at Figure 5 and Figure 6 and you will see two different views of the same ListBox and data. To accomplish this, you simply need to set up two different DataTemplate sections as resources. With just a couple of lines of code you can then switch between two DataTemplate resources at runtime.
To create the Silverlight page shown in Figure 5 you will create a Grid with two rows. The first row is where you will place the More and Less buttons. The second row is where you place the ListBox. Listing 7 is the XAML used for the <Grid> portion of this page with the buttons and the ListBox shown. Notice in the ListBox that instead of specifying an ItemTemplate area you instead reference a StaticResource. You will learn about this StaticResource in a minute.
After you have created the Grid, the Button controls and the ListBox control, you are now ready to create the DataTemplates that will be used to fill in the ItemTemplate area of the ListBox. The first thing you need to do is to create a <UserControl.Resources> section in your User Control. You will create two data templates as keyed resources in this resources section. One DataTemplate has a key set to “tmplMore” and one has a key set to “tmplLess”. Listing 8 is the DataTemplate that is used for the more detail view of your ListBox (Figure 5).
You now create a DataTemplate resource with a key set to “tmplLess”. This template is used to create the ListBox with less detail (Figure 6). The XAML for this DataTemplate is shown in the code snippet below.
<DataTemplate x: <StackPanel Orientation="Horizontal"> <TextBlock FontSize="16" Text="{Binding Path=ProductName}" /> </StackPanel> </DataTemplate>
In Listing 7 you pre-assigned the DataTemplate with the key of “tmplMore” to the ListBox. This is just so you have something to look at in design mode. Now let me show you how to dynamically switch between the two keyed DataTemplate resources at runtime.
The Click event procedure of each button is set to call the same method. This method name is called ChangeTemplate. Notice the use of the Tag property in each button. The Tag property is set to the key name of the DataTemplate you created in the UserControl.Resources section.
<Button Name="btnMore" Tag="tmplMore" Content="More" Click="ChangeTemplate" /> <Button Name="btnLess" Tag="tmplLess" Content="Less" Click="ChangeTemplate" />
Listing 9 shows the ChangeTemplate() event procedure called by each of the button’s Click event.
While you could have written the above procedure as a single line of code, I broke it out so you can more easily read and understand the code. First you grab the Tag property of the button which raised this event. Store this value into the “templ” variable. Next you use the Resources property of the UserControl class to locate the resource with the name you pass to this indexed property. You will need to cast the resource you get back as a DataTemplate object. You then assign this DataTemplate object to the ItemTemplate property of the ListBox.
Tip 6: Display Conditional Text Using a Value Converter Class
When you bind data to your Silverlight controls, you will probably have data that is in one format that needs to be converted to another for display. In this sample you will use the IsOnSpecial property of the Product class to determine whether or not to display the text “** ON SPECIAL **” for each row of data (see Figure 7). The IsOnSpecial property is a Boolean value, so you need to convert that value from a True/False value to a string.
To accomplish this, you need to tell Silverlight that instead of displaying the IsOnSpecial property directly, it should pass the value to a method in a class you create and have that method return a string to be displayed. Listing 10 is the class called “OnSpecialConverter” that implements the two methods Convert and ConvertBack. In the Convert method, if the Boolean value passed in is True, return the string “** ON SPECIAL **”; otherwise return an empty string.
Namespaces and Resources
Now that you have this class created you will have your Silverlight user control create an instance of that class. To do this, you first add the namespace where this class was created. At the top of your user control you add an xmlns tag as shown below.
xmlns:local="clr-namespace:SLValueConverter"
The “local” prefix is any name you want to create. Visual Studio will now recognize the classes in this namespace from your XAML. Next, in your UserControl.Resources section, you create an instance of the OnSpecialConverter class and assign it the key “specialConvert” so the binding can reference it.
ListBox with Value Converter
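The resource declaration itself is not shown in this excerpt, but it presumably looks something like the following (the key name “specialConvert” is taken from the binding shown below; “local” is the namespace prefix defined above):

```xml
<UserControl.Resources>
    <local:OnSpecialConverter x:Key="specialConvert" />
</UserControl.Resources>
```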
To create the ListBox shown in Figure 7, you will use the same XAML that was shown in Tip 3. You will just add one more TextBlock control after the text block that displays the price. This new TextBlock is defined as shown in the code snippet below.
<TextBlock Margin="18" FontSize="18" Text="{Binding Path=IsOnSpecial, Converter={StaticResource specialConvert}}" />
The Binding syntax is the same as you have seen as far as binding to the property of the class to which this ListBox is bound. The difference is you now specify a Converter after the path. The Converter is bound to a resource in your project. In this case it is bound to the resource you created in the UserControl.Resources section that created an instance of the OnSpecialConverter class. When each row in the ListBox is evaluated, the IsOnSpecial value is passed to the “value” property of the Convert method of the OnSpecialConverter class. Then either some text is returned, or not.
Summary
The Silverlight ListBox control has a tremendous amount of flexibility in how it can display data. In this article, you used one set of Product data and you learned how to display that data in a myriad of ways. There are many more things you can make the ListBox do as well. The most important take away from this article is that XAML is very powerful and learning just a few basics opens up all sorts of great UI possibilities.
NOTE: You can download the complete sample code (in both VB and C#) at my website. Choose Tips & Tricks, then “Six Silverlight ListBox Tips” from the drop-down.
[Sorry about the extra CC, Thomas. Using a different email account
than normal, and this mutt didn't have rox-devel configured as a
mailing list.]
Circa 2002-Feb-11 18:32:24 -0500 dixit Jim Knob.
[...]
: ROX-Filer appears to be doing something with libiconv, but it doesn't
: seem to check whether libiconv is present.
I lied. There does appear to be a check for iconv.h, but it fails.
Here's the bit in config.log (where irrelevant lines are prefixed with
parentheses, present to show context):
(configure:1578: checking for regex.h
(configure:1588: gcc -E conftest.c >/dev/null 2>conftest.out
configure:1578: checking for iconv.h
configure:1588: gcc -E conftest.c >/dev/null 2>conftest.out
configure:1584: iconv.h: No such file or directory
configure: failed program was:
#line 1583 "configure"
#include "confdefs.h"
#include <iconv.h>
(configure:1638: checking for working const
(configure:1692: gcc -c -O2 -I/usr/local/include -Wall conftest.c 1>&5
Should i be using CPPFLAGS=-I/usr/local/include instead of CFLAGS?
--
jim knoble | jmknob. Here's the
list of relevant "extra" packages that are installed:
gdk-pixbuf-0.10.1 replacement library for Imlib
gettext-0.10.40 GNU gettext
glib-1.2.10 useful routines for C programming
gtk+-1.2.10p1 General Toolkit for X11 GUI
libiconv-1.7 character set conversion library
libxml-2.4.5 XML parsing library
png-1.0.12 library for manipulating PNG images
The output from './AppRun --compile' is attached.
ROX-Filer appears to be doing something with libiconv, but it doesn't
seem to check whether libiconv is present. Recall that OpenBSD doesn't
use glibc.
Anyone else getting this to work under *BSD?
--
jim knoble | jmknoble@... |
Hi,
enclosed a small nl.po update against 1.1.13
bye,
Wilbert
BTW looks like the gtk2.0 crash bug with animated gif has been fixed for
gtk+-1.3.14 :)
--
Wilbert Berendsen ()
On Sat, Feb 09, 2002 at 03:30:04AM +0100, Bernard Jungen wrote:
> Hi! Here are two tiny patches for ROX:
>
> - hopefully better item cursor alignment.
>
> - clicking scroll arrows scrolls by an entire item, not 1/4th of it.
>
> How about applying my previous patch about page up/down scrolling?
> I agree it was lost in other stuff, I could prepare a clearer patch
> for that.
My plan currently involves not making any changes at all for a week, and
then releasing 1.2.0, assuming no major bugs are found.
Then all the patches can start going in again :-)
--
Thomas Leonard
tal00r@... tal197@...
On Sat, Feb 09, 2002 at 05:45:07PM -0800, Ken Hayber wrote:
> Hi, has anyone investigated adding the ability to Move/Copy multiple files
> via the menu? Currently the Copy option only allows one file at a time and
> there is no move option. From a quick scan of the sources it appears that
> the underlying action code would support this. I'd guess the biggest job
> is to make sure that the destination entered is either a file or a
> directory.
>
> I'm happy to attempt this, but wanted to see if it had been considered
> before and rejected or it anyone else was working on it already. I'd also
> like to see Cut/Copy/Paste for single and multiple files.
You can move files using the Rename box (one at a time).
My plan for dealing with multiple files is the use the selection groups,
eg:
1. Select the files you want to copy.
2. Press Ctrl+1 to store the group.
3. Move to destination.
4. Start Copy/Move/Link operation (eg, Ctrl+C)
5. At the 'Copy ... ?' prompt, press 1.
This is slightly slower than Copy/Paste, because you only select which
group to use (0-9) rather than what opperation (Copy/Cut). On the other
hand, the Copy/Paste thing is broken because:
- It abuses the clipboard metaphor. Eg, cutting doesn't really remove the
files like cutting text does, and copying doesn't copy them (if you
modify the files before pasting, you get the modified versions).
- It doesn't support symlinking.
--
Thomas Leonard
tal00r@... tal197@...
On Fri, Feb 08, 2002 at 12:59:08PM -0300, Marcelo Ramos wrote:
> Hi Thomas:
>
> I think that the english language is a barrier for the spanish community
> that can't read english. So I'd like to translate the entire rox web
> site in sourceforge.net. It would be great for widening the user base.
>
> What do you think? If you also like the idea i can take the task.
Anyone know the best way to do this? Ideally, it should work like gettext,
I think, so that if part of the page gets updated then it just appears in
English until the translation is updated...
--
Thomas Leonard
tal00r@... tal197@...
On Sat, Feb 09, 2002 at 10:16:42AM -0500, Brad Cox wrote:
> As far as I'm aware, ROX only supports sorting like this:
>
> a b c d
> e f g h
>
> I'd prefer the option of sorting in column major (dictionary) order,
> so that my eye can scan right to the column then down to the row.
>
> a c e g
> b d f h
That should be feasible.
> Failing that, an option to turn off the column builder altogether,
> particularly when summaries are on, would be nice. I've been using
> summaries to reduce the number of columns, but I always get at least
> two columns and have to manually shrink the window to get just one.
> Thats a lot of mousing.
Maybe an option for max number of columns would do it better?
--
# Bernard Jungen @
# Computer Science + Member of BASS + INTP + Triple Evokateur
Daddy, what does FORMATTING DRIVE C mean? | https://sourceforge.net/p/rox/mailman/rox-devel/?viewmonth=200202&viewday=11 | CC-MAIN-2018-22 | refinedweb | 961 | 77.33 |
We have recently gone "live" with more NFS users "banging" on our FreeBSD-based fileservers. And now something seems to have started triggering kernel panics. Since the major difference from before is the number of NFS users, they are the prime suspect...
We just caught a panic and got a screendump from the console and the stack traceback shows:
> Fatal trap 12: page fault while in kernel mode
> cpuid = 8; apic id = 08
> fault virtual addresa = 0x0
> fault code = supervisor read data, page not present
> instruction pointer = 0x20:0xffffffff82b578e9
> stack pointer = 0x20:0xfffffe3fdc627760
> code segment = base rx0, limit 0xfffff, type 0x1b
>                       = DPL 0, pres 1, long 1, def32 0, gran 1
> processor eflags = interrupt enabled, resume, IOPL = 0
> current process = 2519 (nfsd: service)
> trap number = 12
> panic: page fault
> cpuid = 8
> KDB: stack backtrace
> #0 0xffffffff80b3d577 at kdb_backtrace+0x67
> #1 0xffffffff80af6b17 at vpanic+0x177
> #2 0xffffffff80af6993 at panic+0x43
> #3 0xffffffff80f77fdf at trap_fatal+0x35f
> #4 0xffffffff80f78039 at trap_pfault+0x49
> #5 0xffffffff80f77807 at trap+0x2c7
> #6 0xffffffff80f56fbc at calltrap+0x8
> #7 0xffffffff82b5d4d2 at svc_rpc_gss+0x8f2
> #8 0xffffffff80d6c1b6 at svc_run_internal+0x726
> #9 0xffffffff80d6cd4b at svc_thread_start+0xb
> #10 0xffffffff80aba093 at fork_exit+0x8
> #11 0xffffffff80f48ede at fork_trampoline+0xe
(Unfortunately we have no kernel crash dump from this machine.)
Systems are: Dell PowerEdge R730xd with 256GB RAM, HBA330 (LSI 3008) SAS controllers, ZFS-storage, Intel X710 10GE-ethernet machines running FreeBSD 11.2. No swap enabled. ZFS ARC capped to 128GB.
NFS v4.0 or v4.1 client with sec=krb5:krb5i:krb5p security. Most clients (if not all) are running Linux CentOS or Ubuntu). Around 200 active clients per server.
(Most clients are Windows users using SMB via Samba though)
We have enabled a crash dump device one a couple of the machines and are going to enable it on more in order to try to get a crash-dump when the next server panics...
Any ideas where this bug might be or how we could workaround it? (Disabling NFS is unfortunately not an option).
Well, if you can somehow find out the source line# for
#7 0xffffffff82b5d4d2 at svc_rpc_gss+0x8f2
then I can take a look.
It is pretty obviously a use of a NULL ptr, but without the
source line#, it's pretty hard to guess what it is.
rick
This is a wild guess, but I was looking at the assembler code for the svc_rpc_gss function around the offset (0x8f2 = 2290) and it looks like this:
0xffffffff8286d4bc <svc_rpc_gss+2268>: callq 0xffffffff8286bc50 <rpc_gss_oid_to_mech>
0xffffffff8286d4c1 <svc_rpc_gss+2273>: mov 0x78(%r14),%rsi
0xffffffff8286d4c5 <svc_rpc_gss+2277>: lea -0x38(%rbp),%rdi
0xffffffff8286d4c9 <svc_rpc_gss+2281>: lea -0x70(%rbp),%rdx
0xffffffff8286d4cd <svc_rpc_gss+2285>: callq 0xffffffff828678b0 <gss_export_name>
0xffffffff8286d4d2 <svc_rpc_gss+2290>: test %eax,%eax
0xffffffff8286d4d4 <svc_rpc_gss+2292>: je 0xffffffff8286d932 <svc_rpc_gss+3410>
Looking at the source code in the svc_rpcsec_gss.c file this _might_ correspond to code in svc_rpc_gss_accept_sec_context() at around line 941:
	client->cl_rawcred.version = RPCSEC_GSS_VERSION;
	rpc_gss_oid_to_mech(mech, &client->cl_rawcred.mechanism);
	maj_stat = gss_export_name(&min_stat, client->cl_cname,
	    &export_name);
	if (maj_stat != GSS_S_COMPLETE) {
		rpc_gss_log_status("gss_export_name", client->cl_mech,
		    maj_stat, min_stat);
		return (FALSE);
	}
	client->cl_rawcred.client_principal =
	    mem_alloc(sizeof(*client->cl_rawcred.client_principal)
		+ export_name.length);
	client->cl_rawcred.client_principal->len = export_name.length;
	memcpy(client->cl_rawcred.client_principal->name,
	    export_name.value, export_name.length);
@Peter, you should be able to use 'l *(svc_rpc_gss+0x8f2)' in kgdb to figure out the line number directly.
The only two callers of that function appear to be rpc_gss_get_principal_name and svc_rpc_gss_accept_sec_context. The latter is called exclusively by svc_rpc_gss, so it might be a good candidate. The former appears to be dead code, so I think it's svc_rpc_gss -> svc_rpc_gss_accept_sec_context (l.1344) -> gss_export_name (l.957). (Line numbers from CURRENT-ish, I don't know what you're running.)
Well, if I understood the comments, that would suggest client->cl_cname is NULL.
That is weird. I'm not a Kerberos guy, but this says that the server
was able to handle the GSSAPI initialization token (basically a Kerberos
session ticket + some GSSAPI gobbly gook), but the GSSAPI library
doesn't return a principal name for the gss_accept_sec_context() even
though it returns GSS_S_COMPLETE.
What does this mean in Kerberos land? I have no idea.
I can see two ways to handle it.
1 - Consider it a failed authentication.
OR
2 - Map it to "nobody".
Basically that principal name in client->cl_cname is looked up in the
password database and, if it is not found, then the credentials of
"nobody" are used instead of the credentials for the user in the
password database.
--> Since no entry in the password database gets "nobody", it seems
that "no principal name" might get the same treatment?
I think I'll generate a patch for #2 above and attach it to this PR#,
although I have no way to test it.
Created attachment 201858 [details]
fix srv side rpcsec_gss to handle NULL client principal name
This patch modifies the server side RPCSEC_GSS so that it handles
the case where gss_accept_sec_context() works for a client's token
and returns GSS_S_COMPLETE but the client principal name is NULL.
I have no idea what this means w.r.t. Kerberos, but since a principal
name that cannot be found in the password database is authenticated as
the user "nobody", this patch does the same for the case of "no prinicpal name".
It is untested, but hopefully Peter can test it?
(It assumes that the crash was caused by client->cl_cname == NULL.)
Created attachment 201862 [details]
updated fix for srv side rpcsec_gss NULL client principal
This is an update to the 201858 patch which does a couple
of printf()s and no KASSERT().
It uses a local clname instead of the one in client->cl_cname
so that any race caused by multiple RPC requests with the same
handle and GSS_S_CONTINUE_INIT will be handled.
(gss_release_name() sets client->cl_cname NULL and that could
have resulted in the crash if multiple RPCs were received and
handled concurrently as above.)
I didn't think that gss_accept_sec_context() ever returned
GSS_S_CONTINUE_INIT for Kerberos mechanism, but I could be
wrong on this and if this does happen and the client erroneously
sends the next token RPC twice, it could try to use client->cl_cname
after it is set NULL by the gss_release_name() for the previous
RPC message.
The printf()s should tell us more about how the NULL cname happens.
I'll try out the proposed fixes. Sound like it could be the culprit. A pity I haven't really found a way to force the issue yet (or even pinpointed which of the 100-200 clients caused it). But perhaps it's just a random thing caused by network retransmissions and load balancing over multiple LACP trunks...
A question - I enabled a lot of debugging output in order to try to see what's happening and I noticed one strange thing in the "dmesg" output:
> rpcsec_gss: accepted context for \^D\^A (41 bytes) with <mech { 1 2 840 113554 1 2 2 }, qop 0, svc 1>
(I had modified the rpc_gss_log_debug() call to also print the length of the client_principal member)
Ie, this call around line 1100 or so:
rpc_gss_log_debug("accepted context for %s (%d bytes) with "
"<mech %.*s, qop %d, svc %d>",
client->cl_rawcred.client_principal->name,
client->cl_rawcred.client_principal->len,
mechname.length, (char *)mechname.value,
client->cl_qop, client->cl_rawcred.service);
The "client_principal->name" output looks garbled.. (In /var/log/messages those characters get's filtered out so you won't find it there). Perhaps cl_rawcred.client_principal isn't supposed to be printable at all.
The bogus client_principal name suggests that client->cl_cname
was pointing at something bogus, because the client_principal
is filled in from it.
So, I think this suggests that client->cl_cname is sometimes
bogus and sometimes NULL. (It's filled in via gss_accept_sec_context(),
so that's in Kerberos land.)
Hopefully the patch will help resolve this, rick.
Created attachment 201876 [details]
fix race in ref counting of svc_rpc_gss_client struct
Upon inspection I found two related races when the svc_rpc_gss_client
structure in created/initialized.
1 - The cl_expiration field is set after the structure is linked into a
list, making it visible to other threads when it is still 0.
This could result in svc_rpc_gss_timeout_client() freeing it just after
being linked into the list.
2 - Related to #1, cl_refs is initialized to 1 and then incremented by
refcount_acquire() after svc_rpc_gss_create_client() returns it,
leaving a window where cl_refs could be decremented to 0 and free'd.
This patch makes two changes to svc_rpc_gss_create_client():
- Move the code that initializes cl_expiration to before where it is linked
into the list, so that it is set before it is visible to other threads.
- Initialize cl_refs to 2, so a refcount_acquire() is not needed after
the svc_rpc_gss_create_client() call.
These races could allow the structure to be free'd when it is still being
used in svc_rpc_gss().
This patch can be applied separately from #201862.
(In reply to Rick Macklem from comment #9)
I modified the debug printout a bit more so it would print all bytes in the cname output and it looks like it's not a "pure" name but some 'struct'/packed data of some kind...13ESPRESSO$@AD.LIU.SE (38 bytes), cl_sname = nfs@filur00.it.liu.se
Ie, the client principal is there but at the end of the data (19 bytes into it).
(\xNN = hexadecimal encoded character. Printable ascii chars are printed as is)
Yes, that is correct. In the gssapi, the call gss_export_name()
gets the plain ascii name out of the internal format.
You'll notice that the client_principal that was printing out
bogusly is a copy of the result of a gss_export_name() on cl_cname.
(So, assuming your principal names are printable ascii, as this example
appears to be, the result of a successful gss_export_name() will
be printable.)
Btw, I'm hoping that the second patch "fix race..." will fix this
problem, since I doubt that the Kerberos/gssapi libraries are returning
trash for the client principal name via gss_accept_sec_context().
I'm running a kernel with your patches now on our test server. With some added code to print the 'cname' to export_name converted as hexadecimal characters (for non-printable ones). It really looks like som 'struct' ending with the length of the principal as a 4-byte int and then the printable principal name.
Not many NFS clients connecting to that server so a race condition causing the "random" characters isn't really likely. Especially not when it's the first call to happening :-)
(Machine16FILIFJONKAN$@AD.LIU.SE (41 bytes),
cl_sname = nfs@filur00.it.liu.se
rpcsec_gss: accepted context for (41 bytes) with <mech { 1 2 840 113554 1 2 2 }, qop 0, svc 1>
(User12tesje148@AD.LIU.SE (37 bytes),
cl_sname = nfs@filur00.it.liu.se
rpcsec_gss: accepted context for (37 bytes) with <mech { 1 2 840 113554 1 2 2 }, qop 0, svc 1>
My quick hack to print the export_name:
maj_stat = gss_export_name(&min_stat, cname,
&export_name);
if (maj_stat != GSS_S_COMPLETE) {
rpc_gss_log_status("gss_export_name", client->cl_mech,
maj_stat, min_stat);
return (FALSE);
}
if (1) /* Debug printout */
{
gss_buffer_desc tmp;
char *src, *dst;
int i;
OM_uint32 dummy;
dst = tmp.value = mem_alloc(tmp.length = export_name.length*4+1);
src = export_name.value;
for (i = 0; i < export_name.length; i++) {
if (*src < ' ' || *src > '~') {
sprintf(dst, "\\x%02X", *src);
dst += 4;
src++;
} else
*dst++ = *src++;
}
*dst = '\0';
rpc_gss_log_debug("svc_rpc_gss_accept_sec_context: cl_cname = %s (%d bytes), cl_sname = %s",
tmp.value, export_name.length,
client->cl_sname && client->cl_sname->sn_principal ? client->cl_sname->sn_principal : "<null>");
gss_release_buffer(&dummy, &tmp);
}
client->cl_rawcred.client_principal =
mem_alloc(sizeof(*client->cl_rawcred.client_principal)
+ export_name.length);
client->cl_rawcred.client_principal->len = export_name.length;
memcpy(client->cl_rawcred.client_principal->name,
export_name.value, export_name.length);
gss_release_buffer(&min_stat, &export_name);
Well, since that is the output of gss_export_name(), I suspect that
won't be found in the password database and will map to nobody.
Note that the fact that it ends in "@AD.LIU.SE" suggests that the
gss_export_name() isn't doing what it needs to do.
Normally the output of gss_export_name() should be:
For a user principal, just the user, such as "tesje148".
For a host principal, <name>@<host.domain>, such as "nfs@filur00.it.liu.se".
Unless there is an additional step in the GSSAPI to get to the "user" or
"user@domain" name that I have forgotten. (I am not the author of this code,
but I did write a RPCSEC_GSS implementation for the early NFSv4 code I did
long ago.)
Maybe gss_pname_to_unix_cred() does some additional conversion?
(If you add a couple of printf()s in gss_pname_to_unix_cred(), you should
be able to tell if the cname is translating to a unix cred ok.)
You also might consider testing with my second patch and not the first one
that changes client->cl_cname to a local cname. (I may have screwed the patch
up, since I can't test them.)
I don't think this is related to the crash.
Oops, I was wrong. (Maybe Heimdal just returned the principal name 10+
years ago, but no longer.)
If you look at (under your FreeBSD source tree):
crypto/heimdal/lib/gssapi/krb5/export_name.c
you'll see that it puts a bunch of junk in front of the principal
name. (It looks like it is always the same # of bytes, so for your case
I'd guess it's always 19bytes of junk.)
I'd guess that krb5_aname_to_localname() found in:
crypto/heimdal/lib/krb5/aname_to_localname.c
knows how to get rid of the junk, but that isn't callable from
the kernel, I'd guess? (I think the gssd daemon will only handle
a subset of the gss_XXX() calls.)
So, I don't think this is a bug and that leaves us with the crashes
due to a NULL cl_cname and I'm hoping they were caused by the race
fixed by the 2nd patch.
Good luck with it, rick
I agree that the debug printout stuff is not the problem here.
We are going to try out your second patch on our production servers this week.
(Just had one of the servers (without your patch) crash and reboot again twice).
Thankfully we found a way to speed up the mounting of 20k zfs filesystems massively so a reboot now takes about 5 minutes instead of 40+ minutes... (A coworker wrote a script that does the ZFS mounts in parallell instead of sequentially - there are room for even more speed improvements though :-)
Anyway - a question about a potential workaround:
Would limiting the number of NFSD thread with "-n 1" "mask" this problem?
Our servers have 32 cpu "cores" so nfsd tries to start up 256 kernel threads which I'm guessing will improve the chance of the bug to be triggered.
As some background (not necessarily helpful, unfortunately) on GSSAPI names, they are actually fairly complicated objects.
To start with, there are two broad types of name object, an "internal name" that is something of a generic name string and name type (e.g., "host@test.example.com" and GSS_NT_HOSTBASED_SERVICE), and a "mechanism name" (MN) that contains similar information content but has been restricted to a single GSSAPI mechanism (e.g., for the krb5 mechanism, you'd get "host/test.example.com@REALM" for the above example). So the actual runtime object is going to have some OIDs associated with it, whether for name type or mechanism type and perhaps other things.
The original worldview w.r.t. GSSAPI names was that an MN could be converted to an "export token" that would be treated as an opaque binary string and used for equality comparison in ACLs, and that there would be a separate "display name" version that would be used in logging but not for authentication decisions. (You can imagine how well that turned out.) So now we have more advanced functions like gss_localname() that map a MN to a local username.
I did not look particularly closely at the debugger dump of the cl_name that had the "expected name" at the end of the buffer; it did look like there were probably some OIDs and maybe other data in front. Presumably one could pull the relevant struct definition from the heimdal internal headers.
Yes, I suspect that "-n 1" on the nfsd startup would probably
mask the problem, although I would not recommend that.
(This assumes that I am correct to assume this race between threads
is the culprit.)
Increasing the size of CLIENT_MAX (which recently became a tunable
in head) might also mask it and will help w.r.t. performance if
you have more than 128 users authenticating from the clients.
(In this case, CLIENT refers to a user on an NFS mount and not an
NFS mount from a client.)
Interesting that the HASH table is set to 256, which is twice the
number of elements being kept in the hash lists. (Not sure why
anyone would make it that way, but I'm not the author of this stuff.;-)
I am really hoping that the 2nd patch "fix race..." will fix the
problem. (The first patch would just be a "safety belt" in case
the GSSAPI did return GSS_S_COMPLETE for gss_accept_sec_context(),
but somehow leave the cname argument NULL.
(In reply to Rick Macklem from comment #18)
Hmm.. I wonder what would happen if I set CLIENT_MAX to 1 or 2 instead on our test server and then have a couple of Linux clients try to do some concurrent access (as different users). Ah well, something for tomorrow to test. We definitely have more that 128 different users accessing each server so will probably up the limits to 1024 or so later.
(Looking for ways to more easily provoke the issue :-)
Anyway, we're now running our servers with that second patch. Will be interesting to see what happens tomorrow... (One of the servers rebooted twice today (without that patch)).
Well, if you have NFS client(s) mounted with more than CLIENT_MAX different
users actively using the mount concurrently, that might tickle the
race?
- Basically, this code (which handles a new credential creation) happens
when there is a miss on the cache of credentials (which is capped at
CLIENT_MAX), where each distinct user (as in uid) would need a credential.
--> Conversely, making CLIENT_MAX >= the maximum # of different uids actively
using the file system might reduce the likelyhood of the crash, since this
code would be executed less frequently.
Related to Ben's comment (thanks for the nice description of the name):
- It did tickle a rusty brain cell. I think I was confusing gss_export_name()
with gss_display_name(). Unfortunately, this isn't very useful, since
neither gss_display_name() nor gss_localname() are supported by the
KGSSAPI.
If you search for "_svc" in usr.sbin/gssd/gssd.c, you'll see the rather
small list of gssapi functions supported by the KGSSAPI (unless I've
misread this code).
- I think the structure is called "Principal" (also called
"krb5_principal_data"). It seems to be defined in
krb5_asn1.h and that isn't in the kernel either.
--> I suspect this is why the logging code enabled via compiling it with
DEBUG defined just logs the output of gss_export_name() and doesn't
try and extract the components of it?
Good luck with your testing, rick
Just a quick feedback/comment: We've now been running our production servers with a patched kernel as per above since Monday night and so far everything seems to be running correctly... No kernel panics so far - *Fingers Crossed* :-)
Just a quick wish for someone - would it be possible to have the CLIENT_MAX and the hash table size increased a bit (like to 512/1024) in an "official" kernel patch for 11.2? 128/256 are very low numbers - especially since every client principal (HOST$@REALM) also uses up a slot in that table when a client connects...
So basically it just requires 64 clients (with a different user on each client) to run into the CLIENT_MAX limit (which I'm guessing will cause gssd extra work so it's not a showstopper, just annoying :-).
(There are a bit too many kernel-crashing problems reported with ZFS in 12.0 for us to upgrade there yet).
I know I can build my own kernel with those limits increased but it would be nice to not have to do that (and distribute it to all our servers) whenever a new kernel/security patch arrives... :-)
/Lazy Bugger
I won't be able to do any commits until April, so by then we should
know if the patch stops the crashes. (I've taken this bug, so that
hopefully I will remember to look at this in April.)
I have pinged trasz@ w.r.t. doing an MFC of his patch to head that
makes CLIENT_MAX a sysctl you can tune. That would get it in 11.3.
(I've left a decision w.r.t. doing an errata change of CLIENT_MAX to him, although I'll admit it doesn't seem serious enough for an errata to me.)
A commit references this bug:
Author: trasz
Date: Tue Feb 19 11:07:03 UTC 2019
New revision: 344276
URL:
Log:
Bump the default kern.rpc.gss.client_max from 128 to 1024.
The old value resulted in bad performance, with high kernel
and gssd(8) load, with more than ~64 clients; it also triggered
crashes, which are to be fixed by a different patch.
PR: 235582
Discussed with: rmacklem@
MFC after: 2 weeks
Changes:
head/sys/rpc/rpcsec_gss/svc_rpcsec_gss.c
I've MFC-ed the patch, and also bumped the default in 13-CURRENT.
As for errata - I'm not sure, to be honest. I think erratas are more about crasher bugs and regressions, while this one is neither.
A commit references this bug:
Author: rmacklem
Date: Tue Apr 2 23:51:09 UTC 2019
New revision: 345818
URL:
Log:
Fix a race in the RPCSEC_GSS server code that caused crashes.
When a new client structure was allocated, it was added to the list
so that it was visible to other threads before the expiry time was
initialized, with only a single reference count.
The caller would increment the reference count, but it was possible
for another thread to decrement the reference count to zero and free
the structure before the caller incremented the reference count.
This could occur because the expiry time was still set to zero when
the new client structure was inserted in the list and the list was
unlocked.
This patch fixes the race by initializing the reference count to two
and initializing all fields, including the expiry time, before inserting
it in the list.
Tested by: peter@ifm.liu.se
PR: 235582
MFC after: 2 weeks
Changes:
head/sys/rpc/rpcsec_gss/svc_rpcsec_gss.c
The second patch seems to fix the crashes and has been committed to head.
A commit references this bug:
Author: rmacklem
Date: Wed Apr 3 03:50:16 UTC 2019
New revision: 345828
URL:
Log:
Add a comment to the r345818 patch to explain why cl_refs is initialized to 2.
PR: 235582
MFC after: 2 weeks
Changes:
head/sys/rpc/rpcsec_gss/svc_rpcsec_gss.c
The patch that is believed to fix these crashes has been MFC'd. | https://bugs.freebsd.org/bugzilla/show_bug.cgi?format=multiple&id=235582 | CC-MAIN-2019-30 | refinedweb | 3,821 | 61.87 |
14 June 2011 17:31 [Source: ICIS news]
<"" http:="">>
LONDON (ICIS)--InterBulk will focus on expanding its operations in the high growth regions of ?xml:namespace>
"As we are operating worldwide we think that the highest growth will be in emerging countries where we have our strategy ... we had a return to growth last year and in the first half of this year and our outlook is that we will grow further," van Wissen said.
As part of InterBulk's expansion plans, the company announced on 20 May the conditional placing of 165m new ordinary shares at 11 pence to China-based logistics major Sinotrans Logistics, which will raise £18.2m (€20.7m, $30.0m) of gross proceeds. Sinotrans will be a partner for the development of all of InterBulk's business activities in the Chinese and wider Asian markets, the CEO added.
"We have been working with Sinotrans Logistics for the last four years in an alliance for the chemical industry in China and their investment in our business is a major step," van Wissen said.
Earlier on Tuesday, InterBulk announced that its profit after tax for the first half of 2011 had soared to £1.35m compared with £391,000 in the same period last year as revenue rose 15.8% year on year to £146.2m.
In a statement, van Wissen said that the company’s business model, competitive position, and growth opportunities for dry and liquid bulk intermodal solutions provided a positive environment for future success.
Company chairman David Rolph said that global demand for chemicals and polymers had remained buoyant in the first six months of the year, which has underpinned InterBulk's revenue growth. He added that the global chemical industry had experienced good market conditions in 2010, which has continued into 2011 with modest growth forecast in the US and Europe and stronger growth in emerging regions.
However, he added that the large increase in fuel prices, with oil prices rising by approximately 40% in dollar terms during the six months to 31 March 2011, has impacted margin performance, particularly in
($1 = €0 | http://www.icis.com/Articles/2011/06/14/9469493/interbulk-focuses-future-expansion-in-high-growth-economies-ceo.html | CC-MAIN-2014-42 | refinedweb | 350 | 55.68 |
Define f(z) = iz*exp(-z/sin(iz)) + 1 and g(z) = f(f(z)) + 2 for a complex argument z. Here’s what the phase plots of g look like.
The first image lets the real and imaginary parts of z range from -10 to 10.
This close-up plots real parts between -1 and 0, and imaginary part between -3 and -2.
The plots were produced with this Python code:
from mpmath import cplot, sin, exp def f(z): return 1j*z*exp(-z/sin(1j*z)) + 1 def g(z): return f(f(z)) + 2 cplot(g, [-1,0], [-3,-2], points=100000)
The function g came from Visual Complex Functions.
2 thoughts on “Fractal-like phase plots”
I’ve been using mpmath for a while to generate Newton’s method fractals of various functions, e.g. this entry
I goofed a little on the title, it’s Newton’s method with
z = z – [exp(cos(z)) – cos(exp(z) -z]/[-sin(z)exp(cos(z)) + exp(z)sin(exp(z)) -1.0]
You might be interested in taking a look at
which Steve Gubkin and I will be using to build an online complex analysis course, along the lines of our | http://www.johndcook.com/blog/2013/03/07/fractal-like-phase-plots/ | CC-MAIN-2015-48 | refinedweb | 205 | 67.99 |
Serverless is a cloud-computing execution model in which the cloud provider is responsible for executing a piece of code by dynamically allocating resources to run the code when needed. In a previous post, we looked at what serverless is, and we set up our computer to be able to build serverless applications using AWS Amplify. We bootstrapped a React project and added the Amplify library to it. In this post, we will use the Amplify CLI to provision a secured backend API and a NoSQL database. Then we will consume this API from the React project.
Creating The Serverless Backend Services
The application we're going to build will allow users to perform basic CRUD operations. We will use a REST API with a NoSQL database. Follow the instructions below to create the serverless backend.
- Open the command line and go to the root directory of your project.
- Run the command `amplify add api`.
- You get a prompt to select a service type. Choose `REST` and press Enter.
- It prompts you to enter a name for the current category (the api category). Enter `todosApi` and press Enter.
- You're asked for a path. Accept the default `items` path by pressing Enter.
- The next prompt asks for the Lambda source. The serverless REST API works by creating a path on API Gateway and mapping that path to a Lambda function. The Lambda function contains the code to execute when a request is made to the path it's mapped to. We will create a new Lambda. Select the option `Create a new Lambda function` and press Enter.
- Enter `todosLambda` as the name of the resource for the category (the function category), and press Enter.
- You will be asked for the name of the Lambda function. Enter `todos` and press Enter.
- You will be asked to choose a template for generating code for this function. Choose the option `CRUD function for Amazon DynamoDB table (Integration with Amazon API Gateway and Amazon DynamoDB)` and press Enter. This creates an architecture using API Gateway with Express running in an AWS Lambda function that reads and writes to Amazon DynamoDB.
- The next prompt asks you to choose a DynamoDB data source. We do not have an existing DynamoDB table, so we will choose the `Create a new DynamoDB table` option. Press Enter to continue. Now you should see the DynamoDB database wizard. It'll ask a series of questions to determine how to create the database.
- You will be asked to enter the name for this resource. Enter `todosTable` and press Enter.
- The next prompt is for the table name. Enter `todos` and press Enter.
- You will be asked to add columns to the DynamoDB table. Follow the prompt to create a column `id` with `String` as its type.
- Choose the `id` column when asked for the partition key (primary key) for the table.
- You will be asked if you want to add a sort key to the table. Choose false.
- The next prompt asks if you want to add global secondary indexes to your table. Enter `n` and press Enter. You should see the message `Successfully added DynamoDb table locally`.
- The next prompt asks `Do you want to edit the local lambda function now?`. Enter `n` and press Enter. You should see the message `Successfully added the Lambda function locally`.
- You get asked if you want to restrict access to the API. Enter `y` and press Enter.
- For the next prompt, choose `Authenticated and Guest users` and press Enter. This option gives both authorized and guest users access to the REST API.
- Next, you get asked `What kind of access do you want for Authenticated users?`. Choose `read/write` and press Enter.
- Now we get a prompt to choose the kind of access for unauthenticated users (i.e. guest users). Choose `read` and press Enter. You should get the message `Successfully added auth resource locally`. This is because we have chosen to restrict access to the API, and the CLI added the Auth category since the project doesn't have one yet. At this point, we've added the resources that are needed to create our API (API Gateway, DynamoDB, a Lambda function, and Cognito for authentication).
- We get asked if we want to add another path to the API. Enter `n` and press Enter. This completes the process and we get the message `Successfully added resource todosApi locally`.
The `amplify add api` command took us through the process of creating a REST API. This API will be created based on the options we chose. Creating this API requires 4 AWS services. They're:

- Amazon DynamoDB. This will serve as our NoSQL database. We created a DynamoDB table named `todos` when we added the `todosTable` resource. We gave it 3 columns, with `id` as the primary key.
- AWS Lambda functions. This lets us run code without provisioning or managing servers. This is where our code to perform CRUD operations on the DynamoDB table will be.
- Amazon Cognito. This is responsible for authentication and user management. It lets us add user sign-up, sign-in, and access control to our app. We chose the option to restrict access to our API, and this service will help us authenticate users.
- Amazon API Gateway. This is what allows us to create the REST API endpoint. We added a resource for this named `todosApi`, with the path `items`. We also selected the option to restrict access to the API.
However, the service specifications for these services are not yet in the cloud. We need to update the project in the cloud with information to provision the needed services. Open the command line and run `amplify status`, and we should get a table with information about the Amplify project. It lists the categories we added along with the resource name and the operation that needs to be run for each resource. What the `Create` operation means is that these resources need to be created in the cloud. The `init` command goes through a process to generate the `.amplifyrc` file (it is written to the root directory of the project) and inserts an `amplify` folder structure into the project's root directory, with the initial project configuration information written in it. Open the `amplify` folder and you'll find folders named `backend` and `#current-cloud-backend`. The `backend` folder contains the latest local development specifications of the backend resources to be pushed to the cloud, while `#current-cloud-backend` contains the backend resource specifications in the cloud from the last time the `push` command was run. Each resource stores its contents in its own subfolder inside this folder.
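The layout looks roughly like this (the subfolder names are based on the resource names we chose, so yours may differ slightly):

```text
amplify/
├── backend/                  # latest local backend specs, sent with `amplify push`
│   ├── api/todosApi/
│   ├── auth/
│   └── function/todosLambda/
└── #current-cloud-backend/   # backend specs as of the last successful push
```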
Open the file `backend/function/todosLambda/src/app.js`. You will notice that this file contains code generated during the resource setup process. It uses Express.js to set up routes, and the `aws-serverless-express` package to easily build RESTful APIs using the Express.js framework on top of AWS Lambda and Amazon API Gateway. When we push the project configuration to the cloud, it'll configure a simple proxy API using Amazon API Gateway and integrate it with this Lambda function. The package includes middleware to easily get the event object Lambda receives from API Gateway. It was applied on line 32 with `app.use(awsServerlessExpressMiddleware.eventContext());` and is used across the routes with code that looks like `req.apiGateway.event.*`. The pre-defined routes allow us to perform CRUD operations on the DynamoDB table. We will make a couple of changes to this file. The first will be to change the value of the `tableName` variable from `todosTable` to `todos`. When creating the DynamoDB resource, we specified `todosTable` as the resource name and `todos` as the table name, so the generated file wrongly used the resource name as the table name. This would likely be fixed in a future version of the CLI, so if you don't find it wrongly used, you can skip this step. We will also need to update the route definitions.
Change the first route definition to use the code below.
```js
app.get(path, function(req, res) {
  const queryParams = {
    TableName: tableName,
    ProjectionExpression: "id, title"
  };

  dynamodb.scan(queryParams, (err, data) => {
    if (err) {
      res.json({ error: "Could not load items: " + err });
    } else {
      res.json(data.Items);
    }
  });
});
```
This defines a route that responds to the `/items` path with code to return all data in the DynamoDB table. The `ProjectionExpression` value is used to specify that it should get only the `id` and `title` columns.
Change the route definition on line 77 to read as `app.get(path + hashKeyPath + sortKeyPath, function(req, res) {`. This allows us to retrieve an item by its `id` following the path `/items/:id`. Also change line 173 to be `app.delete(path + hashKeyPath + sortKeyPath, function(req, res) {`. This responds to the HTTP DELETE method to delete an item following the path `/items/:id`.
The AWS resources have been added and updated locally, and we need to provision them in the cloud. Open the command line and run `amplify push`. You'll get a prompt asking if you want to continue executing the command. Enter `y` and press Enter. What this does is upload the latest versions of the resources' nested stack templates to an S3 deployment bucket, and then call the AWS CloudFormation API to create or update resources in the cloud.
Building The Frontend
When the `amplify push` command completes, you'll see a file `aws-exports.js` in the `src` folder. This file contains information about the resources that were created in the cloud. Each time a resource is created or updated by running the `push` command, this file will be updated. It's created for JavaScript projects and is used by the Amplify JavaScript library. We will be using this in our React project. We will also use Bootstrap to style the page. Open `public/index.html` and add a Bootstrap stylesheet link in the `<head>` tag.
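For example, a Bootstrap 4 CDN stylesheet link could look like this (the exact version is an assumption, so use whichever release you prefer):

```html
<link
  rel="stylesheet"
  href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css"
/>
```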
Add a new file src/List.js with the following content:
```jsx
import React from "react";

export default props => (
  <div>
    <legend>List</legend>
    <div className="card" style={{ width: "25rem" }}>
      {renderListItem(props.list, props.loadDetailsPage)}
    </div>
  </div>
);

// Renders each item as a clickable list entry that opens its details page.
function renderListItem(list, loadDetailsPage) {
  const listItems = list.map(item => (
    <li
      key={item.id}
      className="list-group-item"
      onClick={() => loadDetailsPage(item.id)}
    >
      {item.title}
    </li>
  ));
  return <ul className="list-group list-group-flush">{listItems}</ul>;
}
```
This component will render a list of items from the API. Add a new file src/Details.js with the following content:
```jsx
import React from "react";

export default props => (
  <div>
    <h2>Details</h2>
    <div className="btn-group" role="group">
      <button
        type="button"
        className="btn btn-secondary"
        onClick={props.loadListPage}
      >
        Back to List
      </button>
      <button
        type="button"
        className="btn btn-danger"
        onClick={() => props.delete(props.item.id)}
      >
        Delete
      </button>
    </div>
    <legend>{props.item.title}</legend>
    <div className="card">
      <div className="card-body">{props.item.content}</div>
    </div>
  </div>
);
```
This component will display the details of an item with buttons to delete that item or go back to the list view. Open src/App.js and update it with this code:
import React, { Component } from "react"; import List from "./List"; import Details from "./Details"; import Amplify, { API } from "aws-amplify"; import aws_exports from "./aws-exports"; import { withAuthenticator } from "aws-amplify-react"; Amplify.configure(aws_exports); class App extends Component { constructor(props) { super(props); this.state = { content: "", title: "", list: [], item: {}, showDetails: false }; } async componentDidMount() { await this.fetchList(); } handleChange = event => { const id = event.target.id; this.setState({ [id]: event.target.value }); }; handleSubmit = async event => { event.preventDefault(); await API.post("todosApi", "/items", { body: { id: Date.now().toString(), title: this.state.title, content: this.state.content } }); this.setState({ content: "", title: "" }); this.fetchList(); }; async fetchList() { const response = await API.get("todosApi", "/items"); this.setState({ list: [...response] }); } loadDetailsPage = async id => { const response = await API.get("todosApi", "/items/" + id); this.setState({ item: { ...response }, showDetails: true }); }; loadListPage = () => { this.setState({ showDetails: false }); }; delete = async id => { //TODO: Implement functionality }; render() { return ( <div className="container"> <form onSubmit={this.handleSubmit}> <legend>Add</legend> <div className="form-group"> <label htmlFor="title">Title</label> <input type="text" className="form-control" id="title" placeholder="Title" value={this.state.title} onChange={this.handleChange} /> </div> <div className="form-group"> <label htmlFor="content">Content</label> <textarea className="form-control" id="content" placeholder="Content" value={this.state.content} onChange={this.handleChange} /> </div> <button type="submit" className="btn btn-primary"> Submit </button> </form> <hr /> {this.state.showDetails ? 
( <Details item={this.state.item} loadListPage={this.loadListPage} delete={this.delete} /> ) : ( <List list={this.state.list} loadDetailsPage={this.loadDetailsPage} /> )} </div> ); } } export default withAuthenticator(App, true);
We imported the Amplify library and initialised it by calling
Amplify.configure(aws_exports);. When the component is mounted, we call
fetchList() to retrieve items from the API. This function uses the API client from the Amplify library to call the REST API. Under the hood, it utilizes Axios to execute the HTTP requests. It'll add necessary headers to the request so you can successfully call the REST API. You can add headers if you defined custom headers for your API, but for our project, we only specify the apiName and path when invoking the functions from the API client. The
loadDetailsPage() function fetches a particular item from the database through the API and then sets
item state with the response and
showDetails to true. This
showDetails is used in the render function to toggle between showing a list of items or the details page of a selected item. The function
handleSubmit() is called when the form is submitted. It sends the form data to the API to create a document in the database, with columns
id,
title and
content, then calls
fetchList() to update the list. I left the
delete() function empty so you can implement it yourself. What better way to learn than to try it yourself 😉. This function will be called from the delete button in the
Details component. The code you have in it should call the API to delete an item by
id and display the list component with correct items. We wrapped the App component with the
withAuthenticator higher order component from the Amplify React library. This provides the app with complete flows for user registration, sign-in, signup, and sign out. Only signed in users can access the app since we're using this higher order component. The
withAuthenticator component automatically detects the authentication state and updates the UI. If the user is signed in, the underlying App component is displayed, otherwise, sign-in/signup controls are displayed. The second argument which was set to
true tells it to display a sign-out button at the top of the page. Using the
withAuthenticator component is the simplest way to add authentication flows into your app but you can also have a custom UI and use set of APIs from the Amplify library to implement sign-in and sign up flows. See the docs for more details.
We have all the code necessary to use the application. Open the terminal and run
npm start to start the application. You'll need to signup and sign in to use the application.
Wrapping Up
We went through creating our backend services using the Amplify CLI. The command
amplify add api took us adding resources for DynamoDB, Lambda, API Gateway, and Cognito for authentication. We updated the code in backend/function/todosLambda/src/app.js to match our API requirement. We added UI components to perform CRUD operations on the app and used a higher order component from the Amplify React library to allow only authenticated users access to the application. You should notice we only used a few lines of code to add authentication flows and call the API. Also creating the serverless backend services and connecting them all together was done with a command and responding to the prompts that followed. Thus showing how AWS Amplify makes development easier.
Originally posted on my blog.
Posted on by:
Peter Mbanugo
Software Developer with experience building web apps and services in JS and C#. I'm passionate about building quality software, with interest area around Offline First and Software Architecture.
Discussion | https://dev.to/pmbanugo/going-serverless-with-react-and-aws-amplify-part-2-creating-and-using-serverless-services-340e | CC-MAIN-2020-40 | refinedweb | 2,623 | 58.99 |
Introduction and motivation to use Redux
Information drawn from
Redux is a predictable state container for JavaScript apps.
It helps you write applications that behave consistently, run in different environments (client, server, and native), and are easy to test.
You can use Redux together with React, or with any other view library. It is tiny (2kB, including dependencies), but has a large ecosystem of addons available.
Basic Example!
import { createStore } from 'redux' /** * This is a reducer, a pure function with (state, action) => state signature. * It describes how an action transforms the state into the next state. * * The shape of the state is up to you: it can be a primitive, an array, an object, * or even an Immutable.js data structure. The only important part is that you should * not mutate the state object, but return a new object if the state changes. * * In this example, we use a `switch` statement and strings, but you can use a helper that * follows a different convention (such as function maps) if it makes sense for your * project. */ function counter(state = 0, action) { switch (action.type) { case 'INCREMENT': return state + 1 case 'DECREMENT': return state - 1 default: return state } } // Create a Redux store holding the state of your app. // Its API is { subscribe, dispatch, getState }. let store = createStore(counter) // You can use subscribe() to update the UI in response to state changes. // Normally you'd use a view binding library (e.g. React Redux) rather than subscribe() directly. // However it can also be handy to persist the current state in the localStorage. store.subscribe(() => console.log(store.getState())) // The only way to mutate the internal state is to dispatch an action. // The actions can be serialized, logged or stored and later replayed. store.dispatch({ type: 'INCREMENT' }) // 1 store.dispatch({ type: 'INCREMENT' }) // 2 store.dispatch({ type: 'DECREMENT' }) // 1
The only important part is that you should not mutate the state object, but return a new object if the state changes.
Motivation.
------------------------------------------------------------------------
Last update on 02 Feb 2020
--- | https://codersnack.com/redux-introduction/ | CC-MAIN-2022-33 | refinedweb | 333 | 65.83 |
[
]
Xu Zhang updated PIG-72:
------------------------
Attachment: (was: PortPigUnitTestToMiniClusters.patch)
> Porting Pig unit tests to use MiniDFSCluster and MiniMRCluster on the local machine
> -----------------------------------------------------------------------------------
>
> Key: PIG-72
> URL:
> Project: Pig
> Issue Type: Test
> Components: tools
> Reporter: Xu Zhang
>
> We have the need to port the Pig unit tests to use MiniDFSCluster and MiniMRCluster,
so that tests can be executed with the DFS and MR threads on the local machine. This feature
will eliminate the need to set up a real distributed hadoop cluster before running the unit
tests, as everything will now be carried out with the (mini) cluster on the user's local machine.
> One prerequisite for using this feature is a hadoop jar that has the class files for
MiniDFSCluster, MiniMRCluster and other supporting components. I have been able to generate
such a jar file with a special target added by myself to hadoop's build.xml and have also
logged a hadoop jira to request this target be a permanent part of that build file. If possible,
we can just replace hadoop15.jar with this jar file on the SVN source tree and then the users
will never need to worry about the availability of this jar file. Please find such a hadoop
jar file in the attachment.
> To use the feature in unit tests, the user just need to call MiniClusterBuilder.buildCluster()
before a PigServer instance is created with the string "mapreduce" as the parameter to its
constructor. Here is an example of how the MiniClusterBuilder is used in a test case class:
> public class TestWhatEver extends TestCase {
> private String initString = "mapreduce";
> private MiniClusterBuilder cluster = MiniClusterBuilder.buildCluster();
>
> @Test
> public void testGroupCountWithMultipleFields() throws Exception {
> PigServer pig = new PigServer(initString);
> // Do something with the pig server, such as registering and
executing Pig
> // queries. The queries will executed with the local cluster.
> }
>
> // More test cases if needed
> }
> To run the unit tests with the local cluster, under the top directory of the source tree,
issue the command "ant test". Notice that you do not need to specify the location of the hadoop-site.xml
file with the command line option "-Djunit.hadoop.conf=<dir>" anymore.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online. | http://mail-archives.apache.org/mod_mbox/hadoop-pig-dev/200802.mbox/%3C18012606.1201826831626.JavaMail.jira@brutus%3E | CC-MAIN-2017-39 | refinedweb | 375 | 60.75 |
Build a Todo App from Scratch with Ionic
By Josh Morony
If you’ve been reading my beginner series of tutorials then you’ve probably seen a few example applications and had a bit of a play with Ionic.
When building an app from scratch though it’s a bit more intimidating, and you’ll probably run into a few roadblocks along the way. What I wanted to do in this tutorial was to come up with a concept for an application, and then build it from scratch using Ionic’s “blank” template.
So, to that end, we will be building a todo application that will allow a user to:
- View a list of todo’s
- Add new todo’s
- View details of a specific todo (we will be setting up a Master / Detail pattern in Ionic)
- Save the todo’s to permanent storage
Before We Get Started
Last updated for Ionic 3.6.1
Before you go through this tutorial, you should have at least a basic understanding of Ionic concepts. You must also already have Ionic installed. Generate a New Ionic Project
We’re going to start off by generating a new project based on the “blank” template. This is mostly an empty skeleton project, but there is a bit of example code in there that we will get rid of.
Run the following command to generate a new Ionic application
ionic start ionic-todo blank
Once the code is generated, open up the project in your text editor. I explain the structure of an Ionic application in this post so take a look at that if you’d like some more background information about the code the Ionic CLI generates.
Basically, all of the components (our application will be made up of a bunch of different components) for our application will be in the src folder (including the root component in the app folder, and all of our page components in the pages folder). A component will consist of a template (
.html file), class definition (
.ts file) and maybe some styling (
.scss file). As well as components, you might also create things like services here (like the data service we will create later) as well, which don’t have a template or styling, but are similar in structure to a normal component. These services are also refered to as “providers” and will be placed in a providers folder.
Right now, there is just a single HomePage component that sets up a dummy view. We are going to modify this to display a list of all the todo items in our application.
I want to start off by taking a look at the src/app/app.component.ts file that is auto generated:
import { Component } from '@angular/core'; import { Platform } from 'ionic-angular'; import { HomePage } from '../pages/home/home'; @Component({ templateUrl: 'app.html' }) export class MyApp { rootPage:any = HomePage; constructor(platform: Platform) { platform.ready().then(() => { }); } }
The app.component.ts file defines our root component. This component is special when compared to the rest of the components because it is the first component that will be added to the application, and from there we can display more components, which can then add even more components and so on. Basically, our application structure is like a tree with the root component at the root of that tree.
So, it’s important that our root component knows where to find our HomePage as it will need to set it as the root page. Notice that we are importing the HomePage at the top of this file, and then setting it as the root page with the following code:
rootPage: any = HomePage;
We are able to declare variables above the constructor to make them member variables or class members, meaning they will be accessible throughout the entire class by referencing
this.myVar and it will also be available to your templates. In this case, we are creating a class member called
rootPage that we will be able to access throughout this class, and in the template. If we take a look at the associated template that is in app.html we would see:
<ion-nav [root]="rootPage"></ion-nav>
The template is accessing that class member that is stored in the TypeScript file to configure the
<ion-nav> component.
The
: any part of the variable declaration is just a TypeScript thing, which says that
rootPage can be any type. TypeScript also allows you to enforce more specific types, rather than just using
any. If you’re not comfortable with TypeScript yet and are confused by this, don’t worry about it – you can leave types out and your application will still work fine. I won’t be using types in this tutorial, except for dependency injection where it is required (which we will get to later). If you would like to know more about using types and TypeScript in Ionic, you should read Ionic: TypeScript vs ECMAScript 6.
The root page is the first page that will be displayed in your application, and then you can navigate to other pages from there. To change views in an Ionic application you can either change this root page, or push and pop views on top of that page. Pushing a view will change to that view, and popping it will remove it and go back to the previous view. For a more detailed explanation of navigation in Ionic, I recommend taking a look at A Simple Guide to Navigation in Ionic.
2. Setting up the Home Page
Now that we’ve got our basic application set up, let’s start making things happen. First, let’s set up our todo lists template.
Creating the Template
Modify src/pages/home/home.html to reflect the following:
<ion-header> <ion-navbar <ion-title> Todos! </ion-title> <ion-buttons end> <button ion-button icon-only (click)="addItem()"><ion-icon</ion-icon></button> </ion-buttons> </ion-navbar> </ion-header> <ion-content> <ion-list> <ion-item *{{item.title}}</ion-item> </ion-list> </ion-content>
Take notice of the
* syntax used here for the
*ngFor in the list, this is shorthand for creating an embedded template. So rather than literally rendering out:
<ion-item *{{item.title}}</ion-item>
to the DOM (Document Object Model), an embedded template will be created for each items specific data. So, if our items array (which will be defined later in the class definition) had 4 items in it, then the
<ion-item> would be rendered out 4 times. Also notice that we are using
let item, which assigns a single element from the
items array to
item as we loop through the array. This allows us to reference its properties, and also pass it into the
viewItem function.
We are also setting the title to Todos! and we are supplying a button using
<ion-buttons>. Since we have supplied the end attribute, the button will be placed in the “end” position. Different attributes may behave differently depending on what platform they are running on, but on iOS for example, “end” will place the button to the right of the nav bar. Also note that on the button itself we give it an attribute of ion-button which will give the button the Ionic styling, as well as icon-only which styles buttons that only contain an icon and no text.
We are using (click) to attach a click listener to this element, which will call the addItem() function in home.ts (which doesn’t exist yet).
Creating the Class Definition
Now that we have our template sorted, we need to create the functions we are referencing in our HomePage class, and we also need to supply the items data (we will just set up some dummy data initially).
Modify src/pages/home/home.ts to reflect the following:
import { Component } from '@angular/core'; import { NavController } from 'ionic-angular'; @Component({ selector: 'page-home', templateUrl: 'home.html' }) export class HomePage { public items; constructor(public navCtrl: NavController) { } ionViewDidLoad(){ this.items = [ {title: 'hi1', description: 'test1'}, {title: 'hi2', description: 'test2'}, {title: 'hi3', description: 'test3'} ]; } addItem(){ } viewItem(){ } }
Remember before how we assigned a type of
any to the
homePage variable? Now we are assigning a type of
NavController to the
navCtrl parameter in the constructor. This is how dependency injection works in Ionic, and is basically a way of telling the application “we want
navCtrl to be a reference to
NavController”. By adding the
public keyword in front of it, it automatically creates a member variable for us. This means that we can now reference the
NavController anywhere in this class by using
this.navCtrl.
Now that we’ve set up some dummy data (we use the
ionViewDidLoad lifecycle hook to do this, which is triggered as soon as the page is loaded), you should be able to see it rendered in your list:
when you run
ionic serve. Since we have imported the NavController service, we will be able to push and pop views from this component by using:
this.navCtrl.push(SOME_PAGE);
and
this.navCtrl.pop();
We’ve created functions for adding items and viewing items, but before we can take this any further we are going to have to create a AddItemPage and ItemDetailPage component.
Adding Items
We’re going to create a new component now that will allow us to add new todo’s. This will be a simple form where we supply a title and description to create a new todo.
Run the following command to generate a new add-item page:
ionic g page AddItem
IMPORTANT: When pages are generated in Ionic, something called “lazy loading” is set up by default. To keep things simple, we will not be covering that in this tutorial. To make sure you do not run into any issues, you should delete the add-item.module.ts file that is generated. The
@IonicPage() decorator should also be removed from the add-item.ts file.
Whenever we create a new page, we need to ensure that it is imported into our app.module.ts, and declared in the
declarations and
entryComponents arrays.'; @NgModule({ declarations: [ MyApp, HomePage, AddItemPage ], imports: [ BrowserModule, IonicModule.forRoot(MyApp) ], bootstrap: [IonicApp], entryComponents: [ MyApp, HomePage, AddItemPage ], providers: [{provide: ErrorHandler, useClass: IonicErrorHandler}] }) export class AppModule {}
Just like last time, let’s create our template for the component first.
Setting up the Add Item Template
Modify src/pages/add-item/add-item.html to reflect the following:
<ion-header> <ion-toolbar <ion-title> Add Item </ion-title> <ion-buttons end> <button ion-button icon-only (click)="close()"><ion-icon</ion-icon></button> </ion-buttons> </ion-toolbar> </ion-header> <ion-content> <ion-list> <ion-item> <ion-label floating>Title</ion-label> <ion-input</ion-input> </ion-item> <ion-item> <ion-label floating>Description</ion-label> <ion-input</ion-input> </ion-item> </ion-list> <button full ion-buttonSave</button> </ion-content>
There’s nothing too crazy going on here to start off with. We have another button defined, this time calling a saveItem function that we will define in add-item.ts shortly. We also have a button that references a
close function – since we will eventually launch this page as a
Modal we want the ability to dismiss the page, so we will also be defining this function in add-item.ts.
Then we have some inputs, and on them, we have [(ngModel)], which sets up two-way data binding for us. Any change we make to the title field will be immediately reflected on the this.title member variable in add-item.ts (which we will also add shortly). The same goes in reverse, any change we make to
this.title in add-item.ts will also be immediately reflected in the template.
Also notice that on our save button we use the
full attribute, this is a handy little attribute that will make the button full width for us.
Setting up the Add Item Class
Now we are going to set up the class for our Add Item component.
Modify add-item.ts to reflect the following:
import { Component } from '@angular/core'; import { NavController, ViewController } from 'ionic-angular'; @Component({ selector: 'page-add-item', templateUrl: 'add-item.html' }) export class AddItemPage { title: string; description: string; constructor(public navCtrl: NavController, public view: ViewController) { } saveItem(){ let newItem = { title: this.title, description: this.description }; this.view.dismiss(newItem); } close(){ this.view.dismiss(); } }
We’re importing a weird new service here called
ViewController which can be used with Modals to dismiss (close) them.
Apart from that, we are just creating the
saveItem function which sets up a newItem object, which will use the current values of title and description (i.e. whatever the user puts in the input fields we set up two-way data binding on). We then
dismiss the view, but we also pass in that new item to the
dismiss function. This will allow us to set up a listener back on our home page (which will launch this page) to grab that item. In this way, we can pass data from one page to another (however, keep in mind that a modal isn’t required to pass data between pages).
Saving Items in the Home Page
As I just mentioned, we are sending some data back to the HomePage component to be saved, so we are going to have to set that up. We are also going to import our new AddItemPage component into the HomePage so that we can create that view when the user clicks Add Item.
Modify src/pages/home/home.ts to reflect the following:
import { Component } from '@angular/core'; import { ModalController, NavController } from 'ionic-angular'; import { AddItemPage } from '../add-item/add-item' @Component({ selector: 'page-home', templateUrl: 'home.html' }) export class HomePage { public items = []; constructor(public navCtrl: NavController, public modalCtrl: ModalController) { } ionViewDidLoad(){ } addItem(){ let addModal = this.modalCtrl.create(AddItemPage); addModal.onDidDismiss((item) => { if(item){ this.saveItem(item); } }); addModal.present(); } saveItem(item){ this.items.push(item); } viewItem(item){ } }
If you look at the top of this file, you can see that we are now importing the AddItemPage component. This will allow us to create a Modal using that page, as we are doing in the
addItem function. Notice that we also set up the
onDidDismiss listener here, which will grab that item that is being passed back and then save it using the
saveItem function. For now we just save an item by pushing it into the
items array, but eventually we will save it into storage.
We’ve also removed the dummy data, because now when a user enters in a new item it will be added to this.items through the saveItem function. We also set
items to be an empty array initially.
Viewing Items
Now we want to add the ability to click on a specific todo list item, and then view the details of that item (i.e. the description). To do this, we are going to create another new component.
Run the following command to create an item-detail page:
ionic g page ItemDetail
and once again, we will need to set it up in our app.module.ts file:'; import { ItemDetailPage } from '../pages/item-detail/item-detail'; @NgModule({ declarations: [ MyApp, HomePage, AddItemPage, ItemDetailPage ], imports: [ BrowserModule, IonicModule.forRoot(MyApp) ], bootstrap: [IonicApp], entryComponents: [ MyApp, HomePage, AddItemPage, ItemDetailPage ], providers: [{provide: ErrorHandler, useClass: IonicErrorHandler}] }) export class AppModule {}
Now let’s set up our new components template.
Modify src/pages/item-detail/item-detail.html to reflect the following
<ion-header> <ion-navbar <ion-title> {{title}} </ion-title> </ion-navbar> </ion-header> <ion-content> <ion-card> <ion-card-content> {{description}} </ion-card-content> </ion-card> </ion-content>
Compared to the rest of the templates, this one is pretty straight forward. We’re just using the
<ion-card> directive to fancy it up a bit, and outputting the values of title and description which will be defined in item-detail.ts.
Modify src/pages/item-detail/item-detail.ts to reflect the following:
import { Component } from '@angular/core'; import { NavParams } from 'ionic-angular'; @Component({ selector: 'page-item-detail', templateUrl: 'item-detail.html' }) export class ItemDetailPage { title; description; constructor(public navParams: NavParams){ } ionViewDidLoad() { this.title = this.navParams.get('item').title; this.description = this.navParams.get('item').description; } }
When we push this page we will pass in the data of the item that was clicked, and then we just set the title and description to that of the item using NavParams.
Now all we have to do is set up the viewItem function in home.ts and import the new detail page.
Modify the viewItem function in src/pages/home/home.ts to reflect the following:
viewItem(item){ this.navCtrl.push(ItemDetailPage, { item: item }); }
Add the following line to the top of src/pages/home/home.ts:
import { ItemDetailPage } from '../item-detail/item-detail';
This will push the item detail page, and pass in the item in the list that was clicked. If you try clicking on an item in the list now, you should see something like this:
Saving Data Permanently with Storage
The todo application will basically work now, but the data isn’t being stored anywhere so as soon as you refresh the application you will lose all of your data (not ideal).
What we’re going to do now is create a service called Data that will handle storing and retrieving data for us. We will use the Storage service Ionic provides to help us do this. Storage is Ionic’s generic storage service, and it handles storing data in the best way possible whilst providing a consistent API for us to use.
This means that if you are running on a device and have the SQLite plugin installed, then it will use a native SQLite database for storage, otherwise, it will fall back to using browser based storage (which can be wiped by the operating system).
Click here to learn more about the different storage options for HTML5 mobile applications.
Run the following command to generate a Data service:
ionic g provider Data
Modify data.ts to reflect the following:
import { Storage } from '@ionic/storage'; import { Injectable } from '@angular/core'; @Injectable() export class Data { constructor(public storage: Storage){ } getData() { return this.storage.get('todos'); } save(data){ this.storage.set('todos', data); } }
This one is a little bit different to the rest of the components we have been creating (and it would be more appropriate to consider it a service). Instead of using the @Component decorator, we are instead declaring this class as an @Injectable.
In our constructor, we set up a reference to the Storage service.
Our save function simply takes in the array of all of the items and saves it to storage, whenever the items change we will call this function.
We will also need to set up the Storage service (which requires us to include a new
import in the @NgModule), as well as the Data provider, in our app.module.ts file.
Modify src/app/app.module.ts to reflect the following:
import { BrowserModule } from '@angular/platform-browser'; import { NgModule, ErrorHandler } from '@angular/core'; import { IonicApp, IonicModule, IonicErrorHandler } from 'ionic-angular'; import { IonicStorageModule } from '@ionic/storage'; import { MyApp } from './app.component'; import { HomePage } from '../pages/home/home'; import { AddItemPage } from '../pages/add-item/add-item'; import { ItemDetailPage } from '../pages/item-detail/item-detail'; import { Data } from '../providers/data'; @NgModule({ declarations: [ MyApp, HomePage, AddItemPage, ItemDetailPage ], imports: [ BrowserModule, IonicModule.forRoot(MyApp), IonicStorageModule.forRoot() ], bootstrap: [IonicApp], entryComponents: [ MyApp, HomePage, AddItemPage, ItemDetailPage ], providers: [Data, {provide: ErrorHandler, useClass: IonicErrorHandler}] }) export class AppModule {}
Notice that we have declared these services in the
providers array, rather than in the
declarations or
entryComponents arrays.
Now we need to update home.ts to make use of this new service.
Modify src/pages/home/home.ts to reflect the following:
import { Component } from '@angular/core'; import { ModalController, NavController } from 'ionic-angular'; import { AddItemPage } from '../add-item/add-item' import { ItemDetailPage } from '../item-detail/item-detail'; import { Data } from '../../providers/data'; @Component({ selector: 'page-home', templateUrl: 'home.html' }) export class HomePage { public items = []; constructor(public navCtrl: NavController, public modalCtrl: ModalController, public dataService: Data) { this.dataService.getData().then((todos) => { if(todos){ this.items = todos; } }); } ionViewDidLoad(){ } addItem(){ let addModal = this.modalCtrl.create(AddItemPage); addModal.onDidDismiss((item) => { if(item){ this.saveItem(item); } }); addModal.present(); } saveItem(item){ this.items.push(item); this.dataService.save(this.items); } viewItem(item){ this.navCtrl.push(ItemDetailPage, { item: item }); } }
So here’s our final bit of code. Again, we’re importing the data service and passing it through to our constructor. We are still setting the items to be empty to start off with, and fetching the data using the data service. If there are any items returned it will set items to that, but if there is not it will just set it to an empty array again.
It’s important to note here that getData returns a promise not the data itself. Fetching data from storage is asynchronous which means our application will continue to run while the data loads. A promise allows us to perform some action whenever that data has finished loading, without having to pause the whole application.
For more information on promises, check out Andrew McGivery’s post on the subject (keep in mind though that the promise syntax is different in Ionic).
Finally, we also add a call to the save function in the data service when a new item is being added. So now this function will update our items array with the new data straight away, but it will also save a copy to the data service so that it is available next time we come back to the application.
Summary
In this tutorial, we have covered how to implement a lot of common functionality in an Ionic app like:
- Creating views
- Listening for and handling events
- Navigating between views
- Passing data between views
- Setting up two-way data binding
- Saving data
Obviously, there’s a lot more we could do with this application like making it prettier, adding the ability to delete and edit notes and so on. However, for now, hopefully that should have given you a pretty decent introduction to creating apps with Ionic. | https://www.joshmorony.com/build-a-todo-app-from-scratch-with-ionic-2-video-tutorial/ | CC-MAIN-2020-10 | refinedweb | 3,687 | 53 |
From: Craig Rodrigues (rodrigc_at_[hidden])
Date: 2005-05-07 07:14:04
On Sat, May 07, 2005 at 01:42:14AM -0400, Phillip Seaver wrote:
> I haven't looked at the CVS version (I'm using milestone 10 right now),
> but I used the darwin toolset instead of the gcc toolset ("using darwin
> ;" in user-config.jam). It uses gcc, but has OS X specific options,
> like -dynamic instead of -shared for <link>shared builds.
Oh cool, I'll try that out.
This is what I have in user-config.jam:
import toolset : using ;
# GCC configuration
# Configure gcc (default version)
using gcc ;
Is it possible to modify my user-config.jam
to detect what operating system I am on, and include the
correct toolset? The reason I ask is because
I have a single Boost Build tree (from CVS), on an NFS exported
directory, and I have
multiple bjam binaries. In my .bashrc, I set
the PATH to pick up the correct bjam depending on my OS.
The BOOST_ROOT is the same for me on whatever platform I am on.
If I could get my user-config to use the darwin toolset
on MacOS X, that would allow me to test BBv2 quickly
on multiple platforms from a single Boost source tree.
Thanks!
-- | https://lists.boost.org/boost-build/2005/05/9711.php | CC-MAIN-2021-10 | refinedweb | 213 | 68.6 |
All of that seems rather simple. However, I have a big problem: say my website is about shoes. I have three categories in the top navigation: Sports shoes, Cool Shoes, Women's shoes. If the user is on the page about "sports shoes", I want that part in the top navigation to be highlighted; if he's on the page about "cool shoes", that part in the top navigation should be highlighted, etc.
In order to get that done (if it was only 3 pages), I would normally create three different pages and the navigation code (the HTML) would look a bit like this:
"sports shoes"-page:
...
<li class="selected"><a>sports shoes</a></li>
<li><a>cool shoes</a></li>
<li><a>women's shoes</a></li>
...
for the "cool shoes"-page like this:
...
<li><a>sports shoes</a></li>
<li class="selected"><a>cool shoes</a></li>
<li><a>women's shoes</a></li>
...
and for the women's shoes page... well, you can imagine. However, if I want to do that I have to create a separate file for each one, which I'm trying to avoid by using SSI (not a problem with 3 navigation buttons, but if you have a top navigation with 10 and a sub-navigation with some more, I'd prefer to avoid this).
Is there any possibility to get this done with mostly HTML and CSS knowledge? (I'm trying to make some progress with the site, and if I have to learn JavaScript for another dozen hours or more I'd probably forget about the highlighting effect for now.) I'd be willing to look up a few things to get this done, but it's probably not worth having to learn another programming language for me right now.
Is there the possibility to use some code like this in the separate file (which contains the html code for the navigation):
...
<li class="selected1"><a>sports shoes</a></li>
<li class="selected2"><a>cool shoes</a></li>
<li class="selected3"><a>women's shoes</a></li>
...
and then use a little bit of dynamic code on each page (right before I use SSI to include the navigation) so the browser understands which part of the navigation has to be highlighted?
If something like that is possible without learning a lot of JavaScript or something (I understand that the above is probably not possible, but hopefully you understand my train of thought? :)), I'm willing to spend a bit of time - say 5 hours max. ;) But if it takes longer than that, I should probably postpone it for now (for now my main goal is launching the site).
THANKS A LOT! I really appreciate your help... If such a forum had existed back when I was programming computer games in elementary school, I might have never stopped ;)
You would look at the URL of the page and perhaps iterate through each of your navigation URLs, and highlight if it is the current URL.
<!--#include virtual="/my-menu.cgi" -->
. . . and output the links based on the referrer. But that's probably just a comfort zone on my part. :-)
The right way to do it would probably be to use the limited but powerful functions available to SSI. I have not tested the below, but it should be a fun project.
First, you include another server side include:
<!--#include virtual="/ssi/menu-parser.shtml" -->
Then in menu-parser.shtml, you do a bit of trickery.
SSI has access to the same environment variables as any server-side program. One of those is DOCUMENT_URI, the URI to the document. You could combine that with the if construct and expr function in SSI to do this:
<!--#if expr='"$DOCUMENT_URI" = "/file1.shtml"' -->
<a href="file1.shtml" class="highlight-class">File 1</a>
<!--#else -->
<a href="file1.shtml">File 1</a>
<!--#endif -->

<!--#if expr='"$DOCUMENT_URI" = "/file2.shtml"' -->
<a href="file2.shtml" class="highlight-class">File 2</a>
<!--#else -->
<a href="file2.shtml">File 2</a>
<!--#endif -->
If the first matches, it displays the version with the class, otherwise, it displays the regular link.
You can see this sample here [httpd.apache.org] (under Variable Substitution) at the Apache SSI docs, which actually start here [httpd.apache.org].
Haven't tested it but it should work, you may have to run a script or two to see if the document URI is a full path or relative.
Most folks use a hybrid of CSS and an id or class on the body element to change the style on the page which causes HTML and CSS bloat when the server can handle the update on the fly quite nicely without having to update code every time you add/remove pages from your site.
Basically what you end up doing is using the same navigation for everything, it remains "static", so-to-speak. You buffer the navigation by storing it in a string variable for manipulation by your server-side processing. The server compares any links found in the string with the current link being requested and applies your changes on-the-fly.
You can have your SSI call a program, rather than a static file. Then your program returns the html relevant for the page it's delivering to. For example, your SSI call could be:
<!--#include virtual="include.cgi"-->
Then your Perl program looks at what page is calling the script, and returns the HTML appropriate for that page.
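For what it's worth, here is a rough sketch of such a menu-generating script in Python rather than Perl. The file names and the "selected" class are placeholders; the idea is just to compare each nav link against the requested URI and mark the matching one.

```python
import os

# Hypothetical nav links for the shoe site in the original question.
LINKS = [
    ("/sports-shoes.shtml", "sports shoes"),
    ("/cool-shoes.shtml", "cool shoes"),
    ("/womens-shoes.shtml", "women's shoes"),
]

def render_menu(current_uri):
    """Return the <li> items, highlighting the one matching the current page."""
    items = []
    for href, label in LINKS:
        cls = ' class="selected"' if href == current_uri else ""
        items.append('<li%s><a href="%s">%s</a></li>' % (cls, href, label))
    return "\n".join(items)

if __name__ == "__main__":
    # Apache sets DOCUMENT_URI for the page doing the include.
    print("Content-type: text/html\n")
    print(render_menu(os.environ.get("DOCUMENT_URI", "")))
```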
Anyway thanks for your efforts! | http://www.webmasterworld.com/website_technology/3763700.htm | CC-MAIN-2014-15 | refinedweb | 937 | 65.42 |
"Heresy grows from idleness." -- Unknown.
Introduction
The Nim Compiler User Guide documents the typical compiler invocation, using the compile or c command to transform a .nim file into one or more .c files which are then compiled with the platform's C compiler into a static binary. However there are other commands to compile to C++, Objective-C or JavaScript. This document tries to concentrate in a single place all the backend and interfacing options.
The Nim compiler supports mainly two backend families: the C, C++ and Objective-C targets and the JavaScript target. The C like targets creates source files which can be compiled into a library or a final executable. The JavaScript target can generate a .js file which you reference from an HTML file or create a standalone nodejs program.
On top of generating libraries or standalone applications, Nim offers bidirectional interfacing with the backend targets through generic and specific pragmas.
Backends
The C like targets
The commands to compile to either C, C++ or Objective-C are:

- c: compile to C
- cpp: compile to C++
- objc: compile to Objective-C
The most significant difference between these commands is that if you look into the nimcache directory you will find .c, .cpp or .m files, other than that all of them will produce a native binary for your project. This allows you to take the generated code and place it directly into a project using any of these languages. Here are some typical command line invocations:
$ nim c hallo.nim $ nim cpp hallo.nim $ nim objc hallo.nim
The compiler commands select the target backend, but if needed you can specify additional switches for cross compilation to select the target CPU, operative system or compiler/linker commands.
The JavaScript target
Nim can also generate JavaScript code through the js command.
Nim targets JavaScript 1.5 which is supported by any widely used browser. Since JavaScript does not have a portable means to include another module, Nim just generates a long .js file.
Features or modules that the JavaScript platform does not support are not available. This includes:
- manual memory management (alloc, etc.)
- casting and other unsafe operations (cast operator, zeroMem, etc.)
- file management
- most modules of the standard library
- proper 64 bit integer arithmetic
- unsigned integer arithmetic
However, the modules strutils, math, and times are available! To access the DOM, use the dom module that is only available for the JavaScript platform.
To compile a Nim module into a .js file use the js command; the default is a .js file that is supposed to be referenced in an .html file. However, you can also run the code with nodejs:
nim js -d:nodejs -r examples/hallo.nim
Interfacing
Nim offers bidirectional interfacing with the target backend. This means that you can call backend code from Nim and Nim code can be called by the backend code. Usually the direction of which calls which depends on your software architecture (is Nim your main program or is Nim providing a component?).
Nim code calling the backend
Nim code can interface with the backend through the Foreign function interface mainly through the importc pragma. The importc pragma is the generic way of making backend symbols available in Nim and is available in all the target backends (JavaScript too). The C++ or Objective-C backends have their respective ImportCpp and ImportObjC pragmas to call methods from classes.
Whenever you use any of these pragmas you need to integrate native code into your final binary. In the case of JavaScript this is no problem at all, the same html file which hosts the generated JavaScript will likely provide other JavaScript functions which you are importing with importc.
However, for the C like targets you need to link external code either statically or dynamically. The preferred way of integrating native code is to use dynamic linking because it allows you to compile Nim programs without the need for having the related development libraries installed. This is done through the dynlib pragma for import, though more specific control can be gained using the dynlib module.
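As a brief sketch of what such a dynamic import can look like, here is a wrapper for zlib's zlibVersion function. The library file name is platform dependent (libz.so.1 is typical on Linux; adjust for your system):

```nim
# Import zlib's zlibVersion() at runtime from the shared library.
proc zlibVersion(): cstring {.cdecl, importc, dynlib: "libz.so.1".}

echo zlibVersion()
```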
The dynlibOverride command line switch allows to avoid dynamic linking if you need to statically link something instead. Nim wrappers designed to statically link source files can use the compile pragma if there are few sources or providing them along the Nim code is easier than using a system library. Libraries installed on the host system can be linked in with the PassL pragma.
To wrap native code, take a look at the c2nim tool which helps with the process of scanning and transforming header files into a Nim interface.
C invocation example
Create a logic.c file with the following content:
int addTwoIntegers(int a, int b) { return a + b; }
Create a calculator.nim file with the following content:
{.compile: "logic.c".} proc addTwoIntegers(a, b: cint): cint {.importc.} when isMainModule: echo addTwoIntegers(3, 7)
With these two files in place, you can run nim c -r calculator.nim and the Nim compiler will compile the logic.c file in addition to calculator.nim and link both into an executable, which outputs 10 when run. Another way to link the C file statically and get the same effect would be remove the line with the compile pragma and run the following typical Unix commands:
$ gcc -c logic.c $ ar rvs mylib.a logic.o $ nim c --passL:mylib.a -r calculator.nim
Just like in this example we pass the path to the mylib.a library (and we could as well pass logic.o) we could be passing switches to link any other static C library.
JavaScript invocation example
Create a host.html file with the following content:
<html><body> <script type="text/javascript"> function addTwoIntegers(a, b) { return a + b; } </script> <script type="text/javascript" src="calculator.js"></script> </body></html>
Create a calculator.nim file with the following content (or reuse the one from the previous section):
proc addTwoIntegers(a, b: int): int {.importc.} when isMainModule: echo addTwoIntegers(3, 7)
Compile the Nim code to JavaScript with nim js -o:calculator.js calculator.nim and open host.html in a browser. If the browser supports javascript, you should see the value 10 in the browser's console. Use the dom module for specific DOM querying and modification procs or take a look at karax for how to develop browser based applications.
Backend code calling Nim
Backend code can interface with Nim code exposed through the exportc pragma. The exportc pragma is the generic way of making Nim symbols available to the backends. By default the Nim compiler will mangle all the Nim symbols to avoid any name collision, so the most significant thing the exportc pragma does is maintain the Nim symbol name, or if specified, use an alternative symbol for the backend in case the symbol rules don't match.
The JavaScript target doesn't have any further interfacing considerations since it also has garbage collection, but the C targets require you to initialize Nim's internals, which is done calling a NimMain function. Also, C code requires you to specify a forward declaration for functions or the compiler will assume certain types for the return value and parameters which will likely make your program crash at runtime.
The Nim compiler can generate a C interface header through the --header command line switch. The generated header will contain all the exported symbols and the NimMain proc which you need to call before any other Nim code.
Nim invocation example from C
Create a fib.nim file with the following content:
proc fib(a: cint): cint {.exportc.} = if a <= 2: result = 1 else: result = fib(a - 1) + fib(a - 2)
Create a maths.c file with the following content:
#include "fib.h" #include <stdio.h> int main(void) { NimMain(); for (int f = 0; f < 10; f++) printf("Fib of %d is %d\n", f, fib(f)); return 0; }
Now you can run the following Unix like commands to first generate C sources from the Nim code, then link them into a static binary along with your main C program:
$ nim c --noMain --noLinking --header:fib.h fib.nim $ gcc -o m -I$HOME/.cache/nim/fib_d -Ipath/to/nim/lib $HOME/.cache/nim/fib_d/*.c maths.c
The first command runs the Nim compiler with three special options to avoid generating a main() function in the generated files, avoid linking the object files into a final binary, and explicitly generate a header file for C integration. All the generated files are placed into the nimcache directory. That's why the next command compiles the maths.c source plus all the .c files from nimcache. In addition to this path, you also have to tell the C compiler where to find Nim's nimbase.h header file.
Instead of depending on the generation of the individual .c files you can also ask the Nim compiler to generate a statically linked library:
$ nim c --app:staticLib --noMain --header fib.nim $ gcc -o m -Inimcache -Ipath/to/nim/lib libfib.nim.a maths.c
The Nim compiler will handle linking the source files generated in the nimcache directory into the libfib.nim.a static library, which you can then link into your C program. Note that these commands are generic and will vary for each system. For instance, on Linux systems you will likely need to use -ldl too to link in required dlopen functionality.
Nim invocation example from JavaScript
Create a mhost.html file with the following content:
<html><body> <script type="text/javascript" src="fib.js"></script> <script type="text/javascript"> alert("Fib for 9 is " + fib(9)); </script> </body></html>
Create a fib.nim file with the following content (or reuse the one from the previous section):
proc fib(a: cint): cint {.exportc.} = if a <= 2: result = 1 else: result = fib(a - 1) + fib(a - 2)
Compile the Nim code to JavaScript with nim js -o:fib.js fib.nim and open mhost.html in a browser. If the browser supports javascript, you should see an alert box displaying the text Fib for 9 is 34. As mentioned earlier, JavaScript doesn't require an initialisation call to NimMain or similar function and you can call the exported Nim proc directly.
Nimcache naming logic
The nimcache directory is generated during compilation and will hold either temporary or final files depending on your backend target. The default name for the directory depends on the used backend and on your OS but you can use the --nimcache compiler switch to change it.
Memory management
In the previous sections the NimMain() function reared its head. Since JavaScript already provides automatic memory management, you can freely pass objects between the two languages without problems. In C and derivative languages you need to be careful about what you do and how you share memory. The previous examples only dealt with simple scalar values, but passing a Nim string to C, or reading back a C string in Nim already requires you to be aware of who controls what to avoid crashing.
Strings and C strings
The manual mentions that Nim strings are implicitly convertible to cstrings which makes interaction usually painless. Most C functions accepting a Nim string converted to a cstring will likely not need to keep this string around and by the time they return the string won't be needed any more. However, for the rare cases where a Nim string has to be preserved and made available to the C backend as a cstring, you will need to manually prevent the string data from being freed with GC_ref and GC_unref.
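A sketch of what such pinning could look like (storeForLater is a hypothetical imported C function that keeps the pointer around):

```nim
var keepString = "I must survive the C call"
GC_ref(keepString)                 # tell the GC this string is still in use
storeForLater(keepString.cstring)  # hypothetical C call that stores the pointer
# ... later, once the C side no longer needs it:
GC_unref(keepString)
```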
A similar thing happens with C code invoking Nim code which returns a cstring. Consider the following proc:
proc gimme(): cstring {.exportc.} = result = "Hey there C code! " & $random(100)
Since Nim's garbage collector is not aware of the C code, once the gimme proc has finished it can reclaim the memory of the cstring. However, from a practical standpoint, the C code invoking the gimme function directly will be able to use it since Nim's garbage collector has not had a chance to run yet. This gives you enough time to make a copy for the C side of the program, as calling any further Nim procs might trigger garbage collection making the previously returned string garbage. Or maybe you are yourself triggering the collection.
Custom data types
Just like strings, custom data types that are to be shared between Nim and the backend will need careful consideration of who controls who. If you want to hand a Nim reference to C code, you will need to use GC_ref to mark the reference as used, so it does not get freed. And for the C backend you will need to expose the GC_unref proc to clean up this memory when it is not required any more.
Again, if you are wrapping a library which mallocs and frees data structures, you need to expose the appropriate free function to Nim so you can clean it up. And of course, once cleaned you should avoid accessing it from Nim (or C for that matter). Typically C data structures have their own malloc_structure and free_structure specific functions, so wrapping these for the Nim side should be enough.
Thread coordination
When the NimMain() function is called Nim initializes the garbage collector to the current thread, which is usually the main thread of your application. If your C code later spawns a different thread and calls Nim code, the garbage collector will fail to work properly and you will crash.
As long as you don't use the threadvar emulation Nim uses native thread variables, of which you get a fresh version whenever you create a thread. You can then attach a GC to this thread via
system.setupForeignThreadGc()
It is not safe to disable the garbage collector and enable it after the call from your background thread even if the code you are calling is short lived.
Before the thread exits, you should tear down the thread's GC to prevent memory leaks by calling
system.tearDownForeignThreadGc() | https://nim-lang.org/1.0.0/backends.html | CC-MAIN-2021-39 | refinedweb | 2,344 | 63.39 |
Would be great to have a basic template for a new file: a new empty FXML with its respective controller,
like NetBeans does:

Test.fxml
TestController.java
Hi Sebastian,
it's planned to add "create .fxml from usage": suppose you have an include tag referencing a non-existing .fxml, or Java code which references an unresolved .fxml => the intention would suggest creating the corresponding .fxml. "Create controller from usage" in .fxml already exists. If you start from a JavaFX Application you get these files generated. Too many templates means too many actions to choose from; I am afraid that most users won't find the action and it would still pollute the menu. So my question is: would you need this generation if "create from usage" were complete?
Thanks
I think it is really important to have. Just as we have File->New (Java File, HTML, ...), we should have a new FXML option, for speed purposes when making JavaFX apps. If you look at NetBeans, you add a new FXML and it gives you a basic template of an FXML file with the controller that corresponds to that FXML. We've been doing a lot of JavaFX and this is really a MUST for us: quick access to create a new FXML with its respective "code behind".
Example:
New-> FXML and I put the name FXML.fxml
it creates
FXML.fxml
<?xml version="1.0" encoding="UTF-8"?>
<?import java.lang.*?>
<?import java.util.*?>
<?import javafx.scene.*?>
<?import javafx.scene.control.*?>
<?import javafx.scene.layout.*?>
<AnchorPane id="AnchorPane" prefHeight="400.0" prefWidth="600.0" xmlns:fx="http://javafx.com/fxml" fx:controller="FXMLController">
</AnchorPane>
and
FXMLController.java
import java.net.URL;
import java.util.ResourceBundle;
import javafx.fxml.Initializable;

/**
 * FXML Controller class
 *
 * @author Sebastian
 */
public class FXMLController implements Initializable {

    /**
     * Initializes the controller class.
     */
    @Override
    public void initialize(URL url, ResourceBundle rb) {
        // TODO
    }
}
and that's it. With this we have a very basic but usable start for creating JavaFX apps.
You may set up your own template in File|Settings|File and Code Templates => a new action to create .fxml would appear in the New menu. You may use $Name$ inside the template (to compose e.g. the controller name); it would be substituted with the inserted .fxml name. Then alt + enter on the non-existing class would give you a sample controller.
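For example, a possible template body using the $Name$ macro mentioned above (the exact contents are just a starting point; adjust to taste):

```
<?xml version="1.0" encoding="UTF-8"?>

<?import java.lang.*?>
<?import javafx.scene.control.*?>
<?import javafx.scene.layout.*?>

<AnchorPane prefHeight="400.0" prefWidth="600.0"
            xmlns:fx="http://javafx.com/fxml"
            fx:controller="$Name$Controller">
</AnchorPane>
```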
You may create an issue in YouTrack and we'll see how many votes it would get.
ok assigned new case | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206876465-new-FXML-file-for-JavaFX-with-the-corresponding-controller?page=1 | CC-MAIN-2020-45 | refinedweb | 412 | 65.83 |
const App = () => <h1>{`Time: ${Date.now()}`}</h1>

render(<App/>, document.getElementById('app'))
See the Pen "Your First React App" by @jh3y.
This is pretty much all you need to create your first React app (besides the HTML) to get started. But, we could make this smaller, like so:
render(<h1>{`Time: ${Date.now()}`}</h1>, document.getElementById('app'))
In the first version, App is a component, but this example tells React DOM to render an element instead of a component. Elements are the HTML elements we see in both examples. What makes a component is a function returning those elements.
Before we get started with components, what’s the deal with this “HTML in JS”?
JSX
That “HTML in JS” is JSX. You can read all about JSX in the React documentation. The gist? A syntax extension to JavaScript that allows us to write HTML in JavaScript. It’s like a templating language with full access to JavaScript powers. It’s actually an abstraction on an underlying API. Why do we use it? For most, it’s easier to follow and comprehend than the equivalent JavaScript.
React.createElement('h1', null, `Time: ${Date.now()}`)
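To make that idea concrete, here is a toy sketch (not React's actual implementation) of what a createElement-style function boils down to: a plain object describing what to render.

```javascript
// Toy createElement: returns a plain "element" object.
function createElement(type, props, ...children) {
  return { type, props: props || {}, children }
}

const el = createElement('h1', null, `Time: ${Date.now()}`)

console.log(el.type)            // 'h1'
console.log(el.children.length) // 1
```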
The thing to take on board with JSX is that this is how you put things in the DOM 99% of the time with React. And it’s also how we bind event handling a lot of the time. That other 1% is a little out of scope for this article. But, sometimes we want to render elements outside the realms of our React application. We can do this using React DOM’s Portal. We can also get direct access to the DOM within the component lifecycle (coming up).
Attributes in JSX are camelCase. For example, onclick becomes onClick. There are some special cases such as class which becomes className. Also, attributes such as style now accept an Object instead of a string.
<div className="awesome-class" style={{ color: 'red' }}>Cool</div>
Note: You can check out all the differences in attributes here.
Rendering
How do we get our JSX into the DOM? We need to inject it. In most cases, our apps have a single point of entry. And if we are using React, we use React DOM to insert an element/component at that point. You could use JSX without React though. As we mentioned, it’s a syntax extension. You could change how JSX gets interpreted by Babel and have it pump out something different.
Everything within the element we render into becomes managed by React. This can yield certain performance benefits when we are modifying the DOM a lot. This is because React makes use of a Virtual DOM. Making DOM updates isn’t slow by any means. But, it’s the impact it has within the browser that can impact performance. Each time we update the DOM, browsers need to calculate the rendering changes that need to take place. That can be expensive. Using the Virtual DOM, these DOM updates get kept in memory and synced with the browser DOM in batches when required.
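The idea behind that syncing can be sketched with a toy diff over plain element objects. This is only an illustration of the concept, not React's real reconciliation algorithm:

```javascript
// Compare two "virtual" elements and report the minimal change needed.
function diff(oldEl, newEl) {
  if (!oldEl) return { type: 'CREATE', node: newEl }
  if (!newEl) return { type: 'REMOVE' }
  if (oldEl.type !== newEl.type) return { type: 'REPLACE', node: newEl }
  if (oldEl.text !== newEl.text) return { type: 'TEXT', text: newEl.text }
  return { type: 'NONE' } // nothing to touch in the browser DOM
}

const before = { type: 'h1', text: 'Time: 1' }
const after  = { type: 'h1', text: 'Time: 2' }

console.log(diff(before, after).type)  // 'TEXT'
console.log(diff(before, before).type) // 'NONE'
```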
There’s nothing to stop us from having many apps on a page or having only part of a page managed by React.
See the Pen "Two Apps" by @jh3y.
Take this example. The same app is rendered twice between some regular HTML. Our React app renders the current time using Date.now.
const App = () => <h1>{`Time: ${Date.now()}`}</h1>
For this example, we’re rendering the app twice between some regular HTML. We should see the title “Many React Apps” followed by some text, then the first rendering of our app appears followed by some text, and then the second rendering of our app.
For a deeper dive into rendering, check out the docs.
Components && Props
This is one of the biggest parts of React to grok. Components are reusable blocks of UI. But underneath, it’s all functions. Components are functions whose arguments we refer to as props. And we can use those “props” to determine what a component should render. Props are “read-only” and you can pass anything in a prop. Even other components. Anything within the tags of a component we access via a special prop, children.

Components are functions that return elements. If we don’t want to show anything, return null.
We can write components in a variety of ways. But, it’s all the same result.
Use a function:
function App() { return <h1>{`Time: ${Date.now()}`}</h1> }
Use a class:
class App extends React.Component { render() { return <h1>{`Time: ${Date.now()}`}</h1> } }
Before the release of hooks (coming up), we used class-based components a lot. We needed them for state and accessing the component API. But, with hooks, the use of class-based components has petered out a bit. In general, we always opt for function-based components now. This has various benefits. For one, it requires less code to achieve the same result. Hooks also make it easier to share and reuse logic between components. Also, classes can be confusing. They need the developer to have an understanding of bindings and context.
We’ll be using function-based components, and you’ll notice we used a different style for our App component.
const App = () => <h1>{`Time: ${Date.now()}`}</h1>
That’s valid. The main thing is that our component returns what we want to render. In this case, a single element that is an h1 displaying the current time. If we don’t need to write return, etc. then don’t. But, it’s all preference. And different projects may adopt different styles.
What if we updated our multi-app example to accept props and we extracted the h1 as a component?
const Message = ({ message }) => <h1>{message}</h1>

const App = ({ message }) => <Message message={message} />

render(<App message={`Time: ${Date.now()}`}/>, document.getElementById('app'))
See the Pen "Our First Component Extraction" by @jh3y.
That works and now we can change the message prop on App and we’d get different messages rendered. We could’ve made the component Time. But, creating a Message component implies many opportunities to reuse our component. This is the biggest thing about React. It’s about making decisions around architecture/design.
What if we forget to pass the prop to our component? We could provide a default value. Some ways we could do that.
const Message = ({message = "You forgot me!"}) => <h1>{message}</h1>
Or by specifying defaultProps on our component. We can also provide propTypes which is something I’d recommend having a look at. It provides a way to type check props on our components.
Message.defaultProps = { message: "You forgot me!" }
We can access props in different ways. We’ve used ES6 conveniences to destructure props. But, our Message component could also look like this and work the same.
const Message = (props) => <h1>{props.message}</h1>
Props are an object passed to the component. We can read them any way we like.
Our App component could even be this:
const App = (props) => <Message {...props}/>
It would yield the same result. We refer to this as “Prop spreading”. It’s better to be explicit with what we pass through though.
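Prop spreading is just JavaScript's object spread operator at work; a quick sketch of the mechanics outside React:

```javascript
const props = { message: 'hi', level: 2 }

// Copies every key/value pair; later keys win on collision.
const withOverride = { ...props, level: 3 }

console.log(withOverride.message) // 'hi'
console.log(withOverride.level)   // 3
```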
We could also pass the message as a child.
const Message = ({ children }) => <h1>{children}</h1>

const App = ({ message }) => <Message>{message}</Message>
Then we refer to the message via the special children prop.
How about taking it further and doing something like having our App pass a message to a component that is also a prop?
const Time = ({ children }) => <h1>{`Time: ${children}`}</h1>

const App = ({ message, messageRenderer: Renderer }) => <Renderer>{message}</Renderer>

render(<App message={`${Date.now()}`} messageRenderer={Time} />, document.getElementById('app'))
See the Pen "Passing Components as Props" by @jh3y.
In this example, we create two apps and one renders the time and another a message. Note how we rename the messageRenderer prop to Renderer in the destructure? React won’t see anything starting with a lowercase letter as a component. That’s because anything starting in lowercase is seen as an element. It would render it as <messageRenderer>. We’ll rarely use this pattern but it’s a way to show how anything can be a prop and you can do what you want with it.
One thing to make clear is that anything passed as a prop needs processing by the component. For example, want to pass styles to a component, you need to read them and apply them to whatever is being rendered.
Don’t be afraid to experiment with different things. Try different patterns and practice. The skill of determining what should be a component comes through practice. In some cases, it’s obvious, and in others, you might realize it later and refactor.
A common example would be the layout for an application. Think at a high level what that might look like. A layout with children that comprises of a header, footer, some main content. How might that look? It could look like this.
const Layout = ({ children }) => ( <div className="layout"> <Header/> <main>{children}</main> <Footer/> </div> )
It’s all about building blocks. Think of it like LEGO for apps.
In fact, one thing I’d advocate is getting familiar with Storybook as soon as possible (I’ll create content on this if people would like to see it). Component-driven development isn’t unique to React, we see it in other frameworks too. Shifting your mindset to think this way will help a lot.
Making Changes
Up until now, we’ve only dealt with static rendering. Nothing changes. The biggest thing to take on board for learning React is how React works. We need to understand that components can have state. And we must understand and respect that state drives everything. Our elements react to state changes. And React will only re-render where necessary.
Data flow is unidirectional too. Like a waterfall, state changes flow down the UI hierarchy. Components don’t care about where the data comes from. For example, a component may want to pass state to a child through props. And that change may trigger an update to the child component. Or, components may choose to manage their own internal state which isn’t shared.
These are all design decisions that get easier the more you work with React. The main thing to remember is how unidirectional this flow is. To trigger changes higher up, it either needs to happen via events or some other means passed by props.
Let’s create an example.
import React, { useEffect, useRef, useState } from '' import { render } from '' const Time = () => { const [time, setTime] = useState(Date.now()) const timer = useRef(null) useEffect(() => { timer.current = setInterval(() => setTime(Date.now()), 1000) return () => clearInterval(timer.current) }, []) return <h1>{`Time: ${time}`}</h1> } const App = () => <Time/> render(<App/>, document.getElementById('app'))
See the Pen [An Updating Timer]() by @jh3y.
There is a fair bit to digest there. But, here we introduce the use of “Hooks”. We are using “useEffect”, “useRef”, and “useState”. These are utility functions that give us access to the component API.
If you check the example, the time is updating every second or
1000ms. And that’s driven by the fact we update the
time which is a piece of state. We are doing this within a
setInterval. Note how we don’t change
time directly. State variables are treated as immutable. We do it through the
setTime method we receive from invoking
useState. Every time the state updates, our component re-renders if that state is part of the render.
useState always returns a state variable and a way to update that piece of state. The argument passed is the initial value for that piece of state.
We use
useEffect to hook into the component lifecycle for events such as state changes. Components mount when they’re inserted into the DOM. And they get unmounted when they’re removed from the DOM. To hook into these lifecycle stages, we use effects. And we can return a function within that effect that will fire when the component gets unmounted. The second parameter of
useEffect determines when the effect should run. We refer to it as the dependency array. Any listed items that change will trigger the effect to run. No second parameter means the effect will run on every render. And an empty array means the effect will only run on the first render. This array will usually contain state variables or props.
We are using an effect to both set up and tear down our timer when the component mounts and unmounts.
We use a
ref to reference that timer. A
ref provides a way to keep a reference to things that don’t trigger rendering. We don’t need to use state for the timer. It doesn’t affect rendering. But, we need to keep a reference to it so we can clear it on unmount.
Want to dig into hooks a bit before moving on? I wrote an article before about them – “React Hooks in 5 Minutes”. And there’s also great info in the React docs.
Our
Time component has its own internal state that triggers renders. But, what if we wanted to change the interval length? We could manage that from above in our
App component.
const App = () => { const [interval, updateInterval] = useState(1000) return ( <Fragment> <Time interval={interval} /> <h2>{`Interval: ${interval}`}</h2> <input type="range" min="1" value={interval} max="10000" onChange={e => updateInterval(e.target.value)}/> </Fragment> ) }
Our new
interval value is being stored in the state of
App. And it dictates the rate at which the
Time component updates.
The
Fragment component is a special component we have access to through
React. In
React, a component must return a single child or
null. We can’t return adjacent elements. But, sometimes we don’t want to wrap our content in a
div.
Fragments allow us to avoid wrapper elements whilst keeping React happy.
You’ll also notice our first event bind happening there. We use
onChange as an attribute of the
input to update the
interval.
The updated
interval is then passed to
Time and the change of
interval triggers our effect to run. This is because the second parameter of our
useEffect hook now contains
interval.
const Time = ({ interval }) => { const [time, setTime] = useState(Date.now()) const timer = useRef(null) useEffect(() => { timer.current = setInterval(() => setTime(Date.now()), interval) return () => clearInterval(timer.current) }, [interval]) return <h1>{`Time: ${time}`}</h1> }
Have a play with the demo and see the changes!
See the Pen [Managed Interval]() by @jh3y.
I recommend visiting the React documentation if you want to dig into some of these concepts more. But, we’ve seen enough React to get started making something fun! Let’s do it!
Whac-A-Mole React Game
Are you ready? We’ll be creating our very own “Whac-A-Mole” with React! This well-known game is basic in theory but throws up some interesting challenges to build. The important part here is how we’re using React. I’ll gloss over applying styles and making it pretty — that’s your job! Although, I’m happy to take any questions on that.
Also, this game will not be “polished”. But, it works. You can go and make it your own! Add your own features, and so on.
Design
Let’s start by thinking about what we’ve got to make, i.e. what components we may need, and so on:
- Start/Stop Game
- Timer
- Keeping Score
- Layout
- Mole Component
Starting Point
We’ve learned how to make a component and we can roughly gauge what we need.
import React, { Fragment } from '' import { render } from '' const Moles = ({ children }) => <div>{children}</div> const Mole = () => <button>Mole</button> const Timer = () => <div>Time: 00:00</div> const Score = () => <div>Score: 0</div> const Game = () => ( <Fragment> <h1>Whac-A-Mole</h1> <button>Start/Stop</button> <Score/> <Timer/> <Moles> <Mole/> <Mole/> <Mole/> <Mole/> <Mole/> </Moles> </Fragment> ) render(<Game/>, document.getElementById('app')) ( <Fragment> {!playing && <h1>Whac-A-Mole</h1>} <button onClick={() => setPlaying(!playing)}> {playing ? 'Stop' : 'Start'} </button> {playing && ( <Fragment> <Score /> <Timer /> <Moles> <Mole /> <Mole /> <Mole /> <Mole /> <Mole /> </Moles> </Fragment> )} </Fragment> ) }.
See the Pen [1. Toggle Play State]() by @jh3y.
Timer
Let’s get the timer running. By default, we’re going to set a time limit of
30000ms, And we can declare this as a constant outside of our React components.
const TIME_LIMIT = 30000
Declaring constants in one place is a good habit to pick up. Anything that can be used to configure your app can be co-located in one place.
Our
Timer component only cares about three things:
- The time it’s counting down;
- At what interval it’s going to update;
- What it does when it ends.
A first attempt might look like this:
const Timer = ({ time, interval = 1000, onEnd }) => { const [internalTime, setInternalTime] = useState(time) const timerRef = useRef(time) useEffect(() => { if (internalTime === 0 && onEnd) onEnd() }, [internalTime, onEnd]) useEffect(() => { timerRef.current = setInterval( () => setInternalTime(internalTime - interval), interval ) return () => { clearInterval(timerRef.current) } }, []) return <span>{`Time: ${internalTime}`}</span> }
But, it only updates once?
See the Pen [2. Attempted Timer]() by @jh3y.
We’re using the same interval technique we did before. But, the issue is we’re using
state in our interval callback. And this is our first “gotcha”. Because we have an empty dependency array for our effect, it only runs once. The closure for
setInterval uses the value of
internalTime from the first render. This is an interesting problem and makes us think about how we approach things.
Note: I highly recommend reading this article by Dan Abramov that digs into timers and how to get around this problem. It’s a worthwhile read and provides a deeper understanding. One issue is that empty dependency arrays can often introduce bugs in our React code. There’s also an eslint plugin I’d recommend using to help point these out. The React docs also highlight the potential risks of using the empty dependency array.
One way to fix our
Timer would be to update the dependency array for the effect. This would mean that our
timerRef would get updated every interval. However, it introduces the issue of drifting accuracy.
useEffect(() => { timerRef.current = setInterval( () => setInternalTime(internalTime - interval), interval ) return () => { clearInterval(timerRef.current) } }, [internalTime, interval])
If you check this demo, it has the same Timer twice with different intervals and logs the drift to the developer console. A smaller interval or longer time equals a bigger drift.
See the Pen [3. Checking Timer Drift]() by @jh3y.
We can use a
ref to solve our problem. We can use it to track the
internalTime and avoid running the effect every interval.
const timeRef = useRef(time) useEffect(() => { timerRef.current = setInterval( () => setInternalTime((timeRef.current -= interval)), interval ) return () => { clearInterval(timerRef.current) } }, [interval])
And this reduces the drift significantly with smaller intervals too. Timers are sort of an edge case. But, it’s a great example to think about how we use hooks in React. It’s an example that’s stuck with me and helped me understand the “Why?”.
Update the render to divide the time by
1000 and append an
s and we have a seconds timer.
See the Pen [4. Rudimentary Timer]() by @jh3y.
This timer is still rudimentary. It will drift over time. For our game, it’ll be fine. If you want to dig into accurate counters, this is a great video on creating accurate timers with JavaScript.
Scoring
Let’s make it possible to update the score. How do we score? By whacking a mole! In our case, that means clicking a
button. For now, let’s give each mole a score of
100, and we can pass an
onWhack callback to our
Moles.
const MOLE_SCORE = 100 const Mole = ({ onWhack }) => ( <button onClick={() => onWhack(MOLE_SCORE)}>Mole</button> ) const Score = ({ value }) => <div>{`Score: ${value}`}</div> const Game = () => { const [playing, setPlaying] = useState(false) const [score, setScore] = useState(0) const onWhack = points => setScore(score + points) return ( <Fragment> {!playing && <h1>Whac-A-Mole</h1>} <button onClick={() => setPlaying(!playing)}>{playing ? 'Stop' : 'Start'}</button> {playing && <Fragment> <Score value={score} /> > ) }
Note how the
onWhack callback gets passed to each
Mole, and that the callback updates our
score state. These updates will trigger a render.
This is a good time to install the React Developer Tools extension in your browser. There is a neat feature that will highlight component renders in the DOM. Open the “Components” tab in Dev Tools and hit the Settings cog. Select “Highlight updates when components render”:
= () => ( <ul> {USERS.map(({ id, name }) => <li key={id}>{name}</li>)} </ul> )
The alternative would be to generate the content in a for loop and then render the return from a function.
return ( <ul>{getLoopContent(DATA)}</ul> ) ( <Fragment> <h1>Whac-A-Mole</h1> <button onClick={() => setPlaying(!playing)}>{playing ? 'Stop' : 'Start'}</button> {playing && <Board> <Score value={score} /> <Timer time={TIME_LIMIT} onEnd={() => console.info('Ended')}/> {new Array(5).fill().map((_, id) => <Mole key={id} onWhack={onWhack} /> )} </Board> } </Fragment> ).
See the Pen [6. Looping Moles]() by @jh3y.
What we need is a third state where we aren’t
playing but we have finished. In more complex applications, I’d recommend reaching for XState or using reducers. But, for our app, we can introduce a new state variable:
finished. When the state is
!playing and
finished, we can display the score, reset the timer, and give the option to restart.
We need to put our logic caps on now. If we end the game, then instead of toggling
playing, we need to also toggle
finished. We could create an
endGame and
startGame function.
const endGame = () => { setPlaying(false) setFinished(true) } const startGame = () => { setScore(0) setPlaying(true) setFinished(false) }
When we start a game, we reset the
score and put the game into the
playing state. This triggers the playing UI to render. When we end the game, we set
finished to
true. (The reason we don’t reset the
score is so we can show it as a result.)
And, when our
Timer ends, it should invoke that same function.
<Timer time={TIME_LIMIT} onEnd={endGame} />
It can do that within an effect. If the
internalTime hits
0, then unmount and invoke
onEnd.
useEffect(() => { if (internalTime === 0 && onEnd) { onEnd() } }, [internalTime, onEnd])
We can shuffle our UI rendering to render three states:
- Fresh
- Playing
- Finished
<Fragment> {!playing && !finished && <Fragment> <h1>Whac-A-Mole</h1> <button onClick={startGame}>Start Game</button> </Fragment> } {playing && <Fragment> <button className="end-game" onClick={endGame} > End Game </button> <Score value={score} /> <Timer time={TIME_LIMIT} onEnd={endGame} /> <Moles> {new Array(NUMBER_OF_MOLES).fill().map((_, index) => ( <Mole key={index} onWhack={onWhack} /> ))} </Moles> </Fragment> } {finished && <Fragment> <Score value={score} /> <button onClick={startGame}>Play Again</button> </Fragment> } </Fragment>
And now we have a functioning game minus moving moles:
See the Pen [7. Ending a Game]() by @jh3y.
Note how we’ve reused the
Score component.
Was there an opportunity there to not repeat
Score? Could you put it in its own conditional? Or does it need to appear there in the DOM. This will come down to your design.
Might you end up with a more generic component to cover it? These are the questions to keep asking. The goal is to keep a separation of concerns with your components. But, you also want to keep portability in mind.
Moles
Moles are the centerpiece of our game. They don’t care about the rest of the app. But, they’ll give you their score
onWhack. This emphasizes portability.
We aren’t digging into styling in this “Guide”, but for our moles, we can create a container with
overflow: hidden that our
Mole (button) moves in and out of. The default position of our
Mole will be out of view:
( <div className="mole-hole"> <button className="mole" ref={buttonRef} onClick={() => onWhack(MOLE_SCORE)}> Mole </button> </div> ) }.
<Mole key={index} onWhack={onWhack} points={MOLE_SCORE} delay={0} speed={2} /> ( <div className="mole-hole"> <button className="mole" ref={buttonRef} onClick={whack}> Mole </button> </div> ).
<div className="moles"> {new Array(MOLES).fill().map((_, id) => ( <Mole key={id} onWhack={onWhack} speed={gsap.utils.random(0.5, 1)} delay={gsap.utils.random(0.5, 4)} points={MOLE_SCORE} /> ))} </div> <div className="moles"> {moles.map(({speed, delay, points}, id) => ( <Mole key={id} onWhack={onWhack} speed={speed} delay={delay} points={points} /> ))} </div>valDeclarative With React Hooks,” Dan Abramov
- “How To Fetch Data With React Hooks,” Robin Wieruch
- “When To
useMemoAnd
useCallback,” Kent C Dodds
- Read more React articles right here on Smashing Magazine
| https://www.smashingmagazine.com/2021/05/get-started-whac-a-mole-react-game/ | CC-MAIN-2021-39 | refinedweb | 4,050 | 68.77 |
# Algorithms in Go: Merge Intervals
This is the third part of a [series](https://habr.com/en/post/545986/) covering the implementation of algorithms in Go. In this article, we discuss the Merge Intervals algorithm. Usually, when you start learning algorithms you have to deal with some problems like finding the least common denominator or finding the next Fibonacci number. While these are indeed important problems, it is not something that we solve every day. Actually, in the vast majority of cases, we only solve such kinds of algorithms when we learn how to solve the algorithms. What I like about the Merge Interval algorithm is that we apply it in our everyday life, usually without even noticing that we are solving an algorithmic problem.
Let's say that we need to organize a meeting for our team. We have three colleagues Jay, May, and Ray and their time schedule look as follows (a colored line represents an occupied timeslot):
Which timeslot should we pick up for a new meeting? One could look at the picture, find a gap in Ray's schedule, and check whether the others a gap there as well.
How can we implement this logic? Most straightforwardly, we can iterate through every minute of the day and check whether someone is having a meeting during that time. If none of the colleagues are occupied at that time, we find an available minute.
How can we simplify this approach? Instead of checking all employees for every minute, we can merge their schedules and find the available slots in the resulting schedule.
After the merge, we can iterate through the array of the merged meetings and find the gaps.
How can we implement the algorithm above? Let's create type `Slot` that represents a time slot in the schedule; for simplicity, we use integers to denote the start and the end of the time slot instead of `time.Time` type.
```
type Slot struct {
start int
end int
}
```
For each of the employees will have a sorted array of `Slots` (staring from the earliest time slot) that represent occupied time slots in their schedule. Our function will take an array of `Slot` arrays that represents the schedule for the whole team as an input parameter.
```
[][]Slot{
{{9, 10}}, // occupied time slots for John
{{1, 3}, {5, 6}}, // occupied time slots for James
{{2, 3}, {6, 8}}, // ...
{{4, 6}
}
```
Our function will return an array of `Slots` that represent the commonly available slots for each member of the team.
We merge the schedules of two employees by merging their arrays of `Slots`. How do we do the merge of two arrays? We need to iterate through both arrays and see whether we have overlapping time slots. How can we know that the slots are overlapping? In general, we have two options for overlapping intervals A and B:
If neither of the two above conditions is satisfied, then the intervals do not overlap.
We can iterate through both of the arrays and see whether we have an overlap. Let's `arr1` and `arr2` represent the occupied time slots for two employees. We start with `i1` and `i2` equal to zero and continue the iteration till we exhaust either of the arrays. At each step, we check whether the current intervals from each of the arrays overlap.
```
// Merge two array of slots.
func Merge(arr1, arr2 []Slot) []Slot {
out := make([]Slot, 0)
i1, i2 := 0, 0
for i1 < len(arr1) && i2 < len(arr2) {
v1, v2 := arr1[i1], arr2[i2]
```
If the intervals overlap then we merge them. Let's say interval `v1` from `arr1` ends earlier than `v2` from `arr2` In this case, the next interval from `arr1` can have an overlap with the merged interval of `v1` and `v2`. Therefore, we merge `v1` into `v2`, i.e. `v2.Start = min(v2.Start, v1.Start)` and `v2.Stop = max(v2.Stop, v1.Stop)` and increase `i1`. `i2` stays the same, therefore at the next iteration, we will check whether the next interval from `arr1` overlaps with the merged interval.
```
overlap12 := (v2.Start >= v1.Start) && (v2.Start <= v1.Stop)
overlap21 := (v1.Start >= v2.Start) && (v1.Start <= v2.Stop)
if overlap12 || overlap21 {
merged := Slot{
Start: min(v1.Start, v2.Start),
Stop: max(v1.Stop, v2.Stop),
}
if v1.Stop < v2.Stop {
arr2[i2] = merged
i1++
} else {
arr1[i1] = merged
i2++
}
continue
}
```
If there is no overlap, we save the interval with the earliest stop (let's say an interval `v2` from `arr2`) to the output array and increase index `i2` for `arr2.` The other interval `v1` from `arr1` can still have an overlap with the next interval from `arr2` so we don't increase `i1`.
```
if v1.Stop < v2.Stop {
out = append(out, v1)
i1++
} else {
out = append(out, v2)
i2++
}
```
When `i1` or `i2` becomes equal to the length of the corresponding array we stop the iteration. As no more overlaps are possible, we simply add the remaining intervals to the output array.
The full listing for the function:
```
// Merge two arrays of meetings.
func Merge(arr1, arr2 []Slot) []Slot {
out := make([]Slot, 0)
i1, i2 := 0, 0
for i1 < len(arr1) && i2 < len(arr2) {
v1, v2 := arr1[i1], arr2[i2]
overlap12 := (v2.Start >= v1.Start) && (v2.Start <= v1.Stop)
overlap21 := (v1.Start >= v2.Start) && (v1.Start <= v2.Stop)
if overlap12 || overlap21 {
merged := Slot{
Start: min(v1.Start, v2.Start),
Stop: max(v1.Stop, v2.Stop),
}
if v1.Stop < v2.Stop {
arr2[i2] = merged
i1++
} else {
arr1[i1] = merged
i2++
}
continue
}
// no overlap; save the earliest of the intervals
if v1.Stop < v2.Stop {
out = append(out, v1)
i1++
} else {
out = append(out, v2)
i2++
}
}
out = append(out, arr1[i1:]...)
out = append(out, arr2[i2:]...)
return out
}
```
We merged the schedules of two colleagues. We need to merge all the schedules and then find the gaps within the merged schedule. How do find the gaps? We need to invert a `Slot` array considering the start and the end of the working day.
```
// Given an array of occupied time slots,
// returns available time slots in range from 1 to 12.
// We consider that working day starts at 1 and ends at 12.
func inverseSchedule(schedule []Slot) []Slot {
start := 1
var out []Slot
for ind, appointment := range schedule {
if ind == 0 && appointment.Start == 1 {
start = appointment.Stop
continue
}
out = append(out, Slot{Start: start, Stop: appointment.Start})
start = appointment.Stop
}
if start < 12 {
out = append(out, Slot{Start: start, Stop: 12})
}
return out
}
```
The resulting function looks as follows:
```
// AvailableSlots for all employees.
// Working hours starts at 1 ends at 12.
func AvailableSlots(schedule [][]Slot) []Slot {
if len(schedule) == 0 {
return []Slot{{1, 12}}
}
if len(schedule) == 1 {
return inverseSchedule(schedule[0])
}
merged := Merge(schedule[0], schedule[1])
for _, s := range schedule[2:] {
merged = Merge(merged, s)
}
return inverseSchedule(merged)
}
```
Here you can find the [source code](https://gist.github.com/a11310b036a62716ea70f8e38f5bcf21) and the [tests](https://gist.github.com/c9ebf7979df346c2f54cc8d1996f10c8) for the solution.
In this post, we implemented a solution for the Merge Intervals problem. This is one of the most popular algorithmic patterns that solve an entire class of problems. More algorithmic patterns such as [Sliding Window](https://habr.com/en/post/531444/) or [Iterative Postorder Traversal](https://habr.com/en/post/545980/)can be found in the series [Algorithms in Go.](https://habr.com/en/post/545986/) | https://habr.com/ru/post/538888/ | null | null | 1,316 | 57.06 |
Building on the last post, we’ll create a free moving camera. Add a new class to the “Cameras” folder in the “ProjectVanquish” project. Name the class “FreeCamera”. We need to change the class declaration to inherit our “BaseCamera” class.
public class FreeCamera : BaseCamera { }
Add the following to the namespaces:
using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input;
You’ll see that we are referencing the “Input” namespace. This is so we can use Input devices like the Keyboard, Mouse and even GamePads. We’ll need to declare a “MouseState” object for us to store the original position of the Mouse before it’s moved. Add in the following variable declaration:
MouseState originalMouse;
Let’s create the constructor, remembering that we’ll need to pass in variables to the base camera class:
public FreeCamera(GraphicsDevice device, Vector3 position, Vector3 target, float aspectRatio, float near, float far) : base(device, position, target, aspectRatio, near, far) { originalMouse = Mouse.GetState(); }
We don’t need to do anything else with the constructor as all of the values are set within the “BaseCamera” class. Let’s look at overriding the “Move” and “Update” methods, starting with the “Move” method:
public override void Move(Vector3 vector) { // Move the cameras position based on the way its facing Vector3 rotatedVector = Vector3.Transform(vector, rotationMatrix); Position += speed * rotatedVector; }
This code moves the cameras position based on the direction that it’s facing. Let’s do the “Update” method:
public override void Update(GameTime gameTime) { // Free movement float dt = (float)gameTime.ElapsedGameTime.Milliseconds / 1000f; // Rotation MouseState currentMouseState = Mouse.GetState(); if (currentMouseState != originalMouse && Keyboard.GetState().IsKeyDown(Keys.Space)) { Vector3 rot = Rotation; float xDifference = currentMouseState.X - originalMouse.X; float yDifference = currentMouseState.Y - originalMouse.Y; rot.Y -= 0.3f * xDifference * dt; rot.X += 0.3f * yDifference * dt; Mouse.SetPosition(device.Viewport.Width / 2, device.Viewport.Height / 2); Rotation = rot; } originalMouse = Mouse.GetState(); // Key press movement KeyboardState keyboard = Keyboard.GetState(); Vector3 direction = Vector3.Zero; if (Keyboard.GetState().IsKeyDown(Keys.W)) Move(new Vector3(0, 0, -1) * dt); if (Keyboard.GetState().IsKeyDown(Keys.S)) Move(new Vector3(0, 0, 1) * dt); if (Keyboard.GetState().IsKeyDown(Keys.A)) Move(new Vector3(-1, 0, 0) * dt); if (Keyboard.GetState().IsKeyDown(Keys.D)) Move(new Vector3(1, 0, 0) * dt); }
That’s all we need to do for this class. We have added input handling in the “Update” method for the Keyboard and Mouse devices. All we need to do now is to tie this in with our “ProjectVanquishTest” project.
The code below is for reference purposes only. It won’t actually be used in the next part, so if you add it in, be prepared to remove it for the next post.
Open up the “Game1” file within the “ProjectVanquishTest” project and add the following namespace:
using ProjectVanquish.Cameras;
Declare a new variable:
FreeCamera camera;
We’ll instantiate this variable in the “Initialize” method:
camera = new FreeCamera(GraphicsDevice, new Vector3(0, 10, 0), Vector3.Zero, GraphicsDevice.Viewport.AspectRatio, 0.1f, 1000f);
We are positioning the camera at X:0, Y:10, Z:0 looking at X:0, Y:0, Z:0. Find the “Update” method and then add the following:
camera.Update(gameTime);
So, we have now incorporated our new Camera into the test project.
However, we will still see nothing on the screen at the moment, and also, this camera isn’t in the correct place. I believe that the camera management should be done in the Scene Manager, so in the next part, we’ll create a simple Scene Manager to handle this. | https://projectvanquish.wordpress.com/tag/cameras/ | CC-MAIN-2017-22 | refinedweb | 593 | 50.63 |
End of Year Puzzle
Puzzle in Java
Here I am going to write a question as a puzzle; it is also very similar to
those questions you find in the SCJP (Sun Certified Java Programmer)
examination.
public class LittleFuzzy {
public int method(int
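The code in this entry is cut off mid-signature. Here is a complete puzzle in the same SCJP spirit (the class name is reused, but the body and values are illustrative, not the original question). The trap it demonstrates is overload resolution: primitive widening beats autoboxing, and the most specific widening wins.

```java
public class LittleFuzzy {
    public int method(int i)     { return 1; } // widening target
    public int method(long l)    { return 2; } // a wider widening target
    public int method(Integer i) { return 3; } // boxing target

    public static int call() {
        LittleFuzzy f = new LittleFuzzy();
        short s = 5;
        // short widens to int before boxing is even considered,
        // and int is more specific than long, so method(int) wins.
        return f.method(s);
    }

    public static void main(String[] args) {
        System.out.println(call()); // prints 1
    }
}
```

So `call()` returns 1, not 3: boxing is only tried once no applicable widening overload exists.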
Change Background of Slide Using Java
Change Background of Slide Using Java
In this example we are going to create a slide then change background of the
slide.
In this example we are creating an object
Change Background of Master Slide Using Java
Change Background of Master Slide Using Java
... to create a slide then change background of the
master slide.
In this example we are creating a slide master for the slide show. To create
slide show we
Change Background Picture of Slide Using Java
Change Background Picture of Slide Using Java
In this example we are going to create a slide then change background picture of the
slide.
In this example we are creating a slide. In this slide we are inserting a
picture
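The three background entries above revolve around the same few HSLF calls. A minimal sketch, assuming the old Apache POI 3.x HSLF API is on the classpath (in that era `Slide` and `Fill` lived in `org.apache.poi.hslf.model`; later POI releases renamed these classes to `HSLFSlideShow` and friends, and the file names here are placeholders):

```java
import java.awt.Color;
import java.io.FileOutputStream;
import org.apache.poi.hslf.model.Fill;
import org.apache.poi.hslf.model.Slide;
import org.apache.poi.hslf.usermodel.SlideShow;

public class SlideBackground {
    // Build a one-slide show whose background is a solid blue fill.
    public static SlideShow build() {
        SlideShow ppt = new SlideShow();
        Slide slide = ppt.createSlide();
        slide.setFollowMasterBackground(false);      // give this slide its own background
        Fill fill = slide.getBackground().getFill();
        fill.setFillType(Fill.FILL_SOLID);           // plain colored background
        fill.setForegroundColor(Color.BLUE);
        // For a picture background instead, register the image on the show:
        //   int idx = ppt.addPicture(new java.io.File("bg.jpg"), Picture.JPEG);
        //   fill.setFillType(Fill.FILL_PICTURE);
        //   fill.setPictureData(idx);
        return ppt;
    }

    public static void main(String[] args) throws Exception {
        FileOutputStream out = new FileOutputStream("background.ppt");
        build().write(out);
        out.close();
    }
}
```

The master-slide variant works the same way, applied to the show's `SlideMaster` rather than an individual slide.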
Create PowerPoint Slide Using Java
Create PowerPoint Slide
Using Java
... a PowerPoint slide.
HSLF is used to work with the Microsoft PowerPoint 97(-2003) file format
in pure Java. It supports read and write capabilities of some, but not yet all, features of the format.
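Creating a presentation with HSLF takes only a few lines. A sketch against the old POI 3.x API (class and package names are the 3.x ones; the output file name is a placeholder):

```java
import java.io.FileOutputStream;
import org.apache.poi.hslf.usermodel.SlideShow;

public class CreateSlide {
    // Build an empty presentation with a single blank slide.
    public static SlideShow build() {
        SlideShow ppt = new SlideShow(); // empty in-memory presentation
        ppt.createSlide();               // append one blank slide
        return ppt;
    }

    public static void main(String[] args) throws Exception {
        FileOutputStream out = new FileOutputStream("slideshow.ppt");
        build().write(out); // serialize to the PowerPoint 97-2003 binary format
        out.close();
    }
}
```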
Change Size of PowerPoint Slide Using Java
Change Size of PowerPoint Slide
Using Java
In this example we are going to create a slide then change the size of the
slide.
setPageSize(java.awt.Dimension pgsize)
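The page-size call above belongs to `SlideShow`. A minimal sketch, assuming POI 3.x HSLF on the classpath (the 1024x768 size is just an example value):

```java
import java.awt.Dimension;
import org.apache.poi.hslf.usermodel.SlideShow;

public class SlideSize {
    // Resize the slide area and read the size back.
    public static Dimension resize() {
        SlideShow ppt = new SlideShow();
        ppt.setPageSize(new Dimension(1024, 768)); // width x height, in points
        return ppt.getPageSize();
    }

    public static void main(String[] args) {
        Dimension pgsize = resize();
        System.out.println("Width:" + pgsize.width);
        System.out.println("Height:" + pgsize.height);
    }
}
```

The 792x612 output in the next entry is simply the default page size read back with `getPageSize()`.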
Retrieve The Size of Slide Using Java
Retrieve The Size of Slide Using Java
In this example we are going to find height and width of the
slide...
C:\POI3.0\exmples\PowerPoint>java retrieveSlideSize
Width:792
Height:612
Event on Slide bar In Java
Event on Slide bar In Java
...
handling for a Swing component named slide bar. This section provides a
complete...:
This program gives you two slide bars and an image inside
the frame. One slide
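The slide bar here is Swing's `JSlider`, and its events arrive through a `ChangeListener`. A minimal, frame-free sketch (plain stdlib; the min/max/initial values are illustrative):

```java
import javax.swing.JSlider;
import javax.swing.event.ChangeEvent;
import javax.swing.event.ChangeListener;

public class SliderDemo {
    static int lastValue = -1; // updated by the listener

    public static JSlider build() {
        final JSlider slider = new JSlider(0, 100, 50); // min, max, initial
        slider.addChangeListener(new ChangeListener() {
            public void stateChanged(ChangeEvent e) {
                lastValue = slider.getValue(); // fires on every value change
            }
        });
        return slider;
    }

    public static void main(String[] args) {
        JSlider s = build();
        s.setValue(75); // changing the model fires stateChanged synchronously
        System.out.println(lastValue);
    }
}
```

In a real program the slider would be added to a `JFrame`; the event wiring is identical.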
Interview Tips - Java Interview Questions
Interview Tips Hi,
I am looking for a job in Java/J2EE. Please give me interview tips; I mean topics in which I should be strong, and how to prepare. Looking for a job, 3.5 yrs experience
Set Title of PowerPoint Slide Using Java
Set Title of PowerPoint Slide
Using Java
... slide and set title.
HSLF is used to work with the Microsoft PowerPoint 97(-2003) file
format in pure Java. It supports read and write capabilities of some
Make Colorful Title of PowerPoint Slide Using Java
Make Colorful Title of PowerPoint Slide
Using Java
In this example we are going to create a PowerPoint slide and set a colorful title.
The methods used
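The title and colorful-title entries combine into one short sketch against the old POI 3.x HSLF API (`RichTextRun` styles the title text; the color and size values are illustrative):

```java
import java.awt.Color;
import org.apache.poi.hslf.model.Slide;
import org.apache.poi.hslf.model.TextBox;
import org.apache.poi.hslf.usermodel.RichTextRun;
import org.apache.poi.hslf.usermodel.SlideShow;

public class ColorfulTitle {
    public static SlideShow build() {
        SlideShow ppt = new SlideShow();
        Slide slide = ppt.createSlide();
        TextBox title = slide.addTitle(); // the slide's title placeholder
        title.setText("Colorful Title");
        // Style the text through its rich-text run.
        RichTextRun rt = title.getTextRun().getRichTextRuns()[0];
        rt.setFontColor(Color.RED);
        rt.setFontSize(48);
        return ppt;
    }

    public static void main(String[] args) {
        System.out.println(build().getSlides()[0].getTitle());
    }
}
```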
Java Programming Tips, Articles and Notes
Java Programming Language simple tips to use Java effectively
This section is not very structured and discusses random topics of the
Java Programming Language which are useful in learning the Java language.
Java Programming
Working on Rich Text Format on Slide Using Java
Working on Rich Text
Format on Slide Using Java... a slide, then we will learn how we can
work on rich text using Java... createSlide() method to create an object of
Slide. To make a textbox we first need
Set Textbox in PowerPoint Slide Using Java
Set Textbox in PowerPoint Slide
Using Java
... text in PowerPoint using
Java.
In this example we are creating an object of ... of
Slide. To make a textbox we first need to create an object of TextBox
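Adding a text box follows the pattern the entry describes: create a `TextBox`, position it with an anchor, and add it to the slide. A sketch against POI 3.x HSLF (text and anchor coordinates are illustrative):

```java
import java.awt.Rectangle;
import org.apache.poi.hslf.model.Slide;
import org.apache.poi.hslf.model.TextBox;
import org.apache.poi.hslf.usermodel.SlideShow;

public class TextBoxDemo {
    public static SlideShow build() {
        SlideShow ppt = new SlideShow();
        Slide slide = ppt.createSlide();
        TextBox txt = new TextBox();
        txt.setText("Hello from HSLF");
        txt.setAnchor(new Rectangle(50, 100, 300, 50)); // x, y, width, height on the slide
        slide.addShape(txt);
        return ppt;
    }

    public static void main(String[] args) {
        System.out.println(build().getSlides()[0].getShapes().length);
    }
}
```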
Creating Trapezoid Using Java
on PowerPoint slide using
java.
In this example, we are creating a slide master for the slide show. To create
slide show we use SlideShow constructor and to create...
Creating Trapezoid Using Java
All iPhone Tips and Tricks
All iPhone Tips and Tricks
If you are looking for some cool iPhone 3G tips n tricks... provide you the best, coolest and hottest iPhone tips, tweaks, secrets that you
Creating Auto Shape in PowerPoint Using Java
Creating Auto Shape in PowerPoint Using Java
In this example we are going to create an auto shape on a PowerPoint slide using
Java.
In this example, we
Inserting Text Trapezoid Using Java
on PowerPoint slide,
then we are inserting text using
java.
In this example, we are creating a slide master for the slide show. To create
slide show we are using...
Inserting Text Trapezoid Using Java
Linux tutorials and tips
Linux is one of the most advanced operating systems. Linux is being used to host websites and web applications. Linux also provides support for Java and other programming languages. Programmers from all over the world are developing many
Inserting Text on Shape Using Java
Inserting Text on Shape Using Java
In this example we are going to create auto shape and text in PowerPoint slide using
java.
In this example we are creating
i want to know how a slide will hide
i want to know how a slide will hide when im click on the button then im getting a slide, but when im click on the another button this slide will be hide.. and here im using logic iterate to getting the values from action class
java - Java Interview Questions
Friend,
Please visit the following links:
tetris game code - Swing AWT
tetris game code To develop a JAVA puzzle game which is a "variation" of the Tetris game - Java Beginners
links:
Thanks
java programming problem - Java Beginners
/java/java-tips/data/strings/96string_examples/example_count.shtml
http.../java-tips/data/strings/96string_examples/example_countVowels.shtml
follow...java programming problem Hello..could you please tell me how can I
Introduction to Dojo and Tips
Introduction to Dojo
and Tips
This tips is light towards people with some JavaScript... need to use some of the features found in dojo. In this tips, learn about
://
Thanks
RoseIndia Team...java Java always provides default constructor to ac lass is it true... constructor.If we don't create any constructor for a class java itself creates
jQuery 'Tab Slide Out' Plug-in
jQuery 'Tab Slide Out' Plug-in
In this Section, you will learn how to develop a slide out tab using jQuery
plug-in. For developing slide out window we are using
"jquery.tabSlideOut.v1.3.js" plug-in. The slide out
developing skills in java , j2ee - Java Beginners
developing skills in java , j2ee How to understand or to feel the flow of java or j2ee programme what is the way to become a expert programmer can you please give me tips
thanking you
Web content marketing tips that works in 2015
Web content marketing tips that works in 2015
java project
java project i would like to get an idea of how to do a project by doing a project(hospital management) in java.so could you please provide me with tips,UML,source code,or links of already completed project with documentation
Image Slide Show using 'Space gallery' plug-in
Image Slide Show using 'Space gallery' plug-in
In this tutorial , we will create a slide show using 'Space gallery' plug-in
.In this example , two tabs are given. in one tab 'Space gallery' slide show
JAVAPRGM - Java Beginners
-project.shtml example program for java IDE.
Hi Friend,
Please visit the following links:
java program - Java Beginners
java program plzzzzz help me on this java programming question?
hello people.. can u plzzzzzzzzzzzzzzzzzzz help me out with this java programm. its... in this question
This program represents the peg jumping puzzle.
The board starts
10 Tips for Writing Effective Articles
10 Tips for Writing Effective Articles
... be yourself convinced about what you are writing. Few tips that will help you... of a bonus with their paid services or products.
These few simple tips can help
:
java
java what is java
Outsourcing Communication Tips,Useful Cultural Tips in Offshore Outsourcing,Communication and Culture Tips
Communication and Culture Tips in Offshore Outsourcing Relationships
Introduction
Many offshore outsourcing and other business arrangements run into difficulties due to communication problems. The trend is very prominent in both
java swings - JavaMail
/java-tips/GUI/layouts/90independent-managers.shtml swings Hi sir,i am doing a project on swings,i don't have any...://
http
please help java
of the dashboard images to 1.0 when each image is moused over.
A Slide In Dashboard... ready
JavaScript & jQuery TheMissingManual
Slide
a java question
a java question A thief Muthhooswamy planned to escape from *** jail. Muthhooswamy is basically a monkey man and he is able to jump across the wall... attempt he jumps 5 more metres from that position and this time he doesn't slide image slider or code 4 java-image slideshow
java image slider or code 4 java-image slideshow plz help me out with java code for running an application in which images wil slide automatically... image slider using java
Java Layout
Java Layout
In java a layout manager class implements the LayoutManager interface. It is used... Orientation Tips on Choosing a Layout Manager Third-Party Layout
java
java write a java program using filenotfoundexception
java
java difference between class and interface
java
java why multiple inheritance is not possible in java
java
java send me java interview questions
java
java Write a java code to print "ABABBABCABABBA
java
java why to set classpath in java
java
java RARP implementation using java socket
Java
Java how to draw class diagrams in java
java
java explain object oriented concept in java
java
java is java purely object oriented language
Java
Java Whether Java is Programming Language or it is SOftware
java
java what are JAVA applications development tools
java
java how use java method
java
java write a program in java to acess the email
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://www.roseindia.net/tutorialhelp/comment/95969 | CC-MAIN-2015-48 | refinedweb | 1,617 | 51.07 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
scheduled actions not working?
Hello everyone,
I wrote a Scheduler actions like below, but not working...
in crm_lead.py
def compute_lead_type(self, cr, uid, ids, name, args, context=None):
for record in self.browse(cr, uid, ids, context=context):
lead_type=record.type
lead_id=record.id
if lead_type=='lead': # Checks the lead ID is converted to opportunity or not
lead_crm_phonecall_id = self.pool.get('crm.phonecall').search(cr, uid, [('opportunity_id', '=', lead_id)], context=context) # Checks the lead is scheduled or not
if lead_crm_phonecall_id: # Checks the lead ID is available or not in crm.phonecall
message_text='This Lead is scheduled'
else:
message_text='This Lead is Not scheduled'
message = _('Hello this is testing')
self.message_post(cr, uid, [lead_id], message, context=context)
return
in crm_lead_view.xml
<record forcecreate="True" id="ir_cron_scheduler_checking_lead"
model="ir.cron">
<field name="name">Checking status of lead</field>
<field eval="True" name="active" />
<field name="interval_number">1</field>
<field name="interval_type">minutes</field>
<field name="numbercall">-1</field>
<field eval="False" name="doall" />
<field eval="'crm.lead'" name="model" />
<field eval="'compute_lead_type'" name="function" />
<field eval="'()'" name="args" />
</record>
This is not working, please help me,
thanks in advance...
It is not working because you are doing it worng. You do not have value of ids while running cron job so you need to assign it False. Change your method defination like
def compute_lead_type(self, cr, uid, ids = False, name, args, context=None):
search_ids = self.pool.get('object.name').search(cr, uid, [])
for ids in search_ids:
# write your logic here
return True
and it will start working. sometimes if it doesn't start. go to scheduler menu click on edit button of scheduler and save it again. it will start working :)
I think you need to pass "ids" argument's value from cron record. Because from your code's first line,
for record in self.browse(cr, uid, ids, context=context):
If program doesn't find ids then it will skip that loop. That's why your program doesn't working !
Still let me know anything is still using v7 | https://www.odoo.com/forum/help-1/question/scheduled-actions-not-working-53559 | CC-MAIN-2017-04 | refinedweb | 368 | 58.99 |
Efficient memory management is essential in a runtime system. Storage for objects is allocated in a designated part of memory called the heap. The size of the heap is finite. Garbage collection is a process of managing the heap efficiently; that is, reclaiming memory occupied by objects that are no longer needed and making it available for new objects. Java provides automatic garbage collection, meaning that the runtime environment can take care of memory management concerning objects without the program having to take any special action. Storage allocated on the heap through the new operator is administered by the automatic garbage collector. The automatic garbage collection scheme guarantees that a reference to an object is always valid while the object is needed in the program. The object will not be reclaimed, leaving the reference dangling.
Having an automatic garbage collector frees the programmer from the responsibility of providing code for deleting objects. In relying on the automatic garbage collector, a Java program also forfeits any significant influence on the garbage collection of its objects (see p. 327). However, this price is insignificant when compared to the cost of putting the code for object management in place and plugging all the memory leaks. Time-critical applications should bear in mind that the automatic garbage collector runs as a background task and may prove detrimental to their performance.
An automatic garbage collector essentially performs two tasks:
decide if and when memory needs to be reclaimed
find objects that are no longer needed by the program and reclaim their storage
A program has no guarantees that the automatic garbage collector will be run during its execution. A program should not rely on the scheduling of the automatic garbage collector for its behavior (see p. 327).
In order to understand how the automatic garbage collector finds objects whose storage should be reclaimed, we need to look at the activity going on in the JVM. Java provides thread-based multitasking, meaning there can be several threads executing in the JVM, each doing its own task (see Chapter 9). A thread is an independent path of execution through the program code. A thread is alive if it has not completed its execution. Each live thread has its own runtime stack, as explained in Section 5.5 on page 181. The runtime stack contains activation records of methods that are currently active. Local references declared in a method can always be found in the method's activation record, on the runtime stack associated with the thread in which the method is called. Objects, on the other hand, are always created in the heap. If an object has a field reference, then the field is to be found inside the object in the heap, and the object denoted by the field reference is also to be found in the heap.
An example of how memory is organized during execution is depicted in Figure 8.1. It shows two live threads (t1 and t2) and their respective runtime stacks with the activation records. The diagram shows which objects in the heap are referenced by local references in the method activation records. The diagram also shows field references in objects, which denote other objects in the heap. Some objects have several aliases.
An object in the heap is said to be reachable if it is denoted by any local reference in a runtime stack. Additionally, any object that is denoted by a reference in a reachable object is also said to be reachable. Reachability is a transitive relation. Thus, a reachable object has at least one chain of reachable references from the runtime stack. Any reference that makes an object reachable is called a reachable reference. An object that is not reachable is said to be unreachable.
A reachable object is alive. It is accessible by the live thread that owns the runtime stack. Note that an object can be accessible by more than one thread. Any object that is not accessible by a live thread is a candidate for garbage collection. When an object becomes unreachable and is waiting for its memory to be reclaimed, it is said to be eligible for garbage collection. An object is eligible for garbage collection if all references denoting it are in eligible objects. Eligible objects do not affect the future course of program execution. When the garbage collector runs, it finds and reclaims the storage of eligible objects. However, garbage collection does not necessarily occur as soon as an object becomes unreachable.
From Figure 8.1 we see that objects o4, o5, o11, o12, o14, and o15 all have reachable references. Objects o13 and o16 have no reachable references and are, therefore, eligible for garbage collection.
From the discussion above we can conclude that if a composite object becomes unreachable, then its constituent objects also become unreachable, barring any reachable references to the constituent objects. Although objects o1, o2, and o3 form a circular list, they do not have any reachable references. Thus, these objects are all eligible. On the other hand, objects o5, o6, and o7 form a linear list, but they are all reachable, as the first object in the list, 05, is reachable. Objects o8, o10, o11, and o9 also form a linear list (in that order), but not all objects in the list are reachable. Only objects o9 and o11 are reachable, as object o11 has a reachable reference. Objects o8 and o10 are eligible for garbage collection.
The lifetime of an object is the time from when it is created to the time it is garbage collected. Under normal circumstances, an object is accessible from the time when it is created to the time when it is unreachable. The lifetime of an object can also include a period when it is eligible for garbage collection, waiting for its storage to be reclaimed. The finalization mechanism (see p. 324) in Java does provide a means for resurrecting an object after it is eligible for garbage collection, but the finalization mechanism is rarely used for this purpose.
In the garbage collection scheme discussed above, an object remains reachable as long as there is a reference to it from running code. Using strong references (the technical name for the normal kind of references) can prove to be a handicap in certain situations. An application that uses a clipboard would most likely want its clipboard accessible at all times, but it would not mind if the contents of the clipboard were garbage collected when memory became low. This would not be possible if strong references were used to refer to the clipboard's contents.
The abstract class java.lang.ref.Reference and its concrete subclasses (SoftReference, WeakReference, PhantomReference) provide reference objects that can be used to maintain more sophisticated kinds of references to another object (called the referent). A reference object introduces an extra level of indirection, so that the program does not access the referent directly. The automatic garbage collector knows about reference objects and can reclaim the referent if it is only reachable through reference objects. The concrete subclasses implement references of various strength and reachability, which the garbage collector takes into consideration.
The automatic garbage collector figures out which objects are not reachable and, therefore, eligible for garbage collection. It will certainly go to work if there is a danger of running out of memory. Although the automatic garbage collector tries to run unobtrusively, certain programming practices can nevertheless help in minimizing the overhead associated with garbage collection during program execution. Automatic garbage collection should not be perceived as a license for uninhibited creation of objects and forgetting about them.
Certain objects, such as files and net connections, can tie up other resources and should be disposed of properly when they are no longer needed. In most cases, the finally block in the try-catch-finally construct (see Section 5.7, p. 188) provides a convenient facility for such purposes, as it will always be executed, thereby ensuring proper disposal of any unwanted resources.
To optimize its memory footprint, a live thread should only retain access to an object as long as the object is needed for its execution. The program can make objects become eligible for garbage collection as early as possible by removing all references to the object when it is no longer needed.
Objects that are created and accessed by local references in a method are eligible for garbage collection when the method terminates, unless reference values to these objects are exported out of the method. This can occur if a reference value is returned from the method, passed as argument to another method that records the reference, or thrown as an exception. However, a method need not always leave objects to be garbage collected after its termination. It can facilitate garbage collection by taking suitable action, for example, by nulling references.
import java.io.*; class WellbehavedClass { // ... void wellbehavedMethod() { File aFile; long[] bigArray = new long[20000]; // ... uses local variables ... // Does cleanup (before starting something extensive) aFile = null; // (1) bigArray = null; // (2) // Start some other extensive activity // ... } // ... }
In the previous code, the local variables are set to null after use at (1) and (2), before starting some other extensive activity. This makes the objects denoted by the local variables eligible for garbage collection from this point onward, rather than after the method terminates. This optimization technique of nulling references need only be used as a last resort when resources are scarce.
When a method returns a reference value and the object denoted by the value is not needed, not assigning this value to a reference also facilitates garbage collection.
If a reference is assigned a new reference value, the object denoted by the reference prior to the assignment can become eligible for garbage collection.
Removing reachable references to a composite object can make the constituent objects become eligible for garbage collection, as explained earlier.
Example 8.1 illustrates how the program can influence garbage collection eligibility. Class HeavyItem represents objects with a large memory footprint, on which we want to monitor garbage collection. Each composite HeavyItem object has a reference to a large array. The class overrides the finalize() method from the Object class to print out an ID when the object is finalized. This method is always called on an eligible object before it is destroyed (see finalizers, p. 324). We use it to indicate in the output if and when a HeavyItem is reclaimed. To illustrate the effect of garbage collection on object hierarchies, each object may also have a reference to another HeavyItem.
In Example 8.1, the class RecyclingBin defines a method createHeavyItem() at (4). In this method, the HeavyItem created at (5) is eligible for garbage collection after the reassignment of reference itemA at (6), as this object will have no references. The HeavyItem created at (6) is accessible on return from the method. Its fate depends on the code that calls this method.
In Example 8.1, the class RecyclingBin also defines a method createList() at (7). It returns the reference value in the reference item1, which denotes the first item in a list of three HeavyItems. Because of the list structure, none of the HeavyItems in the list are eligible for garbage collection on return from the method. Again, the fate of the objects in the list is decided by the code that calls this method. It is enough for the first item in the list to become unreachable, in order for all objects in the list to become eligible for garbage collection (barring any reachable references).
class HeavyItem { // (1) int[] itemBody; String itemID; HeavyItem nextItem; HeavyItem(String ID, HeavyItem itemRef) { // (2) itemBody = new int[100000]; itemID = ID; nextItem = itemRef; } protected void finalize() throws Throwable { // (3) System.out.println(itemID + ": recycled."); super.finalize(); } } public class RecyclingBin { public static HeavyItem createHeavyItem(String itemID) { // (4) HeavyItem itemA = new HeavyItem(itemID + " local item", null); // (5) itemA = new HeavyItem(itemID, null); // (6) System.out.println("Return from creating HeavyItem " + itemID); return itemA; // (7) } public static HeavyItem createList(String listID) { // (8) HeavyItem item3 = new HeavyItem(listID + ": item3", null); // (9) HeavyItem item2 = new HeavyItem(listID + ": item2", item3); // (10) HeavyItem item1 = new HeavyItem(listID + ": item1", item2); // (11) System.out.println("Return from creating list " + listID); return item1; // (12) } public static void main(String[] args) { // (13) HeavyItem list = createList("X"); // (14) list = createList("Y"); // (15) HeavyItem itemOne = createHeavyItem("One"); // (16) HeavyItem itemTwo = createHeavyItem("Two"); // (17) itemOne = null; // (18) createHeavyItem("Three"); // (19) createHeavyItem("Four"); // (20) System.out.println("Return from main()."); } }
Possible output from the program:
Return from creating list X Return from creating list Y X: item3: recycled. X: item2: recycled. X: item1: recycled. Return from creating HeavyItem One Return from creating HeavyItem Two Return from creating HeavyItem Three Three local item: recycled. Three: recycled. Two local item: recycled. Return from creating HeavyItem Four One local item: recycled. One: recycled. Return from main().
In Example 8.1, the main() method at (13) in the class RecyclingBin uses the methods createHeavyItem() and createList(). It creates a list X at (14), but the reference to its first item is reassigned at (15), making objects in list X eligible for garbage collection after (15). The first item of list Y is stored in the reference list, making this list non-eligible for garbage collection during the execution of the main() method.
The main() method creates two items at (16) and (17), storing their reference values in references itemOne and itemTwo, respectively. The reference itemOne is nulled at (18), making HeavyItem with identity One eligible for garbage collection. The two calls to the createHeavyItem() method at (19) and (20) return reference values to HeavyItems, which are not stored, making each object eligible for garbage collection right after the respective method call returns.
The output from the program bears out the observations made above. Objects in list Y (accessible through reference list) and HeavyItem with identity Two (accessible through reference itemTwo) remain non-eligible while the main() method executes. Although the output shows that HeavyItems with identities Four and Five were never garbage collected, they are not accessible once they become eligible for garbage collection at (19) and (20), respectively. Any objects in the heap after the program terminates are reclaimed by the operating system.
Object finalization provides an object a last resort to undertake any action before its storage is reclaimed. The automatic garbage collector calls the finalize() method in an object that is eligible for garbage collection before actually destroying the object. The finalize() method is defined in the Object class.
protected void finalize() throws Throwable
An implementation of the finalize() method is called a finalizer. A subclass can override the finalizer from the Object class in order to take more specific and appropriate action before an object of the subclass is destroyed.
A finalizer can, like any other method, catch and throw exceptions (see Section 5.7, p. 188). However, any exception thrown but not caught by a finalizer invoked by the garbage collector is ignored. The finalizer is only called once on an object, regardless of whether any exception is thrown during its execution. In case of finalization failure, the object still remains eligible for disposal at the discretion of the garbage collector (unless it has been resurrected, as explained in the next subsection). Since there is no guarantee that the garbage collector will ever run, there is also no guarantee that the finalizer will ever be called.
In the following code, the finalizer at (1) will take appropriate action if and when called on objects of the class before they are garbage collected, ensuring that the resource is freed. Since it is not guaranteed that the finalizer will ever be called at all, a program should not rely on the finalization to do any critical operations.
public class AnotherWellbehavedClass { SomeResource objRef; // ... protected void finalize() throws Throwable { // (1) try { // (2) if (objRef != null) objRef.close(); } finally { // (3) super.finalize(); // (4) } } }
Unlike subclass constructors, overridden finalizers are not implicitly chained (see Section 6.3, p. 243). Therefore, a finalizer in a subclass should explicitly call the finalizer in its superclass as its last action, as shown at (4) in the previous code. The call to the finalizer of the superclass is in a finally block at (3), guaranteed to be executed regardless of any exceptions thrown by the code in the try block at (2).
A finalizer may make the object accessible again (i.e., resurrect it), thus avoiding it being garbage collected. One simple technique is to assign its this reference to a static field, from which it can later be retrieved. Since a finalizer is called only once on an object before being garbage collected, an object can only be resurrected once. In other words, if the object again becomes eligible for garbage collection and the garbage collector runs, the finalizer will not be called. Such object resurrections are not recommended, as they only undermine the purpose of the finalization mechanism.
Example 8.2 illustrates chaining of finalizers. It creates a user-specified number of large objects of a user-specified size. The number and size are provided through command-line program arguments. The loop at (7) in the main() method creates Blob objects, but does not store any references to them. Objects created are instances of the class Blob defined at (3). The Blob constructor at (4) initializes the field fat by constructing a large array of integers. The Blob class extends the BasicBlob class that assigns each blob a unique number (blobId) and keeps track of the number of blobs (population) not yet garbage collected. Creation of each Blob object by the constructor at (4) prints the ID number of the object and the message "Hello". The finalize() method at (5) is called before a Blob object is garbage collected. It prints the message "Bye" and calls the finalize() method in the class BasicBlob at (2), which decrements the population count. The program output shows that two blobs were not garbage collected at the time the print statement at (8) was executed. It is evident from the number of "Bye" messages that three blobs were garbage collected before all the five blobs had been created in the loop at (7).
class BasicBlob { // (1) static int idCounter; static int population; protected int blobId; BasicBlob() { blobId = idCounter++; ++population; } protected void finalize() throws Throwable { // (2) --population; super.finalize(); } } class Blob extends BasicBlob { // (3) int[] fat; Blob(int bloatedness) { // (4) fat = new int[bloatedness]; System.out.println(blobId + ": Hello"); } protected void finalize() throws Throwable { // (5) System.out.println(blobId + ": Bye"); super.finalize(); } } public class Finalizers { public static void main(String[] args) { // (6) int blobsRequired, blobSize; try { blobsRequired = Integer.parseInt(args[0]); blobSize = Integer.parseInt(args[1]); } catch(IndexOutOfBoundsException e) { System.err.println( "Usage: Finalizers <number of blobs> <blob size>"); return; } for (int i=0; i<blobsRequired; ++i) { // (7) new Blob(blobSize); } System.out.println(BasicBlob.population + " blobs alive"); // (8) } }
Running the program with the command
>java Finalizers 5 500000
might result in the following output:
0: Hello 1: Hello 2: Hello 0: Bye 1: Bye 2: Bye 3: Hello 4: Hello 2 blobs alive
Although Java provides facilities to invoke the garbage collection explicitly, there are no guarantees that it will be run. The program can only request that garbage collection be performed, but there is no way that garbage collection can be forced.
The System.gc() method can be used to request garbage collection, and the System.runFinalization() method can be called to suggest that any pending finalizers be run for objects eligible for garbage collection. Alternatively, corresponding methods in the Runtime class can be used. A Java application has a unique Runtime object that can be used by the application to interact with the JVM. An application can obtain this object by calling the method Runtime.getRuntime(). The Runtime class provides various methods related to memory issues.
static Runtime getRuntime()
Returns the Runtime object associated with the current application.
void gc()
Requests that garbage collection be run. However, it is recommended to use the more convenient static method System.gc().
void runFinalization()
Requests that any pending finalizers be run for objects eligible for garbage collection. Again, it is more convenient to use the static method System.runFinalization().
long freeMemory()
Returns the amount of free memory (bytes) in the JVM, that is available for new objects.
long totalMemory()
Returns the total amount of memory (bytes) available in the JVM. This includes both memory occupied by current objects and that which is available for new objects.
Example 8.3 illustrates invoking garbage collection. The class MemoryCheck is an adaptation of the class Finalizers from Example 8.2. The RunTime object for the application is obtained at (7). This object is used to get information regarding total memory and free memory in the JVM at (8) and (9), respectively. Blobs are created in the loop at (10). The amount of free memory after blob creation is printed at (11). We see from the program output that some blobs were already garbage collected before the execution got to (11). A request for garbage collection is made at (12). Checking free memory after the request shows that more memory has become available, indicating that the request was honoured. It is instructive to run the program without the method call System.gc() at (12), in order to compare the results.
class BasicBlob { /* See Example 8.2. */ } class Blob extends BasicBlob { /* See Example 8.2.*/ } public class MemoryCheck { public static void main(String[] args) { // (6) int blobsRequired, blobSize; try { blobsRequired = Integer.parseInt(args[0]); blobSize = Integer.parseInt(args[1]); } catch(IndexOutOfBoundsException e) { System.err.println( "Usage: MemoryCheck <number of blobs> <blob size>"); return; } Runtime environment = Runtime.getRuntime(); // (7) System.out.println("Total memory: " + environment.totalMemory());// (8) System.out.println("Free memory before blob creation: " + environment.freeMemory()); // (9) for (int i=0; i<blobsRequired; ++i) { // (10) new Blob(blobSize); } System.out.println("Free memory after blob creation: " + environment.freeMemory()); // (11) System.gc(); // (12) System.out.println("Free memory after requesting GC: " + environment.freeMemory()); // (13) System.out.println(BasicBlob.population + " blobs alive"); // (14) } }
Running the program with the command
>java MemoryCheck 5 100000
gave the following output:
Total memory: 2031616 Free memory before blob creation: 1773192 0: Hello 1: Hello 2: Hello 1: Bye 2: Bye 3: Hello 0: Bye 3: Bye 4: Hello Free memory after blob creation: 818760 4: Bye Free memory after requesting GC: 1619656 0 blobs alive
Certain aspects regarding automatic garbage collection should be noted:
There are no guarantees that objects that are eligible for garbage collection will have their finalizers executed. Garbage collection might not even be run if the program execution does not warrant it. Thus, any memory allocated during program execution might remain allocated after program termination, but will be reclaimed by the operating system.
There are also no guarantees about the order in which the objects will be garbage collected, or the order in which their finalizers will be executed. Therefore, the program should not make any assumptions based on these aspects.
Garbage collection does not guarantee that there is enough memory for the program to run. A program can rely on the garbage collector to run when memory gets very low and it can expect an OutOfMemoryException to be thrown if its memory demands cannot be met. | https://etutorials.org/cert/java+certification/Chapter+8.+Object+Lifetime/8.1+Garbage+Collection/ | CC-MAIN-2021-21 | refinedweb | 3,881 | 54.52 |
Agenda
See also: IRC log
<scribe> scribe: Mike Dean
<scribe> scribenick: mdean
<ChrisW> March 31 minutes:
<ChrisW> PROPOSED: approve minutes of last week
csma: don't remember resolution that presentation syntax should map 1:1 to xml
Sandro: design principle rather
than resolution - not crisp
... e.g. literals are 1:3 mapping
<ChrisW> RESOLVED: We will use Presentation Syntax, with minor changes, with a mapping table to the XML syntax.
Sandro: from F2F7
<sandro> so a mapping could be 3:12 or whatever -- a complex rule for going from a > b to a guard expression.
ChrisW: don't see 1:1 mentioned
csma: could say we prefer 1:1 rather than refer to resolution
Sandro: shouldn't change minutes, but clarify now
<ChrisW> RESOLVED: approve minutes of last week
<sandro> I think in the meeting we were thinking it was more-or-less a WG decision to keep things 1:1, but obviously that's not really the case.
ChrisW: added issue 91 (bounded
quantifiers) to agenda
... rdf:txt discussion on email list
Jos: issues addressed, but
haven't followed all email
... ready to review
Sandro: also reviewer - thinks its done
ChrisW: 1 or 2 minor things left,
but shouldn't impact reviewability
... last item on agenda
<ChrisW> ACTION: josb to review rdf:text [recorded in]
<trackbot> Created ACTION-725 - Review rdf:text [on Jos de Bruijn - due 2009-04-14].
csma: OMG PRR beta 2 vote on-going - AB has approved, thinking about next steps
Sandro: ChrisW still hasn't
registered
... 11 registered - usual suspects
... hope to buy lunches and snacks
<josb> sorry, our admin will not allow us to spend money, even if we do have it :(
<sandro> trackbot, help
<trackbot> See for help
<sandro> trackbot, status?
<josb>
Dave: working on syntax checker
<DaveReynolds> Just to clarify the minutes - Dave working on applying RIF PS syntax checker to vet OWL 2 RL ruleset
ChrisW: string less-than
issue
... broken into 2 issues
... no specific DTB predicates
<ChrisW> PROPOSED: Drop string-<, string->, string-<=, string->= from DTB, closing ISSUE-67.
ChrisW: consensus last week
<sandro> sandro: the idea is that these are just syntactic sugar for string-compare + numeric-less-than.
<ChrisW> PROPOSED: Drop string-<, string->, string-<=, string->= from DTB, closing ISSUE-67.
<AdrianP> Zakim IPcaller is me
<ChrisW> +1
<josb> no protest
<DaveReynolds> 0
<AxelPolleres> abstain (DERI)
<sandro> 0 I'm not thrilled, but okay....
0
<josb> +1
<Michael_Kifer> +1
<Harold> +1
<AdrianP> 0
<Gary> 0
<ChrisW> RESOLVED: Drop string-<, string->, string-<=, string->= from DTB, closing ISSUE-67.
<csma> +0
<ChrisW> ACTION: axel to remove string <>= from DTB [recorded in]
<trackbot> Created ACTION-726 - Remove string <>= from DTB [on Axel Polleres - due 2009-04-14].
<ChrisW> ACTION: chris to close issue-67 [recorded in]
<trackbot> Created ACTION-727 - Close issue-67 [on Christopher Welty - due 2009-04-14].
ChrisW: datatype IRIs issue, includes IRIs in general (import, prefix, annotations, datatype identifiers)
Jos: counterintuitive that 2 IRIs denote the same datatype
Sandro: don't allow equality with datatype IRIs?
<josb> xsd:string=xsd:int
Sandro: is always false
<josb> xsd:string=a
<josb> oftype("b",a)
Sandro: is true
<AxelPolleres> +1 to that being inconsistent and xsd:int=a being fine.
MKifer: rif:iri's should be
uninterpreted
... use anyURI
ChrisW: plain literals refer to themselves
Jos: can live with Michael's solution
<josb> oftype("x",xsd:string)
<josb> oftype("a",""^^anyURI)
josb: different syntax for denoting datatypes
<AxelPolleres> we don'
<AxelPolleres> t supporte anyURI at this point, but it could be added to DTB of course.
Sandro: concerned about equality - nice for users, but hard to implement as translation
<sandro> sandro: i had been thinking the mapping between xsd and native types was done in the translator, not in the rule engine.
<csma> A rif:iri constant must be interpreted as a reference to one and the same object regardless of the context in which that constant occurs
csma: DTB says that rif:iri constant must be interpreted as 1 object regardless of context
Jos: can be mapped to different
things in different interpretations
... context is where it occurs - in formula - still same interpretation (in BLD document)
<csma> Constants in this symbol space are intended to be used in a way similar to RDF resources
csma: Make this sentence
stronger?
... Make phrases more formal?
Jos: already formal in DTB document
<ChrisW> isLiteralOfType("ab"^^xsd:string, a)
<ChrisW> |=
<ChrisW> a = ""^^xsd:anyURI
<DaveReynolds> xsd:int [ rdfs:subClassOf -> xsd:integer ]
Dave: in RDFS, want to be able to make statements about datatypes
<sandro> it seems like a subproperty to me, not a subclass. :-)
<AxelPolleres> +1 to be able to talk about DTs.
Jos: nice to be able to combine RIF rules with RDF graphs
<ChrisW> "1"^^xsd:int
<ChrisW> |=
<sandro> The test case I'm concerned about is p("a"^^<a>) and <a>=xs:string |= p("a")
<ChrisW> 1 # xsd:int
<josb> sandro: this entailment does not hold
<AxelPolleres> allows to write datatype entailment rules (in a dialect allowing exiustentials in heads) it seems, e.g. rdfD1
<AxelPolleres>
<DaveReynolds> "1"^^xsd:int # xsd:int
<josb> xsd:string=a |= oftype("b",a)
<josb> "a"^^<a>
Josb: ^^ not evaluated
<sandro> sandro: so it's evaluated when you're doing oftype, but not when you're doing ^^.
<sandro> jos: right.,
<josb> xsd:string=a, "a"^^<a> |= oftype("a"^^<a>, xsd:string)
<josb> replace in example xsd:string w "http....string"^^anyURI
<josb> and the entailment still holds
<DaveReynolds> Right, I was talking about SWC
<AxelPolleres> rdfD1 could be emulated with skolemization in RIF BLD as: sk(?D) rdf:type ?D :- ofType(?X, ?D) ?D rd:type rdfs:Datatype.
ChrisW: what would we change to make this valid?
<sandro> sandro: sigh, yeah, I guess it's already accepted that datatypes are treated as classes (of their value space). [[ That's so broken. it means xs:hexBinary == xs:base64Binary. Sure, they are the same "class", but they are different "properties. ]]
<DaveReynolds> [[Sandro - having the same class extension does not mean xs:hexBinary == xsd:base64Binary, just means sameClassAs, can be different individuals and different property extensions. I claim it is not broken.]]
<ChrisW> xsd:int [ rdfs:subClassOf -> xsd:integer ]
<ChrisW> "1"^^xsd
<ChrisW> "1"^^xsd:int [ rdfs:type -> xsd:integer ]
<ChrisW> #
ChrisW:
Axel: makes sense
... oftype allows writing entailment rules in RIF
<AxelPolleres>
<ChrisW> a # ?x :- isLiteralOfType(a,?x)
<AxelPolleres> rdfD1 could be emulated with skolemization in RIF BLD as: sk(?D) rdf:type ?D :- ofType(?X, ?D) ?D rd:type rdfs:Datatype.
<ChrisW> (w/ D-Entailment)
Axel: capture finite set of inference rules in RIF
Michael: poor practice
<Michael_Kifer> datatype(?uri)
ChrisW: lots of things that BLD
can't do
... understand 2 sides now
... straw poll
<csma> yes
<sandro> yes
<ChrisW> Straw: +1: Use "anyURI" in isLiteralOfType, -1: define datatype URIs to denote themselves
<josb> -1
<AdrianP> 0
<AxelPolleres> -1 (I want RIF BLD to express at least such Horn expressible inference rules that need datatype extraction over RDF, using anyURI would prevent this.)
<ChrisW> -.333333
<DaveReynolds> -0.8
<sandro> -1
-1
<Gary> -0.5
<Michael_Kifer> +1
Michael: object, because it would require quite a few changes just to accommodate 1 builtin
ChrisW: problem still exists
Michael: rif:iri's are by definition supposed to be uninterpreted - could introduce another symbol space
Jos: can define it differently
Michael: notion of datatype isn't extensible
<AxelPolleres> The built-in is anyways already an "amputed" version of SPARQL's datatype()-built-in ... if we go with anyURI, it is making even less sense to me.
Jos: hadn't thought of
extensibility issue - nasty
... get rid of isLiteralOfType based on new information?
... different ways of referring to datatype inelegant
Michael: RIF pure logic, RDF evolving syntax
ChrisW: want to move on, but not lose state
<ChrisW> ACTION: michael to summarize objection to iris denoting themselves in email [recorded in]
<trackbot> Sorry, amibiguous username (more than one match) - michael
<trackbot> Try using a different identifier, such as family name or username (eg. msintek, mkifer, merdmann)
ChrisW: list datatype issue
<ChrisW> ACTION: mkifer to summarize objection to iris denoting themselves in email [recorded in]
<trackbot> Created ACTION-728 - Summarize objection to iris denoting themselves in email [on Michael Kifer - due 2009-04-14].
<AxelPolleres> the issue is: fixin the semantics on new datatypes in new rule-sets may hamper "forward"-compatibility (not sure whether that is the right term here) w.r.t. datasets not having those additional datatypes in mind, yes?
ChrisW: defer lists due to time
ChrisW: bounded quantifiers
<AxelPolleres> hmmm, asking myself whether the behaviour I would want could be "hidden" better in the semantics definition of isOfDatatype alone, without affecting datatypes.
ChrisW: weak support - no objections to dropping requirement
<ChrisW> PROPOSED: CORE will not have bounded quantifiers, closing ISSUE-91.
<sandro> +1
<sandro> or maybe +0
no objections
<josb> +1
<DaveReynolds> +1
<Michael_Kifer> +1
+1
<Gary> +1
<Harold> +1
<sandro> +0 they would have been nice, but it's not practical right now.
<AdrianP> +1
<ChrisW> +1
<sandro> some future Core (Core 2.0) might have it, but this Core wont have it.... We don't have time.
csma: considered deferring without closing
<ChrisW> RESOLVED: CORE will not have bounded quantifiers, closing ISSUE-91.
<ChrisW> ACTION: chris to close issue-91 [recorded in]
<trackbot> Created ACTION-729 - Close issue-91 [on Christopher Welty - due 2009-04-14].
Sandro: consider starting wish list for future WG
Axel: Boris and Axel tried to
resolve all open issues
... only 3 at risk notes left in document
<ChrisW> Feature At Risk #1: Usage of rtfn:
<ChrisW> Feature At Risk #2: rtfn:compare
<ChrisW> Feature At Risk #3: rtfn:length
Axel: can be emulated using existing functions
<Zakim> sandro, you wanted to ask about namespace thing
Sandro: what are different options?
<ChrisW> rdf:text editing: I see lots of undefined characters
Axel: don't see other options at this point
Sandro: prefer removing At Risk #1
<josb> I would support removing them as well
Sandro: willing to remove compare and length functions - XPath 3.0 can add them later
ChrisW: would at least document that
<josb> I actually thought there was a consensus about removing them on the rdf-text list
Sandro: obvious for XPath 3
Axel: no telecon where we
formally agreed
... can resolve here from RIF side
Sandro: OWL doesn't care about builtins
<josb> I would support this
Axel: fine to drop compare and length if decided here
<josb> or ever...
Sandro: keep at risk 2 and 3 and see if we get any feedback
<sandro> sandro: let's keep length & compare as At Risk, for now.
Axel: agreed
<sandro> PROPOSED: Keep rtfn:compare and rtfn:length as AT RISK
<sandro> +1
<AxelPolleres> +1
<ChrisW> +1
<josb> 0
<Michael_Kifer> +1
<Harold> +1
<AdrianP> +1
<DaveReynolds> 0
0
<sandro> RESOLVED: Keep rtfn:compare and rtfn:length as AT RISK
<sandro>
Sandro: need Director approval for namespace - shouldn't be a problem
<ChrisW> ACTION: axel to remove at risk comment on rtfn: namespace and check with OWL WG [recorded in]
<trackbot> Created ACTION-730 - Remove at risk comment on rtfn: namespace and check with OWL WG [on Axel Polleres - due 2009-04-14].
<sandro> issue-86?
<trackbot> ISSUE-86 -- rdf:text implies change to SPARQL -- OPEN
<trackbot>
<sandro> issue-87?
<trackbot> ISSUE-87 -- rdf:text document reinterprets xs:string as a subtype of rdf:text -- OPEN
<trackbot>
csma: impact on issues 86 and 87?
<sandro> +1 go 5 more minutes
<ChrisW> PROPOSED: extend for 5 mins
<csma>
<csma>
Axel: asked SPARQL WG for review
Sandro: no change required for SPARQL
<sandro> PROPOSED: Close ISSUE-86 and ISSUE-87, addressed by the current text of
<josb> +1
<AxelPolleres> +1
ChrisW: try to close 86 and 87 next week
Sandro: no telecon next week
<sandro> NO TELECON NEXT WEEK. F2F13 the next day.
<ChrisW> ACTION: chris to send message about no telecon next week [recorded in]
<trackbot> Created ACTION-731 - Send message about no telecon next week [on Christopher Welty - due 2009-04-14].
csma: put on agenda for F2F
ChrisW: no other business
<ChrisW> adjourn
This is scribe.perl Revision: 1.135 of Date: 2009/03/02 03:52:20 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Succeeded: s/have do/do have/ Succeeded: s/RIF IRIs/rif:iri's/ Found Scribe: Mike Dean Found ScribeNick: mdean Default Present: Mike_Dean, csma, Stella_Mitchell, ChrisW, DaveReynolds, josb, Sandro, Gary, Harold, +03539149aaaa, AxelPolleres, Michael_Kifer, AdrianP Present: Mike_Dean csma Stella_Mitchell ChrisW DaveReynolds josb Sandro Gary Harold +03539149aaaa AxelPolleres Michael_Kifer AdrianP Regrets: LeoraMorgenstern Agenda: Got date from IRC log name: 07 Apr 2009 Guessing minutes URL: People with action items: axel chris josb michael mkifer[End of scribe.perl diagnostic output] | http://www.w3.org/2009/04/07-rif-minutes.html | CC-MAIN-2016-50 | refinedweb | 2,109 | 57.4 |
Romain Guillebert, 26.07.2011 04:10: > I can now compile pxd files, but I have encountered something rather > strange : AnalyseExpressionsTransform turns : > > cimport foo > > def test(): > foo.printf() > > into : > > cimport foo > > def test(): > printf() This makes sense from the POV of C, which has a flat namespace. It is true that this doesn't make much sense for the Python backend. Specifically, we do not want to flatten the .pxd namespace here, because the attribute name may already exist in the module namespace. Only the qualified name gives us proper and safe Python code. >) ? Well, the current implementation is already a huge hack, see mutate_into_name_node(). The proper fix we envisioned for cases like this is to globally refactor the analyse_types() methods into methods that return self, or any other node that they deem more appropriate. The caller would then put the returned value in the place of the original node. This approach would allow analyse_as_cimported_attribute() to simply instantiate and return a new NameNode, instead of having to changing the type of itself locally. However, so far, no-one has done anything to get this refactoring done, likely because it's a lot of rather boring work. Regardless of this refactoring, I think it might be best to explicitly implement this method in a backend specific way. When targeting C, it should flatten the attribute access. When targeting Python, it should still determine the entry (and thus the type of the attribute) but should not build a NameNode. I assume the problem here is that the method does not actually know which backend it is targeting, right? The "env" parameter (i.e. the scope) won't know that, and I'm not sure it should know that. My suggestion is to let the module scope of the .pxd know that it's being read into (or as) a flat or prefixed namespace, and to propagate the information from there into the Entry objects it describes. 
Then the Entry of the attribute would know if it needs a namespace or not, and the analysis method could get its information directly from there. Does that make sense? Stefan | https://mail.python.org/pipermail/cython-devel/2011-July/001103.html | CC-MAIN-2016-30 | refinedweb | 355 | 64.1 |
Opened 4 years ago
Closed 15 months ago
Last modified 15 months ago
#16380 closed New feature (fixed)
admin_list_filter should pass the spec to the template
Description
If the admin_list_filter inclusion tag also passed the spec object to the admin/filter.html template we could write custom filterspec's that have attributes that influence the way the filterspec is displayed (in a dropdown with javascript for instance).
Right now you have to override the change_list.html, and create your own template tag to do this and then still override the admin/filter.html to actually change how it's displayed.
Attachments (1)
Change History (9)
Changed 4 years ago by hvdklauw
comment:1 Changed 4 years ago by hvdklauw
- Easy pickings set
- Has patch set
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
comment:2 Changed 4 years ago by jezdez
- Easy pickings unset
- Needs documentation set
- Needs tests set
- Patch needs improvement set
- Triage Stage changed from Unreviewed to Accepted
comment:3 Changed 4 years ago by hvdklauw
Example admin/filter.html
{% load i18n %} <h3>{% blocktrans with title as filter_title %} By {{ filter_title }} {% endblocktrans %}</h3> {% if spec.field.related and choices|length > 5 %} <select class="filter_choice"> {% for choice in choices %} <option value="{{ choice.query_string|iriencode }}"{% if choice.selected %} selected='selected'{% endif %}>{{ choice.display|safe }}</option> {% endfor %} </select> {% else %} <ul> {% for choice in choices %} <li{% if choice.selected %}{{ choice.display }}</a></li> {% endfor %} </ul> {% endif %}
And this little bit of javascript somewhere on document.ready:
$(".filter_choice").change(function() { location.href = $(this).val(); });
comment:4 Changed 4 years ago by jezdez
Nice! Let's put something like that in the docs.
comment:5 Changed 3 years ago by jacob
- milestone 1.4 deleted
Milestone 1.4 deleted
comment:6 Changed 15 months ago by bruno@…
- Cc bruno@… added
This ticket could be closed, admin_list_filter is now (in v1.6 at least) a simple_tag that uses the spec's template AND pass the spec to the template (cf line 409).
comment:7 Changed 15 months ago by bmispelon
- Resolution set to fixed
- Status changed from new to closed
Hi,
Thanks for digging up these old out-of-date tickets
FYI, you don't need special permmissions to close a ticket here and you can do so yourself.
Putting the commit id in the comment when you close a ticket will help us double check.
In this case, this issue appears to have been fixed in 4dbeb4bca4638ff851a2f4844d262dbe1652f7b5.
Thanks.
comment:8 Changed 15 months ago by bruno@…
@bmispelon: ok will do next time.
While this is an interesting idea, this needs an actual example and tests. | https://code.djangoproject.com/ticket/16380 | CC-MAIN-2015-11 | refinedweb | 433 | 62.68 |
Scrabble
If a word is on the board and after that word is a double word score and you add s continuing down to spell out send do you count that double word block twice?
Related Questions
Asked in Volleyball
Does a block count as one of your three hits in volleyball?
Asked in Intel 8085
What is count register purpose?
Asked in Sports, Volleyball
What is the maximum number of hits a volleyball team is allowed to return the ball over the net?.
Asked in Area
How do you count the squares in a square?
Asked in Long Jump
In long jump when does it count as Scratch?
Asked in Chess, Checkers
How many squares are there on a 8 by 8 checker board?
Asked in Darts (game)
If a dart hits another dart and not the board what is the score?
Asked in Basketball, Basketball Rules and Regulations
If you block the ball and control it does it count statistically as a block and a steal?
Asked in Basketball, Basketball Rules and Regulations
Is double dribble in basketball a foul?
Asked in Computer History, Computer Keyboards
How many keys in key board?
Asked in Boston Celtics, Basketball History
Did Larry Bird's shot behind the board count?
Asked in Synonyms and Antonyms
Another word for left out?
Asked in Criminal Law
Does time served before sentencing count as double time?
Asked in Movie Ratings
What are the ratings and certificates for Taking the Count - 1937?
Asked in Snowboarding
How many Americans snow board?
Asked in Nouns
Is family a count noun?
Asked in Crocheting and Knitting
How do you count double crochet stitches?
Asked in Darts (game)
If you get a bullseye in darts then another dart knocks it out is it still a double bullseye or nothing?
Asked in Shot Put
What happens in shot put if a thrower steps on top of the stop board?
Asked in India Colleges and Universities, Anna University, Trichy
Does aieee count foe entering Anna university?
Asked in Math and Arithmetic, Geometry
How do you count edges vertices and faces on a cuboid?
Asked in Factoring and Multiples, Prime Numbers
How do you figure out the prime factorization of a number?
Here's an algorithm in java: import java.util.Scanner; import java.util.ArrayList; public class PrimeFactorization { public static void main(String[] args) { String response; Scanner scan=new Scanner(System.in); double num=0; boolean read=false,run=true; timer.start(); while(run) { System.out.println("Enter a number"); while(!read) { response=scan.next(); try { num=Double.parseDouble(response); read=true; } catch(NumberFormatException exception) { System.out.println("Incorrect Format"); } } read=false; TimerListener.resetT(); ArrayList<Double> list=getPrimeFactorization(num); System.out.println("Prime Factorization:"); for(int count=0;count<list.size()-1;count++) { System.out.print(list.get(count)+", "); } System.out.println(list.get(list.size()-1)); System.out.println("Time: "+TimerListener.getT()+"s\n\n"); } } public static ArrayList<Double> getPrimeFactorization(double num) { ArrayList<Double> list=new ArrayList<Double>(); boolean run=true; double index; while (run) { list.add(index=getFirstFactor(num)); num=num/index; if (num==1) run=false; } return list; } private static double getFirstFactor(double num) { for(int count=2;count<=Math.sqrt(num)+1;count++) { if ((double)num/(double)count%1==0)return count; } return num; } | https://www.answers.com/Q/If_a_word_is_on_the_board_and_after_that_word_is_a_double_word_score_and_you_add_s_continuing_down_to_spell_out_send_do_you_count_that_double_word_block_twice | CC-MAIN-2020-24 | refinedweb | 547 | 50.53 |
AWS Compute Blog
Deploying GitOps with Weave Flux and Amazon EKS
This post is contributed by Jon Jozwiak | Senior Solutions Architect, AWS
You have countless options for deploying resources into an Amazon EKS cluster. GitOps—a term coined by Weaveworks—provides some substantial advantages over the alternatives. With only Git as the single, central source for controlling deployment into your cluster, GitOps provides easy version control on a platform your team already knows. Getting started with GitOps is straightforward: create a pull request, merge, and the configuration deploys to the EKS cluster.
Weave Flux makes running GitOps in your EKS cluster fast and easy, as it monitors your configuration in Git and image repositories and automates deployments. Weave Flux follows a pull model, automatically triggering deployments based on changes. This provides better security than most continuous deployment tools, which need permissions to access your cluster. This approach also provides Git with version control over your configuration and enables rollback.
This post walks through implementing Weave Flux and deploying resources to EKS using Git. To simplify the image build pipeline, I use AWS Service Catalog to provide a standardized pipeline. AWS Service Catalog lets you centrally define a portfolio of approved products that AWS users can provision. An AWS CloudFormation template defines each product, which can be version-controlled.
After you deploy the sample resources, I quickly demonstrate the GitOps approach where a new image results in the configuration automatically deploying to EKS. This new image may be a commit of Kubernetes manifests or a commit of Helm release definitions.
The following diagram shows the workflow.
Prerequisites
In GitOps, you manage Docker image builds separately from deployment configuration. For image builds, this example uses AWS CodePipeline and AWS CodeBuild, which provide a managed workflow from GitHub source through to an image landing in Amazon Elastic Container Registry (ECR).
This post assumes that you already have an EKS cluster deployed, including kubectl access. It also assumes that you have a GitHub account.
- To deploy a cluster, see Getting Started with eksctl.
- If you need kubectl, see Installing kubectl.
- If you don’t have a GitHub account, sign up for a new account.
GitHub setup
First, create a GitHub repository to store the Kubernetes manifests (configuration files) to apply to the cluster.
In GitHub, create a GitHub repository. This repository holds Kubernetes manifests for your deployments. Name the repository k8s-config to align with this post. Leave it as a public repository, check the box for Initialize this repository with a README, and choose Create Repo.
On the GitHub repository page, choose Clone or Download and save the SSH string:
git@github.com:youruser/k8s-config.git
Next, create a GitHub token that allows creating and deleting repositories so AWS Service Catalog can deploy and remove pipelines.
1. In your GitHub profile, access your token settings.
2. Choose Generate New Token.
3. Name your new token CodePipeline Service Catalog, and select the following options:
   - repo scopes (repo:status, repo_deployment, public_repo, and repo:invite)
   - read:org
   - write:public_key and read:public_key
   - write:repo_hook and read:repo_hook
   - read:user and user:email
   - delete_repo
4. Choose Generate Token.
5. Copy and save your access token for future access.
Deploy Helm
Helm is a package manager for Kubernetes that allows you to define a chart. Charts are collections of related resources that let you create, version, share, and publish applications. By deploying Helm into your cluster, you make it much easier to deploy Weave Flux and other systems. If you’ve deployed Helm already, skip this section.
First, install the Helm client with the following command:
curl -LO
chmod 700 get_helm.sh
./get_helm.sh
On macOS, you could alternatively enter the following command:
brew install kubernetes-helm
Next, set up a service account and a cluster role binding for Tiller, Helm's server-side component. This allows Tiller to manage resources in your cluster.
kubectl -n kube-system create sa tiller
kubectl create clusterrolebinding tiller-cluster-rule \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller
Finally, initialize Helm and verify your version. Tiller takes a few seconds to start.
helm init --service-account tiller --history-max 200
helm version
Deploy Weave Flux
With Helm installed, proceed with the Weave Flux installation. Begin by installing the Flux Custom Resource Definition.
kubectl apply -f
Now add the Weave Flux Helm repository and proceed with the install. Make sure that you update the git.url to match the GitHub repository that you created earlier.
helm repo add fluxcd
helm upgrade -i flux --set helmOperator.create=true --set helmOperator.createCRD=false --set git.url=git@github.com:YOURUSER/k8s-config --namespace flux fluxcd/flux
You can use the following code to verify that you successfully deployed Flux. You should see three pods running:
kubectl get pods -n flux
NAME READY STATUS RESTARTS AGE
flux-5bd7fb6bb6-4sc78 1/1 Running 0 52s
flux-helm-operator-df5746688-84kw8 1/1 Running 0 52s
flux-memcached-6f8c446979-f45wj 1/1 Running 0 52s
Flux requires a deploy key to work with the GitHub repository. In this post, Flux generates the SSH key pair itself, but you can also specify a different key pair when deploying. To access the key, download fluxctl, a command line utility that interacts with the Flux API. The following steps work for Linux. For other OS platforms, see Installing fluxctl.
sudo wget -O /usr/local/bin/fluxctl
sudo chmod 755 /usr/local/bin/fluxctl
Validate that fluxctl installed successfully, then retrieve the public key using the following command. Specify the namespace where you deployed Flux.
fluxctl version
fluxctl --k8s-fwd-ns=flux identity
Copy the key and add that as a deploy key in your GitHub repository.
- In your GitHub repository, choose Settings, Deploy Keys.
- Choose Add deploy key and name the key Flux Deploy Key.
- Paste the key from fluxctl identity.
- Choose Allow Write Access, Add Key.
Now use AWS Service Catalog to set up your image build pipeline.
Set up AWS Service Catalog
To allow end users to consume product portfolios, you must associate a portfolio with an IAM principal (or principals): a user, group, or role. For this example, associate your current identity. After you master these basics, there are additional resources to teach you how to set up a multi-region, multi-account catalog.
To retrieve your current identity, use the AWS CLI to get your ARN:
aws sts get-caller-identity
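The command returns a small JSON document; the Arn field holds the value you need in the next step. The identifiers below are placeholders:

```json
{
    "UserId": "AIDAEXAMPLEUSERID",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/your-user"
}
```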
Deploy the product portfolio that contains an image build pipeline service by doing the following:
1. In the AWS CloudFormation console, launch the CloudFormation stack with the following link:
2. Choose Next.
3. On the Specify Details page, enter your ARN from get-caller-identity. Also enter an environment tag, which AWS applies to all resources from this portfolio.
4. Choose Next.
5. On the Options page, choose Next.
6. On the Review page, select the check box displayed next to I acknowledge that AWS CloudFormation might create IAM resources.
7. Choose Create. CloudFormation takes a few minutes to create your resources.
Deploy the image pipeline
The image pipeline provisions a GitHub repository, Amazon ECR repository, and AWS CodeBuild project. It also uses AWS CodePipeline to build a Docker image.
- In the AWS Management Console, go to the AWS Service Catalog products list and choose Pipeline for Docker Images.
- Choose Launch Product.
- For Name, enter ExamplePipeline, and choose Next.
- On the Parameters page, fill in a project name, description, and unique S3 bucket name. The specifics don’t matter, but make a note of the name and S3 bucket for later use.
- Fill in your GitHub User and GitHub Token values from earlier. Leave the rest of the fields as the default values.
- To clean up your GitHub repository on stack delete, change Delete Repository to true.
- Choose Next.
- On the TagOptions screen, choose Next.
- Choose Next on the Notifications page.
- On the Review page, choose Launch.
The launch process takes 1–2 minutes. You can verify that you now have a repository matching your project name (eks-example) in GitHub. You can also look at the pipeline created in the AWS CodePipeline console.
Deploying with GitOps
You can now provision workloads into the EKS cluster. With a GitOps approach, you only commit code and Kubernetes resource definitions to GitHub. AWS CodePipeline handles the image builds, and Weave Flux applies the desired state to Kubernetes.
First, create a simple Hello World application in your example pipeline. Clone the GitHub repository that you created in the previous step and substitute your GitHub user below.
git clone git@github.com:youruser/eks-example.git
cd eks-example
Create a base README file, a source directory, and download a simple NGINX configuration (hello.conf), home page (index.html), and Dockerfile.
echo "# eks-example" > README.md
mkdir src
wget -O src/hello.conf
wget -O src/index.html
wget
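The downloaded Dockerfile isn't reproduced in this post. A minimal sketch of what it might contain, assuming NGINX serves the static page using the custom hello.conf server block (the base image and paths are assumptions, not necessarily the exact sample file):

```dockerfile
# Hypothetical reconstruction: serve the static Hello World page with NGINX.
FROM nginx:alpine
# Replace the default server block with the sample configuration.
COPY src/hello.conf /etc/nginx/conf.d/default.conf
# Install the home page where NGINX expects static content.
COPY src/index.html /usr/share/nginx/html/index.html
EXPOSE 80
```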
Now that you have a simple Hello World app with Dockerfile, commit the changes to kick off the pipeline.
git add .
git commit -am "Initial commit"
[master (root-commit) d69a6ba] Initial commit
4 files changed, 34 insertions(+)
create mode 100644 Dockerfile
create mode 100644 README.md
create mode 100644 src/hello.conf
create mode 100644 src/index.html
git push
Watch the AWS CodePipeline console to see the image build in progress. This may take a minute to start. When it's done, look in the ECR console to see the first version of the container image.
To deploy this image and the Hello World application, commit Kubernetes manifests for Flux. Create a namespace, deployment, and service in the Kubernetes Git repository (k8s-config) you created. Make sure that you aren’t in your eks-example repository directory.
cd ..
git clone git@github.com:youruser/k8s-config.git
cd k8s-config
mkdir charts namespaces releases workloads
The preceding directory structure helps organize the repository but isn’t necessary. Flux can descend into subdirectories and look for YAML files to apply.
Create a namespace Kubernetes manifest.
Now create a deployment manifest. Make sure that you update this image to point to your repository and image tag. For example, <Account ID>.dkr.ecr.us-east-1.amazonaws.com/eks-example:d69a6bac.
Finally, create a service manifest to create a load balancer.
In the preceding code, there are two Kubernetes annotations for Flux. The first, flux.weave.works/automated, tells Flux whether the container image should be automatically updated. This example sets the value to true, enabling updates to your deployment as new images arrive in the registry. This example comments out the second annotation, flux.weave.works/ignore. However, you can use it to tell Flux to ignore the deployment temporarily.
Commit the changes, and in a few minutes, it automatically deploys.
Make sure that you push your changes.
git push
Now check the logs of your Flux pod:
kubectl get pods -n flux
Update the name below to reflect the name of the pod in your deployment. This sample pulls every five minutes for changes. When it triggers, you should see kubectl apply log messages to create the namespace, service, and deployment.
kubectl logs flux-5bd7fb6bb6-4sc78 -n flux
Find the load balancer input for your service with the following:
kubectl describe service eks-example -n eks-example
Now when you connect to the load balancer address in a browser, you can see the Hello World app.
Change the eks-example source code in a small way (such as changing index.html to say Hello World Deployment 2), then commit and push to Git.
After a few minutes, refresh your browser to see the deployed change. You can watch the changes in AWS CodePipeline, in ECR, and through Flux logs. Weave Flux automatically updated your deployment manifests in the k8s-config repository to deploy the new image as it detected it. To back out that change, use a
git revert or
git reset command.
Finally, you can use the same approach to deploy Helm charts. You can host these charts within the configuration Git repository (k8s-config in this example), or on an external chart repository. In the following example, you use an external chart repository.
In your k8s-config directory, get the latest changes from your repository and then create a Helm release from an external chart.
cd k8s-config
git pull
First, create the namespace manifest.
Then create the Helm release manifest. This is a custom resource definition provided by Weave Flux.
There are a few new annotations for Flux above. The flux.weave.works/locked annotation tells Flux to lock the deployment. This is useful if you find a known bad image and must roll back to a previous version. In addition, the flux.weave.works/tag.nginx annotation filters image tags by semantic versioning.
Wait up to five minutes for Flux to pull the configuration and verify this deployment as you did in the previous example:
kubectl get pods -n flux
kubectl logs flux-5bd7fb6bb6-4sc78 -n flux
kubectl get all -n nginx
If this doesn’t deploy, ensure Helm initialized as described earlier in this post.
kubectl get pods -n kube-system | grep tiller
kubectl get pods -n flux
kubectl logs flux-helm-operator-df5746688-84kw8 -n flux
Clean up
Log in as an administrator and follow these steps to clean up your sample deployment.
- Delete all images from the Amazon ECR repository.
2. In AWS Service Catalog provisioned products, select the three dots to the left of your ExamplePipeline service and choose Terminate provisioned product. Wait until it completes termination (1–2 minutes).
3. Delete your Amazon S3 artifact bucket.
4. Delete Weave Flux:
helm delete flux --purge
kubectl delete ns flux
kubectl delete crd helmreleases.flux.weave.works
5. Delete the load balancer services:
helm delete mywebserver --purge
kubectl delete ns nginx
kubectl delete svc eks-example -n eks-example
kubectl delete deployment eks-example -n eks-example
kubectl delete ns eks-example
6. Clean up your GitHub repositories:
– Go to your k8s-config repository in GitHub, choose Settings, scroll to the bottom and choose Delete this repository. If you set delete to false in the pipeline service, you also must delete your eks-example repository.
– Delete the personal access token that you created.
7. If you provisioned an EKS cluster at the beginning of this post, delete it:
eksctl get cluster
eksctl delete cluster <clustername>
8. In the AWS CloudFormation console, select the DevServiceCatalog stack, and choose the Actions, Delete Stack.
Conclusion
In this post, I demonstrated how to use a GitOps approach, which allows you to focus on committing code and configuration to Git rather than learning new CI/CD tooling. Git acts as the single source of truth, and Weave Flux pulls changes and ensures that the Kubernetes cluster configuration matches the desired state.
In addition, AWS Service Catalog can be used to create a portfolio of services that enables you to standardize your offerings, such as an image build pipeline based on AWS CodePipeline.
As always, AWS welcomes feedback. Please submit comments or questions below. | https://aws.amazon.com/blogs/compute/deploying-gitops-with-weave-flux-and-amazon-eks/ | CC-MAIN-2019-47 | refinedweb | 2,491 | 57.47 |
I'd looked at a number of articles on the internet about multi-threading, and thread-pool designs, but could not really adopt any of them suitably to my requirements. Most of the thread pools I'd seen, had a lot of the thread management logic intertwined with the actual function that the thread was executing. I wanted a different conceptual view of it. To do this, I moved away from deriving from threads and assigning pointers to functions to execute, and latched onto the command pattern, from the inherently useful Design Patterns: Elements of Reusable Object-Oriented Software, Gamma, Helm et al. Requests are submitted as functors, allowing the functor to maintain its own environment.
Fundamentally, you submit a request in the form of a functor to a queue, and then let the thread-pool do the rest. I didn't want to have to "join" threads back to the main thread, or "wait" for completion. A thread was merely an execution process. It doesn't care what it's executing, it just keeps doing the next thing in its path.
You'll need first of all to:
#include "ThreadPool.h"
Then, create the ThreadPool object. The simple way is simply to create the ThreadPool object thus:
ThreadPool
ThreadPool myPool;
Once this is done, you can call myPool.accept() to prepare the thread-pool to accept connections.
myPool.accept()
A second way, provides more control. We can derive from the ThreadPool class, and override the onThreadStart and onThreadFinish methods to provide thread specific information.
onThreadStart
onThreadFinish
class COMPool : public ThreadPool
{
public:
void onThreadStart(int threadId)
{
::OleInitialize(NULL);
}
void onThreadFinish(int threadId)
{
::OleUninitialize();
}
};
We also need to create one or more functors which we can request the thread pool to handle. These are created by deriving from ThreadRequest. If you wish to pass in parameters, or retrieve information from the functor, you can provide these in a constructor. Also, any necessary cleanup can be done in the destructor.
ThreadRequest
class SpecialRequest : public ThreadRequest
{
public:
SpecialRequest(int param1, int param2, int& retStat) :
myLocal1(param1), myLocal2(param2), myStatus(retStat)
{
// Any constructor setup stuff, like transaction begin or
//file opens or whatever else the functor
// may need to operate with.
}
void operator()(int)
{
// Do whatever you want in here -
//but DON'T let exceptions propogate.
retStatus = myLocal1 + myLocal2;
// OK - so it just adds 2 integers.... but it's multithreaded!!!
}
private:
int myLocal1;
int myLocal2;
int& retStatus;
};
To submit the request, we use the previously defined ThreadPool object, and submit the functor into the queue. In this case, creating it at the same time:
myPool.submitRequest(new SpecialRequest(1, 2, returnVal);
Note that the SpecialRequest has the two parameters being passed in, 1 and 2, and the reference to the returnValue for the functor to populate when it is completed. Note that the functor must be created on heap memory using new, because the thread pool will delete it.
SpecialRequest
returnValue
new
Once we are finished with the thread-pool, we can shut it down using
myPool.shutdown();
So, our main loop looks like this:
int main(int, char*)
{
int result;
COMPool myCOMThreadPool;
// We tell it to accept requests now
myCOMThreadPool.accept();
// Add 1 and 2, and store in result
myCOMThreadPool.submitRequest(new SpecialRequest(1, 2, result));
myCOMThreadPool.shutdown();
// And output the result.
std::cout << result;
}
The demonstration project contains a more explicit example using multiple threads, loops and thread statistics
Note: It's important that you handle any exceptions that your code may throw in the overridden operator()(int). The acceptHandler will not allow any exceptions to propagate out of its loop in order to maintain integrity. This is not actually a bad thing, because your functor should be able to handle its own exceptions anyway.
operator()(int)
acceptHandler
For the individuals who REALLY want to know how it works, the entire cpp and h files are commented using doxygen, and the tricky clever bits are commented also. What I'll try to explain here is why I've done certain things.
The guiding principle that I tried to follow was that a thread is a completely separate entity to the thing that it is executing. The idea being to disconnect the management of the threads and the execution completely. This would enable the thread-pool to be used easily for database calls - simulating an asynchronous call effectively - HTTP responses via fast-CGI, and calculations of PI to (insert arbitrarily large number here) decimal places, whilst simultaneously making me a coffee. I didn't want the thread to be aware of the execution status of the function/functor, and I didn't want the functor to be aware that it was being executed in a thread.
This provides a slightly different programming model to what would otherwise be expected. In many of the implementations I've seen, the worker thread is responsible for signalling it is finished, and the "main" thread of execution is responsible for waiting until the worker thread has signalled this. Then, the main thread needs to clean up the worker, or query the worker to get the workers results. Some of the implementations passed an arbitrary number of pointers around the place to indicate parameters and return values.
In this implementation, the worker thread doesn't wait at all. It executes the job and then cleans up after itself. If you need to get results, you can provide a pointer or reference to a structure in the constructor, and use the functor to populate that structure as part of its implementation. For example, you can send the results of a database commit into the passed constructor, and check this at anytime to see whether it has committed yet. Or, you can pass an entire business object, such as a Purchase Order, into the constructor, and make the functor responsible for populating the values. The thread doesn't care. All it does, is populates and cleans up after itself. If your application needs to wait for something to happen, you can include a signal (event) in the functor, and put the main thread to sleep until the signal is signalled. (sugged?) Again, you are waiting for the functor to complete not the thread. A conceptual difference, but I think a more accurate representation.
The second thing that I had to add was an onThreadStart and onThreadFinish call. In the simple case, this is a no-op, however by deriving from ThreadPool you can make these do whatever you like. I had to add these because COM needed to be initialized per thread when I was using this for my OLEDB calls.
The queue itself when it reaches the maximum queue length will actually block, which in the case of the example provided will also block the main thread. This has the affect of preventing additional requests, allowing the pool time to catch up.
See my todo list at the bottom, for improvement ideas I have for this
I also used functors because for the OLEDB calls, I really needed transaction integrity. To provide this, the constructor of the functor could perform the beginTransaction() and the destructor would call commit() or rollback() depending on the private transactionSuccess flag. This ensured that either commit or rollback was called, regardless of the outcome, and improved exception safety incredibly. After implementing it this way, I realized just how effective functors were, so I ended up using them for the generic solution.
beginTransaction()
commit()
rollback()
transactionSuccess
The beauty of this is that the functor contains everything that it needs to execute. Because it can maintain state, you can actually pass parameters in the constructor, and use these parameters to get information about the current functor object, including its state of execution, and the data returned. The best part though, is that by using a functor, you can test the functor itself outside of the multithreaded environment quite easily - allowing easy integration for unit tests.
Another benefit of functors is that they can maintain their own environment. In the case of the FastCGI application, I needed to pass the address of the output structure to return the output back to the webserver. I did this passing the environment, including the CGI parameters, the error stream and the output stream into the functor during construction. This meant that the same functor was perfectly thread safe because it was a completely individual object, and yet it had access to the environment that was created - at that time. On execution, it would always write to the correct output stream. This provided thread safety, without the need for mutexes and critical sections etc.
The acceptHandler has been declared as throwing no exceptions. This is necessary, because exceptions would potentially disrupt the safety of other threads. If a functor throws an exception, and it's not handled in the functor (This is bad!), the handler will simply swallow the exception, and you will not see it. You should not bank on this functionality though, because it may change. (Not sure how, or why, but if someone comes up with a better method, I'll change it in a flash)
All these are tentative ideas, and not things that I'll implement until I actually need them.
QueueFullException
Special thanks to Taka for reviewing the code for this article.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
class SpecialRequest : public ThreadRequest
{
SpecialRequest(bool& successReturn) : success(false){}
~SpecialRequest()
{
success = true; // Destructor indicates success when complete
}
private:
bool& success; // reference to passed in value
};
bool waitForCompletion(false);
myPool.submitRequest(new SpecialRequest(waitForCompletion));
while(!waitForCompletion)
{
// Do stuff until the thread updates the passed in parameter....
// to signal that it's completed.
}
if(notFull)
{
// normal queue stuff
}
else
return false;
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/Articles/3507/An-exception-safe-OO-thread-pool-framework?msg=457804 | CC-MAIN-2019-22 | refinedweb | 1,655 | 52.49 |
Equally-Sized Pies
The PieChart widget comprises of a pie and other elements which can affect the pie's size. This means that pies in several side-by-side PieCharts may differ in size. Collect all these widgets in a single size group by setting their sizeGroup options to identical values to avoid this.
jQuery
JavaScript
$(function() { $("#pieChartContainer1").dxPieChart({ // ... sizeGroup: "pies" }); $("#pieChartContainer2").dxPieChart({ // ... sizeGroup: "pies" }); });
Angular
HTML
TypeScript
<dx-pie-chart ... </dx-pie-chart> <dx-pie-chart ... </dx-pie-chart>
import { DxPieChartModule } from 'devextreme-angular'; // ... export class AppComponent { // ... } @NgModule({ imports: [ // ... DxPieChartModule ], // ... })
The widgets should have identical layouts, that is, the same container size, title and legend position, etc. Note also that a single page can contain many size groups, but a widget can be a member of only one of. | https://js.devexpress.com/Documentation/17_1/Guide/Widgets/PieChart/Equally-Sized_Pies/ | CC-MAIN-2019-51 | refinedweb | 130 | 51.34 |
RationalWiki:Technical support/Archive8
I demand McAdams website to be removed
Many of you will know me; I am the man who has been trying to remove the site that defends the official version.

His website is full of errors; in fact, there is enough evidence proving a conspiracy in the Kennedy assassination.
“I'm tired of it, and I'm coming out with the truth. And I think it's important that people need to know who this man(Oswald) was,” --Judyth Vary Baker
"The facts are that we do not know who killed President Kennedy, that the Warren Commission named the wrong man as the assassin and never searched for the truth of the crime." -- Howard Roffman, Presumed Guilty ( 1976 ).
"Had Oswald lived to face a trial, with competent defense it would have been risky, at best, for the authorities to try to make the claim that the physical evidence conclusively linked Oswald to the Walker shooting." --Gerald D. McKnight, Breach of Trust ( 2005 ) pg. 58.
." -- Buell Wesley Frazier.— Unsigned, by: 92.15.139.156 / talk / contribs
- We will get right on that. Tmtoulouse (talk) 16:13, 14 November 2012 (UTC)
Thanks, and I also want you to put on the page the website that I have tried to put on instead of McAdams's one, which debunks the claims on McAdams's website, from a previous version of the page. If you do not know, it is on the page called A site which defends the "official" version, critiquing conspiracy theories.— Unsigned, by: 92.15.139.156 / talk / contribs
- Just curious, what do you think about Michael T. Griffith's other writings? Such as this one?
- Oh, and while you are granting demands, Trent, can I have a shrubbery? Ni.--ZooGuard (talk) 16:39, 14 November 2012 (UTC)
- I'll take a nine inch pianist. MDB (the MD is for Maryland, the B is for Bear) 18:09, 14 November 2012 (UTC)
OK, I'm confused
End of Answers_Research_Journal_volume_5 - does anyone else see the text "Retrieved from "" " (or whatever the revision ID is today) and the categories looking just like the CSS is broken? Been constant through a few page revs - David Gerard (talk) 16:00, 19 November 2012 (UTC)
- I see it. Evil fascist oh noez 16:05, 19 November 2012 (UTC)
Watchlist
Sadly, mine is still broken.
This is not trivial, I consider it a disaster - my basic method for navigating recent edits to pages I care about has now been broken for months. If wikipedia can support my thousands of watched pages, why can't we? ħuman 07:59, 28 November 2012 (UTC)
- Well, Wikipedia's servers and software infrastructure cost thousands of dollars a month, so that's probably the reason it works there. Furthermore, if it's only one person affected, and that person's demands are unreasonable, then it is trivial. The frank, sensible suggestion (mysteriously not raised as of yet), is that you trim your grotesquely large watchlist. The watchlist function was created to allow users to track the articles most important to them; it is not a hack to create a version of RecentChanges that's even larger and more unwieldy than RecentChanges itself. You might as well be complaining that you can't use the search function to create an instant list of every page on the wiki by searching for "the" or "a". It doesn't matter if that's an integral part of how you navigate the wiki — that's not what it's for, and you damn well know it. Radioactive Misanthrope 08:35, 28 November 2012 (UTC)
- My car is a sturdy little thing, and it usually takes me where I want to go. But just the other day, I used a dozen lengths of chain to strap my apartment building to my car, so I could move it down the block a little bit. First the chains broke, and then my car's engine overheated.
- I am complaining to the manufacturer that they need to fix this. Surely, if a car can carry people and household objects easily from one place to another, it must be broken if it can't transport several tons of building. Right? --talk 10:38, 28 November 2012 (UTC)
- I'm sure that someone could come up with a Greasemonkey script to filter out the 20(?) pages that Human does not want to see in RC. Генгис Is the Pope a Catholic? 10:54, 28 November 2012 (UTC)
- Background for those not privy: As said here, it seems as though the problem is that he has about 18,800 pages on his watchlist. --talk 11:15, 28 November 2012 (UTC)
- Human asked me about this on my talk page. My last question, in an attempt to diagnose the problem, was "Just how many pages are on your watchlist?" Human didn't answer, instead complaining here that it hadn't been fixed. This is rather less than helpful.
- FWIW, I've been playing silly buggers with Divabot's watchlist. At 15,000 or so entries (all of mainspace and lots of image space) I got it to give me blank pages if I asked for 3000 days' changes, though not at 30 days. This also didn't quite help me duplicate the problem, as it didn't cause the out of memory error Human's did.
- The problem with allocating enough memory for a pathological watchlist is that memory is the one thing we're really short of, and it would be nice not to break the server quite so much. However, with something to help diagnose stuff, at least we can try to do things.
- So, Human. How many pages are actually on your watchlist? That would be a useful start, which is why I asked it before - David Gerard (talk) 21:41, 28 November 2012 (UTC)
- Well, if he can't load Special:Watchlist, I'm not sure how can he find out the number. :) Perhaps by opening Special:EditWatchlist/raw, copying the entries to a text editor and looking at the number of lines?--ZooGuard (talk) 09:23, 29 November 2012 (UTC)
- And even if that doesn't work, that's also useful information - David Gerard (talk) 14:58, 29 November 2012 (UTC)
- As it happens, MediaWiki 1.19.3, which I've just applied, includes a fix for a watchlist bug - worth trying again - David Gerard (talk) 08:31, 30 November 2012 (UTC)
- Which one is that? I can't find anything relevant in the changelog. -- Nx / talk 13:31, 30 November 2012 (UTC)
- A regex thing which could theoretically break watchlists and RC. [1] Hey, bugfixes for free - David Gerard (talk) 20:55, 30 November 2012 (UTC)
Human has now sent me his watchlist! I shall be investigating what the fucking fuck ... - David Gerard (talk) 00:34, 2 December 2012 (UTC)
- This one is made of pathological edge cases. Divabot can load its watchlist and Special:EditWatchlist/raw, but trying to blank it from there is throwing up PHP fatal errors and 500ing. Special:EditWatchlist gives a PHP fatal error 500 too. And it's clearly doing horrible things to MediaWiki, as the load average hits about 15 whenever I try this. Special:Watchlist/clear redirects to Special:EditWatchlist/raw. Now attempting to clear Divabot's watchlist from the database ... - David Gerard (talk) 15:56, 2 December 2012 (UTC)
- And I accidentally deleted the whole watchlist for User:Huey gunna getcha too. Bah. But I deleted Divabot's watchlist, and made it a copy of Human's! ... and Divabot's doesn't break. FUCK. It'll be something weird and complicated - David Gerard (talk) 16:06, 2 December 2012 (UTC)
- FUCK. No, I accidentally deleted Human's. ARSE. - David Gerard (talk) 16:07, 2 December 2012 (UTC)
- Well, that's a solution, isn't it? ;D--ZooGuard (talk) 16:11, 2 December 2012 (UTC)
- I really didn't mean to do that. So it's a GOOD THING we both have a full list of its previous contents. I have emailed my profuse apologies with more stuff to test. (It really was completely random that it was Human.) This is what I get for doing dangerous things completely sober - David Gerard (talk) 16:12, 2 December 2012 (UTC)
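For anyone retracing the database surgery above: on a stock MediaWiki schema, wiping one user's watchlist comes down to something like the following. The user name and ID here are placeholders, and — as the accidental deletions above demonstrate — back up the rows first.

```sql
-- Find the numeric ID for the account in question:
SELECT user_id FROM user WHERE user_name = 'Human';

-- Then remove every watchlist row belonging to that user:
DELETE FROM watchlist WHERE wl_user = 12345;
```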
Capturebot not capturing again
For at least the last six hours (as of the time of this post), Capturebot has been failing to cap and upload pictures. I put two links in the sandbox and nothing happened with them, and there are uncaptured links on WIGO:CP and WIGO:CP talk as well. However, Capturebot reports it's online when queried from the console. Ochotonaprincepsnot a pokémon 07:03, 5 December 2012 (UTC)
- I have no idea, but will try to have a look in a few hours - David Gerard (talk) 21:36, 6 December 2012 (UTC)
Down again, since before the downtime earlier today. Peter Subsisting on honey 08:38, 23 December 2012 (UTC)
More links for JFK assassination page
moved to Talk:John F. Kennedy assassination conspiracy theory SophieWilder 17:08, 25 December 2012 (UTC)
Reporting in a rude comment and reporting Brendiggg
Moved to Talk:John F. Kennedy assassination conspiracy theory
HotCAT
Here's something interesting: I can still add categories with HotCAT, but for some reason or another, I can no longer remove categories. Not sure if this is unique to me or not. Reckless Noise Symphony (talk) 13:20, 30 December 2012 (UTC)
Spam user creation spree
We seem to get a hell of a lot of user accounts created, most of which don't seem to make a single edit. I assume these are mostly spambot-generated. Is there anything we can or should do about it? Wèàšèìòìď Methinks it is a Weasel 23:11, 26 December 2012 (UTC)
- There's various increasingly harsh actions that can be taken as and when it becomes a problem, without preemptively going all Citizendium on account creation - David Gerard (talk) 12:09, 27 December 2012 (UTC)
- Obviously the security questions which were put in place a few months ago (third letter of the logo etc.) haven't been very effective. Should they be replaced with something more robust? Щєазєюіδ Methinks it is a Weasel 15:02, 27 December 2012 (UTC)
- Bump. Sprocket J Cogswell (talk) 15:30, 7 January 2013 (UTC)
- There is nothing more robust outside of draconian policies like IP blocking all of china, shutting off new user creation, or requiring some sort of application procedure. Tmtoulouse (talk) 15:51, 7 January 2013 (UTC)
- It seems like whatever mojo we're using to keep spambots from actually spamming the wiki are working pretty well. Let them waste their time making useless accounts. Theory of Practice "Now we stand outcast and starving 'mid the wonders we have made." 15:55, 7 January 2013 (UTC)
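The "third letter of the logo" questions are presumably ConfirmEdit's QuestyCaptcha; if so, swapping in a larger or harder question pool is a LocalSettings.php tweak roughly like this (MW 1.19-era include paths; the question and answer are made up):

```php
// LocalSettings.php - illustrative QuestyCaptcha setup (ConfirmEdit extension)
require_once( "$IP/extensions/ConfirmEdit/ConfirmEdit.php" );
require_once( "$IP/extensions/ConfirmEdit/QuestyCaptcha.php" );
$wgCaptchaClass = 'QuestyCaptcha';

// Each entry is one question/answer pair; a bigger pool is harder to script.
$wgCaptchaQuestions[] = array(
    'question' => 'What horned farm animal is this wiki unusually fond of?',
    'answer'   => 'goat',
);

// Only challenge account creation, not ordinary edits.
$wgCaptchaTriggers['createaccount'] = true;
```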
Serving stale pages
I thought this had gone away for a while, but it seems to be back: seeing a recent change, I go directly to the page in question (often the saloon) and do not see the new content until after a page refresh. Just now it was the two most recent topics which were missing, along with their responses. Since RW is the only place I see this, I doubt it is a caching thing here in me browser. If I click on the diff in RC, I see the new stuff right away. Sprocket J Cogswell (talk) 15:30, 7 January 2013 (UTC)
- It is the squid cache layer. Whether or not your squid updates the cache of the page depends not just on whether it has changed, but also on the response time of the database server, other users accessing the page, etc. Under high load this is more likely to happen, and is actually a good thing overall because it is what keeps the site from crashing completely like it used to. Tmtoulouse (talk) 15:49, 7 January 2013 (UTC)
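The cache layer can be watched from outside by inspecting squid's response headers on repeated requests (the host name here is illustrative, and the exact header depends on the squid configuration):

```shell
# Squid normally adds an X-Cache header: MISS when it had to ask the
# backend for the page, HIT when it served its own cached copy.
curl -sI http://rationalwiki.org/wiki/Main_Page | grep -i '^X-Cache'
```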
Uploading copyrighted images under fair use
Not sure if this is the best place to post this, but the system for uploading copyrighted images under a claim of fair use is rather messy. I can't find fair use as an option on the licensing dropdown, which means that despite filling in the {{fair use}} template I still get an angry red message telling me that the image does not have an image copyright notice. And the "fair use upload form" (linked from the standard form) is a clusterfuck of bad code (e.g. %7CDate = %7E%7E%7E%7E%7E%0A). Wéáśéĺóíď Methinks it is a Weasel 21:11, 7 January 2013 (UTC)
- +1 I've had a lot of trouble with this. Can anyone direct us to the extension that does the licenses? Uke Blue 23:23, 7 January 2013 (UTC)
- The usual explanation for why it's not on the dropdown is here. I don't know where the extension is. Peter Subsisting on honey 23:36, 7 January 2013 (UTC)
- I would add a dropdown item called "Fair use (use above template)" that would simply serve as an alternative to the "None selected". It wouldn't insert a license, but it would let the upload know, so to speak, that there is a license, but it's not in the dropdown. Seme Blue 23:47, 7 January 2013 (UTC)
- Found it: MediaWiki:Licenses is the page you want then. Don't break anything. Peter Subsisting on honey 23:50, 7 January 2013 (UTC)
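For the record, MediaWiki:Licenses is a bulleted list where `*` lines with no pipe are group headers and `** wikitext|label` lines are selectable options, so an entry along the lines Blue suggests might look like this (names illustrative):

```
* Free licenses
** GFDL|GNU Free Documentation License
* Fair use
** Fair use|Fair use claim (fill in the {{fair use}} rationale above)
```

Selecting the last option would drop {{Fair use}} into the file description, which doubles as the "license" the upload form is looking for.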
Blacklist
Do we have a link-spam blacklist? aetherometry.com redirects all incoming from our site to a bestiality porn site, so I was going to add it to MediaWiki:Spam-blacklist, but that dosen't exist yet so I wasn't sure. Hipocrite (talk) 21:01, 10 January 2013 (UTC)
- We use Abuse filter for that. I'll add it. Nobodydon't bother 21:04, 10 January 2013 (UTC)
- Didn't we have a redirector for sites that pull that one? - David Gerard (talk) 21:22, 10 January 2013 (UTC)
- If we did I don't know where it is. Nobodydon't bother 21:28, 10 January 2013 (UTC)
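For reference, a minimal AbuseFilter rule for this sort of link block is a one-liner; the filter actually installed on the wiki may differ:

```
/* Disallow adding links to the bestiality-redirecting domain */
added_lines irlike "aetherometry\.com"
```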
"New thread" button in forum
The "new thread" button in the forum portal pages (e.g. RationalWiki:Forum/Site-related discussion or RationalWiki:Forum/General discussion is busted, and brings up a "page title was invalid" page. Wēāŝēīōīď
Methinks it is a Weasel 09:38, 15 January 2013 (UTC)
- Just a bad interface design behaving in a non-obvious way - if you enter a title it works, if you leave the title blank it gets confused. There should be some way to give it a better default behaviour, or at least a better failure message - David Gerard (talk) 10:16, 15 January 2013 (UTC)
number of articles in
Is there a way to get, in an article, the number of articles in a known category at the time the article is accessed? Same for a namespace? Jean 5 5 (talk) 00:19, 17 January 2013 (UTC)
- Here you go Nobodydon't bother 00:25, 17 January 2013 (UTC)
- Thanks Jean 5 5 (talk) 01:35, 17 January 2013 (UTC)
- No problem. Nobodydon't bother 01:48, 17 January 2013 (UTC)
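The link above presumably points at MediaWiki's magic words. For the record, the relevant ones look like this; {{PAGESINCATEGORY:}} works out of the box, while the per-namespace counter needs $wgAllowSlowParserFunctions turned on:

```wikitext
There are {{PAGESINCATEGORY:Conspiracy theories}} articles in that category.
There are {{PAGESINNS:0}} pages in the main namespace.
```

Both counts are evaluated when the page is parsed, so they can go stale until the page cache is purged.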
Move template
Can someone add the "moved this section" template to the handy menu at the bottom of the edit window? Theory of Practice "Now we stand outcast and starving 'mid the wonders we have made." 22:10, 20 January 2013 (UTC)
- I have no idea where this is set - can't find it in Special:Allmessages. Anyone else know? - David Gerard (talk) 23:16, 20 January 2013 (UTC)
- On it. Tytalk 01:08, 21 January 2013 (UTC)
- Dude! Theory of Practice "Now we stand outcast and starving 'mid the wonders we have made." 01:13, 21 January 2013 (UTC)
- It's what I'm here for. Tytalk 01:18, 21 January 2013 (UTC)
- Nice one. Where was that configured, btw? - David Gerard (talk) 12:17, 21 January 2013 (UTC)
- MediaWiki:Edittools TyJFBANBSRADA 17:55, 28 January 2013 (UTC)
Wiki hiccups - it's been the squids[edit]
The squids have been arsey, so I've done something grossly inelegant and set them to restart daily (squid1 at 00:00, squid2 at 12:00) while I try to work out what's actually going wrong with them. The server itself seems mostly to have been happy, so that's nice. Service should be slightly more consistent - David Gerard (talk) 14:36, 26 January 2013 (UTC)
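- For the curious, the "grossly inelegant" daily restart would be something along these lines in cron — the service name and exact scheduling here are assumptions for illustration, not a copy of the real config:
<pre>
# hypothetical /etc/cron.d/squid-restart
0 0  * * *  root  service squid3 restart   # squid1, midnight
0 12 * * *  root  service squid3 restart   # squid2, noon
</pre>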
- Trent says it's the OOM killer coming out to play. There's nothing else on said boxes except squid3. Feck - David Gerard (talk) 17:16, 26 January 2013 (UTC)
- From someone who has no idea what the squids or the OOM killer are, you're doing a great job as sysadmin. Seme Blue 17:33, 26 January 2013 (UTC)
- For what it's worth, it looks like Blue and I have occasionally been hitting the API pretty hard, assuming that's how she's getting her data.
18:03, 26 January 2013 (UTC)
- The API is not going to go through the squids, the issue has to do with the amount of cached material versus available memory. I think there are a few solutions but not ones I have worked with before so will take some research before implementing. Tmtoulouse (talk) 18:17, 26 January 2013 (UTC)
- Have you tried combobulating the fribbit?--"Shut up, Brx." 18:06, 26 January 2013 (UTC)
FUCKING THING AAAAAAAA So it broke again. I saw someone on FB flag it as failing and both squids had fallen over. I've restarted them, but evidently I'm going to need a once-a-minute checker. Fucksake - David Gerard (talk) 11:33, 28 January 2013 (UTC)
- Just apply enough soy sauce to keep them twitching. :D --ZooGuard (talk) 11:40, 28 January 2013 (UTC)
- Obviously PZ has sent his squishy minions out on strike - David Gerard (talk) 11:53, 28 January 2013 (UTC)
Tech blog post on the topic - will be all "axle-maxle manglebratic geeble-geep" to most - David Gerard (talk) 11:56, 28 January 2013 (UTC)
- Remember the technique of shouting at it. Threatening to deep fry and serve them with a side salad might do the trick.
narchist 13:52, 28 January 2013 (UTC)
- In my professional experience, the most effective technique is to mutter a credible threat quietly. This is why, when your computer isn't working, the IT guy comes over and glares at it and suddenly it's sweetness and light until he goes away again - David Gerard (talk) 14:44, 28 January 2013 (UTC)
- The threat is more credible when the utterer is holding the Sceptre of Authoritah pointed in the general direction of the offending box. In my day, that was a #2 Phillips screwdriver, preferably Xcelite, with a stinky transparent yellow handle. Sprocket J Cogswell (talk) 14:51, 28 January 2013 (UTC)
Pink flash[edit]
Today I've been seeing a transient pink background (for about a few hundred milliseconds) in the part of the window that isn't banner or sidebar. Occurrence is infrequent, and most of the time it seems to be associated with a squid-clearing refresh <F5>. Is this a feature, a bug, or retinal detachment? Just curious, but thought it was interesting. Sprocket J Cogswell (talk) 18:41, 29 January 2013 (UTC)
- Has this co-occurred with any sensation of numbness or tingling, particularly in the extremities, any loss of time, trouble speaking, or dropping of the mouth? Tmtoulouse (talk) 18:54, 29 January 2013 (UTC)
- Loss of time? Most certainly, but that's easily attributable to reddit. The color is roughly #ff8080, evenly from edge to edge of the field, which is the body text area of the RW page. My short-term memory is every bit as good as it was five minutes ago. I had a chicken/avocado sandwich for lunch, which I made all by myself.
- By the way, I'm using the vector skin, and an older issue of FireFox. Sprocket J Cogswell (talk) 19:52, 29 January 2013 (UTC)
- I cannot think of any pink element whatsoever - David Gerard (talk) 21:04, 29 January 2013 (UTC)
- It hasn't happened again since, but I haven't been on the site much. If it does happen again, I'll grab more particular details. Sprocket J Cogswell (talk) 21:41, 29 January 2013 (UTC)
- Gah. It was here. The hidden box has a pink background, and was taking its own sweet time to collapse. I will now give myself five minutes' time out in the drooling idiot box. Sprocket J Cogswell (talk) 22:50, 29 January 2013 (UTC)
- Ya I've seen it too. I suspect just js waiting until the whole page is loaded before hiding that DOM element. It's a Good Thing that occasionally leads to a weird user experience. Lol do you always see colors in hex?
23:24, 29 January 2013 (UTC)
- Now that must have just been designed to piss people off - David Gerard (talk) 00:01, 30 January 2013 (UTC)
- Nutty, I see colors with the customary human wetware, but I have an image processing program handy that lets me see what various RGB values look like. Ask an artist about visual memory sometime, how it fades, how it can be fooled, and how it can be trained. I think I can remember colors more accurately and a bit longer than I can remember the sound of a particular violin, but I could be horribly wrong about both. At any rate, the GIMP tells me the pink in question is #ffc0cb, so I was horribly wrong about that too. <sob> Sprocket J Cogswell (talk) 00:18, 30 January 2013 (UTC)
- JavaScript was designed to mess with users and developers. It's hard to debug and ... anything that's 99% anonymous functions with that many mustaches is ALIENS.
00:36, 30 January 2013 (UTC)
- That's like demanding perfect pitch. Colors are remarkably different looking from context to context. FF8 and FFC are pretty close. The only thing that's weird is how you remembered the saturation. I can easily see people having a hard time distinguishing a washed out rosy pink and a washed out salmon even if they were presented together. Interesting how everyone remembers these sort of transitory sensory experiences differently.
00:50, 30 January 2013 (UTC)
- All I can remember is reading the actual article text in the pink box and knowing that pink will be associated with said article text burnt into my brain FOREVAAAAAHHHHHH (till tomorrow, maybe the next day) - David Gerard (talk) 01:20, 30 January 2013 (UTC)
Sarcasm tag extension[edit]
I made a sarcasm tag extension here: mw:Extension:Sarcasm
I think it could be useful. We don't need to use it in articles if that would be obtrusive, but using it in discussions could be good. I also created a CSS class for the tags so it's easy to change the sarcasm text from the wiki. ఠ_ఠ Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 07:59, 30 January 2013 (UTC)
Vandal bin[edit]
Is it just me, or does the "<ipblocklist-username>" stuff appear in the "Find a vandal binned user" box at Special:VandalBin for everyone? Peter Droid whisperer 03:05, 2 February 2013 (UTC)
- I see it too. TyJFBANBSRADA 03:07, 2 February 2013 (UTC)
- Not only that, but when you bin somebody, the first check box says "<ipbanononly>" --PsyGremlinSnakk! 07:14, 4 February 2013 (UTC)
- It appears to be in the actual PHP code for the vandalbrake extension, calling system messages that don't exist in 1.19, and this is just the first anyone's said anything. I could be wrong of course. In the meantime, I'm trying to work out what would be a good variable to use instead - David Gerard (talk) 21:05, 4 February 2013 (UTC)
- I was right - fixed the first one by fiddling with the extension code, second by creating the interface message (since the message does exist on Wikipedia). Any others? - David Gerard (talk) 21:19, 4 February 2013 (UTC)
Today's Holiday on RC[edit]
It's all smooshed over to the left instead of being centred. This has happened a few times before. Theory of Practice "...and we do love you madly." 13:59, 5 February 2013 (UTC)
- Fixed. TyJFBANBSRADA 15:09, 5 February 2013 (UTC)
- I knew I could count on you, kid. Theory of Practice "...and we do love you madly." 15:16, 5 February 2013 (UTC)
- Hey, it gives me something to do. TyJFBANBSRADA 15:19, 5 February 2013 (UTC)
WigoBot[edit]
Would someone please direct me to the code for WigoBot. Thanks.
16:09, 5 February 2013 (UTC)
Edit filter[edit]
[2] is an interesting edit filter. Implementation here should be considered. Hipocrite (talk) 21:16, 5 February 2013 (UTC)
- It's our Filter 1. Uke Blue 21:20, 5 February 2013 (UTC)
Ridiculously insignificant bug[edit]
The editcount extension hasn't been updated since LiquidThreads was installed, so both of the LQT namespaces show up as blank spaces. Uke Blue 00:41, 15 February 2013 (UTC)
large scale rollback?[edit]
Is there a way to rollback all of a user's edits in one go?--"Shut up, Brx." 01:43, 15 February 2013 (UTC)
- Not unless you want to write a script, though it's fairly simple if you just open the user's contribs and click "rollback - open in new tab". Uke Blue 01:55, 15 February 2013 (UTC)
- That's what I did, but my middle mouse button is deficient. Also, if the bot is prolific enough, it's possible that some of its misdeeds won't be seen. Then the vandalism will be present on an article indefinitely. Btw, thanks for activating the edit filter--"Shut up, Brx." 01:58, 15 February 2013 (UTC)
- Middle mouse button? I just right click then left click on the menu. Micro! Also, how would any edits turn invisible? Uke Blue 02:01, 15 February 2013 (UTC)
- Ctrl + left click should do the job, at least in most Windows set-ups. ΨΣΔξΣΓΩΙÐ
Methinks it is a Weasel 02:05, 15 February 2013 (UTC)
- If we fail to revert quickly enough, certain edits may become obfuscated simply because they are one among so many. And I'm not a huge fan of having to use the mouse menu to open new tabs. Before my mouse got faulty, I would click the middle mouse button over a link and it would open in a new tab for me. I also have a button on my mouse that double clicks for me, and one that goes forward in my history for browsers. I are lazies--"Shut up, Brx." 02:05, 15 February 2013 (UTC)
- Get popups. Seme Blue 02:09, 15 February 2013 (UTC)
- (EC) Use this list to remove most of the RC entries that are irrelevant to the cleanup. Peter mqzp 02:10, 15 February 2013 (UTC)
- @Blue: How do you mean?--"Shut up, Brx." 02:12, 15 February 2013 (UTC)
- Preferences -> Gadgets -> Browsing gadgets. First one on the list. Uke Blue 02:14, 15 February 2013 (UTC)
Searches exclude recent results[edit]
A few times recently I've been trying to find a fairly recent Saloon Bar discussion & just haven't been able to track it down with the barchive search. Now I see that the search just isn't picking up search terms from the last dozen or so archives at all. Random examples:
- Dyscalculia : link, search
- Monsanto : link, search
- cesspool : link, search
- Clint Eastwood : link, search
- pwnage : link, search
- CHRIIIIIIIIISTMAAAAAAAAAAASSSSS! : link, search
The same seems to be happening with the WIGO:CP talk archives:
Probably the same with other discussion archives. Is our search feature only able to search cached pages or something? Ŵêâŝêîôîď
Methinks it is a Weasel 10:23, 23 December 2012 (UTC)
- I had noticed that too, but I'd assumed that our search engine was just intrinsically faulty due to MediaWiki being horribly shitty from the back end, so I didn't bring it up.
Radioactive Misanthrope 10:57, 23 December 2012 (UTC)
- Wikipedia doesn't seem to suffer from this problem. Wéáśéĺóíď
Methinks it is a Weasel 11:07, 23 December 2012 (UTC)
- Oh come on, that's totally unfair. WP pays almost $1 million per month and employs about 3 dozen people to keep things from going to poop.
Radioactive Misanthrope 11:22, 23 December 2012 (UTC)
- Yeah, I was just pointing out this seems to be a site-based problem, not a general MediaWiki problem. Wẽãšẽĩõĩď
Methinks it is a Weasel 12:31, 23 December 2012 (UTC)
Buggered if I know. I did just restart the daemon again. Presumably it will index when it feels like it. We use the Lucene-based search, which is the same thing Wikipedia uses. Not sure what the lag is on theirs though - David Gerard (talk) 11:51, 23 December 2012 (UTC)
- It seems to affect all fairly recently created pages, not just talk pages. E.g. the Errol Denton article doesn't show up on a search for Denton. Щєазєюіδ
Methinks it is a Weasel 13:37, 28 December 2012 (UTC)
- Hmm ... on a few quick searches, I can't see any results past July. Can anyone else? (Trent, how tf does this thing work?) - David Gerard (talk) 13:54, 28 December 2012 (UTC)
- The Lucene indexer is supposed to run every day, I think it's in cron.daily -- Nx / talk 14:02, 28 December 2012 (UTC)
- I'm making this sticky (if that function is even working) till it's fixed, as this problem is making it seriously difficult to find fairly important things like the current mod & board elections. Wẽãšẽĩõĩď
Methinks it is a Weasel 02:35, 31 December 2012 (UTC)
574 [main] WARN org.wikimedia.lsearch.oai.OAIHarvester - Error reading from url (will retry): java.io.FileNotFoundException: at sun.net.
Yeah, that probably doesn't help - David Gerard (talk) 08:33, 15 February 2013 (UTC)
- I tried switching on extension OAIRepository but haven't managed it quickly (the instructions are shitful). So I'm now running ./build to do a complete reindex, which is sorta not fast. Doing ./update via OAIRepository is the Right Thing, so I'll keep fiddling intermittently with that - David Gerard (talk) 08:54, 15 February 2013 (UTC)
- Well, that didn't work - crashed out with:
26231 [main] INFO org.wikimedia.lsearch.ranks.Links - Opening for read file path java.io.IOException: no segments* file found in org.apache.lucene.store.FSDirectory@file path
- Fucking thing. Searches up to about July 2012 still work. Will kick further - David Gerard (talk) 08:57, 15 February 2013 (UTC)
- Thank you for working on this, David!
Radioactive Misanthrope 08:59, 15 February 2013 (UTC)
- JUST SUCCESSFULLY BUILT THE INDEX! Now seeing if it can be done twice. (Takes 22 minutes with our 85,000 pages.) - David Gerard (talk) 21:36, 15 February 2013 (UTC)
- Oh, and Trent disabled OAIRepository because it fucked up other special pages. If we want it we need to fix it properly. Urgh. - David Gerard (talk) 21:48, 15 February 2013 (UTC)
Search updated!!![edit]
I built it and it worked. I built it a second time and it worked. This proves it works. Pretty much. So it's in weekly crontab. It took 26 minutes to run just now (for 86,584 pages), so it'll need an eye kept on it - David Gerard (talk) 22:07, 15 February 2013 (UTC)
- Nice one DG. Thanks for sorting it. Ŵêâŝêîôîď
Methinks it is a Weasel 22:10, 15 February 2013 (UTC)
DNSBL[edit]
The IP address of the coffee shop I work at sometimes is blocked, presumably by the edit filter. The problem is it's a Comcast static business address. If it was ever used as a proxy, it was in 2010 according to the single dated reference to the IP w/ "proxy" Google returns - that's a little weird because it's within a few months of this shop opening up but it's also possible they wouldn't have had the same static IP almost 3 years ago. There's a reference to Wed., Sept. 12, which could have been Sept. 12 last year, but this makes little sense. I was working here back then as well - it's just a bunch of hipsters and hackers. I'm interested in knowing what attempts the edit filter is actually catching and how they compared to what Brasov is able to pull off.
16:01, 13 February 2013 (UTC)
- The filter isn't on, but if you give me the IP I can check the logs to see if it got caught at some point. I can go through the filter logs to give you an innocent IP:Brasov ratio in a while. TyCarnival time. 16:31, 13 February 2013 (UTC)
- Of the 1310 hits on the disable ip editing filter, 28 were legitimate IP edits, 6 were other vandals, the remaining 1276 were Brasov. TyCarnival time. 16:46, 13 February 2013 (UTC)
- Wow. The success rate doesn't justify investigating individual IPs.
17:05, 13 February 2013 (UTC)
- Eh? TyCarnival time. 17:07, 13 February 2013 (UTC)
- So wait, you're using the edit filter to stop all IP editing because of one fucktard? -- Nx / talk 18:02, 13 February 2013 (UTC)
- and they won't even let me run night mode... Occasionaluse (talk) 18:05, 13 February 2013 (UTC)
- (EC) That was what we did when the attacks showed no sign of stopping after a few hours and we didn't want to have to wait for David or Trent to both notice and decide what to do. Currently we're employing a DNS proxy ban courtesy of David. Seme Blue 18:07, 13 February 2013 (UTC)
State of the Vandal[edit]
For obvious reasons it would be silly to list plans here, but we're not asleep and a number of the more technically minded editors are working on this issue - David Gerard (talk) 23:57, 14 February 2013 (UTC)
- Good to know. Are all the erros in the last few minutes your "fault" or the vandal's? Peter mqzp 00:00, 15 February 2013 (UTC)
- Tmtoulouse (talk) 00:01, 15 February 2013 (UTC) <---- fault
- TOU-LOUUUUUUUUUSSSSEEEEEEE! - David Gerard (talk) 00:02, 15 February 2013 (UTC)
- Thank you, David and Trent. We do appreciate all of your work, especially in times like these.
- For future reference, it'd be best if you mention "we're working on it" when you start working on it, because otherwise it looks like you're ignoring the site while the rest of us struggle with damage control. You're not Seal Team 6; "operating in silence" isn't an option when other people are relying on you.
Radioactive Misanthrope 00:05, 15 February 2013 (UTC)
- Times like this really makes me happy that RW has the Sysop policy it does. This would be so much worse if only 5-6 people were Sysops and weren't always around. --Revolverman (talk) 00:07, 15 February 2013 (UTC)
- We can all play Bag The Vandal, I nearly got one but got a 503 then PeterL got it. Bah! - David Gerard (talk) 00:28, 15 February 2013 (UTC)
- Yeah, that's why I thought I should say something, even as a placeholder - David Gerard (talk) 00:27, 15 February 2013 (UTC)
This is getting to be an untenable situation. I have disabled IP editing via the edit filter once again. David, Trent, Nutty, please proceed with all deliberate haste, so we can return to moderately normal functioning. Uke Blue 01:58, 15 February 2013 (UTC)
- Recent changes just gets rendered useless--"Shut up, Brx." 02:00, 15 February 2013 (UTC)
- While I appreciate the work that Trent, David and Nutty are doing, I will not hesitate to activate the IP editing filter until such time as I am notified that any of their plans have gone into effect. Uke Blue 21:17, 15 February 2013 (UTC) (By that I mean should the vandal reappear, not indefinitely.) Uke Blue 21:23, 15 February 2013 (UTC)
- I can't promise hermetic whatever, obviously, but there's stuff happening. I just switched off the filter not to be a swinging dick, but because I am testing stuff - David Gerard (talk) 21:26, 15 February 2013 (UTC)
- May I suggest that, in the meantime, the filter be set to only block mainspace (and only when the vandal is present)—thus ensuring peace of mind but not blocking testing at the sandbox. Peter mqzp 21:43, 15 February 2013 (UTC)
- We are well past the "sandbox" testing, and its about testing in the wild now. Please don't turn off ip editing anywhere. Tmtoulouse (talk) 21:46, 15 February 2013 (UTC)
Is it just me...[edit]
that I am getting frequent, non-persistent 403's on RW? Thieh"6+18=24" 02:42, 15 February 2013 (UTC)
- No, it isn't. Perhaps we should put up an intercom message? Peter mqzp 02:45, 15 February 2013 (UTC)
- I emailed DG. I'm getting these up the wazoo--"Shut up, Brx." 02:48, 15 February 2013 (UTC)
- I'd be surprised if he were still up, actually. Peter mqzp 03:01, 15 February 2013 (UTC)
- I live in London, so those UTC timestamps are the time for me. I went to bed at 1am because the loved one told me to switch the damn computer off.
- The sporadic 403s should not quite be happening, but I think I know why - David Gerard (talk) 07:38, 15 February 2013 (UTC)
State of the vandal[edit]
So we're doing, er, stuff. The captcha has changed - please check if you can edit logged-out, report if you can or can't (Trent can, I'm finding I
can't am stupid). The vandal is actually hammering the servers quite hard and very little of it is getting through, but please don't block all IPs for the moment - David Gerard (talk) 21:59, 16 February 2013 (UTC)
- Cannot edit logged out. TyJFBAA 00:36, 17 February 2013 (UTC)
- Don't just type the picture, read the instructions - David Gerard (talk) 00:45, 17 February 2013 (UTC)
- *facepalm*TyJFBAA 00:50, 17 February 2013 (UTC)
- I got caught the same, I was going "TREEEEEEEEENT IT'S NOT WOOOORKING" and then went " ... ah." - David Gerard (talk) 01:18, 17 February 2013 (UTC)
- werks --83.84.137.22 (talk) 00:57, 17 February 2013 (UTC)
- Test. 121.74.95.199 (talk) 01:20, 17 February 2013 (UTC)
- I tried it several ways. Me and my tapeworm, we fail as a Turing machine. Ignorance is bliss, so don't bother to enlighten me. Sprocket J Cogswell (talk) 01:30, 17 February 2013 (UTC)
- Right now it's reCaptcha - not actually great for ordinary spam (to which we've been almost immune so far), but should work for this purpose - David Gerard (talk) 08:52, 17 February 2013 (UTC)
Formatnum[edit]
Is there a way to format numbers in the French way (199,455,333.2 => 199 455 333,2)? Jean 5 5 (talk) 18:35, 24 February 2013 (UTC)
- There isn't a way right now. Extension:NumberFormat lets you do this , we don't have it installed. I would think it would be possible to format numbers using a template and some complicated parser functions, but I don't know of such a template having been written - David Gerard (talk) 19:01, 24 February 2013 (UTC)
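- Outside the wiki, the target format is easy enough to produce; a minimal sketch in Python, just to illustrate what French-style grouping (space thousands separators, comma decimal mark) looks like — this is not wiki-side code, and it hard-codes one decimal place:

```python
def format_fr(value):
    """Format a number French-style: space thousands separators, comma decimal."""
    # Format with US-style grouping first, then swap the separators.
    # Replace the commas before the dot so the two never collide.
    s = f"{value:,.1f}"              # e.g. '199,455,333.2'
    return s.replace(",", " ").replace(".", ",")

print(format_fr(199455333.2))        # → '199 455 333,2'
```

A wiki-side equivalent would need Extension:NumberFormat (or a template doing the same string surgery with parser functions).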
WIGO:CP Talk seems to be busted[edit]
None of the signature seem to be working. --Revolverman (talk) 23:08, 24 February 2013 (UTC)
- Close your capture tags - David Gerard (talk) 00:00, 25 February 2013 (UTC)
Bertran's rights at ru-rw[edit]
(moved from user talk:David Gerard))
- No it doesn't - you've already got the Tech right, which includes the editinterface right ... so you should be able to edit those pages ... Can you edit the en:rw equivalents? Anyone else - what's going weird here? - David Gerard (talk) 22:08, 24 February 2013 (UTC)
- By the way, yes! I can edit en-equivalents! But none of those ru-pages.--Mr. B 08:30, 25 February 2013 (UTC)
- I suggest that removing my tech-rights and returning them may give me an access to those pages. What do you think about this?--Mr. B 17:59, 25 February 2013 (UTC)
- I've just tried that. Can you get in?
- Also, to solve your immediate problems: please create the new versions of the ru:rw pages you want to change, somewhere that I can copy them to the correct places, and I'll put them there.
- Trent says the ru:rw configuration and setup is just completely messed up. I'll try to find time to look at it properly ... - David Gerard (talk) 20:20, 25 February 2013 (UTC)
- OK, the 'tech' group had no powers at ru:rw (in LocalSettings.php). I've just made it so it does, i.e. you do. See what powers you have now - David Gerard (talk) 21:32, 25 February 2013 (UTC)
- I've also upgraded ru:rw from 1.19.1 to 1.19.3, you should see no difference at all but it's a little more secure - David Gerard (talk) 21:40, 25 February 2013 (UTC)
Just broke login on ru:rw[edit]
Owing to the 1.19.3 upgrade breaking some fragile rubbish elsewhere, I just broke login on ru:rw - you can log in by going to en:rw and logging in there, while we fix it - David Gerard (talk) 21:49, 25 February 2013 (UTC)
- I have just kludged it HORRIBLY, by following this kludge advice and wrapping all of /home/rationalwiki/public_html/w/includes/SkinTemplate.php in
<?php if ( ! class_exists ('SkinTemplate') ) { ... }
- - this is just terrible on so many levels: it fixes the problem by making it worse, it means upgrades won't quite behave and it seems to give random blank pages on login/logout. Gah. Trent's viva is next week, if we can hold on with the kludge until then that would be lovely! - David Gerard (talk) 22:13, 25 February 2013 (UTC)
- ... it's supposed to be broken? Are you sure? If so, I'll change it back ... - David Gerard (talk) 22:46, 25 February 2013 (UTC)
- I've changed it back as it was breaking the wiki really badly and causing all those 503s and white pages. (This bug, which is the interaction of APC and terrible code, and what I did there causes terrible code.) I can't baby it overnight, so I'll have to fix it in the morning. Or at least start on working out wtf is happening in the morning - David Gerard (talk) 00:14, 26 February 2013 (UTC)
Wiki Love[edit]
- This discussion was moved to RationalWiki:Saloon bar. Seme Blue 06:59, 26 February 2013 (UTC)
Help whith #if[edit]
why does :
Cet article est une ébauche{{#if:{{{1}}}| portant sur {{{1}}}{{#if:{{{2}}}|, {{{2}}}{{#if:{{{3}}}| et {{{3}}}}}}}}}
called whith Folklore as only argument return:
Cet article est une ébauche portant sur Folklore, {{{2}}} et {{{3}}}.
Jean 5 5 (talk) 20:41, 26 February 2013 (UTC)
- I don't know for certain, but I think if you add a pipe (|) to each of your {{{<number>}}}'s like so: {{{2|}}}, that will fix the problem. I think I know why that works, but you'll probably need David or someone if you want an understandable explanation. Peter mqzp 03:25, 27 February 2013 (UTC)
- Gawd, I don't understand template syntax. I wonder if the new Lua templating can be installed in 1.19 ... - David Gerard (talk) 08:28, 27 February 2013 (UTC)
- Well, I appear to have fixed the problem anyway. What's so great about Lua? Peter mqzp 08:44, 27 February 2013 (UTC)
- It's a programming language intended for programming in, rather than one where Turing-completeness was basically an accident - David Gerard (talk) 12:26, 27 February 2013 (UTC)
- And where's the fun in that?
- More seriously: what could we do with it that we can't do with the existing system, terrible though it may be? Peter mqzp 07:06, 28 February 2013 (UTC)
- Well, the server isn't actually melting, so it's not in fact an immediate problem. I might not die of bracket poisoning, mostly - David Gerard (talk) 08:46, 28 February 2013 (UTC)
- Thanks. Jean 5 5 (talk) 10:32, 27 February 2013 (UTC)
- The reason is that the text after the pipe is included if the parameter is blank, and if you just include the pipe with no text, it includes no text. So, for instance, if the template had wibble, you'd get
- Cet article est une ébauche portant sur Folklore, wibble et {{{3}}}.
- ...but with you get nothing at all. rpeh •T•C•E• 10:38, 27 February 2013 (UTC)
MediaWiki 1.19.4 on en:rw and ru:rw[edit]
Security patch came out a coupla hours ago, has been applied. Please let me know of any breakage (and specify new or old) - David Gerard (talk) 20:01, 4 March 2013 (UTC)
Some CSS Table of Contents thingy[edit]
A little out of my depth here but whatever.
I'm trying to limit how many levels are shown in the Table of Contents. The Goal is to get User:Voxhumana/Temp_Genesis to only show the chapter headings, and not hundreds of headings for each verse.
I think it has something to do with this (taken from here)
<pre>
/* 8. SOME OTHER SMALL THINGS
 * ------------------------------------------------- */

/* Give a bit of space to the TOC */
#toc { margin: 1em 0; }

/**
 * Allow limiting of which header levels are shown in a TOC;
 * <div class="toclimit-3">, for instance, will limit to
 * showing ==headings== and ===headings=== but no further.
 * Used in Template:TOC
 */
</pre>
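- The comment above describes selector rules that appear to have been lost in the paste. On Wikipedia they look roughly like this — a sketch from memory, not verified against the live Common.css:

```css
/* Hide TOC levels deeper than the limit set by a toclimit-N wrapper */
.toclimit-2 .toclevel-1 ul,
.toclimit-3 .toclevel-2 ul,
.toclimit-4 .toclevel-3 ul,
.toclimit-5 .toclevel-4 ul,
.toclimit-6 .toclevel-5 ul,
.toclimit-7 .toclevel-6 ul {
    display: none;
}
```

To use it, a tech would add rules like these to MediaWiki:Common.css, and the page would wrap its content in something like <div class="toclimit-2">...</div>.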
Apologies if the above is irrelevant. VOXHUMANA 10:03, 12 March 2013 (UTC)
- Can you look at User:Voxhumana/Temp_Genesis and figure out what I did wrong? I keep getting the full TOC no matter what I do. VOXHUMANA 23:18, 12 March 2013 (UTC)
- Ah. We'll need a tech then - if it only works for people who have modified their CSS that's not really a general solution. Thanks Peter VOXHUMANA 23:36, 12 March 2013 (UTC)
- I mean, you can modify your own css to test it, and once it looks good you can then get a tech to do it. Peter mqzp 23:38, 12 March 2013 (UTC)
More Bible[edit]
Vox's format changes to the bible pages are screwing with the bible text generators, e.g.:
Can somebody please fix the extension? Thanks. Peter mqzp 00:42, 14 March 2013 (UTC)
- Can you be more specific about what's wrong?
(formerly Ghostface Editah) 00:47, 14 March 2013 (UTC)
- I won't be devastated if my changes are undone in the interim - we can always restore them later. My motive is editing/reading the annotations is MUCH easier if there is a TOC to navigate. VOXHUMANA 00:58, 14 March 2013 (UTC)
- The text doesn't appear in difflinks, but it does when you load the page/preview edits. Peter mqzp 01:04, 14 March 2013 (UTC)
- I can see it on this page but not on the bible pages. I haven't been following what Vox did. I'll have a look and see what's going on.
(formerly Ghostface Editah) 01:06, 14 March 2013 (UTC)
- I have no idea how that extension works. Sorry.
(formerly Ghostface Editah) 01:08, 14 March 2013 (UTC)
- I don't know if anyone does, and it may be best to simply strip all uses from the site (if we can find them). How it appears to work is that the text is stored on the annotated bible pages, and is parsed by the extension to create excerpts like the one above. Changing the section headers slightly has the downside that the extension is now not parsing properly. All that probably needs to happen is for a line of code to be changed. Peter mqzp 01:12, 14 March 2013 (UTC)
Well I've been writing extensive Bible commentaries, and am planning to eventually get through the entire bible (I've completed 13 chapters of Genesis and a few of Joshua so far, and have offline notes on much more). However another option is to break these off into separate articles, one page per chapter. My complaint is that trying to edit a single page version is a nightmare - you are forever scrolling around trying to find where you were up to. VOXHUMANA 02:49, 14 March 2013 (UTC)
- I will look into it. Tmtoulouse (talk) 05:58, 15 March 2013 (UTC)
So how about that proxy filter?[edit]
Is it turned off or something? Because our very best friend, Syamsu, has been going on a tear with various IPs, but otherwise exactly the same. Whatever we've got is insufficient against actual meaty wandals. Ochotona princepsnot a pokémon 10:06, 16 March 2013 (UTC)
- Never mind, I wasn't paying attention to the ranges he was using. I guess he was just refreshing his IP from his ISP, not using proxies. I'd delete this, but instead, I'll just make this correction. Hurrrr. Ochotona princepsnot a pokémon 11:17, 16 March 2013 (UTC)
- The ranges he's using are:
- 109.32.0.0/16
- 109.33.0.0/16
- 109.34.0.0/16
- 109.35.0.0/16
- 109.36.0.0/16
- 109.37.0.0/16
- 109.38.0.0/16
- 31.136.0.0/16
- 31.137.0.0/16
- 31.138.0.0/16
- They're blocked for ~9 hours but I imagine he'll be back after that. rpeh •T•C•E• 11:25, 16 March 2013 (UTC)
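- For anyone scripting against range lists like this, Python's standard ipaddress module makes CIDR membership checks trivial — purely illustrative, not part of any RW tooling:

```python
import ipaddress

# The /16 ranges listed above, as network objects
RANGES = [ipaddress.ip_network(f"109.{n}.0.0/16") for n in range(32, 39)]
RANGES += [ipaddress.ip_network(f"31.{n}.0.0/16") for n in range(136, 139)]

def in_listed_range(ip):
    """True if the address falls inside any of the listed /16s."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in RANGES)

print(in_listed_range("109.35.12.7"))  # → True
print(in_listed_range("8.8.8.8"))      # → False
```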
- Well, yeah: whatever we've got is insufficient against meat vandals, because if we let IPs edit to talk to us (as we do), then they can edit to trash stuff. The price of good speech being the possibility of bad speech.
- The open proxy blocker is still in place. In the last days of the attack from the vandal we won't name, he was having to work really hard to find proxy IPs that weren't on the lists the blocker uses - so it's not perfect, but frankly it's good enough. ReCAPTCHA is still a really hard captcha to automate, so that was the final barrier.
- At this point it's philosophical, not technical: if we want IPs to be able to talk to us, we're going to get occasional persistent morons of this sort, and it becomes a social question for the mob - David Gerard (talk) 12:18, 16 March 2013 (UTC)
- Is it possible to range vandal bin?--"Shut up, Brx." 17:45, 16 March 2013 (UTC)
While block duration is topical...[edit]
Any chance of having something between 1 day and 3 months in the block duration choices? 1 week, perhaps? SophieWilder 13:41, 16 March 2013 (UTC)
- 314159 seconds = 3.6 days - added to list - David Gerard (talk) 15:43, 16 March 2013 (UTC)
Collapse top and bottom templates[edit]
WP has some very nifty template to collapse lengthy, irrelevant discussions that can't be justifiably deleted, but clutter up the page. There's a lengthy pile of off-topic whining going on in the Talk:Mind-Energy_forum page, and I wanted to collapse it so that the small percentage of actually relevant stuff could be responded to.
The WP templates in question are:
I would do it myself, but I'm fairly certain I'll f**k it up. VOXHUMANA 04:57, 19 March 2013 (UTC)
- IF admin rights are needed to see the code, let me know what you want retrieved and I'll get it for you. The templates look like they have admin level protection for editing, but I'm not sure if that prevents reading the code or not. VOXHUMANA 04:59, 19 March 2013 (UTC)
Avoiding referers[edit]
I've created template:unref, which makes possible going around referer shenanigans such as those on a certain page starting with "aether" (intentionally unlinked). I'm just not sure how reliable it is - I guess that if they notice the links, it's possible to set the redirect to work on that specific kind of Google referers too.
Apparently, there's a client-side trick to send a page with a blank referer based on a creative use of JavaScript and "about:blank". That will require some Tech work, though.--ZooGuard (talk) 22:06, 23 March 2013 (UTC)
- Link to description of said trick? - David Gerard (talk) 22:47, 23 March 2013 (UTC)
- Here - it uses an iframe and a POST request. I think it can be simplified for RW's needs, as we want just the page, not POST-ing anything to it.
- On RW, I imagine it to work like this: the template inserts a link that's a call to a JavaScript function, that function is in the common.js (or whatever it's called). The function opens a new tab/window from about:blank, then manipulates its contents in the way described in the linked post to refresh to or GET the target URL. Feasible?--ZooGuard (talk) 09:58, 24 March 2013 (UTC)
- There used to be a that I wrote because Jinx was redirecting everyone from RW to a goatse image. What happened to that? -- Nx / talk 11:23, 24 March 2013 (UTC)
- May have got lost in an upgrade - David Gerard (talk) 11:41, 24 March 2013 (UTC)
- Found it! Back in place at that URL. What are the parameters for it? - David Gerard (talk) 11:44, 24 March 2013 (UTC)
- "?url=" seems to work.--ZooGuard (talk) 11:51, 24 March 2013 (UTC)
<?php $url = $_GET["url"]; header( "Location: {$url}" ); ?>
- - David Gerard (talk) 12:59, 24 March 2013 (UTC)
- Yeah, that was my first attempt, which didn't work in all browsers (doesn't seem to work in any today) -- Nx / talk 13:44, 24 March 2013 (UTC)
Or just use anonym.to: -- Nx / talk 12:09, 24 March 2013 (UTC)
- It's in a lotta workplace censorware lists - David Gerard (talk) 13:00, 24 March 2013 (UTC)
- But yes, seems to do the right thing - David Gerard (talk) 13:02, 24 March 2013 (UTC)
- No, it doesn't - after clicking on your link, it pointed out that the referer's this page.
- I've put anonym.to in Template:Unref for now. Feel free to change it to redir.php if/when you fix it.--ZooGuard (talk) 13:10, 24 March 2013 (UTC)
- Well, c*ck - David Gerard (talk) 13:20, 24 March 2013 (UTC) | https://rationalwiki.org/wiki/RationalWiki:Technical_support/Archive8 | CC-MAIN-2017-43 | refinedweb | 9,381 | 69.72 |
Hello,
currently it is not possible to create a supplier return (notification) in case the corresponding inbound delivery document is not in ByDesign. We are currently working on a concept and hope that we can deliver this feature in one of the next releases. Nevertheless we think there might be a good workaround which can even be automated with some add-on development.
First you have to create a supplier return delivery notification. This either can be done outside the ByDesign system or you could create a new business object as an add-on. This new BO would then represent the supplier delivery notifications and new instances could e.g. be created with reference to migrated purchas orders or by manually typing in the data. Print forms can also be created based on this add-on business object.
As a next step you need to consume the goods with reference to a cost center. This is available as a common task in the ‘Internal Logistics’ work center. The underlying business object is released for add-on developments as well.
Now you need to create a credit memo without purchase order. Please use the same cost center as used for the ‘consumption for cost center’.
This will now lead to the following financial postings:
Since the same cost center is used both for inventory consumption and credit memo postings the two amounts would perfectly balance to zero. In case the inventory valuation costs and the credit memo amount differ an additional posting might be needed.
In case credit memo amount is bigger than valuation amount:
Debit Material Consumptions Credit Price Differences
In case credit memo amount is smaller than valuation amount:
Debit Price Differences Credit Material Consumptions
Hope this helps,
Sissi, Marco and Stefan | https://blogs.sap.com/2017/02/03/supplier-return-without-inbound-delivery-reference/ | CC-MAIN-2018-39 | refinedweb | 293 | 53.21 |
Tax
Have a Tax Question? Ask a Tax Expert
Hi. My name is ***** ***** I will be happy to help you.
To file a joint return, both parties must agree. Unfortunately you cannot force her file jointly if she doesn't want to. If you are not legally separated or divorced and she doesn't want to file a joint return, your only option is to file married filing separate. It may affect your financial aid and you will not be able to claim any student or education related expenses or credits but if she doesn't want to you cannot legally make her to file jointly.
She said she is filing separated / head of household claiming both kids. Is that because she has the kids more than me? How do I switch all my joint return I imported in ttax from joint to married filing separate?
If you kept separate households for more than 6 month and the children live with her for more than 6 month, she can file head of household. In order to file head of household, you would need a dependent. Once a return is filed jointly it cannot amended/changed to married filing separate. You can change from separate to joint but not the other way around..
I did not file the return yet. I just wanted to know if since I filed married joint last year and now this year as married filing separate how can she claim my college so as head of household when he lives away at college?
Every person listed on a tax return (taxpayers and dependents) is usually entitled to one exemption. The college deductions and credits goes with the exemption. It means it can only be claimed on the return on which the exemption for the student is claimed. If she files head of household and you claim married filing separate neither of you can claim the benefits because she is not claiming exemption for you and in your case (you claim your own exemption) the disclassifier is the filing status.
Q: how can she claim my college so as head of household when he lives away at college?
I am not sure if you are talking about your education your your son's education. Even if the child is away at college and technically doesn't "live" with her, she can still claim the exemption for the child and the education benefits because going to college is considered a temporary absence for tax purposes, just like just like going to health institution, prison or combat. In this circumstances the person is expected to return to his primary residence and therefore such "going away" is classified as temporary absence (time is irrelevant). If the child lives apart to go to college, the child still meet all tests to be her dependent.
I'm sorry I don't understand.this.how due to filing status neither can claim benefits..what benefits..and if we filed joint would we have these? If so I need to convince her in layman terms the adv to filing joint.
The college credits or deductions. They are not allowed on married filing separate tax return. There also other credits not allowed when file separate like earned income credit. But the only way to see the full picture is to prepare the return both ways (filing jointly or Head of household and filing separately). There are too many variables (income, type of income, state you live in, age of the dependent just name few).
In 95% of cases it is more advantages to file jointly than separately.
What about how filing status affects fasfa, eligibility, or grants?
If you are formally and/or legally separated you ex-spouse's income will not be considered. Financial officer may ask for a proof of separation like utility bill or rental agreement for instance. You will only use your income and your return for file FASFA application. | https://www.justanswer.com/tax/9jqzl-separated-filed-last-yr-joint-wife.html | CC-MAIN-2017-43 | refinedweb | 659 | 71.65 |
flutter_screen_recorder 0.0.1
A new flutter plugin project.
flutter_screen_recorder #
A flutter plugin to record phone screen and generate
.mp4 video, this plugin inspired by react-native-screen-recorder
Accomplishement #
The screen recording feature is able to run on both platforms but the implementation is quite different on each platform.
iOS #
ASScreenRecorder is included as an external library to handle the screen recording function. All I need
The output video will be saved into iOS camera roll.
* In this app, it's naively assumed that the recording is always the latest video in the camara roll.
Android #
Work on progress..
Installation #
Add
flutter_screen_recorder to your
pubspec.yamlfile.
Usage #
import 'package:flutter_screen_recorder/flutter_screen_recorder.dart'; // Start screen recording final result = await FlutterScreenRecorder.startRecording(); if(result){ // Start screen recording succeed } final result = await FlutterScreenRecorder.stopRecording(); if(result){ // Stop screen recording succeed } | https://pub.dev/packages/flutter_screen_recorder | CC-MAIN-2020-40 | refinedweb | 138 | 52.56 |
Benefit of human readable protocols
Robert C. Martin: I'd rather use a socket
Go ahead. Try sending this to here. In fact, here's the program to do exactly that.
It’s just data
Robert C. Martin: I'd rather use a socket
Go ahead. Try sending this to here. In fact, here's the program to do exactly that.
Ha! Now of course, HTTP POST over the wire doesn't look all that much better. ;-) But at least I've got a bunch of easy-to-use clients out there that I can use to test functionality.
Great example. And yes virginia, I have done a telnet to port 80 to type in a simple HTTP request as part of a debugging session...
DavePosted by David Sifry at
I think you missed Bob's point. Insofar as I understand it, he's saying that we (developers) glom onto new protocols (and languages, for that matter) for no good reason. Why SOAP instead of CORBA, for instance? CORBA exists, it works, and yes, you could slap a listener on port 80. Instead, everyone ran to a slower text based protocol. Ditto languages - the frenzy to port "everything" to Java over the last few years was just stupid. Sometimes, I think that IT shops and developers are just sheep, ready and willing to be led to the next great fad, regardless of the value. And by value, I mean business value.Posted by James Robertson at
And another point - I was doing remote debugging of CORAB apps back in the early 90's. Whether the protocol is text or binary is irrelevant. If you have good tools, you can get a lot done. If you insist on using sharp rocks and pointy sticks, go ahead and revel in text based protocols.Posted by James Robertson at
CORBA works great if you own both ends of the wire.
It seems to me that Bob's point is exactly the opposite of yours, James. He advocates sockets and flat files instead of large frameworks. SOAP is sockets and flat files (albeit XML ones, but files nevertheless). CORBA is a large framework.Posted by Sam Ruby at
No, he's using those as examples, I think. What he's advocating is simplicity.Posted by James Robertson at
import socket
s=socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('api.google.com', 80))
s.send("absurd obfuscation")
I like the way Aaron thinks. One nit: in the code snippet shown, the receiver has no way of knowing whether the sender is done or merely pausing. Something like a \0 at the end is required.Posted by Sam Ruby at
Sigh.
Sure the receiver knows when to stop looking. The same as any other protocol, there's a defined end of message character (or characters). Also a timeout interval after which you consider it a failure. HTTP, after all, is just a protocol. And you are all taking Bob Martin literally - and I seriously doubt that he meant to be taken literally. It was a set of examples - along the lines of the XP approach that he advocates - that one should do "the simplest thing that could possibly work". So, for instance - if you have two apps that need to communicate, and you control the wire, you might try something simpler than SOAP or CORBA - the basic RPC mechanism for the language/library you use, for instance (RMI for Java, Opentalk for VisualWorks, whatever). You guys are reading way more into this....Posted by James Robertson at
James: Aaron's code fragment is not HTTP. Adding a well defined end of message character (I suggested \0) makes his example the simplest thing that could possibly work.
XP relies heavily on the notion that one can always refactor later. This is fine for code, but for data structures requires a bit of planning for extensibility.Posted by Sam Ruby at
my thoughts... all over the place as usual... dat's de slopynes dat Marc pointied oot...
Proprietary Non-XML streams v. XML
Advantage of using XML instead of proprietary data streams is that you don't have to write a parser, because there are already too many good XML parsers. The proprietary stream may seem simpler, but requires more code to get off the ground (i.e. RSS 3.0). Also proprietary streams lead to parser hell, where input and output parsers have different expectations, whereas XML is well defined.
IIOP v. SOAP (or other XML)
Advantage of using SOAP instead of IIOP is that you can write small scripts that use existing HTTP and XML classes and produce readable working samples in record time, as per Sam Ruby's comments. The same is not even conceivable in IIOP.
Is SOAP a slower text based protocol?
Yes, if you read the specs and stick to the examples. No if you think outside the box. Don't limit yourself to the textual tag-heavy serialization of the XML dataset. Consider binary XML. I use to use SGML (ya, the good old days) until we got the protocol correct, then we switch to BXML for efficiency. .NET remoting does provide the capability of using alternate serializers. I'm surprised the development of efficient XML representations hasn't progressed these last 5 years.Posted by Randy Charles Morin at
More fun over on Sam Ruby's blog. My response. Proprietary Non-XML streams v. XML Advantage of using XML instead of proprietary data streams is that you don't have to write a parser, because there are already too many good XML parsers. The...Excerpt from iBLOGthere4iM at
My blog - - and BottomFeeder - - both use binary objects serialized to disk for storing data. In neither case did I give ANY thought to extensibility ahead of time. in BOTH cases I have had to change the shape of the objects, and sometimes the form of the objects. See <a href=""></a> for how I managed it without planning.Posted by James Robertson at
The barrier to entry with SOAP is low. That is better than CORBA. Although the barrier to CORBA was getting pretty low before the SOAP storm clouded the vision, I doubt many Perl, Python, etc. developers would be jumping in with their own implementations.
The entire WS stack will be at least as big as the OMG repertoire before long. With an unclear organization or roadmap, the result may be more confusing for some time. The OMG organization and roadmap was more mature. Again, OMG was just becoming effective in its promotion of interop and modularity/framework architecture.
Meanwhile I interpret Bob's general message as "don't assume, instead evaluate". Maybe he'll jump in and clarify what he really meant.Posted by Patrick Logan at
Patrick, I agree that the entire WS stack will be as big as the OMG repertoire before long.
What I like about the WS stack is that if all you need is SOAP document literal, all you need to implement is extremely small. One doesn't need huge middleware. One doesn't need enormous databases.
My problem is that Bob gave the appearance that he assumed that SOAP did require one or both of these. My pointing to the Google API is meant to encourage him and others to question this assumption and to evaluate.
Later in the same presentation I pointed to is the actual SOAP implementation for the server side of the RSS validator.Posted by Sam Ruby at
Here's my theory as to why SOAP caught on where CORBA didn't.
SOAP was created by people who develop applications, so it didn't suffer from a lack of applications to pull it through.
In other words it had a viable developer proposition.
So many proponents of various protocols and formats have no idea how their protocol and format is going to be used. Such P&F's always languish.
One man's opinion. Have a nice day.Posted by Dave Winer at
Sam writes:
What I like about the WS stack is that if all you need is SOAP document literal, all you need to implement is extremely small.
Yes, a great boon for adoption.
Dave Winer writes:
SOAP was created by people who develop applications, so it didn't suffer from a lack of applications to pull it through.
Yes, a great boon for adoption! 8^) You folks were not about to create a CORBA first and then build apps later.
And this is a problem, as far as I can see, with the WS-xxx stuff taking place today, which is more like CORBA than it is like XML-RPC. Shouldn't the WS-xxx folks be working with OMG on that stuff?Posted by Patrick Logan at
Sorry to disagree Dave, you are one of my heros, but I think the problem with CORBA is that it started without a standard wire protocol. By the time IIOP was introduced, the standard was already beyond ridicule.Posted by Randy Charles Morin at
Sam
Have a look here[1]. What do you think?
[1] by Dilip at
Dilip. I like sockets and XML. That's also why I like SOAP.
To the extent that Cedric and Robert see the issue as "SOAP vs Sockets", I disagree with both of them.Posted by Sam Ruby at
Socket?
import httplib
headers = { 'Content-type': 'text/xml' }
c = httplib.HTTPConnection('api.google.com')
c.request('POST', '/search/beta2', message, headers)
r = c.getresponse()
print r.read()
c.close()
Arien: that is http which is one thin layer above sockets. A perfect example of why I like both sockets and web services: you can program at whatever layer you like, without harming the ability of the person on the "other side" from programming at the layer they like.
Applications and protocols which treat the data being tranmitted as opaque and only intended for computer consumption miss out on this big advantage.Posted by Sam Ruby at
Sam, my point was that the you cannot infer the fact that a socket is being used from the code given. (HTTP doesn't have to run on top of TCP/IP, although there's obviously some remapping to be done if you choose another reliable transport protocol.)
You seem to explicitly reveal the socket layer. Why? The point of the layering is, after all, the exact opposite.
Also, I can't help but feel you are making Robert C. Martin's point for him: why SOAP on top of HTTP? This proves there is no need for the SOAP layer. What does SOAP add in this situation?
Or, conversely (to avoid REST vs SOAP), why HTTP between sockets and SOAP? The way it's used here, it adds next to nothing: TCP/IP already does the transport. What does HTTP add here?Posted by Arien at
Arien, for starters, I agree with RCM up to the point where he seems to imply that sockets are an alternative to SOAP.
Now as to your specific points... layers leak. What SOAP adds is orthogonal extensibility. And the possibility of transport independence... which is simultaneously both a key strength and a weakness.
What does HTTP add? A way of encoding header information independent of the content type. Is that a good thing? Depends on whether you want to be content type agnostic or not.Posted by Sam Ruby at
Benefit of human readable protocols Give this a try The browser configuration string to do it: <a...
[more]
Trackback from Bill de hÓra | http://www.intertwingly.net/blog/1420.html | crawl-002 | refinedweb | 1,925 | 75.2 |
Columbus Museum for Industrial Innovation: An Analysis
Iowa State University Arch 401 — SC Fall 2013
Preface
Conceptualizing the conventional approach to museum design, Renzo Piano famously averred that "they [museums] are kingdoms of darkness." A sense of revilement may be a probable strategy executed in this theory of darkness; perhaps the generalization even suggests that museums are places to be experienced from within, free of mundane distractions. The meaning attached to places that hold collections of historical and cultural value is often spiritual as well as intellectual. Thus, leisure and entertainment have often been considered a byproduct or bonus of one's experience, but not an integrated quality exclusive to the experience. That museums are places of controlled access, under the exclusive patronage of cultural elites, brings another element to this dichotomy of experience and access. Another notion, that museums are for objets d'art frozen in time or depositories of last resort, is not uncommon in expert and scholarly spheres. This furthers the presumption that they are places devoid of the ingredients catalytic to day-to-day operations, and that they should continue the same way for ontological reasons. Such notions often mislead the stakeholders of museum projects, who end up with pseudo-monuments of quasi-architecture.

In hindsight, both in the past and the present, quite a number of prolific architects, or in plain terms "star designers," have not had a museum project to their name. Hence, it is fair for budding spatial practitioners to pose the question: what is one going to get out of such grand gestures of architecture? The contemporary museum poses questions pertaining not only to its role of depicting lessons from the past, but also to testifying to the very existence of the present and projecting the challenges of the future. As museums are often categorized as essential capital burdens, or "expensive buildings," civic sustainability may be the key to battling the popular claim that museums are mere monumentalizations of a glorified past.
For students of architecture, design interventions like museums pose the rudimentary question: can architecture offer anything for positive social change? If museums are for objects, the multiplication of objectiveness may not be the common-sense approach to their logical (architectural) form. Arguably, a scenario that looks beyond the inhabitants' shelter (displays/objects within architecture/form) may strive to become the sole administrator of the knowledge. In this vein, one might find Columbus, IN a poor choice for challenging and changing the face of the museum, as it already has its own issues with objects, but this analysis gradually develops the mindset that every innovation is a product of inescapable constraints. For example, the Kimbell Art Museum employs an intelligent synthesis of inhabitants and shelter to nurture knowledge via experience, or simply adopts "experience is knowledge."

Chamila Subasinghe, Ph.D., RIBA
Columbus Museum of Industrial Objects
Introduction

This resource is the collaborative effort of the studio, created to inform and aid design decisions throughout the semester. Through various macro- and micro-level analysis and research, the studio has created comprehensive outlines and proposals to aid in the concept formulation and design process of museums.

Book One outlines precedent studies of various topics: successful museum precedents, Columbus trends and forecasts, Columbus demographics, museum- and city-specific codes, and a climatic analysis of Columbus. Each of these concludes with design recommendations for the existing conditions and constraints presented to the design of a museum. These sections provide a basis for resources needed in all stages of design and are the foundation for understanding how to approach designing in Columbus, Indiana.

The second book outlines how each topic can specifically be used in a set of programming exercises that provide basic options for design and further dissection of the guidelines outlined in the previous chapters. These programming details are meant as resources for the studio to combine, manipulate, and rearrange to fit the conceptual and schematic needs of a specific design while still remaining informed by the analysis provided.

This analysis of precedent and programming provides a comprehensive tool for understanding the project and its context. Each section provides a platform for informed design decisions as well as the opportunity to challenge the standards to create a better museum.

Editors, Jessalyn Lafrenz + Alex Olevitch
Arch 401 - Fall 2013
Table of Contents

List of Images .......... i

BOOK ONE: Precedents
Precedents .......... 1
Demographic Analysis of Columbus .......... 35
Trends and Forecasts: URBAN PLANNING, LANDSCAPE, AND ARCHITECTURE .......... 65
Code Compliance and Guidelines for Museums .......... 105
Climate .......... 145

BOOK TWO: Programming
Programming .......... 169
Programming Through Education .......... 181
Programming With Site Context .......... 209
Programming With Code .......... 221
Climatic Programming .......... 247

Conclusion
List of Images BOOK ONE Precedents 1
Fig. 1.2.1, Renzo Piano Sketch Fig. 1.2.2, Jewish Museum Berlin, Germany Daniel Libeskind Fig. 1.2.3, Holocaust Tower Fig. 1.2.4, Garden of Exile Fig. 1.2.5, Axis Diagram Fig. 1.2.6, Salvador Dali Museum St. Petersburg, Florida HOK Fig. 1.2.7, DNA shaped staircase Fig. 1.2.8, Salvador Dali Sketch Fig. 1.2.9, Typical Arabian Mushrabiya House Fig 1.2.10, institut de monde Paris, France Jean Nouvel Fig. 1.2.11, Mapping Diagram Fig. 1.2.12, MuSe Trento, Italy Renzo Piano Fig. 1.2.13, Renzo Piano Sketch Fig. 1.2.14, Reconnection of Town and Riverfront Fig. 1.3.1 Circulation Diagram - Jewish Museum Berlin Daniel Libeskind Fig 1.3.2 Ground Floor Plan- Art of Americas Wing Boston, MA 2010 Foster + Partners Fig. 1.3.3 First Floor Plan- Art of Americas Wing Boston, MA 2010 Foster + Partners Fig 1.3.4 Enfilade Based Circulation Fig. 1.3.5 Procession Based Circulation Fig. 1.3.6 Ground Floor Plan- Nelson Atkins Museum of Art Kansas City, MO Steven Holl Architects Fig 1.3.7 Basement Floor- Jewish Museum Berlin Berlin, Germany Daniel Libeskind Fig. 1.3.8 Longitudinal Secton- Jewish Museum Ber-
Arch 401 - Fall 2013
lin Berlin, Germany Daniel Libeskind Fig 1.3.9 Predetermined Circulation Fig. 1.3.10 Second Floor- MuSe Museum Trento, Italy Renzo Piano Fig. 1.3.11 Section Diagram for Tier Based Circulation Fig. 1.4.1 Wing of Americas Foster + Partnersd Fig 1.4.2 Turning Radius Clearance- Gradual Turn Fig. 1.4.3 Turning Radius Clearance Fig 1.4.4 Loading Dock Considerations Fig. 1.4.5 Ground Floor Plan- Nelson Atkins Museum of Art Kansas City, MO Steven Holl Architects Fig. 1.4.6 Procession Based Circulation Fig. 1.4.7 Longitudinal Section- Nelson-Atkins Museum Kansas City, MO Steven Holl Architects Fig. 1.4.8 Exhibit Proximity Diagram Fig. 1.5.1 Fig. 1.5.2, Jewish Museum Berlin, Germany Daniel Libeskind Fig. 1.5.3 Fig. 1.5.4 Fig. 1.5.5 Fig. 1.5.6 Fig. 1.5.7, Day lighting Fig. 1.5.8, “Breathing T” Fig. 1.5.9, Nelsen-Atkinson Museum of Art Kansas City, Missouri Steven Holl Fig. 1.5.10, panel deatail Fig. 1.5.11, institut de monde Paris, France Jean Nouvel Fig. 1.6.1¬ Fig. 1.6.2 Daniel Libeskind; Jewish Museum Reinforced Concrete Interior Fig. 1.6.3 Daniel Libeskind; Jewish Museum Zinc
Façade Fig. 1.6.4 Stevel Holl; Nelson Atkins Museum “Glowing Lantern” - At night the building is lit from within, illuminating the sculpture garden. Fig. 1.6.5 Steven Holl; Art of the Americas Wing Interior Lighting Quality – Exhibit Fig. 1.6.6 Steven Holl; Nelson Atkins Museum Fig. 1.6.7 John Nouvel; Arab World Institute Solar Activated Southern Façade Fig. 1.6.8 John Nouvel; Arab World Institute White Concrete Fig. 1.6.9 HOK; Salvador Dali Museum Inside the Glass Enigma Fig. 1.6.10 HOK; Salvador Dali Museum Cast in Place Concrete Spiral Stair Fig. 1.7.1 Fig. 1.7.2 Renzo Piano; MuSE Sustainable Practices Fig. 1.7.3¬¬¬ Fig. 1.7.4 Steven Holl; Nelson Atkins Museum Addition “Breathing T’s” Fig. 1.7.5
Demographic Analysis of Columbus
35
Fig 2.1.1 - Shows total population trends and forcasts. Fig 2.1.2 - Shows the age and gender distribution profile. Fig 2.1.3 - Describes how all of the seperate aspects of the downtown are contributing factors to the cultural diversity issue downton. Fig 2.1.4 - Describes the different races and ethnicities found in Columbus. Fig 2.1.5 - Describes the different types of housing occupancies.
Fig 2.1.6 - Describes in more detail where the vacant housing occupancies are located. Fig 2.1.7 - Describes in more detail where the vacant housing occupancies are located. Fig 2.1.8 - Describes the different household and family types. Fig 2.1.9 - Describes the different types of maritial statuses. Fig 2.1.10 - Analyzes how focusing on the residents who are now married can help create a more dynamic and positive community Fig 2.1.11 - Analyzes the downtown in more depth, giving a better understanding of the framework of downtown Columbus. Fig 2.1.12 - Analyzes the relationship between the amount of residents in Columbus vs. Dowtown. It shows how the downtown is situated in between 3 major highways making visitors pass right through the area. Fig 2.1.13 - Analyzes contributing factors to the gentrification issue downtown focusing on undeveloped blocks. Fig 2.1.14 - Analyzes the gaps in the city. This helps with understanding how the gaps contribute to the gentrification issue downtown. Fig 2.1.15 - Analyzes contributing factors to the gentrification issue downtown focusing on a lack of after hours. Fig 2.1.16 - displays where the current points of interest and shopping areas are located. This helps give a more clear understanding where exactly improvements could be made. Fig 2.1.17- Analyzes contributing factors to the gen-
trification issue downtown focusing on one way streets. Fig 2.1.17 - Shows where one way streets are causing people to completely miss the downtown adding to the gentrification issue. Fig 2.2.1 - Describes the rates at which you can rent different apartment types at. Fig 2.2.2 - Describes household income and trends for the residents throughout a decade. Fig 2.2.3 - Describes the most common male jobs. Fig 2.2.4 - Describes the most common female jobs. Fig 2.2.5 - Describes the most common male Industries. Fig 2.2.6 - Describes the most common female industries. Fig. 2.3.1- Graph of relationships of city population within Bartholomew County Fig 2.3.2 - Shows the % of people that bike to work. The greatest density located in a central district (represented in yellow) has 7.1% residents reporting that they regularly bike to work. Fig 2.3.3 Transportation within the city is dominated by single drivers. Fig 2.3.5 - Bus routes within the city Fig 2.3.6 - Describes the employee rate in Columbus. Fig 2.3.7 - Describes the payroll rate in Columbus. Fig 2.3.8 - Describes the types and quantity of religious buildings their are in Columbus. Fig 2.3.9 - Describes the % likely for homosexual households Fig 2.3.10 - Analyzes the education in Columbus. Fig 2.6.10 - Analyzes where the educational facilities are located. Fig 2.3.11 - Types of establishments within Columbus Fig 2.3.12 - Attractions near site Fig 2.3.13 - Describes the Crime Rates in Columbus. Fig 2.5.1 - Analyzes the current condition of Columbus and how things effect each other.
Trends and Forecasts: URBAN PLANNING, LANDSCAPE, AND ARCHITECTURE 63
Fig. 1.1 - Land Use/District Map of Columbus in 1949.
Fig. 1.2 - Currently adopted Land Use Plan. This map shows where Columbus is planning to change in terms of expansion of land use.
Fig. 1.3 - Bicycle and Pedestrian Systems Plan Map shows how all the trails will connect within the city and to their specific destinations.
Fig. 1.4 - The Downtown Development Plan shows the strategy that the city planning department wants to implement and how the downtown districts will connect with one another.
Fig. 1.5 - Map of Columbus neighborhoods.
Fig. 1.6 - Map showing the Downtown Columbus neighborhood.
Fig. 1.7 - Map showing the Columbus Central neighborhoods.
Fig. 1.8 - Map showing the East Columbus neighborhood.
Fig. 1.9 - Map showing the National Road Commercial Corridor neighborhood.
Fig. 1.10 - Map showing the Western Rocky Ford neighborhoods.
Fig. 1.11 - Map showing the East 25th Street neighborhoods.
Fig. 1.12 - Map showing the Columbus Municipal Airport neighborhood.
Fig. 1.13 - Map showing the U.S. 31 / Indianapolis Road Area neighborhood.
Fig. 1.14 - Map showing the Western Gateway Area neighborhood.
Fig. 1.15 - Map showing the Western Hills neighborhood.
Fig. 1.16 - Map showing the Woodside / Walesboro neighborhood.
Fig. 1.17 - Map showing the State Road 11 South neighborhood.
Fig. 1.18 - Map showing the Eastern Rural Area neighborhood.
Fig. 1.19, 1.20 - The study area for the State Street Corridor Plan is highlighted on a map of Columbus; an example of an area along State Street showing the clear separation between the two sides of the street because of its large width.
Fig. 1.21 - A plan for revitalizing the State Street Corridor in East Columbus.
Fig. 2.1 - Courthouse landscaping.
Fig. 2.2 - Historical Map.
Fig. 2.3 - Otter Creek Clubhouse.
Fig. 2.4 - 1910 Irwin Home Gardens by Henry Philips.
Fig. 2.5, 2.6 - Mill Race Park.
Fig. 2.7 - First Christian Church Landscape.
Fig. 2.8 - The New Commons.
Fig. 2.9, 2.10 - Cummins Corporate Office: Jack Curtis.
Fig. 2.11 - Library Plaza- Henry
Fig. 2.12 - Flood Gauge Indicator in Downtown Columbus.
Fig. 2.13 - FEMA map showing flood plains in the Columbus area.
Fig. 2.14 - Picture of Columbus during the Spring Flood of 2005.
Fig. 2.15 - Picture of Mill Race Park's Round Lake, which was designed to help alleviate flooding.
Fig. 2.16 - Picture of the Red Bridge during flood conditions.
Fig. 2.17 - Example of a formal landscape design.
Fig. 2.18 - Example of a natural landscape design.
Fig. 2.19, 2.20 - Drawings of parking lots and buildings with landscape buffers.
Fig. 2.21 - Miller House.
Fig. 2.22 - Miller House's preserved landscape.
Fig. 2.23 - Map showing the Corridors of Columbus.
Fig. 2.24 - The Downtown Corridors of Columbus.
Fig. 3.1 - Map of Columbus notable architecture color-coded according to year of erection.
Fig. 3.2 - Columbus Bartholomew County Courthouse- 1874.
Fig. 3.3 - Columbus Bartholomew Consolidated School Corporation- 1896.
Fig. 3.4 - First Christian Church Exterior- 1942.
Fig. 3.5 - First Christian Church Sanctuary- 1942.
Fig. 3.6 - Lillian C. Schmitt Elementary School- 1957.
Fig. 3.7 - View of the Library Plaza- 1969.
Fig. 3.8 - The Lillian C. Schmitt Elementary School Addition- 1995.
Fig. 3.9 - View of where the original school meets the new addition.
Fig. 3.10 - View of the Original Visitor's Center looking towards the renovated side.
Fig. 3.11 - Window Renovation with featured Yellow Neon Chandelier.
Fig. 3.12 - Matrix showing the architectural movements through history.
Fig. 3.13 - Energy Consumption Chart from the U.S. Energy Information Administration, 2011.
Fig. 3.14 - Cozy Home Performance, 2009.
Fig. 3.15 - Cozy Home Performance, 2009.
Fig. 3.16 - U.S. Green Building Council and New Buildings Institute's statistics show that most LEED certified buildings do not hold up to their predicted energy savings, BuildingGreen.com.
Fig. 3.17 - U.S. Green Building Council's LEED certification point breakdown for housing.
Code Compliance and Guidelines for Museums 103
Fig 4.1.1 Diagram of zoning relationships as it relates to the site
Fig 4.1.2 Reference for landscaping requirements for a museum campus
Fig 4.1.3 Diagram of designated landscaping areas as outlined in the text
Fig 4.1.4 Max site coverage and 10 foot setback illustration for site restrictions
Fig 4.1.5 Table of bicycle rack quantity requirements, dependent upon the number of vehicle parking spaces
Fig 4.1.6 Examples of allowed and disallowed bike racks, including the City of Columbus custom bike rack design
Fig 4.1.7 Requirements for quantity of parking spaces
Fig 4.1.8 Design dimensions and arrangement options for vehicle parking spaces
Fig 4.2.1 - Track lighting and tungsten halogen demonstration
Fig 4.2.2 - Standard lighting diagram for most galleries
Fig 4.3.1 - An example of a painting exposed to high temperatures, resulting in cracking and deformation.
Fig 4.3.2 - This diagram shows various levels of heat in a gallery space. Red is hotter, whereas blue is colder.
Fig 4.3.3 - The three diagrams depict relative humidity levels in relation to surrounding temperature. As temperature rises, the same amount of water vapor falls relative to the capacity of air.
Fig 4.3.4 - The diagram above shows a wide range of viruses, bacteria, etc. that thrive at different percentages of humidity.
Fig 4.4.1 Three parking lots serving a building with one accessible entrance
Fig 4.4.2 Minimum required number of accessible parking spaces
Fig 4.4.3 Diagrammatic example of three parking lots serving a building with three accessible entrances
Fig 4.4.4 Diagram of three parking lots (including a satellite lot) serving a building with two accessible entrances
Fig 4.4.5 Minimum required dimensions for accessible parking spaces
Fig 4.4.6 Standard accessible and van accessible parking spaces and access aisles
Colombus Museum of Industrial Objects
Fig 4.4.7 Standard accessible and van accessible angle parking spaces and access aisles
Fig 4.4.8 Diagramming of different ramp configurations
Fig 4.4.9 A warning barrier used to alert people of limited headroom
Fig 4.4.10 Handrail with the required extension at the bottom of a stair flight
Fig 4.4.11 Dimensions of elevators
Fig 4.4.12 Turning around and passing spaces
Fig 4.4.13 T-shaped spaces
Fig 4.4.14 Spaces for wheelchairs
Fig 4.4.15 Traveling space for wheelchairs
Fig 4.4.16 The requirements for wheelchair locations, as set by the Smithsonian Guidelines for Accessible Design for Facilities and Sites
Fig 4.4.17 Accessible spaces adjacent to fixed seating allow a person in a wheelchair to sit with persons with whom they may be traveling.
Fig 4.4.18 Providing work stations with seating minimizes the differences between seated and standing visitors.
Fig 4.4.19 Number of accessible toilets
Fig 4.4.20 Display requirements for accessibility
Fig 4.5.1 Minimum corridor width
Fig 4.5.2 An example of a small exhibition space for egress travel
Fig 4.5.3 Dimensions for windows
Fig 4.5.6 Dimensions of loading truck
Fig.4.
Climate 133
Figure 5.1.1 Daily High and Low Temperatures
Figure 5.1.2 Fraction of Time Spent in Various Temperature Bands
Figure 5.1.3 Wind Speed
Figure 5.1.4 Fraction of Time Spent with Various Wind Directions
Figure 5.1.5 Wind Directions Over the Entire Year
Figure 5.1.6 Relative Humidity
Arch 401 - Fall 2013
Figure 5.1.7 Dew Point
Figure 5.2.1 Ground Covers
Figure 5.2.2 Wild Ginger
Figure 5.2.3 Trees
Figure 5.2.4 Sugar Maple Tree
Figure 5.2.5 Grasses
Figure 5.2.6 June Grass
Figure 5.2.7 Ferns
Figure 5.2.8 Cinnamon Fern
Figure 5.2.9 Vines
Figure 5.2.10 Crossvine
Figure 5.2.11 Shrubs
Figure 5.2.12 Serviceberry
Figure 5.2.13 Flowering Perennials
Figure 5.2.14 Royal Catchfly
Figure 5.2.15 Soil Parent Materials
Figure 5.2.16 Soil Wetness Characteristics
Figure 5.2.17 Soil Erosion Potential
Figure 5.2.18 Soil Explanation
Figure 5.2.19 Munsell Soil Color Chart
Figure 5.2.20 Soil Chart
Figure 5.2.21 Deep foundation through layer(s) of compressible soil
Figure 5.2.22 Schematic of U.S. Airways terminal constructed on shallow foundations bearing on highly compressible soil layers
Figure 5.3.1 Average monthly rainfall in Columbus, Indiana
Figure 5.3.2 Daily probability of precipitation in Columbus, Indiana
Figure 5.3.3 Cistern level and intake at 13,000 sf collector area
Figure 5.3.4 Cistern level and intake at 14,000 sf collector area
Figure 5.3.5 Cistern level and intake at 15,000 sf collector area
Figure 5.3.6 Cistern level and intake at 16,000 sf collector area
Figure 5.3.7 Probability of snow on a day in Columbus, Indiana
Figure 5.3.8 Daily levels of the Morgan 4 water table
Figure 5.3.9 Frost depth map
Figure 5.3.10 Foundations without and with basement, showing hydrostatic pressure with basement
Figure 5.4.1 40 Degrees Latitude Sun Path Diagram
Figure 5.4.2 Daily Sunrise and Sunset
Figure 5.4.3 December Solstice Time-Lapse
Figure 5.4.4 September Equinox Time-Lapse
Figure 5.4.5 June Solstice Time-Lapse
Figure 5.4.6 March Equinox Time-Lapse
Figure 5.4.7 Median Cloud Cover
Figure 5.4.8 Cloud Cover Types
Figure 5.4.9 December Solstice Time-Lapse
Figure 5.4.10 September Equinox Time-Lapse
Figure 5.4.11 June Solstice Time-Lapse
Figure 5.4.12 March Equinox Time-Lapse
Figure 5.4.13 Shading Device Options
BOOK TWO
Programming 157
Fig. 6.2.1
Fig. 6.2.2
Fig. 6.3.1
Fig. 6.3.2
Fig. 6.3.3
Fig. 6.3.4
Fig. 6.4.1
Fig. 6.4.2
Fig. 6.4.3
Fig. 6.4.4
Fig. 6.4.5
Fig. 6.5.1
Fig. 6.5.2
Fig. 6.5.3
Fig. 6.5.4
Fig. 6.5.5
Fig. 6.5.6
Fig. 6.5.7
Fig. 6.5.8
Fig. 6.5.9
Programming Through Education 177
Figure 7.1.1 shows a bar graph with the educational attainment in 2011
Figure 7.1.2 shows a map plotting the schools in Columbus
Figure 7.2.1: The proposed room test for the cafe (including kitchen) and common open area
Figure 7.2.2: A proposed room test for the gift shop
Figure 7.2.3: A proposed room test for the entrance area
Figure 7.2.4: A proposed room test for the classroom
Figure 7.2.5: A proposed room test for the presentation area
Figure 7.2.6: A proposed room test for the administrative area
Figure 7.2.7: A proposed room test for the meeting room
Figure 7.4.1 shows a flow chart with the spatial relationships within the museum
Figure 7.4.2 shows a color coded map signifying different programs within the city
Figure 7.4.3 examines the site north of the visitor's center and shows a proposed layout of the museum's programs
Figure 7.4.4 examines the visitor's center site and shows a proposed layout of the museum's programs
Figure 7.4.5 examines the post office site and shows a proposed layout of the museum's programs
Figure 7.5.1: A map of the trends in the programs and the circulation
Figure 7.5.2: A series of different paths taken by 4 different visitors
Figure 7.6.1 shows a bubble diagram of the existing condition in Columbus and the factors influencing it
Programming With Site Context 197
Fig. 1.1 - The initial site options overlaid on top of the redevelopment opportunities map from the Downtown Strategic Plan.
Fig. 1.2 - The second phase of selecting potential sites overlaid on top of the redevelopment opportunities map from the Downtown Strategic Plan.
Fig. 1.3 - The final two sites overlaid on top of the redevelopment opportunities map from the Downtown Strategic Plan, with a dashed box highlighting the area that the site model will show.
Fig. 2.1 - Post Office site in its context.
Fig. 2.2 - Visitor's Center site in its context.
Fig. 2.3 - Map showing analysis of the Post Office site and how its facades relate to its surrounding corridors.
Fig. 2.4 - View From North West.
Fig. 2.5 - View From North East.
Fig. 2.6 - View From South West.
Fig. 2.7 - View From South East.
Fig. 2.8 - Map showing analysis of the Visitor's Center site and how its facades relate to its surrounding corridors.
Fig. 2.9 - View from center of site.
Fig. 2.10 - View From North West.
Fig. 2.11 - View From North East.
Fig. 2.12 - View From South West.
Fig. 2.13 - View From South East.
Fig 3.1 - Parking Garage Adjacent to Post Office Site.
Fig 3.2 - Bubble diagram that shows the proximity of rooms based on programmatic needs.
Fig 3.3 - Flow diagram showing the relationships between different department coordinators.
Fig 3.4 - Movement abstraction representing departments as various shapes. The smaller shapes across the diagram show movement from the main office of the director.
Fig. 3.5 - Programming of room spaces- Option One.
Fig. 3.6 - Programming of room spaces- Option Two.
Fig. 3.7 - Programming of room spaces- Option Three.
Programming With Code 217
Fig 9.1.1 Hidden Source Lighting Effect
Fig 9.1.2 Artificial Lighting Effect
Fig 9.1.3 Primarily Natural Lighting Effect
Fig 9.1.3 Natural Lighting for User Experience
Fig 9.2.1 Mixed gallery distribution. Red indicates passive heat sources and blue indicates controlled, isolated environments
Fig 9.2.2 Fully encased displays. Red indicates passive heat sources and blue indicates controlled, isolated environments
Fig 9.2.3 Exposed systems. Red indicates trapped heat from usage and mechanical system output
Fig 9.3.1 Diagram of appropriate levels for optimal viewing by all users
Fig 9.3.2 Varied seating for handicap accessibility
Fig 9.4.1 Hierarchy of use from the visitor perspective
Fig 9.5.1 Procession of space
Fig 9.6.1 Adjacency of program based on code efficiencies
Fig 9.7.1 Approximate square footage recommendations based on precedent and code analysis

Climatic Programming 247
Figure 10.1.1 Landscape Scheme- Linear
Figure 10.1.2 Landscape Scheme- Circular
Figure 10.1.3 Landscape Scheme- Bubble Diagram
Figure 10.1.4 Landscape Scheme- Plan
Figure 10.2.1 Temperature Scheme- Linear
Figure 10.2.2 Temperature Scheme- Circular
Figure 10.2.3 Temperature Scheme- Bubble Diagram
Figure 10.2.4 Temperature Scheme- Plan
Figure 10.3.1 Water Scheme- Linear
Figure 10.3.2 Water Scheme- Circular
Figure 10.3.3 Water Scheme- Bubble Diagram
Figure 10.3.4 Water Scheme- Plan
Figure 10.4.1 Solar Lighting Scheme- Linear
Figure 10.4.2 Solar Lighting Scheme- Circular
Figure 10.4.3 Solar Lighting Scheme- Bubble Diagram
Figure 10.4.4 Solar Lighting Scheme- Plan
book 1
Precedent
CHAPTER 1: Precedents Jeremy Ernst, Holly Pohlmeier, Kevin Stromert
1.0 Introduction 2
 1.0.1 Matrix
1.1 Concept Development 4
 1.1.1 Experiential
 1.1.2 Image Developed
 1.1.3 Contextual
 1.1.4 Site Informed
1.2 Circulation 9
 1.2.1 Enfilade Based
 1.2.2 Procession Based
 1.2.3 Predetermined
 1.2.4 Tier Based
1.3 Curation + Safety 14
 1.3.1 Loading/Unloading
 1.3.2 Lighting
 1.3.3 Security
 1.3.4 Curator Proximity
1.4 Lighting 19
 1.4.1 Direct Lighting
 1.4.2 Soft Lighting
 1.4.3 Adjustable Lighting
1.5 Materials 23
 1.5.1 Historical References
 1.5.2 Experientially Driven
 1.5.3 Cultural Influences
 1.5.4 Climate Sensitive
1.6 Sustainability 28
 1.6.1 LEED Certified
 1.6.2 Green Space
 1.6.3 Other Examples
1.7 Conclusion + Design Recommendation 33
1.8 References + Works Cited 34
1.0 - Introduction
A museum should target an audience and incorporate innovative ideas that promote education and reflection, motivating visitors to return. Limiting the breadth of the museum collection, or creating a focus, can prevent visitors from becoming overwhelmed. Additionally, engaging the audience in a story can help them make sense of the exhibits as a whole. This can be done through attention to circulation and object placement. Museums should attempt to spark curiosity and user interaction in order to inspire learning and provoke thought.

Museums are not just for the visitors; many people work in these spaces on a daily basis. Visitors often see only a portion of the building, since other portions are dedicated primarily to storage and “back of house” activities. This includes employee restrooms and support spaces, mechanical rooms, staff offices, transportation of exhibit elements to and from storage, and storage spaces. The placement of these portions of the building can greatly affect the way the building is experienced.

This analysis has selected six case studies to act as precedents -- all designed by starchitects. This allows us to understand the design intent holistically in terms of each architect's style and attitude towards experiential architecture. The second criterion dictated that the chosen museums had been designed for a specific purpose, allowing for a more definitive concept and specificity in design choices. To compare the case studies, we have defined six categories for evaluation:
1. Concept Development
2. Circulation
3. Curation & Safety
4. Lighting
5. Materials
6. Sustainability
The following chapter highlights which museums were most successful in each category, and the approaches they took.
1.0.1 - Precedent Matrix

[Matrix comparing the six precedents -- Jewish Museum (Daniel Libeskind), Salvador Dali Museum (HOK), MuSe (Renzo Piano), Institut du Monde Arabe (Jean Nouvel), Nelson Atkins Museum (Steven Holl), and Art of the Americas Wing (Foster + Partners) -- across the six evaluation categories. Cell values include: Concept Development (Experiential, Image Developed, Contextual, Site Developed), Circulation (Enfilade Based, Procession Based, Predetermined, Tier Based), Curation + Safety (Crate Shipping, Security), Lighting (Direct Lighting, Soft Lighting, Adjustable Lighting), Materials (Historical References, Experientially Driven, Cultural Influences, Climate Sensitive), and Sustainability (LEED Certified, Green Space, Thermal Mass, Passive Cooling).]
1.1 - Concept Development
Concepts are developed in the early stages of design and involve quick sketches and models. Each category can take hierarchy over the others depending on its importance in the specific design. The concept is the big idea of a design and can be the rationale for every decision that follows. It bonds context, program, and form into one unique whole. Following are a few precedents, each with its own unique driving concept.
Driving Concepts
Experiential Concept: A story can be developed in the circulation of the museum that controls its entire outcome. The experience the museum intends to create can impact the materials, form, context, and program of the museum.
Image Developed Concept: Some architects need to create an image with the design of the museum to attract a broader audience into the space.

Fig. 1.1.1, Renzo Piano Sketch of MuSe
Contextual Concept: Understanding the importance of the context can help drive the concept. It may be important to reflect contextual materials, heights, or even forms.
Site Developed Concept: The surrounding landscape can help formulate the profile of the building. Understanding the site may help align the museum with certain landmarks or public spaces.
1.1.1 - Experiential
Over 300,000 people toured the Jewish Museum in the first two years it was open, without any items on display. People toured the museum because the architecture alone embodied the story of the Jewish past and future. Libeskind, the architect, chose to place the entrance in the existing building on site, with a path that takes you underground and into the new museum. The physical path resembles the historical path taken by Jewish people in Berlin. The architecture is as powerful in the act of storytelling as the historic items it archives. The Axis of the Holocaust is a dead end that leads to an empty, dark room with a stream of light coming in from a corner. The Axis of Exile leads to the Garden of Exile, another dead end, which Libeskind states “is to completely disorient the visitor. It represents a shipwreck in history.” When developing an experiential concept, Libeskind focused on circulation, light, material, and scale.
Fig. 1.1.5, Axis Diagram
Fig. 1.1.2, Jewish Museum Berlin, Germany Daniel Libeskind
Fig. 1.1.3, Holocaust Tower
Fig. 1.1.4, Garden of Exile
“contrast between the rational world of the conscious and the more intuitive, surprising natural world” - Salvador Dali
1.1.2 - Image-Developed
This piece of architecture acts more as an image to draw people in. Most writing on the museum speaks to the number of visitors to Dali's work increasing as a result of the museum's image; the experience of the museum is not as widely discussed as the glass enigma or the spiral stairs as objects. HOK addressed the protection of the Dali collection while also expressing the beliefs and rituals of Salvador Dali through the architecture. An 18 inch thick concrete wall surrounds the work of Salvador Dali to protect it from a Category 5 hurricane. The spiral staircase is used as a piece of architecture to represent Dali's passion for DNA, while the free-form geodesic glass enigma contrasts with the rational concrete box that holds the collection.

When deriving a museum concept around a particular image, it is important that this image relates to the museum's context. The image must also be memorable and have the ability to interact with visitors.
Fig. 1.1.6, Salvador Dali Museum St. Petersburg, Florida HOK
Fig. 1.1.7, DNA shaped staircase
Fig. 1.1.8, Salvador Dali Sketch
1.1.3 - Contextual
The Institut du Monde Arabe reflects the existing contexts of its site, including the geography and culture at hand. The river facade follows the curvature of the waterway and helps reduce the hardness of a rectangular box, adapting itself to the view from the nearby bridge. Understanding the culture, Jean Nouvel used the transparency of the building to encourage the sense that the Arab World Institute is not a gateway but a scrim separating old and new. In recognition of the context, Nouvel aligned the driveway splitting the two parts of the building with the existing towers of Notre Dame. At the base of the driveway is a fountain echoing the fountains of Arabian palaces 1,000 years ago. The panels on the south side of the institute resemble the different forms used in a typical Arabian mashrabiya home.
Fig. 1.1.9, Typical Arabian Mashrabiya House
Fig 1.1.10, Institut du Monde Arabe Paris, France Jean Nouvel
Fig. 1.1.11, Mapping Diagram
1.1.4 - Site-Developed
Renzo Piano, the architect, understood the existing site in Trento, Italy as defined by its large mountain peaks and wanted the museum's profile to relate to them. The scale of the museum against the distant mountains can be seen in the photograph to the left. He also used locally sourced materials, harvested near the site, to relate the building to its landscape. The redevelopment of the site aimed to reconnect the town with the Adige riverfront area it had previously been cut off from. This site-developed connection of the town to the riverfront was important enough to generate a concept.
Fig. 1.1.12, MuSe Trento, Italy Renzo Piano
Fig. 1.1.13, Renzo Piano Sketch
Fig. 1.1.14, Reconnection of Town and Riverfront
1.2 - Circulation
Human interaction within fine art museums, or any museum for that matter, is one of the principal design factors. The amount of interaction can be controlled by the circulation within the museum. There are many specific examples of types of circulation in all shapes and sizes. Listed within this chapter are four of the main types commonly seen in museums today. One should note that variations of these four are absolutely necessary to achieve individual success in projects: these examples work for the projects listed, but may not necessarily work for the Columbus project. The four types of circulation described in the following chapter are enfilade based, procession based, predetermined, and tier based circulation. The four projects selected for each style are the Art of the Americas Wing by Foster + Partners, the Nelson-Atkins Museum of Art by Steven Holl Architects, the Jewish Museum Berlin by Daniel Libeskind, and the MuSe Museum by Renzo Piano. Each has its main circulation routes diagrammed, along with smaller diagrams to explain each specific type of circulation.
Fig. 1.2.1 Circulation Diagram - Jewish Museum Berlin Daniel Libeskind
1.2.1 - Enfilade Based
Art of the Americas Wing, Boston, MA. 2010. Foster + Partners
Enfilade based circulation refers to the French term “enfilade,” meaning “a suite of rooms with doorways in line with each other.” This is a classic style of museum layout, and is used in the existing Museum of Fine Arts in Boston.
Fig 1.2.2 Ground Floor Plan- Art of Americas Wing Boston, MA 2010 Foster + Partners
Fig. 1.2.3 First Floor Plan- Art of Americas Wing Boston, MA 2010 Foster + Partners
The Art of the Americas Wing incorporates enfilade based circulation through the gallery spaces; no distinct procession is preferred within the spaces. The plan is composed of a principal axis with two smaller wings. Feature stairs are located in the main portion, and access stairs are located in each of the two wings. A minimum of two points of egress are available from each wing. The main elevator is located between the wings, close to the main/feature stair. One of the most important factors to Foster was that the building connect to the existing Museum of Fine Arts. The plan and circulation display this connection to the existing context of the Fine Arts Museum.
Fig 1.2.4 Enfilade Based Circulation
1.2.2 - Procession Based Nelson-Atkins Museum of Art, Kansas City, MO. Steven Holl Architects.
Fig. 1.2.5 Procession Based Circulation
Procession based circulation involves room-to-room movement similar to the enfilade style, but less choice is given to the user; a fairly common path from user to user is evident in these styles of circulation. The Nelson-Atkins Museum of Art utilizes procession based circulation, beginning within the older museum wing and moving through the lower level galleries of the new wing. Holl's 5 distinct “lenses” of light from the upper levels align the general path of procession. The largest of the 5 lenses houses the lobbies, cafe, and a multipurpose room. Circulation then flows from the lobbies into the gallery spaces along the 4 remaining lenses. Exits exist at either end of the chain of spaces, along with an additional exit to the sculpture garden. These exits meet the means of egress required by code.
Fig. 1.2.6 Ground Floor Plan- Nelson Atkins Museum of Art Kansas City, MO Steven Holl Architects
1.2.3 - Predetermined
Jewish Museum Berlin, Berlin, Germany. 1999. Daniel Libeskind.
Circulation is a main priority within the Jewish Museum; it is pertinent to the experience that one takes the path intended. Libeskind establishes a main axis of approach from the basement of the existing museum. The predetermined circulation guides the user to the highest level of the museum, taking them through the dramatic lighting and scenes. The project is intended to remind visitors of, and place them within, the dark and twisted history of the Jewish Holocaust. In a project that requires such experiential qualities, one should control the movement of the user, as the predetermined style of circulation does. There are many different predetermined paths that can be designed; the figures to the left diagram the approach used in the Jewish Museum by Libeskind.
Fig 1.2.7 Basement Floor- Jewish Museum Berlin Berlin, Germany Daniel Libeskind
Fig. 1.2.8 Longitudinal Section- Jewish Museum Berlin Berlin, Germany Daniel Libeskind
Fig 1.2.9 Predetermined Circulation
1.2.4 - Tier Based
MuSe Museum, Trento, Italy. 2013-In progress. Renzo Piano
A hybrid between the enfilade and predetermined circulation styles: the museum patron is guided from the lower tiers up each level, but within each level exploration of the user's choice is encouraged. This may be an applicable strategy for museums that want to encourage exploration of the exhibits themselves. It also introduces the grouping of exhibits for visitors: each floor contains certain types of exhibits, allowing more “pick and choose” for the guests. The tiers in the MuSe museum are based around an atrium that emphasizes vertical movement, guiding users up each tier. Standard egress is implemented into the plan to ensure the safety of museum guests.
Fig. 1.2.10 Second Floor- MuSe Museum Trento, Italy Renzo Piano
Fig. 1.2.11 Section Diagram for Tier Based Circulation
1.3 - Curation/Safety
Fig. 1.3.1 Wing of Americas Foster + Partners
Generally, fine art should be kept in environments between 65-75 degrees Fahrenheit and 40-45% relative humidity. The low humidity levels will prevent mold from growing on any of the pieces.
1.3.1 - Loading/Unloading
The transportation of fine art can be one of the most damaging situations for such pieces. Considerations for the shipment and vehicles used in the process are important to incorporate into the designs. The curator will need to maintain overwatch during the entire handling process, so the design of these logistics should accommodate this.
Fig 1.3.2 Turning Radius Clearance- Gradual Turn
Fig. 1.3.3 Turning Radius Clearance
Fig 1.3.4 Loading Dock Considerations
Fig. 1.3.5 Ground Floor Plan- Nelson Atkins Museum of Art Kansas City, MO Steven Holl Architects
1.3.2 - Lighting
Fig. 1.3.6 Procession Based Circulation
1.3.3 - Security
Nelson-Atkins Museum of Art, Kansas City, MO. Steven Holl Architects.
Access and curation of the exhibits were important in the design of this museum. In section, it can be seen that below the gallery level there is an entire floor for exhibit/art service. It does not interrupt the finely detailed and experiential qualities of the upper levels, yet adequate space has been provided. Placing this storage two floors underground also increases its security. The storage of exhibits and artwork in a museum should not attract attention to itself.
Fig. 1.3.7 Longitudinal Section- Nelson-Atkins Museum Kansas City, MO Steven Holl Architects
1.3.4 - Curator Proximity
Fig. 1.3.8 Exhibit Proximity Diagram
1.4 - Lighting
Lighting is an important and essential design element in a museum. The light has to be carefully studied for each object in the museum to maximize its performance. Without the correct amount and the right style of light, the objects may read incorrectly. Natural light can be used as long as direct light is not hitting objects that can be damaged by the harsh light. Different transparencies can be used to take advantage of a softer light, or deeper windows can be used to force the light to bounce off multiple surfaces before it can reach the objects. Artificial light must be used, whether only at night or during the day as well. Certain galleries will allow the light sources to be closer to the objects, while others will require the light sources to be further away and less powerful.
Fig. 1.4.1
1.4.1 - Direct
Direct lighting in a museum can be fatal if not thought out completely. Mistakenly, Libeskind did not take into account the job of the curator when designing the lighting in the Jewish Museum; he was too focused on the experience of the architecture rather than the items it would eventually house. After two years of being empty, artifacts were eventually put into the museum, and walls had to be put up to block the harsh light from reaching the valuable and irreplaceable artifacts. Unless the artifacts are not prone to damage in direct light, or the direct light cannot reach the artifacts, using direct light is not a common practice in the design of museums. All museums need to be artificially lit when the sun sets. The bottom right photo shows a room that is lit by both natural and artificial light. It can be seen how Libeskind thought out how artificial light would be able to match the rest of the lighting in the building by using the cuts.
Fig. 1.4.3
Fig. 1.4.4
Fig. 1.4.5
Fig. 1.4.6
Fig. 1.4.2, Jewish Museum Berlin, Germany Daniel Libeskind
1.4.2 - Soft
An effective and common way to bring light into an exhibition space is to use indirect light. A majority of pieces housed in museum collections are not supposed to be exposed to direct light; if the items receive harsh light on a regular basis they can start to fade. The diagram shows how Steven Holl chooses to use "Breathing T's" to carry the light down into the exhibition spaces. Several other museum designs use this same technique to take advantage of natural light while still keeping the artifacts safe.
Fig. 1.4.7, Day lighting
Fig. 1.4.9, Nelson-Atkins Museum of Art, Kansas City, Missouri. Steven Holl
Fig. 1.4.8, “Breathing T”
1.4.3 - Adjustable
Museums can use an adjustable skin that controls the amount of light entering the space. The panels shown on the Institut du Monde Arabe regulate the amount of light that enters the building. Direct, harsh light can be destructive to artifacts and has to be studied in depth to avoid damaging pieces. A good way to use an adjustable skin is to admit the light when it is indirect and to block it when it is direct. Used correctly, the skin can also improve the energy efficiency of the museum. The Institut du Monde Arabe used the adjustable panels to control the light in an efficient manner; the panels also carry other cultural meanings in this specific case. The panels create an interesting texture of direct light on the surfaces of the room, but this would not be conducive to displaying valuable artifacts.
Fig. 1.4.11, Institut du Monde Arabe, Paris, France. Jean Nouvel
Fig. 1.4.10, panel detail
1.5 - Materials
Material choices can affect the experience of a building in a potentially more prominent way than other design choices. Materiality is noticed first, and plays a great role in defining the tone or mood of a space. There are many reasons that a designer chooses a material palette, including site context, experience of the space, and interactivity of the building.
Things to Consider
Symbolism: Some materials carry specific connotations within different cultures or regions.
Climate: Materials can be affected differently in varying climate conditions, so one should consider energy costs (especially in terms of heating or cooling loads) when selecting a material. Many exterior building materials may weather, so they should either be treated against this or accepted into the design.
Context: Some materials are also more compatible with or reminiscent of a certain place or period (e.g., adobe flat-roof construction might not fit contextually in NYC).
Scale: When choosing a material, it's important to consider the scale of the building. How much of the material is needed? Scale and quantity of materials can have a great impact on overall construction cost.
Location: Where is the material manufactured? Does it need to be shipped to the site, or is it a local material? Location of the material can affect both the cost and the environment; choosing materials that are closer to the project location is often more sustainable.
Many other potential questions can arise in terms of material strength, color, texture, and pattern. Sometimes these qualities can vary within a single material, so obtaining large quantities of the same material may be tricky.
Fig. 1.5.1
The following case studies have incorporated materials that specifically relate to the building concept, and have done so rather successfully.
1.5.1 - Historical References
Daniel Libeskind's Jewish Museum resides in Berlin. It houses many exhibits related to Jewish-German history, a somber point of history for both Germany and those who practice the Jewish faith. Libeskind's museum design allowed him to choose materials that would effectively communicate the tone of events that occurred during the Holocaust. His primary material choices were concrete and zinc. The building is composed of many sharp angles and joints; concrete is fluid when poured, so it is able to take on the shape of any form, ideal for this project. The structure and foundation were made primarily of reinforced concrete, and during construction both pre-cast and cast-in-place techniques were used. Concrete weathers well, and can take on the characteristics of a cool, dark place, as utilized in the museum. The ability to cast the building also allowed for the windows and strips of lighting in the façade. The entire concrete building is clad with zinc panels. Zinc has a long history in German architecture; however, it was primarily used for roofs or decorative architectural elements. Zinc was a readily available material and relatively cheap. Libeskind notes that zinc "offers a metaphor for the Jewish presence in Berlin". He knew that the panels would oxidize over time and change color, allowing the structure to fade into its surroundings, "much like the Jews are only one part of Berlin's history". Libeskind's material choices helped the project come in 10% under budget.
Fig. 1.5.2 Daniel Libeskind; Jewish Museum. Reinforced Concrete Interior
Fig. 1.5.3 Daniel Libeskind; Jewish Museum Zinc Facade
1.5.2 - Experientially Driven
Steven Holl has been known to experiment with materials in much of his work, and the Nelson Atkins Museum addition is no different. Holl's design played largely with the effects of space and light on experience, and he chose physical materials that would emphasize his non-physical ideas in a meaningful way. The interior of the museum addition is equipped with dark flooring and generally blank, bright white walls. The interior spaces are very minimal in their decoration, but this puts greater emphasis on the work being exhibited. The neutral interior prevents people from being distracted, and also allows for the opportunity to house temporary exhibits.
Fig. 1.5.4 Steven Holl; Nelson Atkins Museum. "Glowing Lantern" - At night the building is lit from within, illuminating the sculpture garden.
Light is the largest factor in most Steven Holl projects. Holl uses light to delineate forms and give life to the interior spaces; in the Nelson Atkins addition, he also uses artificial light to illuminate the entire building from within. In order for light to become the material that sets the tone of the space, Holl had to choose a material that would promote it. Holl chose an insulated glass product, Okalux, manufactured in Germany, which helps protect against heat and sound. The insulation is made of a "glass fiber tissue" that helps diffuse natural light into the space and limits glare. The insulated glass is translucent, so it does not permit views out, but the lighting quality is enhanced throughout the space. The translucent glass also gives the building the opportunity to be lit from within, becoming a light source for the Nelson Atkins sculpture garden at night.
Fig. 1.5.6 Steven Holl; Nelson Atkins Museum
Fig. 1.5.5 Steven Holl; Art of the Americas Wing Interior Lighting Quality - Exhibit
1.5.3 - Cultural Influences
The Institut du Monde Arabe (Arab World Institute) in Paris, France brought Jean Nouvel to fame. The institute is meant to promote research and knowledge about Arab culture in France. In addition to a museum, the building also houses a restaurant, library, auditorium, and offices. The most notable feature of this building is the materiality of its dynamic façade.
Fig. 1.5.7 Jean Nouvel; Arab World Institute. Solar-Activated Southern Facade
Nouvel took inspiration from Arabic architecture, specifically the mashrabiya. Mashrabiya is the Arabic term given to a "type of projecting oriel window enclosed with carved wood latticework…located on the second story of a building or higher". The mashrabiya has been used in traditional Arabic architecture for centuries. "It is most often used on the street side" but has also been utilized on the side of a courtyard. In Nouvel's project, the mashrabiya is utilized on the south side, overlooking the courtyard. This provides privacy and shading to the interior, while still allowing views out. The south façade is a large glass curtain wall, but on the interior sits a light-sensitive screen that reacts to the sun. Primarily made of aluminum, the façade shines in the sunlight, bringing attention to its placement in the city. There are 240 apertures on the façade. The apertures open and close automatically and can help control the amount of heat and light entering the building. This allows for filtered light in the interior spaces, an "effect often used in Islamic architecture". Nouvel also utilized white cement and marble throughout the project, which offer reduced glare in a museum setting. Both materials have been utilized in many other examples of Arabic architecture.
Fig. 1.5.8 Jean Nouvel; Arab World Institute. White Concrete
1.5.4 - Climate Sensitive
Fig. 1.5.9 HOK; Salvador Dali Museum Inside the Glass Enigma
HOK's Salvador Dali Museum was a revolutionary design for US architecture. It "houses the largest collection of Dali's paintings outside of Spain". The building resides in St. Petersburg, FL, an area prone to hurricanes. The collection of paintings is so valuable that the architect wanted to protect it against damage from building collapse due to hurricanes. Dali's paintings consistently play "with differences between organic forms and the mathematics of nature", and the architect wanted to reflect this within the building. For this reason, he created the glass enigma, morphing out of a concrete box. Glass and concrete are the primary materials in this design. The building is primarily cast-in-place concrete; the exterior walls are 18" thick in order to combat wind loads from a Category 5 hurricane. The storm surge level is 28', so all of the critical functions are placed above that height. The concrete box is meant to protect, and was left exposed and unfinished in order to reduce maintenance and to contrast with the organically formed atrium. The atrium glazing is also made to combat hurricane wind loads at a thickness of 1.5". The geodesic geometry allows for different views to the outside in every direction. Each panel and connection is different, so each was fabricated and shipped to the site for assembly. The glass enigma is transparent, permitting views out and allowing natural light to enter the space. This structure was the "first type of free-form geodesic geometry in the United States".
Fig. 1.5.10 HOK; Salvador Dali Museum Cast in Place Concrete Spiral Stair
1.6 - Sustainability
Sustainability is a driving factor when designing the built environment. There are many definitions for sustainability, but in the most general sense it is the "capacity to endure". This can be thought of in terms of human consumption, environmental conservation, etc. Sustainability seeks to "use resources in a way that makes it possible for future generations to enjoy the same resources". In terms of building design and construction, here are a few general guidelines to follow.
Use the Site Responsibly
Incorporate the building into the site; develop from what is existing, as earthwork can often be expensive. Notice the ratio of paved surfaces to greenery; some landscaping should remain. Incorporate gardening into the landscape, or plant native vegetation because it will take less water over its lifetime. Storm water infiltration and irrigation systems can help control water consumption. Cisterns can be helpful, depending on site location and amount of rainfall; cisterns collect rainwater for reuse throughout the building, including providing water for irrigation systems and toilets.
Utilize Passive Strategies
Shading devices, building orientation, and natural ventilation are all examples of passive strategies. Building orientation and shading devices can both aid heating and cooling loads throughout the year, ultimately reducing cooling costs by limiting the use of air conditioning. Cross ventilation is a natural way to cool a space, and also helps improve air quality throughout the space.
Incorporate Energy-Efficient Building Strategies
When possible, try to incorporate local materials. This can help lower overall building and construction costs, and it also saves time and energy in shipping materials to the site. Choosing energy-efficient materials is also very sustainable. In order to do so, one has to research climate data and choose materials that are able to stand up to the changing weather (materials with a high R-value for insulation, or materials that protect against solar gain/loss). Lastly, try to incorporate non-toxic materials.
Fig. 1.6.1
Sustainability practices also relate to the construction of buildings. Manpower and labor are resources, too; if items are shipped to the site in pieces and have to be assembled, it takes more time and energy than creating a "kit of parts". The following case studies provide examples of successful strategies to reduce energy consumption and utilize sustainable practices.
1.6.1 - LEED Certified
Renzo Piano's MUSE (MUseo delle ScienzE) opened in the summer of 2012 in Trentino, Italy. The museum is meant to be just one part of a larger cultural center being formed to bring people back to an area of Trentino that had once been alienated. The building earned LEED Gold certification for sustainability, due in part to its proximity to bike trails and its limited parking (to promote carpooling and public transportation). The museum lies in a valley of the Dolomite mountains (part of the Alps), so it is often viewed from above. The roofline mimics the surrounding mountains, and the design makes use of the existing site in a variety of ways. There are solar panels on the roof to help contribute to the building's energy needs, and the building is also equipped with geothermal heating. Piano provided solutions for natural lighting and ventilation throughout the facility, which helped to greatly reduce energy consumption and increase thermal comfort in the interior spaces. Though much of the building's exterior is glazing, the rest of the facade is white in color; this helps to reflect sunlight and lower building cooling loads throughout the hot months of the year. One portion of the museum is a greenhouse displaying non-native plants. Plants that aren't native to a region often use more water, so a cistern has been incorporated on the site. The cistern collects rainwater and reuses it for irrigation purposes, helping to reduce "overall use of hydro water by 50%".
Fig. 1.6.2 Renzo Piano; MuSE Sustainable Practices
Fig. 1.6.3
1.6.2 - Green Space
Steven Holl's Nelson Atkins Museum addition incorporates many sustainable features. The building is able to illuminate the outdoors because of its double-skinned façade. This façade provides a high amount of insulation while still allowing a gracious amount of natural light throughout the day, cutting down on energy consumption. The cavities between the sheets of glass act as a thermal mass, gathering heat in the winter or letting it out during the summer. Holl created a system of "Breathing T's" to transmit light down through the galleries; additionally, the T's provide a location to hide HVAC ductwork. The combination of these two tasks makes the building more energy efficient overall. The Breathing T's allow natural light to enter, diffuse onto the surfaces of the white walls, and then reflect into the galleries. The outdoor sculpture garden continues up and over the galleries, creating green roofs that help "achieve high insulation and control storm water." Green roofs are often used to reduce the amount of storm water runoff: the roof stores the water as it rains, and then eventually allows it to evaporate, reducing the overall flow of the rainstorm. The lifespan of a green roof is generally much longer than that of a conventional roofing system, and though there is often a higher initial cost for green roofing systems, the overall lifetime costs decrease due to reduced energy consumption. This integration of site and building allows the visitor to experience the building and landscape as one.
Fig. 1.6.4 Steven Holl; Nelson Atkins Museum Addition “Breathing T’s”
1.6.3 - Other Examples
HOK, Salvador Dali Museum
The Salvador Dali Museum in St. Petersburg, FL is sustainable in its rigidity. The building was made with 18" thick walls, a 12" thick roof, and a 1.5" thick glass enigma that had to be assembled on site. This doesn't sound sustainable; however, if sustainability is the "capacity to endure", this building can. The building is made to stand in winds of up to 165 mph. This is meant to protect the valuable collections inside, and the building itself; buildings that are built to last are sustainable. The thick concrete walls can also act as a thermal mass, passively heating and cooling the space throughout the day. The building is also equipped with a highly efficient AC system with hot-water dehumidification, and the glass enigma allows strategic daylighting to illuminate the interior spaces.
JEAN NOUVEL, Institut du Monde Arabe
Utilizing the sun for passive cooling and shading is one sustainable aspect of the Institut du Monde Arabe. The south façade of the Arab World Institute is solar responsive. From an environmental standpoint, the façade is able to control solar gain and cooling load by reducing the aperture size based on the time of day or angle of the sun. Each aperture on the façade is controlled by a "photovoltaic sensor which permits 10 to 30 percent daylight", keeping the temperature inside the building at a comfortable level.
FOSTER + PARTNERS, The Art of the Americas Wing, Museum of Fine Arts
The Art of the Americas Wing addition to the Museum of Fine Arts in Boston was designed to be energy efficient. The large courtyard is boxed in to create a giant atrium; however, the atrium is naturally lit and allows for open views to the sky. Galleries in the building are climate controlled to keep the art protected, which also means that cost can be controlled through climate adjustments in individual galleries. Foster + Partners has also started looking at sustainable means of shipping. They have adopted building-crate shipping, meaning they ship and receive many items of varied sizes at once in a single crate to reduce overall shipping.
Fig. 1.6.5
1.7 - Conclusion
These are just a few of the many examples that can act as helpful precedents. We can conclude from our studies that each museum is vastly different, but there are patterns throughout much of museum design. Circulation is often most directly related to concept, curation, and safety; the type of circulation can often define how interactive the museum is. Light is also largely affected by museum concept and form. Materials and sustainability, however, most often directly relate to context and cultural ideals. Many of the museums in this chapter have directly related building form and materiality to the concept or culture, which helps inform visitors about the museum before they ever go in. However, this can cause the building form to deviate from the surrounding context, so how does one design for this?
Design Recommendations:
- Make a decision about contextual design: Does the building need to stand out, or fit in? Should it express the collection on the outside?
- "Think of light as a material" (Steven Holl, Nelson Atkins): Weigh natural light vs. artificial lighting, and decide if objects can be subject to sun.
- Neutral spaces allow more potential for temporary exhibits. Think about scale.
- Layer information: There should be a hierarchy to exhibit information. Attempt to tell a story, or put things in an order.
- Choose to integrate sustainable materials and systems.
1.8 - References + Works Cited

Jewish Museum - Daniel Libeskind
"[ARTE] Architecture Collection - Episode 12: Daniel Libeskind - Jewish Museum Berlin." YouTube. (accessed September 10, 2013).
Kroll, Andrew. "AD Classics: Jewish Museum, Berlin / Daniel Libeskind." ArchDaily, 25 Nov 2010. (accessed September 10, 2013).
"Jewish Museum Berlin." Studio Daniel Libeskind. com/projects/jewish-museum-berlin (accessed September 10, 2013).
"Jewish Museum, Berlin." NCSU Weblog. kberry.wordpress.ncsu.edu/ (accessed September 10, 2013).

Nelson Atkins Museum - Steven Holl
"The Nelson-Atkins Museum of Art / Steven Holl Architects." ArchDaily. (accessed September 10, 2013).
"Nelson Atkins Museum of Art." Steven Holl Architects. php?type=museums&id=19 (accessed September 10, 2013).

Salvador Dali Museum - HOK
Cliento, Karen. "In Progress: Salvador Dalí Museum / HOK + Beck Group." ArchDaily. (accessed September 9, 2013).
Fernandez Solla, Ignacio. "A Quest for thick glazed Facades." Facades Confidential. facadesconfidential.blogspot.com/2012/05/quest-for-thick-glazed-facades.html (accessed September 9, 2013).
Hine, Hank. "Beauty by the Bay." HOK. (accessed September 9, 2013).
Pham, Diane. "Interview: architect Yann Weymouth of HOK on new Dali Museum." Inhabitat. inhabitat.com/interview-hoks-yann-weymouth-discusses-designing-the-hurricaneresistant-salvador-dali-museum/ (accessed September 9, 2013).

Art of the Americas Wing - Foster + Partners
"A wing in the making." Boston.com. (accessed September 9, 2013).
"Art of Americas Wing at the Museum of Fine Arts, Boston / Foster + Partners." ArchDaily. (accessed September 9, 2013).
"Museum of Fine Arts." Foster + Partners. com/projects/museum-of-fine-arts/ (accessed September 9, 2013).

MuSe - Renzo Piano
Loomans, Taz. "Renzo Piano's MUSE Museum is a Sustainable Building that will Revitalize Trento's Riverfront." Inhabitat. inhabitat.com/renzo-pianos-muse-museum-is-a-sustainable-building-that-will-help-revitalize-trentos-riverfront/ (accessed September 9, 2013).
"MUSE – a Science Museum is the core of a regional development project in Italy." DETAIL-online.com. com/architecture/news/muse-a-science-museum-is-the-core-of-a-regional-development-project-in-italy-021601.html (accessed September 9, 2013).
"Renzo Piano's mountain-like MUSE opens in Trentino." designboom. (accessed September 14, 2013).
"MuSe Museum by Renzo Piano Building Workshop." Dezeen. (accessed September 10, 2013).

Institut du Monde Arabe - Jean Nouvel
"AD Classics: Institut du Monde Arabe / Jean Nouvel." ArchDaily. (accessed September 13, 2013).
"Arab World Institute." WikiArquitectura - Buildings of the World. (accessed September 9, 2013).
"Ateliers Jean Nouvel - Inter-cultural Institut du Monde Arabe - Paris." mapolis | Architektur. (accessed September 13, 2013).
Chapter 2 - Demographics
Ahmed Al Monsouri, Alex Olevitch, Bec Ribeiro
2.0 Introduction
2.1 Population Diversity
2.1.1 Family Background
2.1.2 Gentrification
2.2 Economic Status
2.3 Cultural Profile of Columbus
2.3.1 Columbus Transportation
2.3.2 Employment of Columbus
2.3.3 Religion
2.3.4 Education
2.3.5 Existing Cultural Establishments
2.3.6 Crime
2.4 Client Profiles
2.5 The Columbus Condition
2.6 Conclusion
Columbus Museum of Industrial Objects
Demographics
Introduction
The city of Columbus has a very restrained demographic population. Looking at the research done in this chapter, you will find that the city lacks diversity in many different aspects, including race, religion, and education. The lack of diversity in the city results in a chain reaction where everything affects everything else. For example, there is a lack of ethnic diversity because there are not any major businesses other than Cummins; if there were a wider variety of businesses in the city, it might be more inviting for people from all over the world to come and work. When people have lower incomes they are restricted to enrolling their children in public schools, which in turn does not encourage or demand a more diverse educational system. Therefore most people tend to enroll their children in public or religious schools, as opposed to private schools. The studies and research in this chapter show the lack of interest in private schooling, and the abundance of religious institutions. This lack of educational diversity then affects the city's growth, which in turn affects the average income. This is an ongoing cycle of cause and effect that has long been established.
2.1 Population Diversity
Historically, the residents of Columbus have been known for their community involvement. Like many districts, it has been formed through the vigorous shifting of its physical environment. Retail and market forces are a major issue in Columbus: the downtown trade has declined and is in much need of improvement. Being able to energize the area is a driving influence on the success of attracting people downtown. Traffic, both vehicular and pedestrian, is another main factor in the success of getting people into the downtown areas. As of recent studies, Columbus is seen as inconvenient to both residents and visitors; because of the lack of public transportation, it is important to establish a standard of circulation. This section refers to designing architecture/museum space that responds to the demographics and context of the site, which is vital to its success. Viewing this information will influence the design of a museum, which holds great potential of becoming a cultural mecca. Set in this chapter are graphs that explain and analyze the general demographics of the city of Columbus, along with analytical breakdowns in response to the data we have collected that give suggestions for preferred design choices.
Fig 2.1.2 - Shows the age and gender distribution profile.
Age distribution is a large factor attributing to the lack of a dynamic ambiance throughout Columbus. Creating a program that targets active elders and empty nesters can help encourage this force. Another completely separate program should be created pursuing the younger urban professional market. This will help the general growth of the area while making it a more cosmopolitan city. We suggest the addition of a live performing arts setting in addition to the exhibit space. Amenities such as cafes and restaurants would also help the development of both groups.
Fig 2.1.1 - Shows total population trends and forecasts.
As evident in the total population chart, Columbus is a slowly but consistently growing city, growing about 14% every 10 years. While Columbus as a whole is increasing in population, the downtown has been decreasing. The success of the museum is dependent upon increasing activity downtown; the lack of motivation and purpose to come to the site is a serious problem. As a whole, a city of 45,000 people is relatively small. We would suggest designing a building that is smaller in scale, yet has the ability to adapt to the subtle growth that the city is undergoing.
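The 14% per-decade trend above can be sketched as a simple compound projection. This is a minimal sketch only: the 45,000 base and the 14% rate come from the text, while the decades-out figures are illustrative and not census forecasts.

```python
# Hypothetical sketch: compounding the chapter's observed ~14% per-decade
# growth rate from the stated base of roughly 45,000 residents.
# The function name and all projected values are illustrative assumptions.

def project_population(base, decade_rate, decades):
    """Return the population at each decade, compounding the per-decade rate."""
    return [round(base * (1 + decade_rate) ** d) for d in range(decades + 1)]

projections = project_population(45_000, 0.14, 3)
for d, pop in enumerate(projections):
    print(f"+{d * 10} years: ~{pop:,}")
```

Even at this modest rate, the compounding suggests the building program should anticipate gradual, not dramatic, growth in visitorship.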
After extensive research it is evident that education is the key to success in Columbus. Most of the residents in the area have received only a high school education, and those who are elite-educated were imported from somewhere else. All of the public schools are overcrowded with kids while the private schools have next to no one in them. It is clear that the residents are not receiving the proper education to stimulate a nourishing, diverse community. Having these opposing cultures and mixtures of people is important for the success of a society; it helps the ability to bring new people in and function properly overall. White residents overwhelmingly dominate the city of Columbus. While it is important to please the majority, steps must be taken to promote a more positive dynamic. We aren't proposing to change the city, just to enrich and broaden the existing culture. We advise designing a space that has the potential for a more heterogeneous exhibit. This would help to attract a vast variety of visitors to the museum and even to the city itself.
Fig 2.1.3 - Describes how all of the separate aspects of the downtown are contributing factors to the cultural diversity issue downtown.
Fig 2.1.4 - Describes the different races and ethnicities found in Columbus.
2.1.1 Family Background
At first glance it may not seem like the number of vacancies is an issue; further findings and analysis concluded that this assumption was untrue. Blocks in the immediate downtown area reach up to 50% unoccupied homes. When compared to thriving cities such as New York, which never surpasses 7%, the problem becomes more apparent. While it is unrealistic to expect Columbus, Indiana to flourish like New York, there is plenty of room for improvement. Compared to Columbus as a whole, the immediate downtown area is marked by higher vacancy rates and lower owner occupancy. Due to the increases in multi-family housing, the market has a minimal area for growth. Because of this, the value of housing is lower downtown and appreciating at a much slower rate than the rest of Columbus. Families move out of the downtown for better opportunities and education for their families. Lowering the number of vacancies will increase both the number of residents in the city and the desire to be in the city. Having more people in the city also brings more diverse cultures with it. This in turn would positively help to educate the residents and prevent them from having such a narrow frame of mind.
Fig 2.1.6 - Describes in more detail where the vacant housing occupancies are located.
Fig 2.1.5 - Describes the different types of housing occupancies.
Most of the dwellings in Columbus consist of family occupancies, including married couples or blood relatives. It is important to notice that in the downtown area more of the residents are without children. This reinforces the idea that attention should be focused on the younger adult age groups. It is important that this group of people feels connected with their community in order for them to encourage the city's growth. However, creating a program that is geared towards family-oriented activity could help promote a healthy growth of family life in the area. Making the museum family friendly can help better connect the residents of Columbus together. It could also help encourage a community that aims to informally educate its inhabitants.
Fig 2.1.7 - Describes in more detail where the vacant housing occupancies are located.
Fig 2.1.8 - Describes the different household and family types.
The majority of residents in Columbus seem to be young adults who are married without kids. Because of the information gathered, it could be assumed that a new generation of children will be born in the near future. It is important that when this happens the residents feel like they can raise their kids in the area. They need reasons to stay, including a sense of community that could be provided through the museum design. For most families, their children's education is most important. Focusing on improving education throughout the downtown area would help give these families a reason to stay, and would also encourage new families to want to raise their kids in the area. This would increase the population, encouraging a more positive atmosphere. By increasing this factor the community would receive more local support, which would create more development for the city. This would help to create more job opportunities, making the city seem more attractive to outsiders and helping it grow. It would then increase visitation and create an overall successful community.
Fig 2.1.10 - Analyzes how focusing on the residents who are now married can help create a more dynamic and positive community
Fig 2.1.9 - Describes the different types of marital statuses.
Columbus Museum of Industrial Objects
2.1.3 Gentrification

The analysis below draws on "Taking Stock of the Community," part of the Downtown Columbus Strategic Planning Project.

Site Analysis Key Findings

The site analysis key findings summarize the outcomes of Downtown Columbus' physical analysis. Downtown Columbus' existing physical conditions are a significant factor influencing future investment. Findings from this study indicate that the market potential in Downtown Columbus is strong; however, physical conditions such as land use, transportation, infrastructure, and character will factor into the downtown's ability to attract new development. The issues that most directly influence potential for redevelopment activity in downtown are summarized below. Several issues stem from a lack of after-hours activity and a lack of fully developed blocks, while others are related to the way downtown is organized and the number of surface parking lots.

1. Mostly small businesses and services at street-level in Downtown. The streets in Downtown Columbus were historically lined with animated shops and stores that added a vibrancy and activity to downtown. Today, these storefronts are primarily occupied by legal and banking services that are important to downtown's functionality but have a limited ability to animate the street, particularly after-hours. Much care should be given to locate businesses and shops that create activity and after-hour attractions at the street-level.

2. Few fully developed streets in downtown. Many blocks in downtown that were once developed are now paved for parking. Undeveloped blocks create gaps or "missing teeth" in the downtown streetscape and its ability to remain pedestrian friendly. Empty blocks should be a priority for new investment and development.

3. Surface parking separates residential neighborhoods and parks from downtown. Parking in downtown is important for Downtown Columbus' functionality. However, large surface parking lots that separate downtown from surrounding neighborhoods and amenities make downtown less attractive to walk to from nearby areas.

4. Fifth Street is a significant architectural tourism corridor. Fifth Street connects the community to Mill Race Park and the neighborhoods to the east and functions as a major community "front door" for tourists.

5. Second and Third Streets move traffic "past" downtown. The one-way streets, particularly Second and Third Streets, quickly and efficiently move traffic past Downtown Columbus. Travelers and visitors can easily miss downtown because it is not apparent, through signage and development cues, where downtown begins and ends.

6. Limited residential units within the downtown. Like many small communities in the Midwest, Downtown Columbus housing is limited. The Columbus housing market is strongest to the north, west, and east of Columbus. Downtown housing is important to create a strong and vibrant "24-hour" downtown.

7. Limited "after-hours" activity unless an event is programmed. Downtown Columbus is most active from 8 am - 5 pm. Most downtown stores and businesses close at or before 5 pm. Many of the activities and stores that remain open after 5 pm are located in the Commons Mall, thus adding little animation and activity to the street.

8. Side streets have a "fragile" level of activity. Most activity in Downtown Columbus is located along Washington Street, and downtown offers limited pedestrian connections to surrounding neighborhoods. As a result, side streets are often devoid of activity even during normal business hours.

Mixed-use buildings on Washington Street are beginning to offer residential above with office or retail space at street-level. Many communities throughout the United States are realizing the importance of housing to downtown's success.

Coriden Law Office on Washington Street is one of many businesses that benefits from free on-street parking in Downtown Columbus. Free on-street parking provides easy access to downtown and its businesses; however, patrons and employees do find parking in some locations to be difficult during peak hours.
Demographics
Fig 2.1.11 - Analyzes the downtown in more depth, giving a better understanding of the framework of downtown Columbus.
The current status of Columbus is that most residents live on the outer edges of the city, while the downtown area is dissipating in population. People who live in these areas claim to have no reason to go downtown. There is a strong market potential for downtown, but several obstacles remain. The first major issue is that there are few amenities that would entice residents to enter. The downtown is also situated in between 3 major highways; because of this, visitors just drive right through the area and miss the entire city. The goal is to create a program that brings people back downtown.
Fig 2.1.12 - Analyzes the relationship between the amount of residents in Columbus vs. Downtown. It shows how the downtown is situated in between 3 major highways making visitors pass right through the area.
Development Opportunities

The Development Opportunities diagram presents the property in Downtown Columbus determined to be underused or vacant. This includes vacant land and buildings and surface parking lots. Property that is labeled underused is principally comprised of surface parking but also includes structures that could offer development potential for the uses proposed in this plan. Property is labeled vacant if it is not a park and it has not received physical improvements. Structures are considered vacant if they have no use or occupants.

According to the diagram, approximately 34 acres of land are underused, 12 acres are vacant land with no structure, and 5.5 acres contain vacant structures. The larger underused parcels are located between downtown and Mill Race Park. All of the vacant structures reside in the southwest corner of Downtown, where Highway 46 East crosses the White River. The diagram keys 21 underused parcels and 7 vacant parcels by size and zoning classification. (Source: 2005 City of Columbus Downtown Development Potential & Property Inventory, Columbus/Bartholomew Planning Department; Development Concepts, Inc.)

Fig 2.1.13 - Analyzes contributing factors to the gentrification issue downtown, focusing on undeveloped blocks.

According to the diagram, about 34 acres of land are not being used: 12 of these acres consist of vacant land that has no structure, and 5.5 of these acres have a structure that is unused. All of these vacancies create pockets in the city that disrupt the circulation of the downtown. It is unnecessary to have multiple parking lots in the center of downtown. Residents have complained that parking is an issue downtown, and observation of the sites makes clear that these parking lots are being used but are not beneficial to proper functionality. It would be much more appropriate to have parking ramps on the edges of downtown. Eliminating these gaps in the city and providing more amenities would help motivate people to walk. This would encourage people to be engaged in all of the activities instead of driving to a single destination and leaving straight away. The 5.5 acres of land that encompass unused buildings should be immediately filled with specific programs. This would be an economically convenient way to start activating the area.

Fig 2.1.14 - Analyzes the gaps in the city. This helps with understanding how the gaps contribute to the gentrification issue downtown.
Many residents admit that they do not come downtown for entertainment options. There is an overwhelming need for evening options and entertainment that includes performing arts. Festivals are held from time to time and generate large community support; these seem to be the main draw for residents attending the area. Programming a space that could hold amenities like this more frequently could help repopulate the area. From analyzing what residents attend, the greatest potential for success lies with bars or nightclubs and spaces that have live music performances.

The Second Street suspension bridge (pictured above) and Bartholomew County Public Library (pictured below) are just two of 60 architecturally significant structures in Columbus. Of these 60 public and private buildings, 50 provide the most concentrated collection of contemporary architecture in the world. Columbus' program advocating for modern architecture began in 1942 with a series of events that started when the First Christian Church dedicated its new building. Designed by Eliel Saarinen, the building heralded the beginning of modern architecture in Columbus. Since then, many award-winning architects have designed buildings in Columbus, including Richard Meier, I. M. Pei, Kevin Roche, and Robert Venturi.

Dining is ranked as the highest necessity for the locals. They are looking for a variety of options in the median price range. A study of market potential was conducted using the Bureau of Labor Statistics' Consumer Expenditure Interview Survey. By researching residents within a 15-minute radius, it was concluded that over 92.5 million dollars is spent annually at restaurants and bars; 16% of this is spent on alcoholic beverages. It was also concluded that, given the right choices, residents would spend over 98.6 million dollars. After analyzing this data through supply and demand and what the city could handle, it is clear there is the most potential for full-service restaurants. There is potential for 8 of these types of restaurants, which would generate over 7 million dollars!

Amid the famous architectural works, the Columbus Bar at the intersection of Fourth and Jackson Streets offers local residents and tourists a place to gather.
Fig 2.1.15 - Analyzes contributing factors to the gentrification issue downtown, focusing on a lack of after-hours uses.
The downtown area is known for being a 9-to-5 city, with most stores and amenities closing at that time. This is a major part of the gentrification problem. A survey of the surrounding areas asked for the most convenient times to visit the city; most agreed that it was after 5 and on the weekends, the exact opposite of when downtown is actually active.

Mill Race Park occupies 85 acres at the western edge of Downtown Columbus. It includes a 500-seat performance amphitheater capable of accommodating 5,000; a round lake; a boat house for paddle boat rentals; three picnic shelters; a large playground; a basketball court; a fishing pier; a park overlook and park tower; horseshoe pits; restrooms; connection of the existing People Trail with a one-half mile River Walk; and a wetland interpretive area.

It is evident through research and survey that Washington Street has the greatest potential for generating revenue on shopping. Residents express the need for such attractions to motivate them to come downtown. It is suggested to include stores that would create a wide variety; hobby stores in particular would generate a great amount of success. It is determined that tourists devote about 21% of their spending to shopping. This is a huge amount of revenue that could be received while bringing more people to other attractions.
Fig 2.1.16 - Displays where the current points of interest and shopping areas are located. This helps give a clearer understanding of where exactly improvements could be made.

Fifth Street connects Mill Race Park to residential and institutional uses to the east and is linked by seven of Columbus' significant works of modern architecture.

Target is one of many "big box" retail stores located on National Road (U.S. 31 Bypass) on the north side of Columbus that draws patrons from throughout Bartholomew and adjoining Counties.
Fig 2.1.17 - Analyzes contributing factors to the gentrification issue downtown, focusing on one-way streets.
The one-way streets have caused a major problem with how people interact with the downtown. While 2nd and 3rd Streets are very effective at getting people moving, they cause people to completely miss the downtown. There is no proper signage indicating where the downtown is located. The large grayed area on the map indicates the proposed museum design space. It is centrally located within the downtown area, which could help to better circulate the city. Since circulation and parking are poor, having this centrally located building could encourage people to walk from the outer areas, where parking garages could be located. This would also help draw people to other programs that could be produced nearby. The smaller block is another proposed place that could help with programming about education. It is located next to an educa-
Fig 2.1.17 - Shows where one-way streets are causing people to completely miss the downtown, adding to the gentrification issue.
2.2 Economic Status

There is a minimal number of private schools and there are low household incomes. A museum that bridges that disconnect in education is a good option because it would encourage people to come in, as opposed to pushing them away with its foreign design language.
Fig 2.2.1 - Describes the rates at which you can rent different apartment types at.
Fig 2.2.2 - Describes household income and trends for the residents throughout a decade.
Fig 2.2.3 - Describes the most common male jobs.
Fig 2.2.4 - Describes the most common female jobs.
This graph reinforces Columbus' pioneering role in the engineering and manufacturing industry relative to the rest of Indiana. 10% of the entire male population of Columbus is involved in engineering. This is not surprising given the city's affiliation with manufacturing and industrial production.
In contrast to the male statistic, the most common female occupations are typically administrative, secretarial, and educational. Secretaries and support roles within administrations hold the greatest percentage of female occupations at 7%. This statistic, in combination with the statistics for male occupation, reveals an important trend within the city: culturally established gender roles are still professionally enforced. Their professional reinforcement is likely a reflection of the overall culture within the city.
Fig 2.2.5 - Describes the most common male Industries.
Fig 2.2.6 - Describes the most common female industries.
2.3 Cultural Profile of Columbus

This section will more closely examine the cultural profile of the city of Columbus. The examination will include both residents and tourists, as well as the forces within the city's infrastructure that helped to establish its predominant cultural trends. Conversely, the existence of these institutions and infrastructures is a direct reflection of the values of the residents and employees. Transportation systems, employment rates and occupations, education, religion, crime, and existing cultural institutions all contribute to and reflect the culture of Columbus. Of course, these factors do not independently influence the residents; they also work together, in either harmony or dissonance, to perpetuate and strengthen the city's cultural virtues.
It is important to note that the city of Columbus is the only city within Bartholomew County with a sizable population. This has earned the city a "hub" status for the county, and it has become the cultural center for the surrounding area. Despite Indianapolis' proximity, Columbus remains an important node in the Indiana landscape. Due to this condition, Columbus must serve the greater county and has been outfitted with an airport and other forms of public transit.

Fig 2.3.1 - Graph of relationships of city population within Bartholomew County.
2.3.1 Columbus Transportation

The city of Columbus, like many urban hubs surrounded by rural landscape, relies heavily on private vehicular transportation. In such an environment, other forms of transportation such as walking, biking, or rail are less convenient if not less time-effective. The city of Columbus does implement a bus "fixed-route" system consisting of four primary routes throughout the city. Fig. 2.3.4 shows a map of these routes.
Fig 2.3.3 - Transportation within the city is dominated by single drivers.
This map reveals a key cultural fact about the city: there is little sustainability consciousness with regard to transportation. For the majority of the residents, the choice to drive alone to work is reinforced by the popularity of the decision. The small yellow district in the center of the city highlights the area with the most bike riders. This number, 7.1%, is encouraging; however, when compared to the rest of the districts it appears to be an anomaly within the city.

Fig 2.3.2 - Shows the % of people that bike to work. The greatest density, located in a central district (represented in yellow), has 7.1% of residents reporting that they regularly bike to work.
Fig 2.3.4 - Map of the four ColumBUS Transit fixed routes (Routes 1-4), with boarding points and times throughout the city.

Hours of Operation: 6:00 am – 7:00 pm, Monday through Saturday. ColumBUS Transit has four fixed routes that leave the Mill Race Station at five (:05) after each hour. Fixed route service is 25¢ per trip, payable upon boarding.
Fodrea
Pence St. Park
Transferring to another route will be an additional 25¢.
Columbus East High School
Pence Place Apts
Wehmeier Addition
Cosco
All vehicles are accessible. All fixed routes are equipped with bicycle racks.
South Mapleton Industr ial Park
Proposed Site
Recycling Center
Cr
ee
k
As seen in Fig. 2.3.4, the proposed site is situated in a crucial zone for public transportation. The primary transit station is five blocks away, and the red line, which serves the central west-east axis of the city, runs down the adjacent street, 5th. While elsewhere it has been made apparent that the primary form of transportation is the personal vehicle, the infrastructure for promoting more sustainable travel exists and is concentrated nearest our initial site.
Boarding Points – Route 1: Donner Center / North High School / St. Bartholomew / Schmitt Elementary / Ivy Tech / IUPUC / Learning Center / Social Security / Candlelight / Target / Northpark Medical / Sandcrest / Kroger / International School of Columbus / Northside Middle School / Taco Bell / Volunteers in Medicine / Moose

Boarding Points – Route 2: Chestnut / Party Mart / Family Video / CSA New Tech / Lincoln Park / Marsh / Holiday Center / Hobby Lobby / Aldi / Target / Fair Oaks Mall / Big Cheese / California / St. Peters / CSA Lincoln / Library / Visitors Center / Post Office

Boarding Points – Route 3: 8th St. 1st Presbyterian / Quail Run / Hospice / Captain D's / Marsh / Holiday Center / Villas / Fairington / Smith Elementary / Lincoln Village / Foxpointe Apts. / Flintwood / Richards Elementary / Williamsburg / Holiday Center / Target / CHR / Two Worlds / Hospital / United Way / COHA / Central Middle School / Volunteers in Medicine

Boarding Points – Route 4: City Hall / Health Dept. / Dorel / Wehmeier / East High School / McDowell / Eastside Comm. Center / DSI / Goodwill / AMC Movie Theater / Walmart / Cummins GOB / Target / Quail Run / Mapleview / BMV / Centerstone / Columbus Christian / CSA Fodrea / FFY / Cummins Tech Center / Courthouse

Shelters are available in areas marked on the map.
Fig 2.3.5 - Bus routes within the city

Columbus Museum of Industrial Objects
Demographics
2.2.2 The Employment of Columbus
Fig 2.3.7 - Describes the payroll rate in Columbus.
Columbus has within its borders the standard institutions found in nearly every city: establishments such as retail, entertainment, accommodation, education, and production/manufacturing. What sets Columbus apart is its thriving manufacturing industry, epitomized by the Cummins headquarters. As indicated by the chart, the manufacturing industry brings in 56% of the total annual payroll for the city. This industry also employs more people than any other industry in the city.

Fig 2.3.6 - Describes the employee rate in Columbus.

Arch 401 - Fall 2013
2.2.4 Religion

Statistically, 74% of the residents of Columbus identify with one Christian denomination or another. The remaining 26% can only be identified as “other” in the chart, as they are dwarfed by the sheer number of believers in Christianity. This has made a fairly strong impression on Columbus city culture. Churches have historically been well funded, and many of them were designed by some of the premier architects of their time. As a result, according to a strategic research plan for Downtown Columbus conducted by the Columbus Redevelopment Commission in 2005, the number one reason that residents come into the downtown area is for church. This is important information for the design of the Museum for Industrial Innovation, as the proposed site is flanked by two of the most prominent churches in the city. Another interesting note, which will later be discussed in greater detail, is the presence of private religious educational institutions such as elementary schools and schools teaching at higher levels.
Fig 2.3.8 - Describes the types and quantity of religious buildings there are in Columbus.
What little homosexual representation is present in Columbus has been pushed to the outskirts of the city. While there may also be other factors, this could have something to do with the strong religious persuasion of the residents within the city's borders.

Fig 2.3.9 - Describes the likelihood of homosexual households by area.
2.2.3 Education
As seen in this graph, Columbus has a lower percentage of residents who have graduated high school than the rest of the state. On the other hand, it has a greater number of residents with at least a bachelor's degree, and its percentage of residents with master's degrees is nearly double the Indiana average. This statistic speaks to the primary occupations and industries within the city. A large company like Cummins employs a great number of engineers, an occupation that requires at least a bachelor's degree. It can be inferred that those greater numbers are employees of these larger companies and are probably not native to Columbus, but rather were brought into this manufacturing hub, thus skewing the averages.
Fig 2.3.10 - Analyzes the educational attainment in Columbus.
Elementary Schools (PK-8): 5 Christian-affiliated, 10 with no religious affiliation
Middle Schools (7-8): no religious affiliation
High Schools (9-12): no religious affiliation
Other (PK-12): all Christian
Clearly, the county does not lack formal education opportunities. While many of the elementary schools are strictly pre-K, they still contribute to the prototypical path of the American student. However, a well-balanced education is twofold, consisting of both formal and informal education. So, despite the numerous opportunities presented, the city lacks an outlet for students to continue to educate themselves in a more engaging and less formal environment. This type of structure can help establish negative connotations toward educational establishments, which may explain why as many as 33% of residents choose not to pursue higher education after high school.

Fig 2.6.10 - Analyzes where the educational facilities are located.
2.2.5 Existing Cultural Establishments

Columbus Top Tourist Attractions
1 - Miller House (Cultural Institution)
2 - North Christian Church (Religious)
3 - First Christian Church (Religious)
4 - Mill Race Park (Park)
5 - Columbus Architecture Tours (Cultural)
6 - The Commons (Entertainment)
7 - Kids Commons (Entertainment)
8 - St. Peters Lutheran Church (Religious)
9 - Zaharako's Ice Cream Parlour and Museum (Food)
The trend within the tourist attractions is immediately evident: it leans toward cultural and architectural institutions and entertainment. As an architectural mecca of the Midwest, the city is often visited for just such reasons. The three churches on the list are religious establishments, but rarely do tourists visit these for their own salvation; rather, they come to view the incredible design of these spaces. In this instance it could be said that they are cultural institutions as well, considering the cultural and architectural value placed on them by the residents of the city and the greater community.
Columbus Top Resident Attractions
1 - Church (Religious)
2 - Bank (Official)
3 - Post Office (Government)
4 - City Hall / Courthouse (Government)
5 - Parks (Recreation)
6 - Kids Commons (Entertainment)
7 - Entertainment (Entertainment)
8 - Library (Entertainment/Education)
9 - Shopping (Entertainment)
The trend for the residents is slightly different from that of tourists. The top reason for a resident to visit downtown is to attend church. This is probably augmented by the architectural prowess of these religious spaces, but more likely due to the strong Christian culture of the city. The second, third, and fourth reasons for visiting downtown are necessities and present a bleak view of the culture of downtown: there are few compelling reasons to visit. The latter attractions are better indicators of a thriving urban area, but the fact is they remain less frequented than official establishments.
Fig 2.3.11 - Types of establishments within Columbus
Upon review of the most frequented destinations of tourists and residents, note that many opportunities for informal education do exist within the city's infrastructure. However, despite their presence, they remain primarily an attraction for tourists and are disregarded by residents, further contributing to the educational bipolarity of the city's culture. This likely has to do with the fact that these institutions are capable of holding interest for perhaps one or two visits, so residents need not return (consequently contributing to the gentrification mentioned earlier).
Fig 2.3.12 - Attractions near site
2.2.6 Crime

Robberies and assaults are extremely low when compared to the national average, with rates often three times lower in a given year. In contrast, theft is outrageously high in Columbus, often doubling national averages. In recent years, Columbus has seen a dramatic increase in auto theft, while the national average has been on a steady decline. Crimes like arson, murder, and rape show no clear pattern or average. Overall, Columbus's national rating is near average; however, it is important to note these select crimes that are significantly more present.
Fig 2.3.13 - Describes the Crime Rates in Columbus.
2.3 Client Profiles

Visitor profile:
- Interest in architecture
- Middle-aged men and women
- Family oriented
- Interest in street fairs and car shows
- Interest in marathons
- Typical American tourist
Funder profiles:
- Cummins is the leading diesel engine company that has helped develop and shape the City of Columbus. It is a possible funder for the project, given that the museum houses innovative industrial objects.
- The visitors center in Columbus has a history of funding art and design initiatives within the city, including hosting tours of the Miller House. It is reasonable to conclude that it, too, would have interest in the proposed museum, especially given its immediate adjacency.
Columbus is ranked 6th in the nation by the American Institute of Architects for its architectural masterpieces, making it a good destination for people with an interest in architecture. The preceding research shows that the most likely visitors to Columbus are middle-aged or older families because of the activities associated with the city. There are not many businesses or programs designed to attract a younger demographic. There are, however, a car show and other street fairs throughout the year that attract families and car fanatics, as well as various biking and running marathons. The Cummins running marathon has become a big attraction, drawing more than 5,000 people to Columbus every year.
Resident profile:
- Caucasian
- Adults (over the age of 18)
- Family oriented with children
- Married
- Lower middle class
- Traditional
- Christian
- Republican

Looking at the demographics of Columbus, a recurring pattern for the type of resident is established. Most of the people that reside in the city are white adults over the age of 18. Many of them are married with children or live with their blood-related families. Looking at the abundance of religious educational institutions and the great number of churches in Columbus, we concluded that the people are very traditional. This research also showed a percentage as low as 0.3% of self-proclaimed homosexuals in the city. That, combined with the number of votes that went to Republicans during the presidential elections, shows that most people hold conservative points of view. The household income was low relative to the price of real estate, which suggests that most of the residents are lower middle class.
2.4 The Columbus Condition
There is a positive chain reaction that can follow an increase in income for the residents of Columbus. This reaction can improve the education and diversity that the city lacks. Increasing education opportunities by proposing more classes, schools, and informal education will result in a more cultured and conscious people. Residents with higher education can trigger new job opportunities and have a better chance of getting a good job at the Cummins Company. This will undoubtedly result in increased income. That income can affect the city in various ways, including the improvement of its educational system. Enrollment at private schools is minimal, which may be caused by low income; with greater income, parents will have the chance to enroll their children in better schools. Improving urban circulation is another aspect the city can benefit from. Instead of building new buildings, one can spread out and disperse programs into existing buildings in the city to enliven underutilized pockets, maximizing the available programs rather than proposing entirely new ones. Making smart design choices and solutions can decrease a building's energy usage, as well as serve as a form of informal education for residents and visitors.
Fig 2.5.1 - Analyzes the current condition of Columbus and how things affect each other.
Conclusion

To conclude, this investigation has uncovered many cultural and social imbalances that exist within the city's structure. The residents are proud of their city, of that there is no doubt, and much of that pride stems from the architectural monuments erected within it. However, as evidenced in the preceding pages, their culture is unstable and not sustainable. The continuing imbalance of education, catalyzed by the lack of informal educational institutions and infrastructure that promote a mixing of educational classes, is not a sign of a culturally and economically thriving city. Many of these issues are inherently linked, and through attempting to solve one, all the others will receive a residual effect. In the case of museum design, an approach toward educational promotion seems to be the most logical path toward cultural stabilization.
CHAPTER 2 - Demographic Analysis of Columbus: WORKS CITED
Ahmed Al Monsouri, Alex Olevitch, Bec Ribeiro
Development Concepts Inc. Downtown Columbus Strategic Planning. Strategic Plan. Columbus: City of Columbus, 2005.
Welcome To Columbus Indiana. 1 1 2013. 20 9 2013 <>.
Bartholomew County, Indiana. 10 1 2012. 1 10 2013 < profiles.asp?scope_choice=a&county_changer=18005&button1=Get+Profile&id=2&page_path=Area+Profiles&pah_id=11&panel_number=1>
Columbus Indiana Population and Demographics Resources. 1 January 2013. 20 September 2013>
Columbus, Indiana. 2012. 20 9 2013 <>.
Population in Columbus. 2013 <>.
Columbus, Indiana. 10 1 2013. 1 10 2013 <. html>
CHAPTER 3: Trends and Forecasts: Urban Design, Landscape, and Architecture
David Greco, Jackie Katcher, and Sarah Ward
3.0 Introduction
3.1 Urban Planning 67
  3.1.1 The City of Columbus Comprehensive Plan
  3.1.2 The Neighborhoods of Columbus
  3.1.3 Ongoing Projects
3.2 Landscape 87
  3.2.1 Evolution of the Landscape
  3.2.2 Flood Plains
  3.2.3 Standards and Ordinances
  3.2.4 Corridors of Columbus
3.3 Architecture 95
  3.3.1 Modern Architecture as a Source of Civic Pride
  3.3.2 Columbus Architecture in the 1800s
  3.3.3 Modernism Comes to Columbus
  3.3.4 Cummins Foundation Architecture Program Founded - 1957
  3.3.5 I.M. Pei Builds Bartholomew County Public Library - 1969
  3.3.6 The Era of Columbus Additions and Renovations
  3.3.7 Columbus's Petition for Green
3.4 Conclusion + Design Recommendations 103
3.5 References + Works Cited 104
Trends and Forecasts
Introduction

This chapter is intended to present the trends and progress of Columbus as a whole. In order to compile the information read here, our team has researched three main subject areas: Urban Design, Landscape Design, and Architecture. The underlying premise of the city's attitude toward these subjects is sustainability. In order to allow the community to function as a whole, the private and public sectors work together to provide the residents with tools for success. A major factor in the forming of the city's infrastructure is the Cummins Corporation, a service engine and technology company headquartered in Columbus. This privately owned corporation has funded a large number of buildings in the city of Columbus, enabling architectural talent to build notable works of architecture there. Despite the city's rank as the sixth most notable city for architecture in the United States, our database research, internet research, books, and a visit to the city itself have led us to a few opposing conclusions. One is that the city of Columbus lacks public open space. Its downtown is filled with inefficient single-level parking lots, and its outer limits are devoid of activity. Although the attempt to provide public space through Mill Race Park was successful in turning around its neighborhood, it was not able to connect the city as a whole. Another is that the city has not significantly contributed to its collection of buildings since its building boom of the late 60s through the early 80s. Overall, the city still has much to improve upon, but it has enormous potential due to its rich infrastructure.
3.1 Urban Planning
3.1.1 The City of Columbus Comprehensive Plan

The city of Columbus has changed considerably throughout the years in order to meet the needs of the consistently growing population and the changing world around it. Because of these changes, Columbus has produced its own comprehensive plan, which addresses multiple elements of city planning. Each element of the plan goes into detail by highlighting the planning principles, the issues that need to be resolved, and proposed solutions for further development of the city. The current comprehensive plan contains six separate elements that were adopted over time, starting with the Goals and Policies element in 1999. The six elements are as follows:
1. Goals and Policies
2. Land Use Plan
3. Thoroughfare Plan
4. Bicycle and Pedestrian Plan
5. Downtown Plan
6. Central Avenue Corridor Plan
By taking a look at the city's comprehensive plan, both current and past, one can understand how the city has addressed its changing needs and how these acts have begun to shape it into what we see today. This section will further discuss some of these elements within the plan and how past, current, and future plans affect what is being built today. To keep this section brief, only the major elements of each plan that bear on how the city planning department operates will be discussed. This is vital in order to understand how planning has evolved within Columbus and how this has affected what has been built within the city.
Fig. 1.1 - Land Use/District Map of Columbus in 1949.
Goals and Policies

The Goals and Policies element is divided into two sections: Part 1 is a statement of the community values, and Part 2 is a detailing of the city's policies for further redevelopment. The community's values are listed in eight categories: small-city atmosphere; farmland, open space, and recreation areas; environmental quality; community appearance; economic vitality and diversity; accessibility; streets and utilities; and intergovernmental cooperation. Understanding the community's values and their intent for planning within the city is critical for designing to meet the needs of the people of Columbus. Without this, the design lacks cohesiveness and connectivity at the greater scale of the city. The Goals and Policies themselves are also listed in ten categories: development patterns, environment, parks and recreation, housing, commercial development, transportation and streets, drainage and stormwater, utilities, public facilities, and economic development. Each category lists specific goals and policies that correspond with its topic. These are intended to help understand what the city is striving toward and how it wants to go about redevelopment.
Land Use Plan
Fig. 1.2 - Currently adopted Land Use Plan. This map shows where Columbus is planning to change in terms of expansion of land use.
The Land Use Plan is the second element of the comprehensive plan and was adopted to promote the community values and further the goals and policies by establishing land use principles for Columbus. The current future land use map (see Fig. 1.2) is not meant to be a zoning map, but a map that is flexible and that can be used as a guideline. The categories of future land use are as follows: agriculture, mixed use, residential, estate/cluster residential, industrial, commercial, floodway/sensitive area, and special use (airport). By creating these land use categories, the city can understand how each area of Columbus is being utilized and what further redevelopment might need to take place in those areas.
Thoroughfare Plan

The Thoroughfare Plan element is used as a guide to understanding and anticipating the future transportation needs within the city, and it responds to those needs by identifying conceptual options for a well-coordinated, efficient, and effective street network. This element works in conjunction with the Bicycle and Pedestrian Plan and Land Use Plan to create an overall well-designed system of connections throughout the city. The plan comprises two components: the Thoroughfare Plan policies and the Thoroughfare Plan map. The policies are intended to describe the overall philosophy of the city of Columbus in regard to the transportation network within its jurisdiction. The map is the application of those policies in a graphic format, used to understand the opportunities for future growth and development. Another important aspect of creating a well-designed transportation network is understanding the traffic patterns of Columbus and how those have changed over the years. The growing population and the expansion of the city's land use have caused some streets to become overwhelmed with traffic. This means that the city needs to rethink the overall thoroughfare plan and how to create connections within the city that are safe not only for drivers, but for bicyclists and pedestrians as well.
Bicycle and Pedestrian Plan

A newer element of the comprehensive plan is the Bicycle and Pedestrian Plan, which was first adopted in 2010. The city conducted a public survey in 2008, which gave it a look at the types of transportation people were primarily using and where most of them lived and worked. After conducting this survey, the city had a better understanding of the need to improve the systems for pedestrian and bicycle traffic to make them more enjoyable and safe to use, while at the same time discouraging the reliance on vehicles for transportation. Also, because the obesity rate has been increasing in recent years, there is an even greater concern for the city planning department to develop a better trail system for both pedestrians and bicyclists, which will encourage them to be healthier and more active.
Fig. 1.3 - Bicycle and Pedestrian Systems Plan Map shows how all the trails will connect within the city and to their specific destinations.
Downtown Plan

The purpose of the Downtown Plan element is to establish a framework and direction to create an active and vibrant downtown that is able to support significant development investment and can improve economic vitality for the overall city. The five major focuses of this plan are to improve amateur sports and recreation, learning and culture, dining and entertainment, living, and shopping. In order to achieve these ideals, the plan involves conducting a market analysis which assesses the economic conditions of the area, establishing the framework for future growth and development plans, identifying potential redevelopment opportunities and projects to activate the downtown, and preparing a development strategy for downtown investment. Altogether, this gives the city a redevelopment strategy with economic, physical, and organizational development recommendations to further improve and activate the downtown area. At this point, it is difficult to know how effective this plan will be because it is still in its early stages of being implemented. However, the plan definitely has potential, and the process that the city has gone through to develop it is logical and consistent with the community's values for the area. It is also obvious that the city is well aware of the lack of connectivity in the overall city fabric and how crucial it is for the downtown area to bring everyone together.
Central Avenue Corridor Plan

Central Avenue is a prominent transportation corridor featuring a mix of land uses that has developed and evolved over time. It is anchored by the Columbus Municipal Airport to the north and Cummins Plant 1 to the south and connects US 31/National Road to State Road 46. This connecting thoroughfare hosts a steady volume of vehicular traffic while accommodating single-family residential homes, national retailers, vacant and operational industries, and civic institutions, among other types of land uses. Central Avenue's role as a transportation thoroughfare, commercial corridor, and residential street has resulted in conflicting land uses, traffic inefficiencies, and unsafe and undesirable pedestrian environments.
Fig. 1.4 - The Downtown Development Plan shows the strategy that the city planning department wants to implement and how the downtown districts will connect with one another.
The Central Avenue Corridor Plan is also a relatively new element to the comprehensive plan being that it was first adopted in 2010. This plan provides a vision and strategy for Central Avenue between Rocky Ford Road (northern boundary) and State Street (southern boundary) that is inclusive of the road itself, the uses and neighborhoods along and adjacent to the corridor, areas of influence, and intersecting streets. A corridor study was conducted in 2007 which involved collecting relevant data, identifying primary issues and concerns, and recommending strategies to improve the corridor’s traffic flow. This plan outlines a framework for future development, redevelopment, and public space improvement projects while exploring the recommendations established in the 2007 study.
3.1.2 The Neighborhoods of Columbus

The difficulty of understanding Columbus and its urban fabric is due to the lack of cohesiveness and connectivity within the city. As a result, there are segregated neighborhoods that can be classified for their special character and noteworthy features. This section will describe those neighborhoods in detail, making them easier to identify and to understand the feel of each unique environment. From the Land Use Plan element of the City of Columbus Comprehensive Plan, the planning department has classified these parts of the city into 13 “character areas”:
• Downtown Columbus
• Columbus Central Neighborhoods
• East Columbus
• National Road Commercial Corridor
• Western Rocky Ford Neighborhoods
• East 25th Street Neighborhoods
• Columbus Municipal Airport
• U.S. 31 / Indianapolis Road Area
• Western Gateway Area
• Western Hills
• Woodside / Walesboro Area
• State Road 11 South
• Eastern Rural Area
Fig. 1.5 - Map of Columbus neighborhoods.
Downtown Columbus

This district of Columbus has always received special attention from the planning department and from businesses and institutions within Columbus because it is the heart of the city and requires constant improvement to make it a quality cultural center. As a result, it is an attractive downtown setting with a variety of commercial, residential, institutional, and recreational uses. It is also the center of financial and governmental activity in the community. This area is one of the more sustainable areas of the entire city because of its mixed-use approach and its focus on creating a walkable and livable community. The downtown district is also home to many of the city's most architecturally significant buildings, as well as Mill Race Park, which is one of the best examples of urban planning, landscape design, and sustainability all coming together. The landscaping and lighting of the downtown area are generally attractive and suit the area well. One of the many projects to help enhance and beautify the downtown area was a streetscape project in the 1980s, which provided new sidewalks and trees, streetlights, and benches. As part of this project, the traffic pattern was also revised. This allowed the downtown traffic to flow more smoothly, slowly, and safely. Downtown Columbus is served by a system of alleys that provide secondary access and service areas. Three of the east-west alleys have been reconstructed to provide attractive pedestrian access to Washington Street.
Fig. 1.6 - Map showing the Downtown Columbus neighborhood.
Of course, the downtown still has room for improvement. One of the major issues that Columbus has seen in recent years is the general population moving out of the downtown area and into the suburban-like neighborhoods to the north and east of Columbus. This has caused the city to rethink its overall downtown plan. It is important for the downtown to preserve the historical landmarks within the area while at the same time creating a better connectivity between each district and landmark.
Columbus Central Neighborhoods The Columbus Central Neighborhoods district is adjacent to the Downtown district, surrounding it on the north and east sides. It is a mixed-use area, although residential development predominates. There are commercial, industrial, and recreational uses interwoven with the residential areas, as is typical of older neighborhoods. This type of setting offers convenient access to shopping and services. This area has many mature trees, and most properties are attractively landscaped. For the most part, the area is well maintained. This area contains the oldest residential neighborhoods of Columbus with a variety of housing prices and types. The development pattern is traditional, with houses on small lots, small front yards set relatively close to the street, and streets in a grid pattern. This area also has some great examples of adaptive reuse and infill development projects. The Columbus Central Neighborhoods area is also home to the Cummins Engine Company and Reeves Pulley, both of which were founded here. The industrial buildings were among the earliest structures built in this area. Cummins Engine Company maintains its main engine plant in this area of town, as do other companies, such as Arvin Industries, Reliance Electric, Golden Castings, and Ventra. As for institutional buildings within the area, the Cleo Rogers Library and the Columbus Visitors Center are two of the more important components of the 5th Street Corridor. There are also some very well-known churches in the area, such as St. Peter's, First Christian, First United Methodist, and First Presbyterian. The neighborhood is also home to several public schools, such as Central and Northside Middle Schools, North High School, and Lincoln and Schmitt Elementary Schools.
Fig. 1.7 - Map showing the Columbus Central neighborhoods.
East Columbus An unincorporated area until the late 1940s, East Columbus remains a relatively cohesive mixed use area. It contains residential neighborhoods, industrial areas, commercial development, parks, and institutional uses. East Columbus, perhaps more than any other area of Columbus, has a sociopolitical cohesiveness. Residents and businesses alike identify with the area and its image as a neighborhood of hardworking people with solid Midwestern values. During the 1970s, the City of Columbus made part of East Columbus a priority for neighborhood improvements by applying for and receiving federal Community Development Block Grant Funds for street and drainage improvements in the area south of State Street. While parts of East Columbus are old and in disrepair, the area also contains several of the city’s architecturally significant buildings, including East High School, Fodrea School, McDowell School, Fire Station 3, the Irwin Union Bank, and the Foundation for Youth. One of the biggest issues that East Columbus is facing right now is the traffic congestion and safety issues on State Street, which is East Columbus’ own “main street.” There is a strong need for revitalization within this area so that it is more pedestrian-friendly. There is also a lack of identity and connection between both sides of the street due to their repetitive scale, which makes this corridor less appealing to the public.
Fig. 1.8 - Map showing the East Columbus neighborhood.
There is also a lack of code enforcement, which has made unsafe buildings and junk cars more common within the district, along with inaccurate flood hazard area maps, which have contributed to flood damage in some areas. This area also contains slum and blight conditions that reflect the economic status of some of the people who live in this neighborhood.
National Road Commercial Corridor This area is the primary commercial corridor in Columbus. Commercial development in this corridor has expanded over the past four decades, and the changes in retail can be seen in the variety of types of centers that are located here. The area contains several strip malls, such as Columbus Center, Northern Village, Clifty Crossing and the 25th Street Shopping Center. Several of these strip malls contain big-box retail development, such as Wal-Mart, Office Max, and Target. Additional big-box development is planned for the property at 10th Street and National Road. Also in the area is the Fair Oaks Mall, an enclosed shopping center. A wide variety of goods and services are available in the National Road Commercial Corridor. The character of this area is typical of commercial corridors in many suburban communities. Because much of the commercial development in this area abuts residential neighborhoods, buffering has been an important land use issue. Strip malls typically are developed with buildings set far back from the street, with expansive parking areas in the front. One result of this type of layout is that the buildings back up and are relatively close to adjacent neighborhoods. Neighboring residents find themselves looking at loading docks, refuse bins, and HVAC equipment. In some of the older centers, such as the 25th Street Shopping Center, there is little or no screening. Planning process participants cited the Fair Oaks Mall as an example of a well-designed center. They noted the extensive landscaping and signs that are informative without being intrusive or excessive. Traffic congestion is also a problem in this area, especially during peak times. Pedestrian movement also needs to be improved, since sidewalks are few and linkages are poor. This area in general lacks character because of the generic building designs of the big-box stores, since they are owned by national chains.
This lack of uniqueness makes this area less memorable and less cohesive with the rest of Columbus.
Fig. 1.9 - Map showing the National Road Commercial Corridor neighborhood.
Western Rocky Ford Neighborhoods In this area, adjacent to the Columbus Municipal Airport, single-family residential development is the dominant land use. The area is suburban in character and is well maintained. The neighborhoods are of the typical subdivision design of the past three decades with wide, curvilinear streets, cul-de-sacs, and relatively large front yards. Other neighborhoods have more of a traditional design with grid or modified grid street patterns. The area has a wider range of housing prices and types than found in most areas of Columbus with neighborhoods ranging from high-priced homes to mobile home parks. This shows the diversity of income of the people who live in this neighborhood.
Fig. 1.10 - Map showing the Western Rocky Ford neighborhoods.
Because this area is primarily residential, there are very few opportunities to walk or bike to work, which puts a strong reliance on cars as a transportation method. By expanding the city outwards into these suburb-like neighborhoods, the city becomes increasingly more spread out and less cohesive as a community.
East 25th Street Neighborhoods This area is predominantly residential. There is commercial and office development, primarily along 25th Street. The land is relatively flat, making it attractive for economical building sites. There are soil types in the area that are subject to flooding or are unstable, presenting challenges for building construction. The area is suburban in character. Drainage concerns have limited some development in this area. A county regulated drain, Sloan's Branch, is the receiving stream for much of the runoff from development in this area. The County Drainage Board has received complaints that the stormwater discharged into this drain exceeds capacity during and after heavy rains. The residential neighborhoods consist mostly of single-family subdivisions with some apartment complexes for rental options. The neighborhoods within this area are stable and well-maintained, unlike parts of the East Columbus neighborhood. Most of the commercial development is located along 25th Street and includes offices, retail, and services. There are also two elementary schools, Richards and Smith, and several churches within the area, such as the First Baptist Church. Fire Station 4 and the Par 3 Clubhouse are among the more famous architecture of the area as well. Farming also continues to be active within this area, despite the proximity of this farmland to existing development and to city infrastructure and services.
Fig. 1.11 - Map showing the East 25th Street neighborhoods.
Columbus Municipal Airport
The Columbus Municipal Airport served as a military base during World War II. The U.S. government declared the property to be excess and deeded it to the City of Columbus Board of Aviation Commissioners, with the proviso that in the event of a national emergency, the federal government may reclaim the land. The city has long seen this property as an economic development opportunity, especially as the location for a high-tech industrial park. Property at the airport is available for lease, not purchase. The rent receipts have kept the airport self-supporting; local property taxes are not used to fund the operations there. This area contains the airport operations, including the terminal, tower, runways, and hangars. The property belonging to the Board of Aviation Commissioners encompasses an area much larger than needed for aviation use. Development on the airport property tends to be clustered according to the type of use: an educational complex, industrial development, recreational uses, health care uses, and some commercial development. This area is also home to the primary higher education complex in Columbus, the Indiana University-Purdue University at Columbus. The People Trail system serves this area, but there are no other pedestrian facilities.
Fig. 1.12 - Map showing the Columbus Municipal Airport neighborhood.
The airport has excellent access, via Central Avenue, Marr Road, River Road, and C.R. 500 N. Interior roads offer easy access to businesses and institutions. Once the location of dilapidated barracks, the area was diligently improved by the aviation board and its staff during the 1990s. Old buildings have been removed, and new, attractive buildings have been constructed. Planning process participants found that there is room for improvement, and that some unattractive metal buildings remain. The group also found that the airport is lacking in landscaping.
U.S. 31 / Indianapolis Road Area Much of this area consists of small-to-medium-sized farms, with scattered subdivisions. Along the highway corridors, there are denser subdivisions, manufactured home parks, and commercial and industrial uses. The land is generally flat and is characterized by extensive flood plain areas. The area has excellent highway and rail access. There are several places along the highway where drivers have views of the river and flood plain areas. Development proposals in this area have resulted in conflicts regarding utility service. Water service to part of the area is available from Eastern Bartholomew Water Corporation, as is water from Columbus City Utilities. Sewer service is provided by Driftwood Utilities and by the City of Columbus. The portions of this area that are outside the city limits are in the path of city growth. The water supply system owned by Eastern Bartholomew does not meet the standards of the Columbus Fire Department for firefighting purposes. Driftwood Utilities now has an agreement with the City of Columbus for the treatment of sewage, but direct connections to the city system for properties outside the city limits are permitted only under limited conditions. The existence of two utilities in the area results in inefficiency, as developers are required to install dual water systems: one for the potable water supply and another for firefighting. The potential for this area to be an attractive entrance corridor has not yet been realized. With the use of better lighting, better landscaping, more effective signs, and more definition, this area could be more inviting than it currently is. Pedestrian access is limited, with many areas lacking sidewalks, and traffic is also increasing, which gives rise to safety concerns and congestion. Some buildings in this area are poorly maintained and deteriorating.
Fig. 1.13 - Map showing the U.S. 31 / Indianapolis Road Area neighborhood.
Western Gateway Area This area is the primary entry corridor to Columbus and is predominantly commercial in character. The I-65/S.R. 46 interchange was rebuilt in the 1990s, with a signature red arched bridge as the distinguishing feature. The city has invested considerable resources in making this corridor an inviting entrance to Columbus. Because this area is a highway interchange, the types of commercial businesses are typical of such a corridor, such as fast food restaurants, motels, and automobile service areas. Even though it is mostly a commercial area, there are some urban-density residential developments with condominiums, single-family homes, and apartments located here. Fire Station 5 is also located in this area and is the principal institutional use of the district.
Fig. 1.14 - Map showing the Western Gateway Area neighborhood.
A significant portion of this land is located in a designated floodway, which means there is a great deal of open space. The lands subject to flooding will continue to be used for agricultural purposes. Once the People Trail system within this area is complete, it will connect Tipton Lakes and the commercial corridor to Mill Race Park and Downtown Columbus.
Western Hills Historically, the primary uses in this area have been agriculture, open space, and woodlands, but the area is now changing to rural residential and suburban uses. While much of the county is relatively flat, this area is characterized by rolling hills and woodlands. There are several highly desirable residential neighborhoods in this area. This area is convenient to shopping and services and provides a visual character not available in other parts of the county. The primary land use of this area is now residential development, consisting of single-family homes with well-landscaped yards. The Tipton Lakes Development, begun in the 1970s, is the largest residential area in the Western Hills area. Containing more than 1,200 acres, it is a planned community containing a variety of housing types and prices and offering amenities, such as walking trails, lakes, and a marina. There is little commercial development and hardly any industrial development, but in the future, the city is planning on utilizing more of the area for these purposes in addition to expanding the residential neighborhoods. Farming does continue in this neighborhood, but because of its closer proximity to the downtown area, some of this land may eventually be converted to residential and other uses. Because this neighborhood is more segregated from the rest of the city, it will be important for the city to serve this area with public transit more than it does currently in order to improve connectivity.
Fig. 1.15 - Map showing the Western Hills neighborhood.
Woodside / Walesboro Area This area is an employment center for the Columbus community. The architectural quality of the area is high, particularly in the industrial parks and the city's fire station. The area has excellent highway access to I-65, S.R. 11, S.R. 31, and S.R. 58. Much of the residential development in this area provides housing choices for the workers of the major employers so that there is close proximity and ease of access to work for the employees. The commercial development within this area serves the industrial parks and interstate traffic with gas stations, convenience stores, and fast food restaurants. Fire Station 6 and the county's new sanitary landfill are also located in this area. The landfill is intended to serve the city and county for its solid waste disposal needs for the next 50 years. Outside of these areas, particularly to the south, the dominant land use is agriculture.
Fig. 1.16 - Map showing the Woodside / Walesboro neighborhood.
To improve this area, the corridors could be better landscaped, the lighting could be improved so that the area reads more clearly as an entrance, and a pedestrian system should be developed further in this district.
State Road 11 South This area contains some residential development along with commercial and industrial development. The highway serves as a major southern entrance to Columbus. The East Fork of the White River forms the eastern boundary of this area, and several creeks also cross this area. As a result of these rivers and streams, much of the area is subject to flooding. There are areas of single-family homes and older mobile home parks built in this area, but housing in this area is generally not recommended due to flood hazards. A couple of schools and several churches are also located in this area, with limited commercial and industrial development. Because the area is in the floodplains, there is a great amount of open space, which means agricultural use is also very prominent in this area and will likely continue to be the major use of this land. In general, this area is unattractive and economically depressed. Many of the buildings are poorly maintained and deteriorating, landscaping is sparse and poorly designed, and the signs and billboards are not attractive. There are also traffic safety problems in this area, including excessive numbers of driveways, poorly defined driveways, and inadequate sight distance for drivers entering the highway. The highway also becomes flooded and impassable when there are heavy rains. A pedestrian system is also non-existent in this area, and there is a lack of proper signage directing drivers toward attractions within the city.
Fig. 1.17 - Map showing the State Road 11 South neighborhood.
Eastern Rural Area This area is predominantly agricultural, with a few subdivisions, scattered rural housing and some businesses. Four state highways enter the area. Although the Town of Clifford is located within this area, it is incorporated and not covered by this plan. The area is generally flat. The area also contains parts of the community’s river system, with Clifty Creek, the East Fork of the White River, Haw Creek and the Flat Rock River all partly located here. There are significant areas subject to flooding. Citizens working on the land use plan noted that there are many buildings in this area that are poorly maintained.
Fig. 1.18 - Map showing the Eastern Rural Area neighborhood.
Most housing in this area is located on farms or scattered along county roads. There are also a number of houses within the Sand Creek township that are historically significant and will continue to exist in this area. The strip residential developments and scattered subdivisions are an undesirable land use pattern for the city because scattered housing is isolated from shopping and services, is expensive to provide with services, and wastes land. There is limited commercial and industrial use in this area, but there is one school and several churches located here. Again, since this is a floodplain, there is a great deal of open space and predominant agricultural use that will likely continue for the next 20 years.
3.1.3 Ongoing Projects Currently, the planning department of Columbus is undertaking a couple of projects that are in various stages of completion. One of those projects is the State Street Corridor Plan, which has been studied and researched extensively; the other is a Columbus housing study. This section will further discuss these two current projects and what their outcomes for the future might be.
State Street Corridor Plan Historically, State Street has been the "main street" of East Columbus, but in recent decades, it has declined in its appeal. This is largely due to the increase in population and the commercial development within the neighborhood, which has led the transportation corridor to be more heavily trafficked than it was originally designed for. The planning department of Columbus is currently working on developing a complete land use and transportation plan for the State Street Corridor. The aim of the project is to identify community goals for the corridor and to improve the functionality, walkability, and economic vitality of the area. The city is also looking to enhance State Street as a gateway into Columbus so as to create an inviting entrance into the city. In order to do this, the planning department is working to formulate recommendations for street improvements that will create a pedestrian-, bicyclist-, and transit-friendly transportation corridor, develop long-term land use recommendations, identify redevelopment opportunities, and develop recommendations for streetscape and site design improvements. Once the project is completed, the intent is for these plans to be adopted as part of the City of Columbus Comprehensive Plan.
Fig. 1.19, 1.20 - The study area for the State Street Corridor Plan is highlighted on a map of Columbus; an example of an area along State Street showing the clear separation between the two sides of the street because of its large width.
Because this project is focused on creating a walkable and functional environment, it is clear that Columbus is working towards a more sustainable future by focusing on improving transportation within the city and discouraging individual car use. Also, creating a place where people can walk from store to store makes for a more social environment for the community and boosts the local economy. Just by improving the landscaping and the thoroughfare, the corridor will become much more pedestrian-friendly because it will be safer, more attractive, and more enjoyable for those walking from store to store. This will likely boost development within the area as it becomes more desirable for everyone at stake.
Columbus Housing Study Because the population is steadily increasing, the city of Columbus has initiated a study of the local housing market. The focus of the study is to compare the supply and demand for housing across the complete range of income groups and to identify any gaps between them.
Fig. 1.21 - A plan for revitalizing the State Street Corridor in East Columbus.
It is important for Columbus to conduct such a study in order to understand the critical needs of the public and to understand how creating more housing options for Columbus will affect the overall city fabric and the other elements of the comprehensive plan.
3.3 Architecture
3.3.1 Modern Architecture As a Source of Civic Pride Columbus, Indiana's architecture has fostered a sense of civic pride for at least 50 years. Columbus's leading employer, Cummins Engine, has played a large role in providing and maintaining the city's architecture. The company has brought in renowned architects and paid the design fees in order to make the city more attractive and foster this sense of community pride. One of their objectives in being conscious of their buildings is to sustain the economic prosperity of the area businesses and, as a result, their work force. Since Eliel Saarinen designed the First Christian Church in 1942, the company has helped to fund over 50 buildings. Today the company has not been as involved in the acquisition of new architectural talent, but has instead focused on other green initiatives. The map on the right shows the evolution of Columbus architecture over time. There has been a decline in the number of notable buildings built in the last 15 years.
1800-1850 1851-1900 1901-1950 1951-1960 1961-1970 1971-1980 1981-1990 1991-2000 2001-2013
Fig. 4.1 - Map of Columbus notable architecture color-coded according to year of construction.
3.3.2 Columbus Architecture in the 1800s
In the 1800s, Columbus was just establishing itself as a city. The Bartholomew County Courthouse, built in 1874, marked one of the first civic buildings of Columbus. The courthouse is a Second Empire style building, with its characteristic mansard roof, heavy cornice detailing, and grand entrance. The Columbus Bartholomew Consolidated School Corporation building also relies heavily on its rectilinear plan and grand entrance with a tower. However, this is not a Second Empire building, but a mix of several different historical building types. This shows the reliance of the community on the past, while having difficulty creating a style of its own.
Fig. 4.2 - Columbus Bartholomew County Courthouse, 1874
Fig. 4.3 - Columbus Bartholomew Consolidated School Corporation, 1896
3.3.3 Modernism Comes to Columbus - 1940s
The year 1942 marks the ground-breaking of the first modern building in Columbus. The simple geometry and flat surfaces of the exterior attracted much attention in the small town. Eliel Saarinen's tall tower and interest in irregularity in nature also attracted attention across the country, causing people to rethink how churches were built. The building set the precedent for future modern buildings in Columbus. Although it was not known then, the town's exposure to modernism would soon prove valuable to future buildings in Columbus.
Fig. 4.4 - First Christian Church Exterior, 1942
Fig. 4.5 - First Christian Church Sanctuary, 1942
3.3.4 Cummins Foundation Architecture Program Founded - 1957 When the Baby Boom hit and the need for public schools in Columbus increased, a new school was proposed. The funding for the school was under discussion when the Cummins Foundation decided it was in their best interest to help make the school happen. Without schools, the parents of these kids would not want to live in Columbus. Therefore, it was to Cummins' benefit to help make the idea of the school a reality. The architect Harry Weese was brought in from Chicago to design the building. It was important to Weese to keep the building low to the ground in order to keep it from overwhelming the residential buildings in the area. The building therefore was made unique in its hexagonal central form, which became a multipurpose room. This idea of a non-rectilinear structure was accepted by the community because it still respected and sustained the city's existing infrastructure. Due to Cummins' selection of Weese as the architect, modern architecture started to be seen as a way for the community to thrive.
Fig. 4.6 - Lillian C. Schmitt Elementary School, 1957
3.3.5 I. M. Pei Builds Bartholomew County Public Library-1969
In 1966, I.M. Pei was selected to design the Bartholomew County Public Library, a major community asset in Columbus. Even though the building was not part of the Cummins architecture program, it was partially funded by the Cummins Foundation, as well as by J. Irwin Miller. Miller wanted to make sure that the building corresponded to the neighboring Irwin House and First Christian Church, which he also funded. With the building of the library, downtown Columbus was beginning to take the shape it has currently. Once the downtown gained these prominent structures, the infrastructure of the community was set.
Fig. 4.7 - View of the Library Plaza, 1969
3.3.6 The Era of Columbus Additions and Renovations Lillian C. Schmitt Elementary School Addition By the 1980s and 1990s, the period of major development in Columbus's public buildings had already finished. The focus then shifted to improving the existing infrastructural elements. In the 1990s, the major renovation and addition phase of Columbus began. The first Cummins-funded public building was also one of the first buildings to have an addition built. In 1991, Leers Weinzapfel Associates designed an addition for the Lillian C. Schmitt Elementary School. The new addition was built with efficiency in mind. The classroom space was tripled while covering only one-third of the site space that the original building occupies. The main challenge Columbus currently faces is how to adapt to a growing community's needs without ruining the existing infrastructure.
Fig. 4.8 - The Lillian C. Schmitt Elementary School Addition, 1995
Fig. 4.8 - View of where the original school meets the new addition
Columbus Area Visitor's Center Renovation - 1995 The Visitor's Center building was built in 1864 as the home of John V. Storey. As the community began to grow and attract tourists, the need arose for a visitor's center. In 1973, the city took the building on as an adaptive use project. This worked well for a number of years, but in 1995, the Columbus Area Visitor's Center was once again renovated. This was considered "an expansion," creating more office space and more space for the visitor's shop.
Fig. 4.9 - View of the Original Visitor's Center looking towards the renovated side
Fig. 4.10 Window Renovation with featured Yellow Neon Chandelier
Columbus is currently still in the era of renovations. The city's infrastructure has long since been established, so renovations and adaptive use are the most sustainable ways to keep Columbus's structures working for the community. The trend in architecture is to make as little impact on the environment as possible, and Columbus's government has recently turned to green building certifications to rate its buildings.
3.3.7 Columbus's Petition for Green Our built environment has enormous energy demands. Columbus, Indiana's Energy Matters Community Coalition focuses on reducing the community's environmental footprint. Their mission statement is to "reduce local contributions to global warming and to increase the health and prosperity of our communities through smart public and private energy choices (Energy Matters meeting, March 24, 2008)." In 2009, buildings accounted for 48 percent of the total energy consumption in the United States. This was followed by transportation at 27 percent, and industry at 25 percent. When just looking at electricity consumption, buildings play even more of a role. Building operations used 76 percent of the electricity in the United States in 2009. Industry followed with 23 percent, while transportation used only one percent of the total electricity consumption. When looking at data from 2011 by end-use sector, industrial leads in energy consumption. This means that the city of Columbus has the potential to use more energy than other cities its size, due to its high proportion of industrial jobs. The typical commercial or institutional building uses up to 500 kWhe/m2/yr. New buildings in the United States use between 200 and 500 kWhe/m2/yr. However, new building codes for low-energy buildings require amounts between 45 and 90 kWhe/m2/yr.
Fig. 4.5 - Energy Consumption Chart from the U.S. Energy Information Administration, 2011
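The energy-intensity ranges cited above can be read as simple thresholds. A minimal sketch in Python (the function name and category labels are illustrative, not from the source; only the numeric ranges come from the text):

```python
def intensity_category(kwhe_per_m2_yr: float) -> str:
    """Classify an annual energy-use intensity (kWhe/m2/yr)
    against the ranges quoted in the text: low-energy codes
    require 45-90, typical new U.S. buildings use 200-500, and
    typical commercial/institutional use runs up to 500."""
    if kwhe_per_m2_yr <= 90:
        return "low-energy code range"
    if kwhe_per_m2_yr < 200:
        return "between low-energy and typical new-building ranges"
    if kwhe_per_m2_yr <= 500:
        return "typical new-building range"
    return "above typical range"

print(intensity_category(60))   # low-energy code range
print(intensity_category(350))  # typical new-building range
```

A comparison like this makes the scale of the gap concrete: a building meeting the low-energy codes uses roughly one-fifth or less of the energy of a typical new building.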
In order to make sure that all buildings in Columbus are as energy efficient as possible, the Energy Matters Community Coalition is asking building owners and architects to question their current methods of operation. They also ask companies who are planning on acquiring additional buildings whether the building is already sustainable. The EMCC asks the question, "Is the building among the most energy efficient in the country?" They have stringent guidelines for assessing building energy performance that are based on the U.S. Environmental Protection Agency's Energy Star program. This system rates a building's energy performance from 1 to 100. A building that scores in the 75 to 100 range is considered among the top 25 percent of all buildings of the same type. If in that group, it can earn an Energy Star label. The program has a free online tracking tool that rates buildings. In 2006, over 27,000 buildings had already been assessed. Out of those, over 2,800 were awarded an Energy Star label. The EMCC asks all architects and engineers involved in new construction projects to set a goal for EPA energy performance.
Fig. 4.6 - Cozy Home Performance, 2009
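The label rule described here is a single cutoff on the 1-100 Energy Star score. A small sketch of that rule (the function name is illustrative; only the 75-point threshold comes from the text):

```python
def energy_star_label_eligible(score: int) -> bool:
    """Return True if a 1-100 Energy Star score falls in the
    75-100 range, i.e. the top 25 percent of peer buildings,
    which the text says may earn an Energy Star label."""
    if not 1 <= score <= 100:
        raise ValueError("Energy Star scores run from 1 to 100")
    return score >= 75

print(energy_star_label_eligible(82))  # True
print(energy_star_label_eligible(60))  # False
```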
Fig. 4.7 -Cozy Home Performance, 2009
Columbus Museum of Industrial Objects
Trends and Forecasts

Most green performance rating systems are too flexible and certify some buildings with poor energy performance as sustainable. The Energy Matters Community Coalition in Columbus is troubled by this and suggests that all buildings be reassessed for energy performance after they have been built and are actually operating. The EMCC is particularly interested in the performance of the building envelope, mechanical systems, lighting, and control systems. Buildings need to undergo energy performance assessment after they are built because buildings are sometimes not constructed according to the architect’s specifications. Even if the proper materials and components are used, they are sometimes not installed correctly. The EMCC recommends that extra funding be allotted for commissioning during the project. If architects supervise the construction of their buildings, the buildings are more likely to be built efficiently and to the original specifications. To help with the energy efficiency planning process, the Environmental Protection Agency has a Target Finder tool, an easy way to estimate the Energy Star 1-100 rating for a specific design project. The EMCC also recommends that investors in a green certified building research the specific criteria that the certification guarantees for energy efficiency.
Fig.4.4 - U.S. Green Building Council and New Buildings Institute’s statistics show that most LEED certified buildings do not hold up to their predicted energy savings, BuildingGreen.com.
In the United States, there are several different green building certification systems. The most recognized is the United States Green Building Council’s LEED certification; LEED stands for Leadership in Energy and Environmental Design. Two less recognized rating systems are Green Globes, which is run by the Green Building Initiative, and the Federal Sustainable Buildings Principles. Most green building certifications rate the same categories of sustainable or environmentally preferable construction, design, and building systems. The difference between the programs is how much weight they put on each category: one system may award more points for environmentally friendly construction methods, while another might rate more heavily on the mechanical and electrical systems in the design. Most systems give specific certifications based on the final points awarded. LEED awards different levels: Certified (40-49 points), Silver (50-59 points), Gold (60-79 points), and Platinum (80 points and above). Since Columbus is working towards more sustainable buildings, looking at LEED standards would be helpful when designing a museum.

Fig.4.4 - U.S. Green Building Council’s LEED certification point breakdown for housing.
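The LEED point thresholds listed above map mechanically to certification levels. A small sketch (the function name is illustrative, not part of any official USGBC tool):

```python
# Map a LEED point total to its certification level, using the thresholds
# quoted above: Certified 40-49, Silver 50-59, Gold 60-79, Platinum 80+.

def leed_level(points: int) -> str:
    if points >= 80:
        return "Platinum"
    if points >= 60:
        return "Gold"
    if points >= 50:
        return "Silver"
    if points >= 40:
        return "Certified"
    return "Not certified"

print(leed_level(55))  # Silver
print(leed_level(82))  # Platinum
```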
Arch 401 - Fall 2013
3.4 Conclusion and Design Recommendation

After investigating the trends in urban design, landscape design, and architecture in Columbus, we conclude that Columbus lacks open public space. The urban fabric is scattered and contributes to a lack of continuity throughout the city. If Columbus had better places to gather, the community would have a better opportunity to flourish. We recommend that parking systems be carefully considered in the treatment of the sites. There are many surface parking lots in the city which could potentially serve as public plazas. In order to make these spaces accessible to pedestrians, while still allowing for adequate parking, garages could be built. The study recommends that the design pay close attention to the existing infrastructure of the city and respond accordingly. The rich history of Columbus’s neighborhoods and their character should not only be preserved, but improved.
CHAPTER 3: WORKS CITED David Greco, Jackie Katcher, and Sarah Ward
“AIA Committee On Design.” AIA Committee On Design. N.p., n.d. Web. 10 Sept. 2013.
Berkey, Ricky. “52 Weeks of Columbus, Indiana.” 52 Weeks of Columbus Indiana. N.p., n.d. Web. 25 Sept. 2013.
“Central Avenue Corridor Plan.” City of Columbus, Indiana. N.p., 31 Jul 2011. Web. 9 Sept. 2013. <http:// showMeta/0/>.
“City of Columbus Indiana.” Flood Hazard Information. N.p., n.d. Web. 10 Sept. 2013.
“City of Columbus, Indiana Thoroughfare Plan.” City of Columbus, Indiana. N.p., 10 Nov. 2010. Web. 9 Sept. 2013. <.>.
“Historic Styles/ Second Empire 1855-1885.” Second Empire Architecture Facts and History. N.p., n.d. Web. 25 Sept. 2013.
“National Register of Historical Places- INDIANA (IN), Bartholomew County.” N.p., n.d. Web. 30 Sept. 2013.
Olsen, Scott. “Green building movement picking up steam in Indiana: more than 100 LEED projects in the pipeline statewide.” Indianapolis Business Journal 15 Sept. 2008: 22+. Business Insights: Essentials. Web. 9 Sept. 2013.
“Ongoing Projects.” City of Columbus, Indiana. N.p., n.d. Web. 9 Sept. 2013. < planning/projects/>.
“Redirect Notice.” Redirect Notice. N.p., n.d. Web. 28 Sept. 2013.
“City of Columbus Indiana.” Planning. N.p., n.d. Web. 10 Sept. 2013.
“Tclf.org Website Analysis.” Tclf.org. N.p., n.d. Web. 20 Sept. 2013.
“Columbus, Indiana Bicycle & Pedestrian Plan.” City of Columbus, Indiana. N.p., 12 May 2010. Web. 9 Sept. 2013.<>.
“Who’s Coming, Going, and Moving Up.” Contract 29 Aug. 2008. Business Insights: Essentials. Web. 9 Sept. 2013.
“Columbus, Indiana Comprehensive Plan: Land Use Plan Element.” City of Columbus, Indiana. N.p., 5 Jun. 2002. Web. 9 Sept. 2013. <>.
“Columbus, Indiana State Street Corridor Revitalization Plan.” City of Columbus, Indiana. N.p., Web. 9 Sept. 2013. <.
“Community Branding Helps Columbus Attain Economic Development Superlative.” PRWeb Newswire. 14 July 2012. Business Insights: Essentials. Web. 20 Aug. 2013.
“Completed Studies.” City of Columbus, Indiana. N.p., n.d. Web. 9 Sept. 2013. < planning/studies/>.
“Downtown Columbus: Downtown Strategic Plan.” City of Columbus, Indiana. N.p., August 2002. Web. 9 Sept. 2013. <>.
Giovannini, Joseph. “Mill Race Park Structures, Columbus, Indiana.” Architecture Aug. 1995: 80+. Business Insights: Essentials. Web. 9 Sept. 2013.
CHAPTER 4: Code Compliance and Guidelines for Museums Yan Chen, Jessalyn Lafrenz, Deep Shrestha
4.1 Introduction 108
4.2 Columbus Codes 109
 4.2.1 Zoning
 4.2.2 Landscaping Requirements
 4.2.3 Building Restrictions
 4.2.4 Bicycle Parking Requirements
 4.2.5 Vehicle Parking Requirements
4.3 Lighting Requirements 112
 4.3.1 Three Categories of Lighting
 4.3.2 Lighting Techniques
4.4 Temperature + Humidity Control 114
 4.4.1 Effects on Artwork
 4.4.2 Ideal Temperature Conditions
 4.4.3 Relative Humidity of Interiors
 4.4.4 Ideal Humidity
4.5 Accessibility 118
 4.5.1 Accessible Parking Location
 4.5.2 Amount of Accessible Parking
 4.5.3 Multiple Parking Lots
 4.5.4 Alternate Van Accessible Parking Space Design
 4.5.5 Van Accessible Parking Space Design
 4.5.6 Ramps
 4.5.7 Egress
 4.5.8 Egress Elevators
 4.5.9 Route and Space Clearances
 4.5.10 Bathroom Location and Design Requirements
 4.5.11 Display Recommendations
4.6 Egress 130
 4.6.1 Corridor
 4.6.2 Egress Windows
 4.6.3 Loading Dock
4.7 Design Recommendations + Conclusions 133
4.8 References + Works Cited 134
4.1 Introduction

In the interest of filtering information to the most necessary considerations for museum design in Columbus, Indiana, this section discusses crucial code requirements and the implications they have on design. Though this is not an exhaustive list of every building code, it addresses key features necessary for creating a successful, functional design. The topics addressed include zoning and building laws as required by the city of Columbus, general building classification codes, and lighting, humidity and temperature control, and accessibility as they relate specifically to museums. Through careful analysis, the codes presented are a synthesis of some of the most important regulations as they pertain to museums. In the exploration of Columbus city codes and International Building Codes, the information has been condensed and diagrammed to show how the code should be interpreted. This section should be used as a guideline to help inform design decisions that need to be rooted in legal compliance.
4.2 Columbus Codes

Zoning legend for Fig 4.2.1: Residential, Commercial, Downtown
4.2.1 Zoning The main site is within one half-block of a residential neighborhood so special requirements are outlined that limit building heights and give other building restrictions to limit negative impact on the residents of the area. The site is deemed a Public/Semi-Public or Commercial Downtown space, which have varying requirements. This presents an opportunity to manipulate code to fit program depending on the distinction. It is suggested that the distinction chosen is the one that best fits the surrounding city fabric (as sites can vary throughout the city of Columbus). This classification determines required parking, overall setback rules, and overall site usage restrictions and guidelines.
Fig 4.2.1 Diagram of Zoning Relationships as it relates to the site
4.2.2 Landscaping Requirements

Landscaping requirements are met through a point system; each zoning designation must meet a minimum number of points to comply with the City of Columbus code. There are 5 areas that must be landscaped within the lot.

Area 1 - Between the parking lot and the public street frontage. The required setback for parking areas shall be landscaped. See chart for planting guidelines.

Area 2 - Parking lot interior. Interior landscape areas must exist in parking lots of 25 or more spaces. See chart for planting guidelines.

Area 3 - Front setback. See Landscaping Points Requirements Table. Must be landscaped based on linear footage of lot frontage on adjacent streets/roads.
Fig.4.2.2 Reference for landscaping requirements for a museum campus.
Area 4 - Lot interior. Landscaping must achieve the minimum number of points required in the table. The calculation is based on the linear footage of the building perimeter; accessory buildings are excluded from this calculation. 25% of the required Area 4 plantings must be within 15 feet of the primary structure.

Area 5 - Freestanding signs exceeding 6 feet. There must be landscaping in the area radiating at least 5 feet from the base of the sign. This does not count toward the total minimum landscaping points.
Fig 4.2.3 Diagram of Designated Landscaping Areas as outlined in the text
4.2.3 Building Restrictions

The maximum height in the Commercial Downtown zone is 125 feet for the primary structure, with exceptions; accessory structures are limited to 35 feet. Exceptions include a 60 foot limit on the Washington Street frontage for half a block, and a 50 foot limit within one half block of any single family residential zoning district. There is no rear or side setback for this zone, and the front setback has a 0 foot build-to designation.

For Public/Semi-Public Facilities zoning, there is a minimum lot width and frontage of 50 feet, and the maximum lot coverage is 65%. The minimum side and rear setbacks are 10 feet from the lot boundary for both primary and accessory structures, and there is a limit of one primary structure per lot. The maximum height for the primary structure is 45 feet, with accessory structures limited to 25 feet. The front setback is determined by the road type at the front face of the lot.

Minimum Front Setback
• Arterial Road: 50 feet
• Arterial Street: 10 feet
• Collector Road: 35 feet
• Collector Street: 10 feet
• Local Road: 25 feet
• Local Street: 10 feet

General exceptions to the height limits include steeples, bell towers, spires, belfries, cupolas, and industry-related storage tanks, smokestacks, and mechanical equipment. These exceptions may at most double the maximum allowable height. Necessary building equipment may exceed the height requirements by a maximum of 10 feet. This includes, but is not necessarily limited to: (9-2)
• Utility stations and related facilities
• Water tanks
• Chimneys
• Fire towers
• Stair towers
• Stage bulkheads
• Elevator bulkheads
Fig 4.2.4 Max Site coverage and 10 foot setback illustration for site restrictions
Fig 4.2.5 Table of Bicycle Rack Quantity Requirements-dependent upon the number of vehicle parking spaces
4.2.4 Bicycle Parking Requirements

All facilities must have parking for bicycles. Bike spaces are to be provided based on the number of vehicle parking spaces. Bicycle racks must support the bike upright and must allow the frame and at least one wheel to be locked; refer to the graphic for examples of approved systems. The racks must be placed in a high visibility location and must provide safe and convenient access to the main entrance. In alignment with many other facilities throughout the city of Columbus, an acceptable bike rack solution would be to utilize the “C” logo, which denotes a city attraction and has been used to promote connectivity between those attractions.
Fig 4.2.6 Examples of allowed and disallowed bike racks, including the City of Columbus custom bike rack design
Fig 4.2.7 Requirements for quantity of parking spaces
4.2.5 Vehicle Parking Requirements
Off street parking must be provided and located on the same property as the structure it serves. All commercial and public/semi-public designations may count 20% of any public spaces within 300 feet of the property toward meeting the minimum number of required spaces. Refer to Fig 4.2.8 for general parking space dimensions. Barrier free handicap parking spaces must be marked and provided in all parking lots. In parking lots with 10 spaces or fewer, the required barrier free parking spaces are in addition to the minimum parking spaces required. For parking lots with more than 10 spaces, the required accessible parking spaces count toward meeting the minimum requirement.
Fig 4.2.8 Design dimensions and arrangement options for vehicle parking spaces
4.3 Lighting Requirements

4.3.1 Three Categories of Museum Lighting

Museum lighting design and exposure levels are to be based on conservation strategies. These strategies fall into one of three categories of lighting.
Category A - Highly susceptible to light damage
Do not use natural lighting; use low watt fluorescent bulbs or LED alternatives.

Category B - Moderately susceptible to light damage
Use a mix of natural light and manufactured light, as long as lighting conditions can be completely controlled for visual and aesthetic comfort. Only a narrow range of light level changes is acceptable; an active lighting system override is required.

Category C - Not susceptible to light damage
Use of natural daylight is allowed, as it promotes true color rendering. Comfort considerations trump conservation.

Fig.4.3.1 - Track Lighting and Tungsten Halogen demonstration
4.3.2 Lighting Techniques

Lighting sources need to be closely considered in order to maintain visual clarity, comfort, and aesthetic organization. The proper light source determines accurate color rendering and appearance; typical lighting for high color rendering includes tungsten halogen. Light is most effectively delivered through reflectors, lenses, or filters. Each of these devices can be used to limit the exposure of direct light onto Category A exhibitions while providing excellent visual access. Carefully controlling the amount of daylight admitted maintains general comfort levels for users while creating a safe environment for curated objects. For Category B and C exhibitions, reflectors are an excellent solution for diffusing direct daylight in exhibit spaces that admit it.
Fig. 4.3.2 - Standard lighting diagram for most galleries
4.4 Temperature and Humidity

4.4.1 Effects on Artwork

Temperature affects museum collections in a variety of ways. At higher temperatures, chemical reactions speed up; for example, high temperature leads to the increased deterioration of cellulose nitrate film, which, if undetected, can lead to a fire. As a rule of thumb, most chemical reactions double in rate with each increase of 10°C (18°F). Biological activity also increases at warmer temperatures: insects eat more and breed faster, and mold grows faster within certain temperature ranges. At high temperatures materials can soften. Wax may sag or collect dust more easily on soft surfaces, adhesives can fail, and lacquers and magnetic tape may become sticky.
Fig. 4.4.1- An example of a painting exposed to high temperatures, resulting in cracking and deformation.
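The doubling rule of thumb above can be written as a Q10 relation (a standard approximation; the exact factor varies by material, so treat this as a sketch):

```python
# Q10 rule of thumb: deterioration rate roughly doubles for each 10 degree C
# rise in temperature, expressed relative to a baseline rate of 1.0.

def relative_reaction_rate(delta_t_celsius: float, q10: float = 2.0) -> float:
    """Rate multiplier after a temperature change of delta_t_celsius."""
    return q10 ** (delta_t_celsius / 10.0)

print(relative_reaction_rate(10))   # 2.0: 10 C warmer, twice as fast
print(relative_reaction_rate(-10))  # 0.5: 10 C cooler, half as fast
```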
4.4.2 Ideal Temperature Conditions

In exhibit, storage, and research spaces, where the comfort of people is a factor, the recommended temperature is 18-20°C (64-68°F). Temperature should not exceed 24°C (75°F), and temperatures should be kept as level as possible. In areas where the comfort of people is not a concern, temperature can be kept at much lower levels, but above freezing. This is discussed in more detail in the next section.

Fig. 4.4.2 - This diagram shows various levels of heat in a gallery space; red indicates warmer areas, blue colder.
4.4.3 Relative Humidity of Interiors

Relative humidity (RH) is a relationship between the volume of air and the amount of water vapor it holds at a given temperature. It is expressed as a percentage of the maximum amount of water vapor that air could hold at that temperature.
Fig. 4.4.3 - The three diagrams depict relative humidity levels in relation to surrounding temperature. As temperature rises, the same amount of water vapor falls relative to the capacity of the air.
Relative humidity is directly related to temperature. In a closed volume of air (such as a sealed display case), the relative humidity falls as the temperature rises and rises as the temperature falls.
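The closed-volume behavior described above can be sketched numerically. This uses the Magnus approximation for saturation vapor pressure, which the source does not specify; the formula and numbers here are illustrative only.

```python
import math

def saturation_vp_hpa(t_celsius: float) -> float:
    """Magnus approximation of saturation vapor pressure (hPa)."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def rh_after_change(rh_start: float, t_start: float, t_end: float) -> float:
    """New RH (%) when a closed volume of air moves from t_start to t_end.
    The absolute water vapor content stays fixed; only the capacity changes."""
    vapor_pressure = saturation_vp_hpa(t_start) * rh_start / 100.0
    return 100.0 * vapor_pressure / saturation_vp_hpa(t_end)

# Air sealed in a case at 20 C and 50% RH, warmed to 25 C:
print(round(rh_after_change(50.0, 20.0, 25.0), 1))  # roughly 37 (% RH)
```

This is why even a modest temperature swing inside a sealed case produces a large RH swing, which motivates the level temperatures recommended in 4.4.2.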
4.4.4 Ideal Humidity

Deterioration can occur when RH is too high, too variable, or too low. Too low: very low RH levels cause shrinkage, warping, and cracking of wood and ivory; shrinkage, stiffening, cracking, and flaking of photographic emulsions and leather; desiccation of paper and adhesives; and desiccation of basketry fibers.

Fig. 4.4.4 - The diagram above shows a wide range of viruses, bacteria, and other agents that thrive at different percentages of humidity
4.5 Accessibility

4.5.1 Accessible Parking Location

Each lot or structure on a site with multiple lots or structures should include accessible parking. Exceptions are sometimes allowed, as explained below:

Wherever parking serves a specific building, locate accessible spaces on the shortest accessible route of travel from parking to an accessible entrance.

Wherever parking serves buildings that have multiple accessible entrances, disperse accessible spaces among areas adjacent to each accessible entrance.

Wherever parking does not serve a specific building, locate accessible parking on the shortest accessible route to an accessible pedestrian entrance of the parking area.

Wherever parking spaces are provided for a specific purpose (such as buses, delivery trucks, and official vehicles), the spaces are not required to be accessible if an accessible passenger loading zone is provided.

Fig. 4.5.1 Three parking lots serving a building with one accessible entrance.
All van accessible spaces may be grouped on one level of a parking structure.
4.5.2 Amount of Accessible Parking

Provide an adequate number of accessible parking spaces to accommodate the maximum number of people who might use the parking lot or structure at peak-use times. Calculate the number of required accessible parking spaces on a lot-by-lot or structure-by-structure basis for sites with multiple parking lots or structures. Provide at least the minimum number of accessible spaces required in each parking area. Ensure that 1 in every 6 accessible spaces, but not less than one, is a van accessible parking space.
Fig.4.5.2 Minimum required number of accessible parking spaces
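The 1-in-6 van rule above is easy to encode. A sketch (the helper name is illustrative, and the total accessible-space count itself must come from the minimums in Fig. 4.5.2):

```python
import math

# Van-accessible rule from the text: at least 1 of every 6 accessible
# spaces, and never fewer than one, must be van accessible.

def van_accessible_spaces(accessible_spaces: int) -> int:
    if accessible_spaces <= 0:
        return 0
    return max(1, math.ceil(accessible_spaces / 6))

print(van_accessible_spaces(1))   # 1
print(van_accessible_spaces(6))   # 1
print(van_accessible_spaces(7))   # 2
```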
4.5.3 Multiple parking lots
VA=Van Accessible space A=Accessible space
Where multiple parking lots serve a building, the required number of standard accessible and van accessible parking spaces must be located as close as possible to the accessible entrances. Where multiple parking lots serve a building with only one accessible entrance, locate all accessible parking spaces in the parking lot that is closest to the accessible entrance. Where multiple parking lots serve a building with multiple accessible entrances, distribute accessible parking spaces close to each accessible entrance. Where multiple parking lots include a satellite lot that serves a building, locate all accessible parking spaces in the parking lots that are closest to the accessible entrances.
Fig.4.5.3 Diagrammatic example of three parking lots serving a building with three accessible entrances
4.5.4 Alternate Van Accessible Parking Space Design

Plan, design, and construct parking lots and structures with van accessible parking spaces (rather than alternate van accessible spaces, which make it easy for unauthorized vehicles to park in access aisles). Ensure that alternate van accessible spaces meet all of the following minimum requirements:
• parking space width: 8 ft
• access aisle width: 8 ft
• vertical clearance: 8 ft 2 in
Ensure that the length of alternate van accessible parking spaces is at least 19 ft.
Fig.4.5.4 Diagram of three parking lots (including a satellite lot) serving a building with two accessible entrances
Accessible spaces      Standard parking   Van parking   Alternate Van parking
Space width            8 ft               11 ft         8 ft
Access aisle width     5 ft               5 ft          8 ft
Vertical clearance     6 ft 8 in          8 ft 2 in     8 ft 2 in
Space length           19 ft              19 ft         19 ft
Ensure that van accessible parking is as close as possible to the accessible entrance, so that users can easily get into the building.
Fig.4.5.5 Minimum required dimensions for accessible parking spaces
4.5.5 Van Accessible Parking Space Design

Van accessible parking spaces safely and comfortably accommodate vans with wheelchair lifts and other specialized equipment. Ensure that van accessible spaces meet all of the following minimum requirements:
• parking space width: 11 ft
• access aisle width: 5 ft
• vertical clearance: 8 ft 2 in
Ensure that the length of van accessible parking spaces is at least 19 ft.
Fig.4.5.6 Standard accessible and van accessible parking spaces and access aisles. Unpaved space between parking and routes prevents vehicles from obstructing accessible routes.
Provide access aisles on either side of van accessible spaces that are perpendicular to the accessible routes. Provide access aisles on the passenger side of angled van accessible spaces, so that passengers can easily reach the ramp.
This space cannot be a van accessible space because the access aisle is not on the passenger side.

Fig.4.5.7 Standard accessible and van accessible angled parking spaces and access aisles
4.5.6 Ramps

Ramps or curb ramps are required wherever an accessible route contains a level change greater than 0.5 in. Wherever accessible routes cross curbs, curb ramps are required to make streets and sidewalks accessible for wheelchair users and others for whom curbs are obstacles.
• Provide a minimum clear width of 36 in along the full length of the ramp, not including flared sides.
• Provide a slope between 1:14 (7.1%) and 1:16 (6.3%); these or shallower slopes are preferred because many people have difficulty with slopes of 1:12 (8.3%).
• Ensure that the maximum cross slope of a ramp or landing surface does not exceed 1:48 (approx. 2%).

Ramp configurations:
• Switchback Ramp - accommodates site conditions non-conducive to a standard “L” shaped ramp configuration.
• Ramp with Extended Landing - provides access for buildings with multiple doorways.
• “L” Shaped Ramp - accommodates site conditions non-conducive to a standard or switchback ramp configuration.
• Standard Ramp with Intermediate Landing - intermediate landings are required for all ramps with a rise greater than 30 in.
Fig.4.5.8 Diagramming of different ramp configurations
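The slope and landing guidance above implies some simple arithmetic: at a 1:14 slope a ramp needs 14 inches of run per inch of rise, and runs are broken by intermediate landings once the total rise exceeds 30 in. A sketch (helper names are illustrative):

```python
import math

def ramp_run_inches(rise_in: float, slope_ratio: float = 14.0) -> float:
    """Horizontal run required at a slope of 1:slope_ratio (default 1:14)."""
    return rise_in * slope_ratio

def intermediate_landings(rise_in: float, max_rise_per_run: float = 30.0) -> int:
    """Landings needed between runs when each run may rise at most 30 in."""
    return max(0, math.ceil(rise_in / max_rise_per_run) - 1)

print(ramp_run_inches(30))        # 420.0 in (35 ft) of run at 1:14
print(intermediate_landings(30))  # 0: a single run is enough
print(intermediate_landings(36))  # 1: rise exceeds 30 in, split into two runs
```

The run lengths involved are why the switchback and "L" configurations above exist: a 30 in rise already needs 35 ft of run before landings are counted.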
4.5.7 Egress

Exit stairways are often used as part of a means of egress. To be considered a means of egress, a stairway must be enclosed. Ensure that interior and exterior stairs that are part of a means of egress meet all ADAAG requirements for stairs. Provide a minimum clear width between handrails of 48 in (1220 mm) on stair flights that are part of an egress route.
Fig.4.5.9 A warning barrier used to alert people of limited headroom
Stairways used as a means of egress must meet one of the following conditions: incorporate areas of refuge that meet accessibility guidelines, or be reachable directly from either an area of refuge or a horizontal exit.
Fig.4.5.10 Handrail with the required extension at the bottom of a stair flight
4.5.8 Egress Elevators

Safe egress out of multi-story buildings is a serious concern for people with disabilities. In some multi-story buildings, accessible means of egress may include egress elevators that are specially designed and constructed for evacuation. These specially designed and equipped egress elevators include standby power and other safety features.
• Wherever an accessible floor is four or more stories above or below a level of exit discharge, provide an egress elevator as part of at least one of the required accessible means of egress.
• Locate egress elevators so they can be reached from either a horizontal exit or an accessible area of refuge.
• Work with the elevator manufacturer to choose and install elevators that meet or exceed the standards in ASME A17.1, Rule 211, including requirements for emergency operation and signaling devices.
• Where possible, install tactile signs at stairways and elevators that are not accessible to direct people to the nearest accessible exits.
• Ensure that each accessible sign meets the requirements for size, characters, finish and contrast, and mounting height.
Fig.4.5.11 Dimensions of elevators
4.5.9 Route and Space Clearances

Route Width for Passing and Turning Around:
Minimum required clear space for turning around and passing
Preferred clear space for turning around and passing
Fig 4.5.12 Turning around and passing spaces
People using wheelchairs need sufficient space to pass others and turn around. Ideally, in moderately traveled areas, routes should have a clear width of 60 in to accommodate passing and turning around. The minimum space required for a person in a wheelchair to make a 180° turn is 60 in by 60 in. Where two routes intersect to form an adequately sized T-shaped space, people can also maneuver.
Provide routes that are at least 60 in wide. When a route is 60 in wide, you do not have to provide additional space in the route for passing and turning around.
If the route is less than 60 in wide, provide turning spaces at reasonable intervals that meet one of the following conditions: clear space of 60 in minimum in diameter, or T-shaped spaces (such as those created by the intersection of two routes) where each leg of the T is a minimum of 36 in wide for a minimum of 48 in.
Minimum required space at a T-shaped intersection for turning around
Ensure that a 60 in minimum clear width is provided between display cases where turning or maneuvering is required.
Minimum required space at a T-intersection for passing
Fig 4.5.13 T-shaped spaces
Route Width for Turns in Routes:

To evaluate the accessibility of routes, measure the distance between turns and check the angle of turns in accessible routes. Examine routes to ensure that there is sufficient space for wheelchairs to turn around corners and obstructions. For turns in routes: provide adequate width in all routes; provide smooth, wide turning spaces along all routes; and eliminate obstructions in turning areas.
Fig.4.5.14 Spaces for wheelchairs: minimum space for maneuvering between display cases in an exhibit area; minimum dimensions for a route with turns around obstructions (e.g. museum bookstore)
When a route turns around an obstruction that is less than 48 in wide, ensure that the route meets all of the following conditions: a clear width of 42 in minimum at the turn; a clear width of 42 in minimum in the routes approaching and leaving the turn; and a clearance of 60 in past any obstruction before narrowing the route to the minimum width for an unobstructed route.
Fig.4.5.15 Traveling space for wheelchairs: minimum and preferred dimensions for maneuvering around a display case that is less than 48 inches wide; minimum and preferred dimensions for maneuvering around a display case that is 48 inches or more wide
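The three turn-clearance conditions above can be checked together. A sketch with illustrative names:

```python
# Check the clearances required when a route turns around an obstruction
# narrower than 48 in: 42 in at the turn, 42 in on the approach and exit,
# and 60 in past the obstruction before the route may narrow again.

def turn_clearances_ok(width_at_turn_in: float,
                       approach_width_in: float,
                       clearance_past_in: float) -> bool:
    return (width_at_turn_in >= 42
            and approach_width_in >= 42
            and clearance_past_in >= 60)

print(turn_clearances_ok(42, 42, 60))  # True
print(turn_clearances_ok(40, 42, 60))  # False: too narrow at the turn
```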
Seating space: • Provide seating areas that are accessible to people using wheelchairs.
Fig.4.5.16 The requirements for wheelchair locations, as set by the Smithsonian Guidelines for Accessible Design for Facilities and Sites, are as follows:
• Locate spaces for wheelchair users so that they adjoin, but do not block, an accessible route that also serves as a means of egress in an emergency.
• Seating must be provided in each exhibition, and 50% of the seats must be accessible. Single-gallery exhibitions must have seating nearby, in a corridor or in an adjacent gallery space.
• Seating spaces are important for accessibility. They allow all users to experience the space comfortably and as it is designed.
Fig.4.5.17 Accessible spaces adjacent to fixed seating allow a person in a wheelchair to sit with persons with whom they may be traveling.
Fig.4.5.18 Providing work stations with seating minimizes the differences between seated and standing visitors.
4.5.10 Bathroom Location and Design Requirements

        Number of toilets in stalls   Accessible wheel-   Accessible walk-   Accessible urinals
        (including urinals)           chair stalls        in stalls
MEN     All public restrooms          At least 1                             At least 1
        6 or more                     At least 1          At least 1         Provide 1 for every 10 toilet stalls
WOMEN   All public restrooms          At least 1
        6 or more                     At least 1          At least 1
All public toilet rooms and/or bathrooms should be accessible to allow safe and convenient use by all people. Public toilet rooms and/or bathrooms are those that are provided for the public or that are located in common areas of buildings used by visitors and/or employees. Accessible toilet rooms and/or bathrooms must be located on accessible routes and should be centrally located for convenient access. Museums are large public spaces that require larger facilities to accommodate their occupancy. The chart illustrates how many of the total toilets in the museum must be accessible.
Fig 4.5.19 Number of accessible toilets
Arch 401 - Fall 2013
4.5.11 Display Recommendations: Average viewing sightlines
Mount small items (to center line) at no higher than 1015 mm (40 in.) above the floor. A male adult who uses a wheelchair has an average eye level of between 1090 mm (43 in.) and 1295 mm (51 in.) above the finished floor. Objects placed above 1015 mm (40 in.) will be seen only from below by most seated and short viewers. In a deep case, the overall design of a textile is blocked for both visitors with visual and mobility impairments: the standing visitor with low vision cannot get close enough to the object to see the details, and the seated visitor cannot see the object's top or interior at all. Shallow cases better serve both types of visitors.
Height of case table.
Case heights for accessible viewing
Fig 4.5.20 display requirements for accessibility
4.6 Egress

4.6.1 Corridor
The required width of corridors shall be unobstructed. Where more than one exit or exit access doorway is required, the exit access shall be arranged such that there are no dead ends in corridors more than 20 feet in length. Corridors shall not serve as supply, return, exhaust, relief, or ventilation air ducts.
Fig.4.6.1 Minimum Corridor Width
Corridors not only connect the exhibition spaces or galleries; they can also be part of an exhibition or gallery themselves. So when designing a museum, corridors and galleries need not be separated so clearly; they can be one thing. Seating spaces, drinking fountains, and good lighting conditions are all important to corridor design for a museum.

Fig.4.6.2 An example of a small exhibition space for egress travel.
4.6.2 Egress Windows
Dimensions of windows and their height from the ground are very important to accessibility. Light and view have a huge effect on visitors, so it is important to provide the same light and view conditions to accessible visitors, especially in museums. Different heights and sizes of windows create very different interior lighting conditions and views for visitors. Both can also affect a visitor's mood, so they can be used to fit the emotional needs of people.
Fig.4.6.3 Dimension for windows
4.6.3 Loading Dock
The specifics of loading dock design criteria based upon practical experience should include:
• Provide entrance and egress, as well as turning radius, for tractor-trailer units of 80+ feet in total length.
• In an enclosed dock, plan for 14 feet clear height for tractor-trailer units, plus two or more feet for overhead lighting or ductwork.
• Plan for tractor-trailer units of 102 inches in total width, with 15-18 feet for proper maneuvering.
• Trailer bottom clearance is 8 inches; ramps must be designed to avoid high-centering.
• The dock must be level for the trailer at the stop point (52+ feet).

Fig.4.6.6 Dimensions of loading truck
• The dock must be well lit • Provide for drainage.
• Provide a hose bib connection at dock. • If the dock is completely enclosed, provide for adequate ventilation for diesel engines. • Provide striping on ramp for safe backing, and bumpers on the dock end for safe parking. • Provide a dock leveler of adequate dimensions and weight capacity. • Provide a dock plate for trailer to dock unload.
Fig.4.
• Provide stairs from the ramp to dock height.
Conclusion
The codes presented are some of the most crucial for design, occupation, and use. However, each is meant to be interpreted to fit the needs of an individual program design. As the topics discuss requirements, interpretations and design suggestions are given for the site scenarios. Each code has an inherent feeling of finality, but those presented have been interpreted to show the possibilities they offer rather than pure restrictions.

Design Recommendations:
• Use the code as a guideline rather than a concrete rule.
• Where code can be improved, use that as a place to improve user experience rather than sticking to the minimum restrictions set forth.
CHAPTER 5: Climate Meghan Bouska, Dane Buchholz, Heather Wailes
5.1 Weather 5.1.1 Temperature 5.1.2 Wind 5.1.3 Relative Humidity 5.2 Landscape 5.2.1 Plant Life 5.2.2 Soil Analysis 5.2.3 Settling 5.3 Precipitation 5.3.1 Rain 5.3.2 Water Collection 5.3.3 Snow 5.3.4 Water Table & Frost Line 5.4 Sun and Shading 5.4.1 Solar Analysis 5.4.2 Solar Analysis- Post Office Site 5.4.3 Cloud Cover Analysis 5.4.4 Shading Device Recommendations
Introduction
Climate is a very important factor when designing a building. This chapter covers a broad range of information, beginning with temperature. Columbus, Indiana does not get as hot as one might think, and relative humidity averages around 80%. With temperatures and humidity levels moderately low, design is made easier: because the temperature range is narrow, choosing materials to control heat gain and loss is less of an issue. Wind can be severe at times, coming from the south, but does not exceed 31 mph.

Landscaping can contribute to the site in many ways. Plants and other vegetation are usually visually appealing, but they can also help the environment: when plants grow in their natural landscape, they encourage native animals and insects to remain in that area as well. Plants grow in certain locations because of the soil types there; they are native to that land and therefore grow abundantly.

Soil is another important factor when building on a piece of land. Proper soil preparation is mandatory before building; when the soil is not properly prepared and the foundation not built accordingly, the building can fail in the future. The water table is another aspect of the foundation that could cause failure: the water level is only about nine feet below the surface, so anything built below that will require hydrostatic considerations.

Columbus receives a steady amount of rainfall throughout the year, making it well suited to rainwater collection and reuse; this chapter includes charts and calculations for the required roof size. Because Columbus is relatively warm, it receives less snow than the code's required load capacity anticipates, so any roof built to code will support the worst average snow load. Finally, this chapter addresses the effects of sun and shading on building design.
Knowing that Columbus exists near 40° N latitude, the chapter includes specific information on solar angles, annual sunrise and sunset times, and average cloud cover data. Through synthesis of this data, the information provided will guide designers to select proper glazing placement and shading devices.
Chapter 5: Climate 5.1 Weather
5.1.1 Temperature
Fig. 5.1.1 presents, in the light red shaded column in the center, the warm season, which lasts from May 21 to September 23 with an average daily high temperature above 75 degrees. The hottest day of the year is July 25, with an average high of 85 degrees and an average low of 67 degrees. The cold season occurs between November 29 and February 26 and is represented by the light blue shaded columns on each side of the graph; the average high temperature is around 43 degrees or below. The coldest day of the year is January 21, with an average high of 33 degrees and a low of 21 degrees. This data helps with design strategy: knowing the temperatures suggests the possible heat gain and loss through the walls of the structure. Since it does not get very hot or very cold, building materials do not need to hold as much heat in the winter or cool the building as much in the summer.
Fig.5.1.1 Daily High and Low Temperature
Fig. 5.1.2 represents the amount of time spent within each temperature range throughout the day. As discussed above, the hottest part of the year runs from May 21 through September. This season is represented mainly with red (85-100 degrees), orange (75-85 degrees), and yellow (65-75 degrees); some colder days are represented with light green (50-65 degrees). Being the warmest part of the year, it is rare that many days fall in light green. The cold season is represented in dark green (32-50 degrees), blue (15-32 degrees), and purple (below 15 degrees). It is rare to have days colder than 15 degrees, but occasionally a bad winter with cold winds brings the temperature down quickly. This graph complements Fig. 5.1.1 to explain the air temperatures throughout the year in more depth. Since very few days fall in the hottest and coldest zones, there is less need to design walls to hold heat or to design buildings with major cooling systems.
Fig.5.1.2 Fraction of Time Spent in Various Temperature Bands
Climate
5.1.2 Wind
Wind speeds vary throughout the year from 0 mph to 18 mph and rarely exceed 31 mph, which would be high winds. The highest average wind speed, around 10 mph, is a gentle breeze that cools the skin on a warm day; it occurs around March 5, when the average high wind speed is 17 mph. The lowest average speed, 4 mph, is a light breeze and occurs around August 21; the average high wind speed at that time is also 17 mph. Every building should be constructed so that, if exposed to high winds, it does not experience major issues with tilting, leaning, or moving in any way that would be destructive to the structure. Foundations need to extend far enough into the ground for the building to withstand major wind forces. Fig. 5.1.5 explains which directions wind typically comes from. The walls facing those directions need to be built to withstand wind force from that direction: thicker than the other walls, and of strong materials. An example of a strong assembly that would withstand harsh winds is a concrete foundation and concrete wall with decorative exterior materials such as brick.

Fig.5.1.3 Wind Speed

Fig. no. 10.1.4 & 10.2.4 From Chapter 10, demonstrating proper wall thickness on the south wall of the building (program areas labeled: Admin, Entry, Rotating, Community, Art & Design, Innovative, Archives, Shop/Cafe, Back of House).
Wind can be difficult to analyze at times: when there is no wind, the speed is zero and cannot be included when totaling the percentage of wind from each direction. Fig. 5.1.5 shows the percentage of time in which the wind blows from each direction at a given speed. At 16%, wind is most common from the south; it comes from the south throughout the year, with some change in direction around fall. Wind is not dominantly from any one direction year-round, and as Fig. 5.1.5 shows, it comes least from the northeast, east, and southeast. As Fig. 5.1.4 represents, wind most commonly comes from the south, but much of the time there is no wind at all, which cannot be calculated into the total percentage. Fig.5.1.4 Fraction of Time Spent with Various Wind Directions
Fig.5.1.5 Wind Directions Over the Entire Year
5.1.3 Relative Humidity
A comfortable relative humidity for the average person is around 41%. Typical relative humidity levels in Columbus vary from 41% to 97%, which is very humid. The humidity level rarely drops below 25%, which would be very dry. Also, humidity levels rarely reach 100% which would be very humid. The driest times of the year occur around October 29 when the humidity drops below 48%, still remaining at a comfortable level. It is most humid around July 12 when humidity levels reach 96%.
Fig.5.1.6 Relative Humidity
Dew point is sometimes a more accurate way to measure how comfortable a person is in different types of weather than relative humidity, because dew point relates more directly to how perspiration evaporates from skin and other surfaces. Low dew points feel drier and high dew points feel more humid. Dew point varies throughout the year in Columbus from a dry 16 degrees to a muggy 71 degrees, and rarely sinks below 5 degrees or rises above 76 degrees. Between April 23 and June 9, and between August 13 and October 7, are the most comfortable times of the year.
Fig.5.1.7 Dew Point
5.2 Landscape 5.2.1 Plant Life Fig.5.2.7 Ferns
Fig.5.2.1 Groundcovers
Fig.5.2.9 Vines
Fig.5.2.2 Wild Ginger
Fig.5.2.8 Cinnamon Fern
Fig.5.2.10 Crossvine
The images and lists to the left provide names of native plants to Columbus, Indiana. These plants are important to the area because they control the animals and insects in the area. “Indigenous plants are a significant part of a region’s geographic context. In fact, they help define it. They have proved themselves capable of surviving in a landscape for millennia. What better plants can there be, if not the natives, to confront the soil conditions, climate, pests, and diseases of the local areas?” -Michael A. Homoya Indiana Department of Natural Resources Division of Nature Preserves
Fig.5.2.3 Trees
Fig.5.2.4 Maple Sugar Tree
Fig.5.2.11 Shrubs
Fig.5.2.12 Serviceberry
Native plants are very important to the locations they are native to. They can grow there because of the local soil type, and they do not spread to locations where they are not welcome because they cannot grow there and are attacked by other plants and animals. Native plants also grow in their respective locations because of the weather conditions; with the proper weather, they require less fertilizer, fewer pesticides, and less water. Native plants help the environment by controlling the spread of invasive species. As an example, butterflies lay eggs only on specific plants, which also provide the proper food for caterpillars; in order to keep these species in the area, native plants are necessary.
Fig.5.2.5 Grasses
Fig.5.2.6 June Grass
Fig.5.2.13 Flowering Perennials
Fig.5.2.14 Royal Catchfly
5.2.2 Soil Analysis
Parent materials are the different soils in the ground in an area. Parent material, meaning the mineral and chemical composition of the soil, is very important to the area in which the soil is located. Northern Indiana has soil that consists of sand and gravel, technically called glacial outwash. Other soil from the Wisconsin glaciation is clay and silt, also known as glacial till. Loess (wind-deposited silt, pronounced 'luss') is very common in Indiana; it is found as deposits of silt laid down by the wind during and after glacial events. Different terms describe how silt is deposited: eolian refers to thick loess deposited by the wind, while alluvial soil is deposited by water near rivers and other washouts. Soils in Indiana tend to drain very slowly because of the high clay content; water is unable to drain through the particles, causing many issues with flooding. Many homes have flooding in basements and yards and often have a difficult time growing plants. Although most of Indiana has clay soil, southern Indiana is different: its soil is sedimentary bedrock and loess, which drains much better because of the spaces in the bedrock that allow water to seep through.

Columbus, Indiana Bartholomew County
Fig.5.2.15 Soil Parent Materials
How fast the soil drains water is very important to know in order to build properly on the land. In Indiana, most people know that the soil drains poorly because of its high clay content. As Fig. 5.2.16 shows, the northern part of Indiana has poor soil for drainage: dark brown areas represent soil that is mostly clay and does not drain well, light brown locations drain moderately well, and white areas have decent drainage. The white areas have clay that contains more silt and sand than the northern portions of the state. These differences occurred because glacial till tends to contain high amounts of clay, while the central parts of the state have more glacial outwash and therefore drain moderately well.
Columbus, Indiana Bartholomew County
Fig.5.2.16 Soil Wetness Characteristics
Topography has a huge influence on which types of soil are in each area. As topography changes over the years, so do the soil types because of erosion. Where there is a great change in topography, for example a mountain, the soil types change as well. The variation of soil types over a short distance is greater in southern Indiana than in northern Indiana. Erosion plays a huge part in changing the soil types because, as erosion occurs, the soil moves. The map to the right indicates which areas of Indiana have higher erosion than others.
Columbus, Indiana Bartholomew County
Fig.5.2.17 Soil Erosion Potential
There are many different types of soil throughout Indiana, composed of materials ranging from clay to sand to bedrock. The material compositions differ because of weathering throughout the years. The main type of soil is called Alfisols. This type is located mainly under highly forested areas and is capable of supporting plant life even through the most humid season. It is very important to Indiana because of its agricultural potential; because it is so important, it is necessary to take care of the soil and conserve it when possible, since erosion can quickly take it away. Fig.5.2.18 Soil Explanation
Fig.5.2.19 Munsell Soil Color Chart
In the northwest corner of the state, another type of soil is more common: Mollisols. This type is located under grasslands, has a rich chemical makeup, and absorbs water very well. Also in the northern part of Indiana, Histosols are common; they are located around swamps and marshes and contain high amounts of carbon, which makes it easy for plants to grow in these areas. At the opposite end of the state, in southern Indiana, Inceptisols are more common because of the steep topography of the land. They are most commonly found on slopes, close to the surface; although located close to the surface, they are relatively young soils that have not yet shown much weathering. Entisols are also located in highly elevated areas with high erosion and are likewise used for agriculture.
Fig. no. 5.2.18 explains the colors of soil and what the makeup of the soil means. This image is paired with Fig. no. 5.2.19, which gives examples of soil color. This chart is used by geotechnical engineers after boring soil to determine the different soil types before construction begins. Fig. no. 5.2.20 represents the different types of soil and how engineers classify the soils they come across.
Fig.5.2.20 Soil Chart
5.2.3 Settling
Soil types determine which type of foundation should be used for a building. For the site in Columbus, Indiana, the soil has medium potential for erosion, is moderately moist, and may consist of alluvial and outwash deposits, thin loess over loamy glacial till, moderately thick loess over loamy glacial till, or moderately thick loess over weathered glacial till (see Fig. 5.2.15-5.2.17). Keeping this in mind, it is important for architects to know which type of foundation would be best. Fig. 5.2.21 shows a deep foundation. Deep foundations are necessary when building on soil such as clay or silt: large piles must penetrate through the weak, compressible soil down to bedrock to create a base for the building. When deep foundations are constructed, soil is filled in around the base of the building, and that fill continues settling. As it settles, it can drag the piles down with it and cause them to break away from their caps (if caps are used) near the surface. Considerations must be made to design the proper foundation for the type of soil.
Fig.5.2.21 Deep Foundations through layer(s) of compressible soil
Shallow foundations can be used on fills over weak soil as well, but more problems may arise. Settling is very common after a building is constructed, therefore, in order to reduce the amount of settling that may occur after construction, an option is to preload the soil. This is done by adding loads of soil and rolling over the layers with a large roller machine to compact the soil. Engineers must be on site to watch over the compaction process and test the soil frequently to make sure it is compacting correctly.
Fig.5.2.22 Schematic of U.S. Airways terminal constructed on shallow foundations bearing on highly compressible soil layers.
5.3 Precipitation 5.3.1 Rain
The city of Columbus gets its water supply from an underground aquifer. When it rains however, only 8-16% of rainfall water is absorbed into the ground to resupply this source. Using a water collection system allows about 80-90% of rainfall to be directly collected from the site of the museum where it can be stored for use to lessen depletion of the aquifer. The climate of Columbus, Indiana is quite conducive to using a rainwater collection system to support the museum’s water usage needs. Even though there are drier seasons, it receives between 3 and 5 inches per month year round. Having this more steady, year-round rainfall average will allow the collection system to keep up with the regular monthly water usage of the museum. The following page shows charts of the water level in the cistern for the first three years of 5 different scenarios of collecting area size. In order to be 100% sustainable, the collecting area must be 16,000sf. At 15,000sf and below, the purple line shows how many gallons must be added to the system at certain times.
Fig.5.3.1 - Average monthly rainfall in Columbus, Indiana.
Water Usage:
Columbus, IN population = 44,677
Assuming 75% of the population will visit once every 6 months:
= 33,500 visitors every 6 months
= 186 visitors/day on average
5 gallons/visitor/day = 930 gpd
930 gpd × 30 = 27,900 gallons/month
930 gpd × 365 = 339,450 gallons/year

Cistern Sizing:
44.31" per year average rainfall
3.7" monthly average rainfall
area (sf) × rainfall (in) × 0.623 = gallons collected
Fig.5.3.2 - Daily probability of precipitation in Columbus, Indiana.
*Charts on following page assume 80% of average rainfall to account for drought years.
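The sizing arithmetic above can be checked directly. The short script below is a sketch, not part of the original study; it assumes the 180-day half-year implied by the text and uses the stated 0.623 sf-inch-to-gallon factor and the 0.8 drought discount to show why 16,000 sf is the break-even collector area:

```python
# Check of the museum's water-usage and cistern-sizing arithmetic.
# All constants come from the text above; 0.623 converts (sf x inches
# of rain) to gallons, and 0.8 discounts rainfall for drought years.

POPULATION = 44_677          # Columbus, IN
VISIT_SHARE = 0.75           # assumed share visiting once every 6 months
GAL_PER_VISITOR = 5          # gallons per visitor per day
ANNUAL_RAIN_IN = 44.31       # average annual rainfall, inches
DROUGHT_FACTOR = 0.8         # design for 80% of average rainfall

visitors_per_day = round(VISIT_SHARE * POPULATION / 180)   # ~186
gpd = visitors_per_day * GAL_PER_VISITOR                   # 930 gallons/day
annual_usage = gpd * 365                                   # 339,450 gallons/year

def annual_collection(area_sf):
    """Gallons collected per year from a collector of area_sf square feet."""
    return area_sf * ANNUAL_RAIN_IN * 0.623 * DROUGHT_FACTOR

for area in (13_000, 14_000, 15_000, 16_000):
    surplus = annual_collection(area) - annual_usage
    status = "self-sufficient" if surplus >= 0 else f"short {-surplus:,.0f} gal/yr"
    print(f"{area:>6} sf: {status}")
```

At 15,000 sf and below the balance goes negative, which matches the intake lines shown in the cistern charts; only the 16,000 sf scenario collects more than the 339,450 gallons used per year.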
5.3.2 Water Collection
Fig.5.3.3- Cistern level and intake at 13,000 sf collector area.
Fig.5.3.5- Cistern level and intake at 15,000 sf collector area.
Fig.5.3.4- Cistern level and intake at 14,000 sf collector area.
Fig.5.3.6- Cistern level and intake at 16,000 sf collector area.
5.3.3 Snow
Precipitation is most likely to occur around December 29, when it occurs on roughly 45% of days. It is least likely around August 4, when it occurs on only 14% of days.
Fig.5.3.7- Probability of snow on a day in Columbus, Indiana.
The most common forms of precipitation are light rain and light snow. Light rain is the most severe precipitation observed during 64% of days with precipitation; it is most likely around April 17, when it is observed during 26% of all days. Light snow is the most severe precipitation observed during 17% of days with precipitation; it is most likely around December 29, when it is observed during 21% of all days. The cold season lasts from November 29 until February 26. During that time, there is a 38% average chance that precipitation will occur at some point, most likely in the form of light rain (45%), then light snow (41%), and lastly moderate rain (8%).
According to "Heavy Snow Loads" by Curt A. Gooch of Cornell University, one cubic foot of heavy snow weighs approximately 21 pounds. The graphic to the right shows the total accumulated snow load for each month, based on adding the monthly averages. The snow load that the roof must carry for each month is then calculated with the following formula: Snow load (psf) = (21 lb / 12 in) × inches of snow. In reality, some snow would likely melt each month as new snow is added, so this graphic shows the worst case of none of the snow melting throughout the winter. The code requires a roof to have a live load capacity of 40 pounds per square foot. Since the snow load reaches at most about 26 pounds, the code will determine more strictly which roofing materials can be used.
14.7” = 25.73 lbs/sf 14.6” = 25.55 lbs/sf 12.6” = 22.05 lbs/sf 8.3” = 14.53 lbs/sf 3.8” = 6.65 lbs/sf 0.6” = 1.05 lbs/sf
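The list above can be reproduced directly from the 21 lb-per-cubic-foot figure; a minimal sketch:

```python
# Snow-load check: one cubic foot of heavy snow weighs ~21 lb
# (Gooch, Cornell), so d inches of snow loads a roof at (21/12) * d psf.

HEAVY_SNOW_LB_PER_CF = 21.0
CODE_LIVE_LOAD_PSF = 40.0  # required roof live-load capacity per code

def snow_load_psf(depth_inches):
    """Roof load in pounds per square foot for a given snow depth."""
    return HEAVY_SNOW_LB_PER_CF / 12.0 * depth_inches

for depth in (14.7, 14.6, 12.6, 8.3, 3.8, 0.6):
    print(f"{depth:4.1f} in -> {snow_load_psf(depth):5.2f} lb/sf")

# Even the worst no-melt accumulation stays well under the code minimum.
assert snow_load_psf(14.7) < CODE_LIVE_LOAD_PSF
```

The worst-case accumulated depth of 14.7 inches yields about 25.7 psf, confirming that any roof meeting the 40 psf code minimum carries the snow load with margin.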
5.3.4 Water Table & Frost Line

The sand and gravel water table under our site is very close to the surface. The chart shows the record maximum and minimum levels as well as the daily readings for the last five and a half years (Climatic Division 5, Morgan 4, daily minimum values; most recent reading -13.99 ft on 9/2/2013).

The average maximum level that we should plan for is nine feet below the surface. This means that the building should most likely not have an underground portion deeper than nine feet.

Fig.5.3.8 - Daily levels of the Morgan 4 water table, 9/3/2013 (vertical axis: depth to water, feet below land surface, -3 to -17; horizontal axis: months, 1/1/2008 to 1/1/2014; series: record high, record low, daily minimum, and daily mean 1984-2010).

Fig.5.3.9 - Frost depth map.
Fig.5.3.10 - Foundations without and with basement showing hydrostatic pressure with basement.
The frost depth in Columbus, Indiana is between 1 meter (3.2’) and 0.75 meters (2.5’). Since this means that the foundation only needs to be a maximum of about three feet deep, a basement would not be necessary and would raise the cost without need.
5.4 Sun and Shading 5.4.1 Solar Analysis
Columbus, Indiana is located at 39.2140° N latitude, 85.9111° W longitude. Fig. 5.4.1 shows the angle of the sun at this location for any given hour. This information is useful in analyzing the amount of sun falling on surfaces of the building; in addition, shadow conditions from surrounding buildings can be analyzed. Fig. 5.4.3 through 5.4.6 illustrate the lighting conditions of the visitor center site. The solar analysis of the post office site can be seen in Fig. 5.4.9 through 5.4.12.
Fig. 5.4.1 - 40째 N Latitude Sun Path Diagram
Fig. 5.4.2 shows the time of each sunrise and sunset throughout the year.

Fig. 5.4.2 - Daily Sunrise and Sunset
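For readers who want numbers rather than the sun-path diagram, the noon sun altitude can be estimated from latitude and solar declination (altitude at solar noon = 90° minus latitude plus declination). The sketch below uses Columbus's 39.21° N latitude from the text; the ±23.44° solstice declinations and the hour-angle formula are standard solar-geometry values, not taken from this document:

```python
import math

# Solar altitude estimates for Columbus, Indiana (39.21 deg N).
LATITUDE = 39.21  # degrees north

def noon_altitude(declination_deg):
    """Sun altitude at solar noon: 90 - latitude + declination."""
    return 90.0 - LATITUDE + declination_deg

print(f"June solstice noon sun:     {noon_altitude(+23.44):.1f} deg")
print(f"Equinox noon sun:           {noon_altitude(0.0):.1f} deg")
print(f"December solstice noon sun: {noon_altitude(-23.44):.1f} deg")

# Off-noon altitudes follow the standard relation
# sin(alt) = sin(lat)sin(dec) + cos(lat)cos(dec)cos(h),
# where h is the hour angle (15 degrees per hour from solar noon).
def altitude(declination_deg, hour_angle_deg):
    lat, dec, h = map(math.radians, (LATITUDE, declination_deg, hour_angle_deg))
    return math.degrees(math.asin(
        math.sin(lat) * math.sin(dec)
        + math.cos(lat) * math.cos(dec) * math.cos(h)))
```

At this latitude the noon sun ranges from roughly 27° in December to 74° in June, which is why the southern facade receives the most direct sun and drives the shading recommendations later in the chapter.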
Fig. 5.4.3 - December Solstice Time-lapse
Fig. 5.4.4 - September Solstice Time-lapse
Fig. 5.4.5 - June Solstice Time-lapse
Fig. 5.4.6 - March Solstice Time-lapse
5.4.2 Solar Analysis - Post Office
Columbus, Indiana is located in the Eastern Time Zone. From this information, the times of day when sunlight occurs can be determined. Daylight Saving Time (DST) occurs between March 10 and November 3 of this year.
Standard Time Zone:
UTC/GMT -5 Hours
Daylight Savings Time:
+1 Hour
Current Time Zone:
UTC/GMT -4 Hours
DST Start/End Dates:
Start: March 10, 2013 at 2:00 AM End: November 3, 2013 at 2:00 AM
Fig. 5.4.7 - Eastern Daylight Time
Fig. no. 5.4.8 shows the total hours of sunlight that occur on each day of the year. This has a great effect on the amount of sunlight that can be used for daylighting.

Fig. 5.4.8 - Daily Sunrise and Sunset
Fig. 5.4.9 - December Solstice Time-lapse
Fig. 5.4.10- September Solstice Time-lapse
Fig. 5.4.11 - June Solstice Time-lapse
Fig. 5.4.12 - March Solstice Time-lapse
5.4.3 Cloud Coverage Analysis
The amount of cloud cover in Columbus can range from 32% (mostly cloudy) to 74% (partly cloudy). As the chart shows, it is cloudiest around January 12 and clearest around October 13. March 13 is the start of the clearest part of the year, and November begins the cloudiest part of the year. October 13 is said to be the clearest day of the year: 43% of the time it is clear, mostly clear, or partly cloudy.

Fig. 5.4.13 - Median Cloud Cover
Fig. no. 5.4.14 shows the percentage of time that each date receives a particular amount of cloud cover. By factoring this cloud cover data into the solar analysis, we can see how the amount of light falling on the building and site is affected. In the analysis, a sun and shadow time-lapse is presented for each of the four solstice and equinox dates. Each figure shows an image at 9:00 AM, 12:00 PM, and 4:00 PM.

Fig. 5.4.14 - Cloud Cover Types
Fig. 5.4.15 - December Solstice Time-lapse
Fig. 5.4.16 - September Solstice Time-lapse
Fig. 5.4.17 - June Solstice Time-lapse
Fig. 5.4.18 - March Solstice Time-lapse
5.4.4 Shading Device Recommendations
Fig. no. 5.4.19 shows examples of various shading devices that can enhance building design and efficiency. Each option provides a different level of shading depending on the orientation of the glazing. Another aspect to consider when selecting a solar shading device is view restriction. Through careful consideration, this selection can lead to the success of a project. In the case of the Visitor Center site, it is likely that the building will need solar shading on the southern facade; using the chart, it is possible to see multiple horizontal shading techniques. In addition, the western facade can best be shaded using vertical fins.

Fig. 5.4.19 - Shading Device Options. The chart compares six device types (horizontal single blade, outrigger system, horizontal multiple blade, vertical fin, slanted vertical fin, and eggcrate), each shown in 3-D view and section/plan along with its ideal orientation and view restriction.
Conclusion
Weather affects many things when designing a building; yearly average temperatures and humidity levels are very important to know so that proper building materials can be chosen. We suggest that materials be chosen wisely based upon temperatures throughout the year: if the surrounding climate has a large range of temperatures, the building materials must be able to withstand both extreme cold and warm temperatures, and they should help heat the building in the winter and cool it in the summer. Wind also affects the building's materiality as well as its structural components. If the surrounding climate has high winds in the spring and fall, the structure must be able to withstand those winds; structural components must not move enough to cause the building to collapse when the wind hits.

The building must have a strong foundation for multiple reasons: the foundation holds the building down, the entire building is attached to it, and it gives the building a base that can withstand harsh weather conditions. We recommend testing the soil on the site before beginning to build. The soil must be the proper type, or be prepared correctly, before foundation construction begins; if it is clay-like, compaction is necessary, along with the addition of sand or gravel, to properly prepare the site.

Our design recommendations for water issues are to not build deeper than six to nine feet underground, which avoids the water table causing problems with the structure. A water collection system of 16,000+ square feet should be simple to achieve with the required building footprint and will have a great impact. Finally, snow load is not much of an issue to worry about, but the frost depth is about three feet.
This means footings or basements will have to be between three and nine feet deep, and any roofing material that is to code can be used. From the information provided, it is recommended that the majority of glazing be oriented towards the south and west in order to capture optimized levels of natural daylight. Shading devices used with glazing on the southern facade should be horizontal; on the western facade, vertical shading devices are recommended.
Columbus Museum of Industrial Objects
Climate
CHAPTER 5: WORKS CITED
Meghan Bouska, Dane Buchholz, Heather Wailes

- Figures 5.1.1-5.1.7, 5.3.1, 5.3.2, 5.3.7 and Information
- Figures 5.3.3-5.3.6 - Cistern level and collector size: Created by Dane Buchholz
- Figures 5.2.1, 5.2.3, 5.2.5, 5.2.7, 5.2.9, 5.2.11, 5.2.13
- Figure 5.3.8 - Daily levels of the Morgan 4 water table
- Figure 5.2.2 - Wild Ginger
- Figure 5.3.9 - Frost depth map
- Figure 5.2.4 - Maple Sugar Tree
- Figure 5.4.1 - 40º N Latitude Sun Path and Information: Stein, Benjamin, John Reynolds, and William J. McGuinness. Mechanical and Electrical Equipment for Buildings. 8th ed. New York: J. Wiley & Sons, 1992.
- Figure 5.2.6 - June Grass
- Figure 5.2.8 - Cinnamon Fern
- Figure 5.2.10 - Crossvine
- Figure 5.2.12 - Serviceberry
- Figure 5.2.14 - Royal Catchfly
- Figures 5.2.15-5.2.20 and Information
- Figures 5.2.21-5.2.22 and Information: "Foundations on Weak and/or Compressible Soils"
- Figures 5.4.1-5.4.2, 5.4.7-5.4.8, 5.4.13-5.4.14 and Information
- Figures 5.4.3-5.4.6 - Sun Analysis Diagrams, Parking Lot Site: Created by Meghan Bouska
- Figures 5.4.9-5.4.12 - Sun Analysis Diagrams, Post Office Site: Created by Meghan Bouska
- Figures 5.4.15-5.4.18 - Cloud Analysis Diagrams: Created by Meghan Bouska
- Figure 5.4.19 - Shading Device Options
book 2
Programming
CHAPTER 6: Programming Jeremy Ernst, Holly Pohlmeier, Kevin Stromert
6.0 Introduction
6.1 Tools for Architectural Programming
 6.1.1 Definitions + Questions to Ask
 6.1.2 Factors
 6.1.3 Sample Back of House
6.2 Proximity Diagrams
 6.2.1 Diagram 1
 6.2.2 Diagram 2
 6.2.3 Diagram 3
6.3 Precedent Program Analysis
6.4 Columbus Based Programming
 6.4.1 Sample Columbus Analysis
 6.4.2 Profile
 6.4.3 Contextual
 6.4.4 Programmatic Lighting
 6.4.5 City Plan vs. Site Options
6.5 Columbus Project + Sq. Footages
6.6 Conclusion
6.7 Works Cited
6.0 - Introduction

The following chapter begins to look at programming for Columbus, taking direction from the precedent studies conducted in Chapter 1. Museums are often comprised of many components that the day-to-day user may never interact with, including spatial parameters like "back of house" facilities and staff support spaces. Many of the precedent studies' programs offered a solution to a design problem. In order to define a program for Columbus, the building concept was investigated: Industrial Innovation. After understanding the museum concept, the building organization was thought about in terms of hierarchy, volume, and flow of spaces (as seen in the precedent studies). This chapter investigates one way to take on programming: by proposing a problem and presenting a solution.
6.1.1 - Precedents: Programming for Columbus

Define Industrial Innovation:
INDUSTRIAL: of, relating to, or characterized by industry.
INDUSTRY: hard work; economic activity concerned with the processing of raw materials and the manufacture of goods in factories.
INNOVATION: the action or process of innovating.
INNOVATE: to make changes in something established, especially by introducing new methods, ideas, or products.

Interestingly, there isn't a dictionary definition for the term "Industrial Innovation". It is implied.
Questions to Ask
6 Categories for Evaluation; Based on Precedent Research: Concept Development Concept is what helps define design choices throughout the entire building. - Concept: Industrial Innovation - How does concept inform the building’s form? Circulation - How much of the exhibit spaces are devoted to circulation? How much space is circulation alone? - What kind of circulation fits the concept? Should circulation be fixed to lead visitors on a certain path, or open? Safety + Curation - Where + how are items stored? - What kind of security is available to the museum? - How private is this process?
Lighting - Natural vs. Artificial? - Is lighting diffused? Does it depend on objects in the room? etc. Materials - Do materials play a role in the programming? - Materials can often mark transitions between spaces; how? - Materials help dictate the mood of the space. Sustainability - Affected by each category. - Avoid waste. - How can you incorporate passive or sustainable practices? Are there local materials that could be used?
6.1.2 - Programming Factors:

- Spatial: Space Types + Square Footages + Quantities
- Emotional: Client + Message + Vision
- Functional: Relationships + Adjacencies + How Spaces Work
- Hierarchical: Place Characteristics + Order

Example space breakdown (Fig. 6.1.1):
- Public Collections Space: 48%
- Public Non Collections Space: 26%
- Non Public Collections Space: 16%
- Non Public Non Collections Space: 10%

Fig. 6.1.1 also weighs the program from two viewpoints, the Specialist Perspective and the Tourist Perspective, across Exhibits, Research/Curatorial, Audience Narrative, Educational/Didactic, and Design/Visual concerns.

Fig. 6.1.1
6.1.3 - Sample "Back of House" Facilities

It's easy to forget about all of the things that happen behind the scenes in a museum. In order to better understand the different facilities necessary in a museum, precedents were studied and analyzed. Museums require offices, storage spaces, large mechanical rooms, and security booths, to name a few. There are many different ways to organize these spaces.

The transition between public and private spaces (gallery and exhibit spaces to security and storage) is the most important thing to address. In this example, the two are kept separate. However, this is not the only way to approach the adjacency diagram.

Fig. 6.1.2 diagrams a sample back-of-house sequence (arrows note transportation of objects to display): Security; Shipping + Receiving; Documentation Office; Crate Storage; Crating + Uncrating; Isolation; Photo Studio; Curatorial Exam Rooms; Collection Storage; Conservation Lab; Transit Storage; Exhibit Prep; Clean Workshop; Dirty Workshop; Permanent Collection Galleries; Temporary Exhibition Galleries.

Fig. 6.1.2
6.2 - Proximity Diagrams

One study done to develop programs is to look at the proximity of the different elements within the program. Proximity considers the adjacency of these elements and a general distance between them. This may begin to bring forward patterns in circulation, but is mostly done as a strategy for the arrangement of such elements. The next three pages present three strategies depicted in diagrams. All three include the same elements for the program: lobby, ticketing, bathrooms, coat check, meeting/conference space, a café, and a gift shop. Other major elements include the exhibit spaces, offices, and other back-of-house support spaces.
Fig. 6.2.1
Proximity Diagram #1

6.2.1 In this diagram, the program is split into three wings from the lobby space. One wing heads to the spaces that may be accessed by guests who do not necessarily need access to the galleries: the café, gift shop, bathrooms, coat check, and meeting rooms. The second wing accesses the gallery spaces, but via the ticketing spaces, which allows security and control for the galleries. The third wing leads to the "back-of-house" spaces, allowing private access and security; these include offices, loading and unloading spaces, and storage and archive spaces.

Diagram elements (Fig. 6.2.2): Lobby; Bathrooms; Ticketing; Coat Check; Curator; Innovation; Archives; Support Spaces; Meeting Room/Conference Space; Columbus Art & Design; Gift Shop; Rotating Exhibits.

Fig. 6.2.2
Proximity Diagram #2

6.2.2 The second diagram also begins with the lobby and splits into three options from there. One option provides direct access to the meeting rooms and the café. The other two options are access to the back of house and to the ticketing/security space. From there, guests take a procession-based route through each of the galleries rather than a free plan between the exhibits; this path terminates in the gift shop. Access to the back of house is provided from the lobby, similar to the last diagram, but now direct access to the exhibits for the curator is included. This is a common program in museums today.

Diagram elements (Fig. 6.2.3): Lobby; Archives; Administrative Space; Storage; Cafe; Ticketing & Security; Gift Shop; Meeting/Conference Room; Innovation; Columbus Art & Design; Rotating Exhibit Space.

Fig. 6.2.3
Proximity Diagram #3

6.2.3 This third and final proximity diagram begins with the lobby and provides two options from entry: separate access to the meeting room, café, gift shop, and support spaces, and access to the main galleries. Note the alternate entrance to the back-of-house spaces. The major difference in this program is the visual access to the back-of-house spaces and archives. This type of program could work well for specialty museums, such as those for automobiles or ancient artifacts; visual access to the spaces that prepare these kinds of works provides an opportunity for education for the guests.

Diagram elements (Fig. 6.2.4): Lobby; Coat Check; Cafe; Gift Shop; Galleries; Support Spaces; Meeting/Conference Rooms; Storage & Documentation; Archives; Curator (visual connection).

Fig. 6.2.4
6.3 - Programming helps to address a problem...

Jewish Museum
PROBLEM: To welcome the Jewish community back to Berlin after the effects of WWII.
SOLUTION: The circulation works to tell the story of the Jewish community through experiential and formal qualities. Elements like the axis of continuity and the Holocaust tower acknowledge the past, and become a kind of memorial.

MuSe
PROBLEM: To encourage people to come back to a portion of the community that had been left behind, through education on the science and biodiversity of the region.
SOLUTION: A master plan was generated for the area. The MuSe became only a portion of a larger whole, in an attempt to create an integrated cultural center. The MuSe was an innovative building meant to attract visitors, as well as promote professional and cultural development.

Nelson Atkins Museum
PROBLEM: Revitalize an existing museum. Increase the square footage for museum space, and add space for education.
SOLUTION: Square footage was increased by 70% through the Steven Holl addition. Holl's image promoted revitalization and integration with the existing sculpture garden. The educational portion of the building more than doubled.

Salvador Dali Museum
PROBLEM: Enhance the recognition of Dali's work and protect "the largest collection of work outside of Spain".
SOLUTION: The building represented Dali's work in the architecture by creating an image to attract more visitors. Thick concrete construction was used to protect the work from hurricanes, etc.
6.3.1 - Object to Volume Scale

The height-to-object ratio can change in rotating exhibits, making it difficult to settle on a single workable ceiling height. Providing varying exhibit volumes with varying heights allows you to place appropriate objects into appropriate spaces. Fig. 6.3.1 shows a very high height-to-object ratio: the space overwhelms the object and detracts attention from the image. Fig. 6.3.2 shows a lower height-to-object ratio, creating a more user-friendly interaction; the image takes over a much larger share of the wall surface, drawing in attention.

Fig. 6.3.1 - 24' ceiling, 1' object (viewer at 5'-8"): height-to-object ratio = 24' / 1' = 24
Fig. 6.3.2 - 12' ceiling, 2.75' object (viewer at 5'-10"): height-to-object ratio = 12' / 2.75' ≈ 4.36
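As a quick sketch, the two height-to-object ratios above can be checked with a few lines of Python (the dimensions are the ones noted in Figs. 6.3.1 and 6.3.2):

```python
def height_to_object_ratio(ceiling_ft: float, object_ft: float) -> float:
    """Ratio of exhibit-space height to displayed-object height."""
    return ceiling_ft / object_ft

# Fig. 6.3.1: 24 ft ceiling over a 1 ft object -> the space overwhelms the object
print(height_to_object_ratio(24, 1))               # 24.0

# Fig. 6.3.2: 12 ft ceiling over a 2.75 ft object -> a more comfortable scale
print(round(height_to_object_ratio(12, 2.75), 2))  # 4.36
```

Lower ratios keep the viewer's attention on the object; very large ratios make the volume itself the subject.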
6.3.2 - Industrial Design Exhibits
Fig. 6.3.3
Fig. 6.3.4
Museum of Science and Industry (Chicago)
Cummins Corporate Museum (Columbus, IN)
This museum has incorporated a lot of theme-based interactive displays. The museum staff is also readily available to answer questions or put on demonstrations. The exhibits are interactive and offer learning. The building's circulation is color coordinated and offers a sense of organization, because visitors are able to orient themselves by color.
The Cummins corporate headquarters offers a small museum in its lobby. The museum is successful because it targets a specific audience and offers them an innovative way to look at items they've seen many times before.
Fig. 6.3.5
MuSe museum of science (Italy) This museum acts as an informative space for those in Trento to learn about the biodiversity of their area. The building offers many innovative exhibit proposals, and even uses the white facade of the building as a surface for presentations and learning. The museum has many interactive exhibits for children and even offers facilities like a bubble room and a relaxation room for breaks during a day spent at the museum.
6.4.1 - Columbus, Indiana Industrial Innovation Museum

PROBLEM: Consolidate the work from other Columbus museums into a single space, and reinterpret the works to generate a wider appeal.

SOLUTION:
- Promote education by keeping it centralized in the museum. This keeps visitors actively engaged in the building's exhibitions, and all other components of the museum are able to promote learning through this connection.
- Actively engage visitors by exploring the existing environment.
- Focus on innovative objects of design from both the regional and local areas.
- The museum should be an innovative design that is reminiscent of the innovative objects within it. The circulation should promote an interactive relationship with the visitors and promote wayfinding.
- Suggest curatorial content and a thematic focus.

Diagram elements (Fig. 6.4.1): Lobby; Archives; Education; Columbus Art & Culture; Gift Shop/Cafe; Innovative Objects of Design; Rotating Exhibits.

Fig. 6.4.1
6.4.2 - Profile

The profile of the museum has an impact on both the surrounding context and the experience inside. This study shows how the building can be divided into different height categories determined by program (Fig. 6.4.2; legend: Rotating Exhibits, Permanent Exhibits, Innovative Objects of Design, Education, Archives, Support, Offices, Lobby, Cafe). There are many solutions for the profile of this museum.

Fig. 6.4.2 - Profile Diagram

Some profiles can be driven by the surrounding site. A neighboring building may be short on one end of the site while there is a taller one at the other end; the cafe could be placed by the shorter building while the innovative objects could be placed by the taller building. The arrangement of the programs can also relate to the maximization of daylighting: placing taller volumes to the north of shorter ones keeps them from blocking southern light to the shorter rooms. Studying the arrangement of the programs in the third dimension helps you understand how your museum will relate to its context and how you can use natural lighting features.

Fig. 6.4.3 - Profile Diagram
6.4.3 - Contextual Design Themes

"The congregation was searching for an architect who could find the right expression for their desire to live a rich inner life and a simple outer life." -J. Irwin Miller

The quote is in regard to the first project Eliel Saarinen did for the city of Columbus, the First Christian Church, when Miller was trying to convince Saarinen to accept the project he had originally declined. The statement captures the architecture created in the heart of Columbus. The library, churches, and schools, each designed by an architect from a list of firms approved by Cummins Inc., changed the community of Columbus. Most of the great architecture in Columbus has a simplified exterior and creates an extraordinarily rich experience on the interior. Only the churches try to symbolize anything from the exterior, while the others stay quiet. They all take advantage of lighting, material, and details in an innovative way.
Innovation
Lighting Material Detail
Fig. 6.4.4 - First Baptist Church, Columbus, Indiana
6.4.4 - Programmatic Lighting

Some of the innovative objects will be engines and large machinery that can be lit with direct lighting. This is the only place in all of the exhibits that has this opportunity. Artificial lighting would also have to be used during the night.
Fig. 6.4.5 - Innovative Objects
Both the permanent and rotating exhibits will contain art that can’t be exposed to direct lighting. Natural light can be used if the apertures are designed specifically to reflect the light into the space and keep it from damaging the objects on display.
Fig. 6.4.6 - Permanent and Rotating Exhibits
The archives hold objects that are not out for display. Any light applied to the archives should not be damaging to any objects that may pass through the museum. Because all of the objects from all the exhibits can pass through the archive, its options for lighting conditions are limited.
Fig. 6.4.7 - Archives
6.4.5 - City Plan vs. Site Options

The Visitor Center site sits right in the heart of the cultural buildings, while the post office site sits on the edge. The post office is in the government center, while the Visitor Center site is in the institutional area. This map helps guide the choice of a site that best fits the interests of the community. The museum belongs in the institutional area, making the Visitor Center site an easy choice for the new museum. An argument can be made, however, that the museum is for innovative objects and Cummins Inc. is directly across from the post office. Parking in the institutional area is already nearly impossible, and the addition of a museum there would eliminate the existing parking, while the post office has a parking ramp conveniently located across the street.
Fig. 6.4.8
6.5 - Columbus Project: Sq. Footages

The client specified square footages for certain portions of the building, including gallery and archival spaces. The program dictated a limit of 40,000 sq. ft. and indicated a large interest in education facilities. Based on the previous studies, the following square footages are estimates for the Columbus program.
Program areas (Fig. 6.4.9):

Galleries: 11,000 sq. ft.
- Innovative Objects: 8,000 sq. ft.
- Other Exhibits: 1,500 sq. ft.
- Rotating Exhibits: 1,500 sq. ft.
Back of House (Loading + Storage + Documentation + Curator Space): 10,000 sq. ft.
Education: 500 sq. ft.
Meeting Room: 1,100 sq. ft.
Gift Shop: 500 sq. ft.
Support: 1,000 sq. ft.
Cafe/Kitchen: 2,500 sq. ft.
Lobby + Entry Spaces: 900 sq. ft.
Circulation: 3,000 sq. ft.
______________________________________
Total: 35,000 sq. ft.

Diagram elements: Archives; Curator (visual connection); Cafe; Storage & Documentation; Gift Shop; Galleries; Education; Support Spaces; Meeting/Conference Rooms; Lobby; Coat Check.

Fig. 6.4.9
6.6 - Conclusion

Programming can be completed in a variety of ways. Looking at precedents has proven helpful for generating proximity diagrams and thinking about circulation and flow between spaces. Precedents also offer ideas about space, square footages, and total volumes for the kinds of spaces needed in a museum. Programming seeks to solve a problem, or a design challenge, by implementing a concept. The concept is what helps to define the criteria for the spaces and occupancies within the building.
6.7 - Works Cited

Subasinghe, Chamila. "Architectural Programming." Lecture, Iowa State University, Ames, IA, September 30, 2013.

Lord, Barry, and Gail Dexter Lord. Manual of Museum Planning: Sustainable Space, Facilities, and Operations. Lanham, MD: AltaMira Press, 2012.

Lord, Barry, and Gail Dexter Lord. The Manual of Museum Exhibitions. Walnut Creek, CA: AltaMira Press, 2001.
Chapter 7 - Programming Through Education Ahmed Al Monsouri, Alex Olevitch, Bec Ribeiro
7.0 Introduction
7.1 The Educational Crisis of Columbus Indiana
7.2 Client Letter
 7.2.1 Room Tests
 7.2.2 Square Footages
 7.2.3 Hourly Occupancy
7.3 Programmatic Flow Chart
7.4 Site Analysis
 7.4.1 Site 1
 7.4.2 Site 2
 7.4.3 Site 3
7.5 Trends in the Programming
 7.5.1 Circulation Variations
7.6 Final Bubble Diagram
7.7 Conclusion
An Education in Programming

After the research conducted on the demographic conditions of Columbus, we noticed a large number of imbalances in the city's present systems. The most prominent imbalance exists within the educational attainment of the residents. This imbalance is not for lack of formal educational institutions, but rather a lack of informal opportunities. Informal opportunities provide an excellent outlet for those already receiving a formal education to expand their outlook beyond the classroom. Additionally, they create a culture of education within the city by mixing those with higher educational attainment with those without. The idea is not to raise all those without higher education to the level of those with it, but rather to promote a culture of education whether or not it is pursued further. This chapter will examine the letter provided by the CCF and whether or not it can be adjusted to address the educational disparity of Columbus.
7.1 The Educational Crisis of Columbus Indiana
The graph and chart show that there is no lack of formal educational opportunity within the city, but that despite this existing infrastructure, Columbus suffers from educational polarity. It can safely be assumed that the majority of those who have only a high school degree or equivalent were raised in Columbus and remained there after high school. On the other hand, a further inference can be made that those with bachelor's degrees or higher were not raised in Columbus and were brought in by the promise of Cummins industry. It is the goal of this programmatic study to figure out a way to mix these two resident groups and create a better culture of educational respect and interest within the city.
Figure 7.1.2 shows a map plotting the schools in Columbus
Figure 7.1.1 shows a bar graph with the educational attainment in 2011
Education Program
7.2 Client Letter
Spatial (30,000-40,000 sq. ft.):
- Community room/meeting/education area: accommodates up to 150 people
- Exhibit
- Formal presentation
- Research/reading
- Hospitality (shop/café)
- Community education
- Back of house storage/operations (movement of the exhibits)
- Preservation area
- Administrative
- Clear point of entry (security)
- Innovative Industrial objects (8,000-10,000 sq. ft.)
- Columbus art and design (1,500-3,000 sq. ft.): housing work donated by the president of Cummins; may have space for a centerpiece
- Rotating exhibit design (1,500-2,500 sq. ft.): modest size relative to the rest of the building; spatial potential to house a variety of exhibit sizes/types
- Architectural archives/exhibits (3,500-5,000 sq. ft.): display space for models/drawings; a reading/research room that includes an architectural library, long-term storage space, and necessary administrative/conservation spaces
- Museum shop/café
- Must include: bag/coat check area, public restrooms, janitorial services, and MEP rooms (building code standards)
- Administrative/support spaces: flexible for the museum director, museum curator, archival librarian, education director, 6 docents, and several teachers

Functional:
- Back of house
- Functional elevator (passenger/objects)
- How does art correspond to other spaces?
- Accommodate a wide variety of exhibit sizes and types

Emotional:
- Architectural archives/exhibit: to enliven, reiterate, and contextualize these exhibits

Our response: With our primary focus of the museum being on education, we found it necessary to alter the suggested number of people accommodated in each space. Instead of the proposed 150 people per space, we increased it to accommodate 200 people. This will help increase all educational opportunities offered at the museum. Likewise, the CCF left an open-ended suggestion of having spaces for education, so we decided to designate open classrooms with movable partitions for a more flexible environment. This ended up being a large part of our design, so the classrooms are strategically placed next to interactive and more versatile exhibits. Additionally, we saw fit to add a larger classroom capable of holding up to 80 people for smaller events and lectures.
7.2.1 Room Tests

Community Gathering Space

We sized this space according to an event that requires seating for up to 200 people. While the CCF required up to 150, we chose to expand this in favor of an educational reception space. The kitchen is shared between this space and the cafe, capable of servicing both, hence the division in the tables. This space can be flexible in its programming, but square footages have been determined around the largest equipment requirements: tables and seats for all.

Figure 7.2.1: the proposed room test for the cafe (including kitchen) and common open area, 6,150 sqft. Noted dimensions: 54 ft, 30 ft, 30 ft, 54 ft, 84 ft; one side seats 144 people, the other seats 192 people, with the kitchen shared between them.
Museum Shop + Museum Entrance

Figure 7.2.2: a proposed room test for the gift shop, 496 sqft (16 ft x 31 ft).

The entrance lobby is a difficult area to do a room test for, as it is so often subject to the desired emotion of the spatial sequence. The lobbies of many museums are often larger open spaces, but per the requirements of a room test, we have found the minimum space to be roughly 300 gsf. We justify this extremely small footprint by noting that the gathering space tested on the previous page could serve as the main lobby circulation space, another scheme found in many contemporary museums.

Figure 7.2.3: a proposed room test for the entrance area, 1,610 sqft, comprising the lobby, bag/coat check, public restrooms, and janitorial services.
Educational Outreach Spaces

For this we used our initial gathering space as an indicator of "student body size". Given that there are 200 possible seats, we assume that this space will be filled in the event of an opening for student work within the Rotating Gallery. It is safe to assume that parties of 3 (one student and two parents) make up the audience, which leaves a total of about 65 students. Classroom sizes of 15 students, each with a desk of 2' x 3', plus open space for group activity and personal storage, result in the room seen below. There will be 5 of these rooms, enough for up to 65 students at a time. The presentation area can seat 80 and therefore could serve the students for any total group lecture, or be used for a community gathering more intimate than the larger gathering space.

Figure 7.2.4: a proposed room test for the classroom, 696 sqft (24 ft x 29 ft); x 5 rooms = 3,480 gsf
Figure 7.2.5: a proposed room test for the presentation area, 870 sqft (30 ft x 29 ft)
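The sizing arithmetic above can be sketched as a short Python check. The party size, classroom capacity, and room footprint are the assumptions stated in the text (note the text rounds the 66 resulting parties down to roughly 65 students):

```python
import math

GATHERING_SEATS = 200     # assumed event capacity of the gathering space
PARTY_SIZE = 3            # one student plus two parents per party
CLASSROOM_CAPACITY = 15   # students per classroom
CLASSROOM_SQFT = 24 * 29  # one classroom footprint in sq ft (696)

# Whole parties that fit in the gathering space -> student head count
students = GATHERING_SEATS // PARTY_SIZE               # 66

# Classrooms needed to seat them 15 at a time
classrooms = math.ceil(students / CLASSROOM_CAPACITY)  # 5

# Total classroom area
total_gsf = classrooms * CLASSROOM_SQFT                # 3480

print(students, classrooms, total_gsf)  # 66 5 3480
```

Working backward from the gathering-space capacity like this keeps the classroom count tied to a single driving assumption rather than an arbitrary room count.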
Administrative Facilities

Four lead administrators: Museum Director, Museum Curator, Archival Librarian, Education Director. Each of these positions requires an individual office; a 15' x 15' space should suffice for the needs of each.

The museum docents do not need individual offices, but rather individual storage space for their belongings while on the museum floor. We chose to include these "lockers" (roughly 36 sqft) in the break room. Also in this room is a set of tables and a countertop for food prep. Finally, a meeting room seating 18 people rounds out the administrative suite (Fig. 7.2.7).

Figure 7.2.6: a proposed room test for the administrative area; offices 15 ft x 15 ft, break room 480 sqft
Figure 7.2.7: a proposed room test for the meeting room; 520 sqft (40 ft x 13 ft), seats 18 people
7.2.2 Square Footages

Entrance - 1,610 gsf
- Lobby / Ticketing: 300 gsf
- Bag / Coat Check: 300 gsf
- Public Restrooms: 460 gsf
- Janitorial Services: 350 gsf

Exhibition Space - 15,000 gsf
- Innovative Industrial Objects: 9,000 gsf
- Art and Design Collection: 1,500 gsf
- Rotating Exhibit: 2,500 gsf
- Architectural Archives (display space, reading/research library, long-term storage, administrative spaces): 4,000 gsf

Educational Outreach - 4,350 gsf
- Classrooms: 3,480 gsf
- Larger gathering space: 870 gsf

Administrative Space - 2,800 gsf
- Offices: 900 gsf
- Break Room / Docent Space: 480 gsf
- Restrooms: 290 gsf
- Conference Room: 520 gsf
- Teachers office: 500 gsf

Back of House - 6,640 gsf
- Storage: 4,800 gsf
- Loading Dock: 800 gsf
- Service Elevator: ~500 gsf
- Security Booth: 540 gsf

Active Systems - 5,000 gsf

Museum Total - 35,400 gsf
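As a sanity check, the category subtotals above can be tallied against the client letter's 40,000 sq. ft. cap. This sketch assumes Active Systems is carried as its own 5,000 gsf line alongside the Back of House subtotal, which is how the stated museum total works out:

```python
# Category subtotals from the 7.2.2 square-footage list (gsf)
program = {
    "Entrance": 1_610,
    "Exhibition Space": 15_000,
    "Educational Outreach": 4_350,
    "Administrative Space": 2_800,
    "Back of House": 6_640,
    "Active Systems": 5_000,
}

total = sum(program.values())
limit = 40_000  # cap from the client letter

print(total)          # 35400 -> matches the stated Museum Total
print(limit - total)  # 4600 gsf of headroom under the cap
```

Keeping the tally in one place makes it easy to see how much headroom remains as individual room tests change.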
7.2.3 Building Hourly Occupancy
The hourly occupancy chart (spanning 12:00 AM through 11:00 PM) breaks down roughly as follows:
- Museum Director: 1, about 13 hours per day
- Museum Curator: 1, about 13 hours per day
- Docents: 5, about 11 hours per day
- Teachers: 6 to 8, over a 6-hour window
- Archival Librarian: 1, about 12 hours per day
- Education Director: 1, about 12 hours per day
- Janitorial Staff: 2 to 3, around the clock
- Architectural Archive Staff: 2, about 11 hours per day
- Visitors: 50 during the morning hours, up to 120 through the afternoon, tapering to 100
- Students: up to 65, over a 6-hour window

The student number is subject to change. Within the museum's program, there is capacity for a maximum of 65 students at any given time. In the event of an educational class or camp offered to the youth of Columbus, this number could fluctuate greatly; for the purpose of this chart, the maximum was chosen in order to design for the extremes. The hours offered are directly related to the availability of the city's youth, who typically attend school until 2-3 pm each day.

The visitor figure is highlighted due to its uncertainty. We've chosen to represent 120 guests at any given time in the afternoon; should an event be held in the museum's commons, this number could experience dramatic variation. The commons area is designed with a 200-person capacity in mind for a variety of events and receptions. However, because this is a very specific variable, we feel it is safe to assume an average, and in this case not highlight the most extreme scenario due to its great uncertainty.
7.3 Programmatic Flow Chart

Figure 7.4.1 diagrams the spatial relationships within the museum. Its major nodes:
- Entrance: Lobby/Ticketing; Bag/Coat Check; Museum Shop; Museum Cafe; Restrooms; Janitorial; Community Space; Storage
- Education: Classrooms/Studios; Lecture Hall; Teacher Space; Break Room/Docent Storage; Restrooms; Storage
- Exhibit Entrance: Innovative Objects; Rotating; Art/Design; Architectural Archives
- Administrative Offices
- Back of House: Storage; Elevator; Loading Dock

Figure 7.4.1 shows a flow chart with the spatial relationships within the museum
7.4 - Site Analysis

We've chosen to examine this program in three very different contextual conditions. Site 1 lends itself well to an educational program, as it is surrounded on three sides by residential infrastructure. Site 2 has a combination of commercial and residential, with good opportunity for the creation of public space within the surrounding parking lots. Site 3, the post office site, has little to no connection to the residential neighborhood and much greater connection to the business and commercial districts of downtown; however, it also has a considerable amount of open space nearby that could be developed into public plazas.

Map legend (Fig. 7.4.2): Site (1, 2, 3); Residential; Attractions/Businesses; Church; Parking Lot; Open Area/Playground. Streets shown: 8th Street, 7th Street, 5th Street, Franklin St., and Jackson St.

Figure 7.4.2 shows a color coded map signifying different programs within the city
200
7.4.1 Site Analysis 1

For this site, the layout of the programs is based on site-related information and how the circulation should respond. The entrance is located on the street with the most active traffic. It makes sense to immediately be able to access the café/shop or the exhibit spaces. Since the common area does not need to be consistently used and could be a part of the café, it is not necessary for it to be at the entrance. The exhibit space is located on the side of the street that could allow for natural light, if that were desired. The back of house is located at the corner of the elementary school and housing neighborhood. It is not necessary for this area to have adequate natural light, so this is an ideal location. The rotating exhibit and innovative industrial objects are on the second floor with the classrooms because it is more beneficial for the students to be surrounded by this. The classrooms and administrative area are here because they do not need to be accessed by all visitors, so it makes more sense to have them out of the way of the museum guests.
Figure 7.4.3 examines the site north of the visitor’s center and shows a proposed layout of the museum’s programs
7.4.2 Site Analysis 2
The entrance is located on the west side of the building, close to a parking lot and other public parking spaces for easy access. All the public amenities, like the cafe and the museum shop, are located on the west side of the building by the street for easy accessibility. Having them in close proximity to the street can bring in people and workers to stop and maybe grab breakfast or come on their lunch breaks, which in turn brings more life to the museum. The rotating exhibit is also located by the street and next to the shop because it changes more frequently relative to the other exhibits. This keeps the museum front constantly changing and more relevant, bringing people in. The common area faces an open parking lot located on the north side of the building. It has a private entrance that can be opened for a variety of events, as well as a good view towards the church. The back of house and storage spaces are all located on the south side of the building to keep them accessible through the open area. Trucks can easily drive up to the back of house and drop off art pieces, which can easily be transferred to the first and second floors through a large elevator. Close to the back-of-house elevator on the second floor are the innovative industrial objects. This exhibit takes the largest amount of space and will have interactive objects that encourage informal education. The classrooms are located right next to the innovative industrial objects exhibit, as both areas have strong educational purposes. They also face the residential areas, promoting a more family-oriented program.
Figure 7.4.4 examines the visitor's center site and shows a proposed layout of the museum's programs
7.4.3 Site Analysis 3

Site 3, the post office site, has dramatically different conditions than the other two. This site has no immediate connection to any residential neighborhood and is surrounded by either business or parking, with little exposure to the commercial establishments of downtown just a block south. That being the case, the locations of the more public programs of the museum, such as the gathering space, the cafe, the shop, and the entrance, all address the north-south axis connecting the business and commercial zones of downtown Columbus. The location of the entrance is a response to the open plaza of the bank on the opposing corner. The remainder of the city block is currently dedicated to parking and thus would be an ideal private entrance for personnel and loading/shipping.
Figure 7.4.5 examines the post office site and shows a proposed layout of the museum’s programs
7.5 Trends in Mapping

• The Art and Design Collection is always nearest the entrance, followed by the Architectural Archives.
• The classrooms are never directly accessed through the entrance but hold a central location with access to most of the rest of the program.
• The Cafe and Museum Shop are always side by side and on the exterior.
• The administrative path typically follows the inverse of the public path, working backwards through the circulation.

Figure 7.5.1: a map of the trends in the programs and the circulation (legend: open connection, flexible barrier, circulation zone)
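These adjacency trends read like rules that any candidate plan should satisfy. As a minimal sketch (not part of the original document), the rules can be encoded and checked programmatically; the room names and the `layout` adjacency graph below are hypothetical stand-ins for an actual floor plan.

```python
# Hypothetical adjacency graph: room -> set of directly connected rooms.
layout = {
    "entrance":   {"lobby"},
    "lobby":      {"entrance", "art_design", "cafe", "shop"},
    "art_design": {"lobby", "archives"},
    "archives":   {"art_design", "classrooms"},
    "cafe":       {"lobby", "shop"},
    "shop":       {"cafe", "lobby"},
    "classrooms": {"archives"},
}

def check_trends(plan):
    """Return a list of trend violations for a candidate plan."""
    issues = []
    # Trend: Art and Design sits nearest the entrance (adjacent to the lobby).
    if "art_design" not in plan["lobby"]:
        issues.append("Art & Design is not adjacent to the lobby")
    # Trend: classrooms are never directly accessed through the entrance.
    if "classrooms" in plan["entrance"]:
        issues.append("Classrooms open directly onto the entrance")
    # Trend: the cafe and museum shop sit side by side.
    if "shop" not in plan["cafe"]:
        issues.append("Cafe and shop are separated")
    return issues

print(check_trends(layout))  # [] -- an empty list means the plan follows the trends
```

A plan that routed the classrooms off the entrance, or split the cafe from the shop, would return the corresponding violation instead of an empty list.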
7.5.1 Circulation Variations
Figure 7.5.2: a series of different paths taken by four different visitors: a museum guest, an event attendee, a student, and a teacher/administrator
7.6 The Final Bubble Diagram - The Columbus Condition

The existing condition in Columbus is shaped by many factors, including the lack of school diversity, the minimal number of private schools, and low household incomes. A museum that bridges that disconnection in education is a good option because it would encourage people to come in, as opposed to pushing them away with a foreign design language. Therefore, our main focus was centering the program of the museum on educating people both formally and informally.
Figure 7.6.1 shows a bubble diagram of the existing condition in Columbus and the factors influencing it
Conclusion

As evidenced by the final page of this chapter, the education of a community is inherently linked to its prosperity. Given this reality, the museum has the opportunity to directly affect this system, promoting education through informal opportunity and programming. In order to achieve the potential positive effects, certain programmatic moves could be considered. The considerations are as follows:

• Enlargement of educational programs, capable of supporting larger numbers of students and visitors.
• Areas where students, educators, and the general public may interact and co-mingle.
• Centralization of educational programs, with the other programs acting as supports.
However, there are inherent dangers in promoting a strictly educational program. The program must be sensitive to those who do not wish education forced upon them; it should be approachable by all, regardless of educational attainment. It should not promote exclusivity, as higher education often does, but rather welcome people through its versatile programmatic offerings. Moves to avoid are as follows:

• Avoid creating a "school".
• Avoid excluding visitors from the other programs during educational sessions.
• Avoid a division of educational classes among visitors.
CHAPTER 7 - Programming Through Education: WORKS CITED
Ahmed Al Monsouri, Alex Olevitch, Bec Ribeiro
Development Concepts Inc. Downtown Columbus Strategic Planning . Strategic Plan. Columbus : City of Columbus, 2005.
Welcome To Columbus Indiana. 1 1 2013. 20 9 2013 <>.
Bartholomew County, Indiana . 10 1 2012. 1 10 2013 <. asp?scope_choice=a&county_changer=18005&button1=Get+Profile&id=2&page_path=Area+Profiles&path_ id=11&panel_number=1>.
Columbus Indiana Population and Demographics Resources. 1 January 2013. 10 September 2013 <http://columbusin.areaconnect.com/statistics.htm>.
Columbus, Indiana. 2012. 20 9 2013 <>.
Population in Columbus. 2013 <>.
Columbus, Indiana. 10 1 2013. 1 10 2013 <>.
CHAPTER 8: Contextual Programming
David Greco, Jackie Katcher, and Sarah Ward
8.0 Introduction
8.1 Site Selection
    8.1.1 Initial Site Options
    8.1.2 Second Phase of Selection
    8.1.3 Final Site Selection
8.2 Site Analysis
    8.2.1 Post Office Site
    8.2.2 Visitor's Center Site
8.3 Programming
    8.3.1 The Problem
    8.3.2 Recommended Solutions
    8.3.3 Parking Space Requirements
    8.3.4 Needs Assessment
    8.3.5 Museum Departments
    8.3.6 Room Test Diagrams
8.4 Conclusion + Design Recommendation
8.5 References + Works Cited
Introduction

Our programming section uses the context around the two sites to determine specific design rules. Since the program relies heavily on the site, we went through a detailed process of assessing the possible sites in and around Columbus's downtown. When picking sites, we looked at land use diagrams produced by the City of Columbus. In these documents, they analyzed the levels of use according to several factors. The main determinant of which sites were recommended for development was the amount of square footage on the plots that was already developed and whether or not those sites were being used to their full potential. The city of Columbus sees the under-use issue as an economic as well as a social issue. We add to this by recognizing that these sites create a discontinuity in the city. After visiting Columbus, we ultimately narrowed these underutilized sites down to two: the Visitor's Center site and the Post Office site. We feel that by selecting either one of these sites, the community of Columbus will gain a better connection between its different neighborhoods, corridors, and people.
8.1 Site Selection

8.1.1 Initial Site Options

Because we saw importance in exploring different site options than the one that was originally proposed, we went through an analytic process of choosing the best potential sites for the museum project. We first began this process by closely examining the Downtown Strategic Plan's map of redevelopment opportunities and looked at each site carefully before making our initial choices. These nine sites are highlighted in Fig. 1.1, with the ninth site being one that we added to the potential sites due to our analysis of what we felt the city needed and what this site could potentially be. Based on our initial understanding of the sites, we created a list of pros and cons for each site so that when we visited Columbus, we could walk around and analyze the sites at a human scale and look at the context of each site in depth. The pros and cons of each site highlighted in Fig. 1.1 are as follows:
Site 1
Pros: Open, green space; close to Noblitt Park
Cons: Close to outskirts of town; surrounded by residential neighborhoods; next to railroad

Site 2
Pros: Open, green space; interesting view from intersecting streets
Cons: Far away from the entrance point to the city; next to railroad

Site 3
Pros: Open parking lot space; opportunity to connect with Cummins
Cons: Would have to provide alternate parking; insignificant in terms of proximity to heart of downtown

Site 4
Pros: Room for landscape development; good opportunity to make a connection between industry and the natural environment
Cons: Uses Cummins' parking lot (but also gives them the opportunity to use a parking garage instead of a giant lot)

Site 5
Pros: Underdeveloped part of Columbus; downtown leaves lots of room for growth
Cons: Very small plot of land; would have to be more creative with interactive outdoor exhibits

Site 6
Pros: Underused parking lot; close proximity to main downtown
Cons: Located close to residential neighborhoods

Site 7
Pros: Open parking lot space (across from original site proposal)
Cons: Smaller than original site proposal

Site 8 (Original Site Proposal)
Pros: Close proximity to historical landmarks and the main downtown area; activates downtown area more
Cons: Doesn't necessarily connect overall urban fabric

Site 9
Pros: Connection to Mill Race Park, walking trail, river, and entry corridor into Columbus
Cons: Situated on a floodplain; two railroad tracks run through the site
Fig. 1.1 - The initial site options overlaid on top of the redevelopment opportunities map from the Downtown Strategic Plan.
8.1.2 Second Phase of Selection

After visiting Columbus and further analyzing each site, we were able to eliminate four of the initial site options, with the addition of one new site. The reasoning for eliminating sites #1 and #9 was that these sites are on the floodplain, and since Columbus has had issues with flooding in the past, the city does not allow new buildings to be built in those areas. We also decided to eliminate site #2 because we felt that the context and the site were not engaging enough for the museum project. The issue with using site #4 was that since it is Cummins' parking lot and Cummins is a private company, the site could not be used for a public museum, even though developing a parking garage on the site for them to use instead would have been a more efficient use of the space. The new site that was added to the site option list is the post office located across from Cummins Headquarters and the new parking garage in the downtown area. This site was added to the list because of its great potential to reactivate and expand the downtown ideals to other parts of the downtown that are lacking. It is important to be able to make this area of the downtown more engaging, especially since this area is close to the main entry corridor and one of the city's greatest revitalization projects, Mill Race Park. This project also has greater potential than the other sites do because there is opportunity for adaptive re-use instead of using up parking space that is vital to the downtown area right now. The post office is located right across from the new parking garage that was built recently, so there is already opportunity for parking, but of course, additional parking will still be necessary.
Fig. 1.2 - The second phase of selecting potential sites overlaid on top of the redevelopment opportunities map from the Downtown Strategic Plan.
The downside of using this site is the fact that the post office still uses the building, so in order to build on this site, it will be necessary to find a new home for the post office so that it will have its own place. The building is not well suited for the post office anyway and acts mostly like an empty shell which the post office just happens to inhabit. Another issue with the site is that the building hasn't been properly maintained in the past, leading to deterioration and general unattractiveness, giving the city reason to tear the building down. If one is to use this site, one can advocate that this building should stay because it was designed by a well-known architect, Kevin Roche, and later informed the design of the Cummins Headquarters, which was designed by the same architect. The building will need some work in terms of maintenance, but overall, it will be a more rewarding opportunity for design because you will be able to save a building from being demolished and, at the same time, help to reactivate the downtown district.
8.1.3 Final Site Selection

Due to the constraints of building a 1/16" scale site model, boundaries for which sites could be used needed to be set, which further reduced the site option list. The boundary for the site model is highlighted in the dashed box in Fig. 1.3. The final two sites that were chosen are the parking lot next to the Visitor Center and the library, and the post office site. Besides the constraints of the model, another reason for eliminating the other four sites was lot size and creating a better investment of land use. The original site proposal needed to be kept because of its rich cultural context, with all of the great historical landmarks surrounding it, and the opportunities to encourage more mixed use and walkability within the downtown. The cons of using this site are still the fact that this area won't necessarily help to connect the overall urban fabric and that it is a parking lot, which will use up space that is crucial for this downtown to thrive. By using up more of the downtown's parking space and building a significant public museum project on the site, this will likely create more traffic congestion within the area, which could make the area less safe for both pedestrians and drivers. The post office site was also kept because it is the only site that is not a parking lot and is a more appropriate lot size for the type of project that will be developed here. This site has plenty of potential to be a great redevelopment site for Columbus, as was mentioned before. Again, the cons of using this site are dealing with the lack of maintenance of the current building on the site and being able to relocate the post office onto a new site that will be better suited for it. Both sites will need a well-thought-out plan that will use outdoor public space to their advantage. This can best be done by integrating the landscape with the museum in a holistic manner so that it is cohesive with the rest of the downtown and the overall city.
Fig. 1.3 - The final two sites overlaid on top of the redevelopment opportunities map from the Downtown Strategic Plan with a dashed box highlighting the area that the site model will show.
8.2 Site Analysis
Fig. 2.1 - Post Office site in its context.
For each site chosen, we looked at the context of the street, the surrounding buildings, and how the entire site plays a role in public interactions. As part of programming, every street on the site should be treated as a singular component and should be developed with the intent to increase the connectivity of the city. By focusing on one side of the site at a time, we are able to make an educated assumption on how to integrate the museum into the site while maintaining the goals and values of the city. After evaluating each facade separately, the design can then focus on how two streets converge at each corner. Every street interlaces itself into the urban fabric, which creates a hierarchy in how the corner should be addressed. The focal point of the corner is centered on pedestrian movement, respecting the surrounding buildings and integrating with the landscape. We suggest addressing the sites based on which street will have a bigger impact and by addressing the needs of Columbus. The post office centers itself where the architecture, entertainment, and art/education corridors meet. By developing this site, we will be able to reactivate the surrounding area while providing connectivity to the surrounding city blocks. The visitor's center site sits in between historical architecture, and the museum design should respect the existing architecture of the neighborhood while designing to the needs of the community.
Fig. 2.2 - Visitor’s Center site in its context.
8.2.1 Post Office Site
NORTHWEST: NEEDS: To unify the transition down Brown Street. RECOMMENDATIONS: Use landscaping to make the connection between the two blocks.
NORTH: Currently, Cummins is right across the street. The north facade is mostly masonry and creates a large wall that blocks it from the entryway. NEEDS: To pull pedestrians to and from the entertainment corridor. RECOMMENDATIONS: Provide an outdoor space with a platform (a program or performance to engage people), a bench (a place for people to sit, relax, and enjoy), and a pathway (to lead people through).
WEST: Current: Heading north on Brown St. from 4th St. towards Cummins Inc., there is a nice tree buffer that separates the company headquarters from the parking, creating a tree canopy over the road. On Brown St., between 4th and 5th Street, there is no landscaping. This can seem uninviting. NEEDS: A buffer zone between the museum building and the road to cancel out the noise. An entrance to the loading dock and parking for administration. RECOMMENDATIONS: Provide plantings or a wall for privacy and noise control from the road. This should be based on how you choose to use the open space behind the building. Have the entrance to parking and the loading dock on Brown St. Make it closer to 5th Street to accommodate the pedestrian area on the southern end of the site. SOUTHWEST: There are standards for preserving landscape design like the Miller House. NEEDS: To be pedestrian friendly and direct traffic towards the entertainment corridor. To tie into whatever accommodations you are making on Brown St. and 4th St. RECOMMENDATIONS: Keep the corner open. Create a diagonal path leading to the building.
NORTHEAST: NEEDS: To unify the connection between the public building and Cummins at the corner. Pull pedestrians’ eye to the Post Office entrance. To supply an alternative route to the entertainment corridor and architectural corridor. RECOMMENDATIONS: Give clues as to where the entrance is instead of hiding it.
EAST: There is currently a colonnade in front of the building. Cummins Inc. was built 13 years after the Post Office and continues the movement of colonnades down the street. Unlike the lush green columns at Cummins that diffuse light, the Post Office is very dark and stays true to its masonry form. This seems very imposing and uninviting. NEEDS: A front entrance that is more inviting to pedestrians. RECOMMENDATIONS: Create a front entrance that is inviting and opens up to the streetscape instead of tucking itself into the shadows of the colonnades. Try to get more movement to and from the corner of Jackson and 4th Street.
SOUTH: Currently there is an enclosed parking lot on the south side of the building as you head towards the entertainment corridor. It is not very pedestrian friendly and lacks a lush landscape. NEEDS: Accommodate a higher amount of pedestrian traffic. RECOMMENDATIONS: Create a sidewalk that is larger than normal to help encourage pedestrian movement through the site.
SOUTHEAST: NEEDS: To pull pedestrians to and from the entertainment corridor. RECOMMENDATIONS: Provide an outdoor space with a platform (a program or performance to engage people), a bench (a place for people to sit, relax, and enjoy), and a pathway (to lead people through).
Fig. 2.4 - View From North West
Fig. 2.5 - View From North East
Fig. 2.6 - View From South West
Fig. 2.7 - View From South East
Fig. 2.3 - Map showing analysis of the Post Office site and how its facades relate to its surrounding corridors (the Architecture, Entertainment, and Arts and Education corridors).
8.2.2 Visitor's Center Site

NORTHWEST: NEEDS: Pedestrian movement. RECOMMENDATIONS: If continuing the grid system, create a buffer zone between the museum and the parking lot to provide enough room for pedestrian movement. Design a place to gather or pass by on the way to the distant terrace.
NORTH: CURRENT: The parking lot extends north and serves as parking for the First Presbyterian Church, the Bartholomew County Library, the First Christian Church, and the Visitors Center. The parking lot causes 6th Street to break from Columbus' grid system. NEEDS: Connect the northeast to Washington St. by making the area around the below-ground service ramp more pedestrian friendly. Connect Franklin St. and Lafayette Ave. by extending 6th Street. RECOMMENDATIONS: Work on the relationship between the below-ground service ramp and Washington St. in order to encourage more foot traffic.
EAST: Current: Bartholomew County Library sits east of the site adjacent to its lower grade entrance and below grade service ramp. NEEDS: Must respect the existing historic building. Must be pedestrian friendly so people can continue to use the other half of the existing parking lot so they will still have access to the library and First Christian church. RECOMMENDATIONS: Design most of the façade to be brick and integrate a glass curtain wall on the front so light may enter into the library. It is recommended to design a building with minimal levels in order to avoid interfering with the library’s quality of light.
WEST: Current: The parking lot is adjacent to Franklin St. There is an entrance to the parking lot that passes through to 6th Street. One block east on 6th Street is Washington St. NEEDS: Create a landscape buffer while trying to build close to the street in order to foster continuity within the urban fabric. RECOMMENDATIONS: Propose a main entrance to the museum off of Franklin St.
SOUTHWEST: NEEDS: Respect the Visitors Center. RECOMMENDATIONS: Connect the museum to the Visitors Center or create an intimate plaza space in-between so people can gather or pass by to the patio/ pavilion behind.
NORTHEAST: Connect the northeast to Washington St. by making the area around the below-ground service ramp more pedestrian friendly. NEEDS: Provide a pathway from parking to the library entrance. RECOMMENDATIONS: Provide landscaping along the pathways to hide the entrance ramp to the library.
SOUTH: There is a little patio area behind the visitor center for people to sit in the shade of a gridded landscape, which is raised above ground. NEEDS: Provide connectivity between the museum, the Visitor Center, and the library. RECOMMENDATIONS: Remove the gridded pattern of trees that are raised 2 ft off the ground in a planting bed. Provide landscaping that interacts with the terrace as well as a gathering space outside the library.
SOUTHEAST: NEEDS: To be pedestrian friendly. RECOMMENDATIONS: The spot has potential for congestion. Keep this open to pedestrian traffic flow from the parking lot and the Visitors Center. Provide an alternative entrance to the museum from this end.
Fig. 2.9 - View from center of site.
Fig. 2.10 - View From South West corner of site.
Fig. 2.11 - View From North East.
Fig. 2.12 - View From North West.
Fig. 2.13 - View From South East.
Fig. 2.8 - Map showing analysis of the Visitor's Center site and how its facades relate to its surrounding corridors.
8.3 Programming
8.3.1 The Problem

Columbus has a scattered urban fabric. This means that the city has a problem with connectivity. The city also has a problem with the amount of open, usable public space compared to large single-story parking lots.
8.3.2 Recommended Solutions

We suggest incorporating plaza space by creating voids or set-backs on the chosen site. We also suggest trying to create efficient parking solutions in order to maximize usable public space.
8.3.3 Parking Space Requirements

Museum department administrators: 5-10 spaces
Museum workers: 10-25 spaces
Visitors: 30-40 spaces

The museum department administrators and workers would probably use parking spaces more often than the visitors. This is due to the number of people who would be arriving by school or charter buses. If choosing the Post Office site, there is a large possibility that out-of-town tourists would stay at the hotel behind the site. This would mean that a large number of people would walk or carpool. Most visitors would not be traveling alone; therefore, even if there are 200 visitors a day, there may only be 50 cars. This is also a very high estimate of the number of spaces needed, because the visitors will not all be coming at the same time of day. This means that at any given time, cars would be coming and going, which keeps the number of spaces needed at a minimum. This amount of parking would probably not warrant an entire public lot. There may need to be a small lot for workers, though. This lot could easily be accommodated on either site. If parking is already a significant problem around the site, a parking ramp or garage should be proposed.
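The arithmetic behind these ranges can be made explicit. In the back-of-envelope sketch below (not from the original document), the 200 visitors/day and ~50 cars figures come from the text, while the persons-per-car and peak-overlap values are illustrative assumptions rather than survey data.

```python
import math

visitors_per_day = 200   # from the text
persons_per_car = 4.0    # assumption: carpooling plus bus arrivals (matches ~50 cars/day)
peak_overlap = 0.7       # assumption: at most 70% of a day's cars are on site at once

# Cars generated over a full day, then the share present at the busiest moment.
cars_per_day = math.ceil(visitors_per_day / persons_per_car)
visitor_spaces = math.ceil(cars_per_day * peak_overlap)

print(cars_per_day)    # 50
print(visitor_spaces)  # 35 -- inside the 30-40 visitor range given above
```

Because arrivals are staggered, the space count stays below the daily car count; a dedicated garage would only be warranted if this peak figure exceeded nearby capacity.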
Fig 3.1 Parking Garage Adjacent to Post Office Site
8.3.4 Needs Assessment
A needs assessment was used to analyze the wants and needs of the client and formulate them into a set of standards that should be followed. The qualitative list expresses the main concerns dealing with the outside of the building and the level of human comfort and desires. The quantitative list deals with the pragmatic concerns that are vital to its function at the most basic level of needs, and ultimately, to the overall success of the building.
Qualitative
• proper indoor and outdoor public gathering space
• pedestrian movement along the outskirts of the site
• proximity to transportation
• work with traffic patterns
• adequate parking

Quantitative
FRONT OF HOUSE - PUBLIC
• Lobby/ticketing/coats: 1,800 NSF
• Gathering space: 1,000 NSF
• Museum Shop: 900 NSF
• Cafe: 1,800 NSF
• Meeting Room/Education: 2,700 NSF
EXHIBIT SPACE
• Innovative Industrial Objects: 8,000-10,000 GSF
• Columbus Art and Design: 1,500-3,000 GSF
• Rotating Exhibit: 1,500-2,500 GSF
• Architectural Archives/Exhibit: 3,500-5,000 GSF
BACK OF HOUSE
• Administrative offices: 1,800 NSF
• Preservation/conservation areas: 900 NSF
• Storage: 900 NSF
• Loading Dock: 1,800 NSF
CIRCULATION/SERVICE
• Vertical circulation: 1,000 NSF
• Toilets: 1,000 NSF
• Mechanical/Electrical rooms: 900 NSF
• Circulation/gross-up: 5,000 NSF
TOTAL: 36,000 GSF

Fig 3.2 - Bubble diagram that shows the proximity of rooms based on programmatic needs.
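As a quick sanity check (not part of the original program document), the minimum values in the quantitative list above can be summed to confirm they match the stated 36,000 GSF total; the dictionary keys below are just shorthand for the rooms listed.

```python
# Minimum program areas in square feet, taken from the quantitative list above
# (range entries use their low end).
areas = {
    "lobby/ticketing/coats": 1800,
    "gathering space": 1000,
    "museum shop": 900,
    "cafe": 1800,
    "meeting room/education": 2700,
    "innovative industrial objects": 8000,  # low end of 8,000-10,000 GSF
    "columbus art and design": 1500,        # low end of 1,500-3,000 GSF
    "rotating exhibit": 1500,               # low end of 1,500-2,500 GSF
    "architectural archives": 3500,         # low end of 3,500-5,000 GSF
    "administrative offices": 1800,
    "preservation/conservation": 900,
    "storage": 900,
    "loading dock": 1800,
    "vertical circulation": 1000,
    "toilets": 1000,
    "mechanical/electrical": 900,
    "circulation/gross-up": 5000,
}

total = sum(areas.values())
print(total)  # 36000 -- matches the stated 36,000 GSF program total
```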
8.3.5 Museum Departments
Museums have many departments that must work together in order to accomplish their common goal of serving the community. The hierarchy between the directors of each department keeps the flow of communication moving. Due to this, the program should directly relate not only to how the visitors circulate, but also to how the departments are connected. There is also an important differentiation between the directors who can do most of their work in their offices and the directors who spend most of their day moving around the museum. There should be a central place where the employees converge at least once a day. This will help communication function on both a formal and an informal level.
Fig 3.3 - Flow diagram showing the relationships between different department coordinators.
The Movement of Positions and Office Proximities
The way that the different department directors and workers move should directly inform the adjacency diagrams in programming. The figure on the left shows that the Museum Director, the Director of Finance, the Events Coordinator, and the Education Coordinator should have offices that are similar to each other in terms of type and proximity. The latter three directors function underneath the Museum Director; therefore, they should have smaller offices around his or her office. This will make it easy for them to communicate and will create an opportunity for the more static offices to be located in a shared wing of the building. The office spaces for the more mobile directors and workers are dependent on where their particular programs are located. In other words, the program locations must be determined prior to the office locations.
Museum Technician Cafe/Shop Manager
Fig 3.4 - Movement abstraction representing departments as various shapes. The smaller shapes across the diagram show movement from the main office of the director.
Arch 401 - Fall 2013
225
8.3.6 Room Test Diagrams Post Office Site
Lobby/ Ticketing
1800 SF
Gathering 1000 SF Museum Shop 900 SF Cafe 1800 SF Meeting Room
2700 SF
Exhibit Space
8000 SF
Columbus Art and Design
1500 SF
Rotating Exhibit
1500 SF
Architectural Archives
3500 SF
Administrative Office
1800 SF
Preservation/ Conservation
Front of House
Exhibit
900 SF
Storage 900 SF Loading Dock
1800 SF
Vertical Circulation
1000 SF
Back of House
Toliets 1000 SF Fig 3.5 - Programming of room spaces- Option One.
Mechanical/Electrical 900 SF Circulation
226
3600 SF
General
Colombus Museum of Industrial Objects
Trends and Forecasts
Visitor’s Center Site
Fig 3.6 - Programming of room spaces- Option Two.
Fig. 3.7 - Programming of room spaces- Option Three.
*Note: Both sites were space-tested for minimum vertical square footage. By creating more floors, the footprint of the building would be smaller, therefore allowing more open public space. As mentioned earlier, creating public space is crucial to the success of the downtown area.
Arch 401 - Fall 2013
227
Conclusion The best way to create continuity between the different areas of Columbus is to tie them together with a cultural center. By selecting underutilized sites, we can reinvigorate neighborhoods, Downtown Columbus, and overall, the city. We recommend selecting either the Visitor Center site or the Post Office Site. After setting this framework, we suggest analyzing the surrounding corridors and determining how you want to address the site. In addition, we recommend treating the different cardinal directions of the site as separate entities. By using our programming grids which explain the current use of the site, the needs of the each direction, and the recommendations we have, teams will have the tools to improve the area. The most crucial aspect of new development on either one of the sites is landscape. Much of the downtown area is lacking pedestrian friendly landscape, proper parking areas, and public open space. By addressing these needs and creating open spaces, the city will improve in connectivity, and as a result, will improve their health, their social functioning, and general well-being.
Colombus Museum of Industrial Objects
Trends and Forecasts
CHAPTER 8: WORKS CITED David Greco, Jackie Katcher, and Sarah Ward
“Downtown Columbus: Downtown Strategic Plan.� City of Columbus, Indiana. N.p., August 2002. Web. 9 September 2013. <>. Todd Williams Architects. N.p., n.d. Web. 30 Sept. 2013. <. com/parking%20garage%20columbus%20indiana.JPG>.
*Note: All other images, diagrams, and content are original work by the authors. Arch 401 - Fall 2013
229
CHAPTER 9: Programming with Code Yan Chen, Jessalyn Lafrenz, Deep Shrestha
9.1 Introduction 228 9.2 Museum Lighting Conditions 9.2.1 Hidden Source Lighting 9.2.2 Artificial Lighting 9.2.3 Primarily Natural Lighting 9.2.4 Natural Lighting for User Experience
229
9.3 Museum Temperature and Humidity 9.3.1 Mixed Gallery 9.3.2 Fully Encased Displays 9.3.3 Exposed Systems
233
9.4 Accessible Viewing 9.4.1 Displayed Objects 9.4.2 Seating Spaces
236
9.5 Hierarchy
238
9.6 Procession
239
9.7 Adjacencies
240
9.8 Programming Specifics
241
9.9 Conclusion + Design Recommendation
242
9.10 References + Works Cited
243 Columbus Museum of Industrial Objects
Programming Through Code
Introduction This section breaks down three areas of code guidelines into design suggestion specific to museums: accessibility considerations, lighting considerations, and temperature and humidity considerations. Through analysis of existing spaces and application of the principles set forth by the code, design solutions are shown to give a basis for problem solving in the design process. Each part is broken down into components to address each design issue with specificity. These ideas are meant to be design recommendations and guidelines to clarify and put code into design practice. Programming in this section addresses spatial qualities as well as specific technical qualities of each sub program within the museum. These suggestions focus heavily on funcional practicalities of space to create efficiency in layout and form. Though code is a highly specific measure of spatial concerns and restrictions, this section focuses on interpretations of these codes to fit the needs of the client for the Columbus Museum of Industrial Innovation.
Arch 401 - Fall 2013
9.2 Museum Lighting Conditions
9.2.1 Hidden Source Lighting Indirect or direct lighting can be hidden from the viewer by placing a drop down ceiling in between. This method can allow the art work to remain highlighted and draw attention to the piece.
Fig 9.2.1 Hidden Source Lighting Effect
229
Columbus Museum of Industrial Objects
Programming Through Code
9.2.2 Artificial Lighting Pictured here is a similar setting, light travels from above and is carefully directed towards the western wall. Notice the painting on the northern wall is left untouched. This could allow for a different method of lighting to be implimented.
Fig 9.2.2 Artificial Lighting Effect
Arch 401 - Fall 2013
230
9.2.3 Primarily Natural Lighting In this scenario, the lighting engulfs both the painting and the user, thus creating a space within a space. This method can be used in different ways to isolate the viewer and the object.
Fig 9.2.3 Primarily Natural Lighting Effect
231
Columbus Museum of Industrial Objects
Programming Through Code
9.2.4 Natural Lighting for User Experience Lighting in this setting is used to highlight and illuminate a hallway and corridor, notice how the wall adjcent to the stairwell reflects the light cast from above.
Fig 9.2.4 Natural Lighting for User Experience
Arch 401 - Fall 2013
232
9.3 Museum Temperature + Humidity 9.3.1 Mixed Gallery All objects involved need a moderate degree of temperature and humidity control to prevent deterioration and damage to the curated objects. Breakdown: Natural lighting must be diffused to help limit heat gain and cooling load for the space. For constant air flow and control, there are air intakes and returns regularly throughout the space and are located at different levels to prevent stagnation- this is key for human comfort as well as protection of all of the displays. There is a completely enclosed display case in the lower left of the image. This is necessary for high levels of contaminate, temperature, and exposure control. Depending on the subject matter in the space, it may be necessary to have this variety of display to keep the narrative cohesive throughout the museum as a whole.
Fig 9.3.1 Mixed gallery distribution. Red indicates passive heat sources and blue indicates controlled, isolated environments
233
Colombus Museum of Industrial Objects
chapter title
9.3.2 Fully Encased Displays Total environmental control for user and work separately. This is ideal for highly sensitive and delicate works that are most susceptible to affects through changes in humidity and temperature levels. This however, requires more air handling systems as opposed to other systems that integrate human comfort and display requirements onto the load for one. This is a highly effective system but it must be noted that entire displays encased in glass can limit user interaction and viewing. Direct sunlight is a large factor in controlling temperatures within this spaces as it can increase the cooling loads for both active systems rather than just one, so heat gain possibilities should be considered very carefully.
Fig 9.3.2 Fully encased displays. Red indicates passive heat sources and blue indicates controlled, isolated environments Arch 401 - Fall 2013
234
9.3.3 Exposed Systems This is a solution for a set of objects that do not have a need for highly controlled environments. The benefits to an exposed ceiling are easy maintenance of systems, heat collection (to prevent heat from settling to the level of use), and a certain aesthetic (depending on the display). Negatives to a fully exposed system include: noise pollution within a space, direct affect of systems onto displayed objects (in opposition to fully concealed systems), potential difficulty of full environmental control within a space. Temperature could become uneven, with pockets of hot and cold space that can detract from the viewing experience as well as the integrity of the works displayed.
Fig 9.3.3 Exposed systems. Red indicates trapped heat from usage and mechanical system output
235
Colombus Museum of Industrial Objects
Programming Through Code
9.4 Accessible Viewing
9.4.1 Displayed Objects Spaces need to account for peoples emotional and physical needs. Everyone has right to see and learn things within the museum and those accommodations need to be made to allow all users to enjoy and experience the space and objects it displays. Wheelchair users have lower eye level, so the height of exibihition cases and paintings have to be lower but also at a comfortable height for people who will be standing to view the exhibits. This in turn allows the design to be universally used and understood.
Fig 9.4.1 Diagram of appropriate levels for optimal viewing by all users
Arch 401 - Fall 2013
236
33x48in single wheelchair space
9.4.2 Seating Spaces.
66x48in back or front row position for two wheelchairs
60x48in midpoint position for two wheelchairs
66x48In
Spaces for people using wheelchairs must be an integral part of seating plan and must be near fixed seating.
aisle width must allow passage of wheelchair uers; fire codes should be comsulted to determine required width
Fig 9.4.2 Varied seating for handicap accessibility
237
Columbus Museum of Industrial Objects
Programming Through Code
Lobby/Entry Sequence
Gallery + Display Spaces
Support and Service Spaces
Industrial Innovation
Offices
Cafe
Rotating Displays
Classroom + Community
Gift Shop
Art + Design
Delivery + Workshop
Curated Archives
Janitorial + Mechanical
Ticket + Entry Station
Coat Room
Fig. 9.5.1 Heirarchy of space from the visitor’s perspective
Arch 401 - Fall 2013
9.5 Hierarchy Program is partially defined by the need for space to be organized by heirarchical sequence. The entry sequence must be captivating to entice and encourage user interaction. Without a captivating and user friendly entry sequence, it could hinder business transactions and the success of the museum as a whole. The entrance must spark interest in the museum while still providing functional and transactional programmatic needs. The gallery spaces need to be organized in such a way that it promotes interaction through and between different exhibits. Breaks in display flow are not suggested, as it is ideal to have a cohesive cognitive experience. Wayfinding is crucial in exhibits and museums, so limited breaks in flow are important to a successful user interaction The spaces that are highlighted in red indicate restricted customer access and are considered “back of the house” functions as they are not often seen by the visitor. These spaces are for the museum staff as support spaces. The community room and class rooms are exceptions to this standard. Those must be rented and have approved use through the museum staff. However these are crucial to community involvement as well as a revenue generator for a museum. These are arranged in a heirarchy of access rather than procession, which will be discussed later in the chapter.
238
Entry
9.6 Procession
Cafe
Coat Check
This is a very distilled version of how spaces could be linearly arranged to create flow through a museum scheme. Each component is broken down into how they would link together to create the most easily navigated plan. The red areas are again, restricted access but this shows how they should be directly related to each of the spaces, not necessarily separate. The tendency is to remove the back of hosue spaces and administratives from the flow of use completely, giving them their own separate wing when in fact, each of these components must be interrelated.
Lobby/Atrium
Gallery
Gift Shop
Gallery
Back of House Support Spaces
Admin/Support Spaces
Gallery
Gallery
Fig. 9.6.1 Procession of space
239
Columbus Museum of Industrial Objects
Programming Through Code
Gallery
Atrium
Vertical Circulation
Second Floor
9.7 Adjacencies
Gallery
Classroom/ Community Space
The adjacency diagram to the right shows relationship of scale and proximity of space. These show the max amount galleries should be spread apart. This puts slightly more strain on HVAC systems but with stacked plans, the zones can still be limited. This allows for high clarity in wayfinding and maximum relationship between administrative spaces and their service areas. This is based off of the standard set forth in the Code Precedent chapter and the programming principles outlined throughout this section. This is a generic adjacency model to illustrate ideal relationships between spaces.
Gallery Security
Coat Room
Lobby/Atrium
Vertical Circulation
Ticket/ Welcome Center
Cafe
First Floor
Gift Shop
Gallery
Fig. 9.7.1 Adjacency of program based on code efficiencies Arch 401 - Fall 2013
240
9.8 Programming Specifics Based on client needs and code requirements of minimum allowable spaces, this is a proposed plan of Gross Square Footage breakdowns. The overall breakdown of studied museums averages a space allocation in four categories: 10% Circulation 30% Back of House Functions 30% Administrative and Community Support Spaces 30% Gallery and Display use These averages are based on a number of precedents and functional considerations (all breakdowns can vary but for the purposes of initial programming, this model works well). With this in mind, all spaces must be used to their maximum potential, and efficiency is encouraged to increase the amount of revenue generating space for the museum.
Room Requirements Ticket Purchase Desk/Area Bag and Coat Check Restrooms Mechanical Rooms Janitorial Rooms Loading Dock Workshop Art Storage Museum Gift Shop Museum Café Administrative Offices Curator Offices Community Room Security Rotating Exhibit Space Curated Archives Columbus Art + Design Innovative Industrial Objects
Requests by Client Formal and Distinct Must Have Must Have Support 30,000-‐40,000 gsf Support 30,000-‐40,000 gsf Large enough for truck access *Added program *Added program Small, not focus Small, not focus Enough for 20 staff Enough for 2 staff 150 minimum capacity *Added program 1,500-‐2,500 gsf 3,500-‐5,000 gsf 1,500-‐3,000 gsf 8,000-‐10,000 gsf
Room Test 1,500 gsf 200 gsf
Criteria Lobby 60 lockers
12,000 gsf
30% total gsf
2,000 gsf 500 gsf 2,000 gsf 500 gsf 1,000 gsf 1,050 gsf 200 gsf 2,500 gsf 5,000 gsf 3,000 gsf 10,000 gsf
% not displayed works small retail Kitchen + Dining Flexible Desks Plus Workspace 7 sf/person back of house Max Possible Max Possible Max Possible Max Possible
Fig 9.8.1 Approximate square footage recommendations based on precedent and code analysis
241
Columbus Museum of Industrial Objects
Programming Through Code
Arch 401 - Fall 2013
242
Conclusion Programming through code considerations is highly important in the way it affects a volume of space. The chapter is meant to reflect how spatial systems should inform planning and adjacencies in plan. Each section interprets code requirements in the most basic format to create a basis for efficient and functional museum design. Design Recommendations:
Maximize spatial efficiency through careful placement of necessary adjacent systems
Limit the need for sizeable back of house systems through careful planning and design of gallery spaces.
Account first and foremost for the needs of the displayed work to protect the collection. Stem all other needs for comfort and spatial requirements to enhance the display spaces.
Remember the need to create a space to conduct successful business.
Make the experience one a customer would want to repeat.
Programming Through Code
CHAPTER 10: Climatactic Programming Meghan Bouska, Dane Buchholz, Heather Wailes
10.1 Landscape Scheme 10.2 Temperature Scheme 10.3 Water Scheme 10.4 Solar Lighting Scheme 10.5 Conclusion
Colombus Museum of Industrial Objects
Arch 401 - Fall 2013
Introduction Thinking about programming through a climactic perspective led us to developing 4 unique, individual schemes. Each scheme focuses on a specific aspect of the climate in Columbus, Indiana: landscape, temperature, water, and solar. We developed different layouts, room connection diagrams, and sample floor plans for each scheme.
Arch 401 - Fall 2013
Chaper 10: Climatic Programming 10.1 Landscape Scheme
Community Administration
Entry
Back of House Shop/Cafe A&D Innovation Arch Archive Rotating
This diagram shows the connections between rooms for a linear layout of the landscaping scheme.
245
Fig.10.1.1 Landscape Scheme- Linear
Colombus Museum of Industrial Objects
Climatic Programming
Community Administration Back of House Shop/Cafe
Entry A&D Innovation Arch Archive Rotating Fig.10.1.2 Landscape Scheme- Circular
This diagram shows the connections between rooms for a more circular layout of the landscaping scheme.
Arch 401 - Fall 2013
246
Landscape Community
Entry
Innovative
Art & Design
Shop/Cafe
Admin
Archives
Rotating
Back of House
Fig.10.1.3 Landscape Scheme- Bubble Diagram
This diagram is a spatially proportioned bubble diagram of our recommended landscaping scheme. (The green lines represent outdoor circulation options.)
247
Colombus Museum of Industrial Objects
Climatic Programming
Entry
Community
Innovative
Art & Design Admin
Shop/Cafe
Archives
Rotating
Back of House
Landscape program idea consists of an outter shell design with an inner shell. This idea encourages the separation of spaces with an outside space in between. This gives the guests an oportunity to be inside or outside. The indoor and outdoor spaces also encourages guests to enjoy what the landscape has to offer. Plants that are native to the area would be used in this area for guests to enjoy and learn about the importance of these plants in their environment. This programed spaces would allow each room to have acess to both interior and exterior spaces.
Fig.10.1.4 Landscape Scheme- Plan
Arch 401 - Fall 2013
248
10.2 Temperature Scheme
Community Administration
Entry
Back of House Shop/Cafe A&D Innovation Arch Archive Rotating Fig.10.2.1 Temperature Scheme- Linear
This diagram shows the room connections for a linear layout of the temperature scheme.
249
Colombus Museum of Industrial Objects
Climatic Programming
Community Shop/Cafe
Entry
Administration A&D Innovation Arch Archive Rotating
This diagram shows the room connections for a more circular layout of the temperature scheme.
Arch 401 - Fall 2013
Fig.10.2.2 Temperature Scheme- Circular
250
Rotating
Temperature
Entry
Art & Design
Innovative
Admin
Archives
Shop/Cafe
Community
Fig.10.2.3 Temperature Scheme- Bubble Diagram
This diagram is a proportioned and organized bubble diagram focusing on thermal issues.
251
Colombus Museum of Industrial Objects
Climatic Programming
Admin
Rotating
Art & Design
Entry
Innovative Back of House
Archives
Shop/Cafe
Community
The larger gallery spaces are centrally located in this prgramming diagram. This is because they are the largest rooms within the building and require more heating and cooling. With the rooms centrally located, it is easier to heat and cool the rooms because the outer rooms keep the heat in the center space. The administration portion of the building along with the rotating exhibit are on the north end of the building to protect from too much sun lighting during the day. The community space is located on the southern end of the building to incorporate more sun lighting throughout the day.
Fig.10.2.4 Temperature Scheme- Plan
Arch 401 - Fall 2013
252
10.3 Water Scheme
Community
Entry
Administration Back of House Shop/Cafe A&D Innovation Arch Archive Rotating Fig.10.3.1 Water Scheme- Linear
This diagram shows the room connections for a linear layout of the water scheme.
253
Colombus Museum of Industrial Objects
Climatic Programming
Entry
Community Shop/Cafe
Innovation A&D Arch Archive Rotating Back of House Administration Fig.10.3.2 Water Scheme- Circular
This diagram shows the room connections for a more circular layout of the water scheme.
Arch 401 - Fall 2013
254
Water Collection
Community Innovative Entry
Art & Design
Archives
Shop/Cafe
Rotating
Back of House
Admin
Fig.10.3.3 Water Scheme- Bubble Diagram
This diagram shows the proportions and sectional layout of the water collection based scheme.
255
Colombus Museum of Industrial Objects
Climatic Programming
Community
2nd Floor
Entry
Shop/Cafe Innovative Art & Design Archives Rotating Back of House
1st Floor
Admin
The water program for the building would entail a large, enlongated building with a sloped roof from one end to the other. The idea behind this is to encorporate the collection of rain water to help with sustainability aspects within the building
Fig.10.3.4 Water Scheme- Plan
Arch 401 - Fall 2013
256
10.4 Solar Lighting Scheme
Community Administration
Entry
Back of House Shop/Cafe A&D Innovation Arch Archive Rotating Fig.10.4.1 Solar Lighting Scheme- Linear
This diagram shows the connections between rooms for a linear layout emphasizing solar lighting.
257
Colombus Museum of Industrial Objects
Climatic Programming
Community Administration Back of House Shop/Cafe
Entry A&D Innovation Arch Archive Rotating Fig.10.4.2 Solar Lighting Scheme- Circular
This diagram shows the connections between rooms for a circular layout of the solar lighting scheme.
Arch 401 - Fall 2013
258
Solar Innovative
Archives
Back of House Art & Design
Rotating
Entry
Admin
Shop/Cafe
Community
Fig.10.4.3 Solar Lighting Scheme- Bubble Diagram
This diagram is a spatially organized bubble diagram of our recommended solar lighting scheme.
259
Colombus Museum of Industrial Objects
Climatic Programming
Innovative
Archives
2nd Floor
Entry
Art & Design
Shop/Cafe
Rotating
Community
Back of House
Admin
The Solar Lighting Program would involve two floors to accommodate for each gallery space. Innovative arts and Architectural Archives would be located on the second floor so that they can absorb natural sunlight hitting the building throughout the entire day. The A & D gallery along with the Rotating exhibit would be located below. This is because there may be large sculptures and other art work that do not require as much sunlight; or there may be work that could be easily damaged if exposed to natural lighting. The community space would be located on the southern end of the building to absorb as much daylighting as possible.
1st Floor Fig.10.4.4 Solar Lighting Scheme- Linear
Arch 401 - Fall 2013
260
10.5 Client Square Footage Requirements
30,000 - 40,000 Square Feet Total ____________________________ 9,000 2,000 2,000 4,000
| | | |
Invention Through Design Art & Design Rotating Exhibit Architectural Archives
1,000 - 2,500 | Community Room (A3 occupancy = 7sf * 150 people without tables or 15sf * 150 people with tables) 2,500 | Museum Shop 1,000 | Cafe 2,000 | Administration
20,000 +/- Square Feet
15,500 +/- Square Feet
10,000 | Back of House/Storage
261
Colombus Museum of Industrial Objects
Climatic Programming
Arch 401 - Fall 2013
262
Conclusion Overall, we feel that the landscape scheme we presented is the most effective. It not only incorporates landscape around the building, but also within the building. The room layout resembles both the solar scheme and the temperature schemes with the galleries being the interior portions. Water collection could still be easily applied to this scheme as well. The layout also links and separates spaces for good circulation and security. -incorporate landscape within the building -plan layout for solar scheme (daylit spaces on outside) -incorporate a water collection area of 16,000+ sf -link spaces with good circulation and security in mind
Colombus Museum of Industrial Objects
Climatic Programming
Arch 401 - Fall 2013
Colombus Museum of Industrial Objects
THANK YOU
Arch 401 - Fall 2013 | https://issuu.com/columbusbook/docs/columbus_museum_site_book | CC-MAIN-2017-04 | refinedweb | 54,397 | 55.95 |
joshd7227840
MVP
17-08-2018
Go to Rules. Under EVENTS, click the + icon to add a new condition.
In the Event Configuration, choose:
Extension: Core
Event Type: Custom Event
This is for triggering a rule based off a custom event listener to be dispatched elsewhere. Exact syntax depends on details, but assume
Custom Event Type: someEvent
Elements matching the CSS selector value: document
The rendered js is basically this:
document.addEventListener("someEvent", function(evt) { /* other stuff from rule placed here */});
document.addEventListener("someEvent", function(evt) {
/* other stuff from rule placed here */
});
Custom Events have ability to have data payloads passed to the receiving callback function when they are dispatched. Here is a generic example:
CustomEvent - Web APIs | MDN
// add event listener for custom event "someEvent"document.addEventListener("someEvent", function(evt) { console.log(evt.detail); // output: {'foo':'bar'}});// payload of data to pass to custom eventvar data = { 'foo' : 'bar'};// create and dispatch custom event, including the payloadvar ev = new CustomEvent('someEvent',{detail:data}); document.dispatchEvent(ev);
// add event listener for custom event "someEvent"
console.log(evt.detail); // output: {'foo':'bar'}
// payload of data to pass to custom event
var data = {
'foo' : 'bar'
};
// create and dispatch custom event, including the payload
var ev = new CustomEvent('someEvent',{detail:data});
document.dispatchEvent(ev);
Unless I somehow missed something, Launch currently does not expose the data payload to the rule (evt.detail in the example above).
I would like to see this exposed to the rule.
1) This can be added as part of the Event Configuration itself, where you specify other things, such as the event type (name), css selector, etc. It can be used to offer more granular level of qualifying the condition for triggering the rule based on some property value in the payload.
2) Similar (or alternative to) point #1, conditions(s) and/or exceptions based on payload data can be added
3) Actions can be added that can utilize the data passed to the custom event, for example, setting an Adobe Analytics variable.
For Launch fields, this can be referenced with syntax similar to clicked link syntax (e.g. %this.href% for the link href attribute). But for example, could do %evt.details.foo% or cut out the passed arg namespace for just %details.foo% or whatever.
For custom code boxes, maybe it can be pushed to the event object, similar to how you do event.$rule (e.g. event.$rule.name) today. Maybe event.$details object or whatever.
Well, regardless of how you want to present it in the interface, I imagine it should be relatively easy to implement, since the core / underlying javascript functionality for this already exists.
.josh
Okay, disregard this. Apparently this is already a thing, and I typo'd something while testing.
You can in fact use (e.g. based on above example) %event.detail.foo% syntax in the fields, or event.detail.foo in custom code boxes.
jantzen_belliston-Adobe
Community Manager
20-08-2018
Thanks for reporting back Josh! I'll go ahead and mark the idea as already offered.
never-displayed | https://experienceleaguecommunities.adobe.com/t5/adobe-experience-platform-launch/custom-events-expose-detail-payload-passed-to-custom-event/idi-p/333073 | CC-MAIN-2020-50 | refinedweb | 508 | 57.47 |
Readme for WebSphere MQ fix pack 7.0.1.11
Product readme
Abstract
This document contains the WebSphere MQ readme for fix pack 7.0.1.11
The English language version of this document is the most up-to-date version.
Content
DESCRIPTION
------------
This file describes the following:
1. Limitations, known problems and other changes.
2. How to obtain and install WebSphere MQ version 7.0.1.11 as a fix pack.
3. IBM Message Service Client for .NET V2.0.0.11
4. Notices and trademarks
A minor update has been made to the product license, removing an incorrect reference to z/OS;
please see here for details:
The latest version of this file can be found at:

The WebSphere MQ announcement letter for Version 7.0 (US English) is available at:
(for distributed platforms), or
(WebSphere MQ for z/OS ).
See the announcement letter for the following types of information:
- Detailed product description, including description of new function
- Product-positioning statement
- Ordering details
- Hardware and software requirements
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
UPDATE HISTORY
20 Aug 2013 - updates for WebSphere MQ version 7.0.1.11
17 Dec 2012 - updates for WebSphere MQ version 7.0.1.10
06 Jul 2012 - updates for WebSphere MQ version 7.0.1.9
13 Jan 2012 - updates for WebSphere MQ version 7.0.1.8
03 Oct 2011 - updates for WebSphere MQ version 7.0.1.7
03 Jun 2011 - updates for WebSphere MQ version 7.0.1.6
03 Feb 2011 - updates for WebSphere MQ version 7.0.1.5
01 Oct 2010 - updates for WebSphere MQ version 7.0.1.4
12 Aug 2010 - updates for WebSphere MQ version 7.0.1.3
28 Jan 2010 - Updates for WebSphere MQ version 7.0.1.2
20 Nov 2009 - Updates for WebSphere MQ version 7.0.1.1
16 July 2009 - Updates for WebSphere MQ version 7.0.1.0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++ ++
++ Section 1 : Limitations, known problems and other changes ++
++ ++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
7.0.1.11 The Introduction section of the EGA.readme.mbr file shipped with
WebSphere MQ 7.0.1 iSeries base code contains an outdated reference to the Quick
Beginnings book being available from the download site
---------------------------------------------------------------------
"Introduction
These instructions apply to installing WebSphere MQ for i5/OS
Version 7.0.1 from an installation image downloaded from IBM.
Use it with the WebSphere MQ for i5/OS Quick Beginnings manual
for this release. A version of the Quick Beginnings book is
available from the download site, it has a description of
'WebSphere MQ V7.0 Install Doc'."
This document is no longer available on the Passport Advantage
site. The information is now available online in the WebSphere MQ
version 7 Information Center in the 'Quick Beginnings for i5/OS' section:
7.0.1.10
7.0.1.7 Installing a v7.0.1 fix pack on HP-UX
when a v7.1 installation exists
------------------------------------------------------------------
When an attempt to upgrade 7.0.1.6 to 7.0.1.7 on HP-UX is made on a
machine that also has a WebSphere MQ v7.1 installation, the fix pack
installation log may contain the following errors repeated a number
of times. However, this does not prevent the fix pack from installing
successfully.
rm: MQSERIES.MQM-RUNTIME: non-existent
ERROR: You have specified more than one fileset selection. You must
specify only a single fileset when performing file or control
file modifications.
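If in doubt whether the maintenance applied cleanly despite these messages, the installed fix pack level can be confirmed from the command line. The sketch below parses a dspmqver-style version line; the string is hard-coded here for illustration, and on a live system you would capture the output of the dspmqver command itself as shown in the comment.

```shell
# Placeholder version string; on a live system capture it with:
#   ver=$(dspmqver | grep '^Version:')
ver="Version: 7.0.1.7"

# Report whether the expected fix pack level is present
case "$ver" in
  *"7.0.1.7"*) result="fix pack 7.0.1.7 is installed" ;;
  *)           result="fix pack 7.0.1.7 not found" ;;
esac
echo "$result"
```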
7.0.1.7 Removing a v7.0.1 fix pack from a Linux system
while a v7.1 queue manager is running
------------------------------------------------------------------
On Linux (all architectures), the removal of a v7.0.1 fix pack which
would leave the installed version of v7.0.1 at an earlier version than
v7.0.1.6, may partially fail if there is a v7.1 queue manager running
on the system at the time. The error messages output from the 'rpm'
command have the following form:
ERROR: WebSphere MQ shared resources for this installation are still in use.
Please ensure no WebSphere MQ processes are running
before removing this maintenance package
error: %preun(MQSeriesJRE-U844092-7.0.1-7.s390x) scriptlet failed, exit status 1
For example, on a system where the following installation order has been used:
(1) Install a fresh installation of WebSphere MQ at the v7.0.1.3 level
(2) Install the WebSphere MQ v7.0.1.7 fix pack
(3) Install WebSphere MQ v7.1.0.0
(4) Create and start a v7.1 queue manager
(5) Uninstall the WebSphere MQ v7.0.1.7 fix pack
To avoid this issue, either:
(a) Ensure that all queue managers on the system are in the stopped
state before removing the fix pack
(b) Ensure that the 'MQSeriesRuntime' rpm package is the
last package to be removed from the fix pack uninstallation.
The rpm syntax varies by rpm version - you may need to consult your
Operating System vendor in order to determine this information for your
system. For Red Hat Enterprise Linux Server 5.7, which uses rpm version
4.4.2.3, the removal order is the reverse of that specified on the
command line. Therefore specifying the 'MQSeriesRuntime' first in the
removal list will allow the fix pack to be removed.
For example, to successfully remove a complete v7.0.1.7 fix pack while
a v7.1.0.0 queue manager is running on Red Hat Enterprise Linux Server 5.7,
the following command syntax should be used:
rpm -e MQSeriesRuntime-U844091-7.0.1-7 MQSeriesConfig-U844091-7.0.1-7
MQSeriesMsg_es-U844091-7.0.1-7 MQSeriesMsg_ru-U844091-7.0.1-7
MQSeriesMan-U844091-7.0.1-7 MQSeriesMsg_ko-U844091-7.0.1-7
MQSeriesServer-U844091-7.0.1-7 MQSeriesJava-U844091-7.0.1-
MQSeriesMsg_cs-U844091-7.0.1-7 MQSeriesMsg_hu-U844091-7.0.1-7
MQSeriesMsg_pl-U844091-7.0.1-7 MQSeriesMsg_Zh_TW-U844091-7.0.1-7
MQSeriesTXClient-U844091-7.0.1-7 MQSeriesClient-U844091-7.0.1-7
MQSeriesJRE-U844091-7.0.1-7 MQSeriesMsg_de-U844091-7.0.1-7
MQSeriesMsg_it-U844091-7.0.1-7 MQSeriesMsg_pt-U844091-7.0.1-7
MQSeriesSamples-U844091-7.0.1-7 MQSeriesKeyMan-U844091-7.0.1-7
MQSeriesMsg_ja-U844091-7.0.1-7 MQSeriesSDK-U844091-7.0.1-7
MQSeriesFTA-U844091-7.0.1-7 MQSeriesMsg_fr-U844091-7.0.1-7
MQSeriesMsg_Zh_CN-U844091-7.0.1-7
If you do encounter this issue, then you must stop all WebSphere MQ
queue managers on the system to complete removal of the v7.0 fix pack,
by removing the remaining rpm packages associated with the fix pack
which were not removed with the first removal attempt.
7.0.1.7 Changes to DEFXMITQ attribute of the queue manager object
------------------------------------------------------------------
Additional checks have been introduced in WebSphere MQ fix pack 7.0.1.7
to disallow setting the value "SYSTEM.CLUSTER.TRANSMIT.QUEUE" for the
DEFXMITQ attribute for the queue manager object. A new reason code
"MQRCCF_DEF_XMIT_Q_CLUS_ERROR" (number 3269) has been created to
describe a failed attempt to set the DEFXMITQ attribute of the
queue manager object to "SYSTEM.CLUSTER.TRANSMIT.QUEUE".
Additionally, to prevent problems if DEFXMITQ has been incorrectly
set to SYSTEM.CLUSTER.TRANSMIT.QUEUE in the past, a new validity
check has been added to prevent an MQOPEN or MQPUT1 call from
using this incorrect mechanism. The reason code on failure will
be an existing code "MQRC_DEF_XMIT_Q_USAGE_ERROR" (number 2199).
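As an illustration, once this check is in place an MQSC command of the
following form is rejected, and the failure is reported with the new reason
code MQRCCF_DEF_XMIT_Q_CLUS_ERROR (3269):

```
ALTER QMGR DEFXMITQ(SYSTEM.CLUSTER.TRANSMIT.QUEUE)
```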
7.0.1.6 Changes to the default behaviour of MQPUT1 API with MQPMO_SYNCPOINT
when used in a client application
--------------------------------------------------------------------------
The default behaviour of MQPUT1 with MQPMO_SYNCPOINT has been changed
in WebSphere MQ 7.0.1.6, to put the message synchronously so that MQMD
and MQOD output fields are populated during the MQPUT1 call.
The default behaviour can be changed by setting the client configuration
file attribute Put1AsyncPut in the Channels stanza.
Valid values for Put1AsyncPut are:
(a) no : Put is always synchronous. This is the default.
(b) yes : Put is always asynchronous.
(c) syncpoint : Put is asynchronous if MQPMO_SYNCPOINT is set
and synchronous otherwise.
The new attribute Put1AsyncPut does not replace Put1DefaultAlwaysSync.
If there is a need to change the default back to the pre-7.0.1.6
default, the client configuration file attribute Put1AsyncPut
must be set to "syncpoint".
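For example, a client configuration file (such as mqclient.ini) that restores
the pre-7.0.1.6 behaviour might contain a Channels stanza like the following
sketch; any other attributes already present in the stanza are unaffected:

```
CHANNELS:
   Put1AsyncPut=syncpoint
```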
7.0.1.5 Improved handling of WebSphere MQ running on
HP-UX 11i V3 (Itanium hardware)
----------------------------------------------------
This item affects only users on HP-UX 11i V3 (Itanium hardware) where
the system hostname is longer than 8 characters.
Before 7.0.1.5, WebSphere MQ obtained the hostname by calling a system
call, uname(). On HP-UX 11i V3 (Itanium hardware), the uname() call
can fail if the hostname is greater than 8 characters.
During the install processing and also during normal queue manager
processing, the failed uname() call could then cause the WebSphere MQ
code to fail to operate correctly. Directories under /var/mqm that
contain the hostname can then incorrectly be created with the name
"Unknown" instead of the hostname.
APAR IZ87770 addresses the problem by using the gethostname() system
call instead. After installing 7.0.1.5 or higher fix packs, please
manually remove the directories incorrectly named "Unknown".
Specifically the directories named "Unknown" will be under these
locations:
- /var/mqm/ipc
- /var/mqm/sockets
- /var/mqm/qmgrs/<qmgrname>/@app
- /var/mqm/qmgrs/<qmgrname>/@qmgr
Do NOT delete any other directories, for example those that hold
message data under /var/mqm/qmgrs/<qmgrname>/queues.
More information about this APAR can be found at:
7.0.1.5 Improvement to the way in which trigger monitor
sends information to the triggered application
-------------------------------------------------------
APAR IC72003 improves the way in which the trigger monitor sends
information to the triggered application.
Percent/dollar characters are now escaped by double quotes (Windows)
or a backslash (UNIX).
This change causes the expansion of environment variables by
the operating system to be suppressed for the data inside the
MQTMC2 structure that is passed as a parameter to the triggered
application.
For example, if the APPLICID field of a process was defined to
be "java cp %CLASSPATH% TriggerApp", then on Windows this will
be changed to "java cp "%"CLASSPATH"%" TriggerApp" when
constructing the MQTMC2.
In this example, the application will receive "java cp %CLASSPATH%
TriggerApp" as an input parameter. From 7.0.1.5, applications
will no longer receive an expanded version of environment variables
inside the MQTMC2 structure. Instead they must use alternative
means of expanding the environment variables if their content
is important.
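From 7.0.1.5 the triggered application itself must expand any environment
variable references it finds in the MQTMC2 fields. A minimal sketch of such
expansion for Windows-style %NAME% references, written here in Python purely
for illustration (the function name and pattern are assumptions, not part of
WebSphere MQ):

```python
import os
import re

def expand_windows_vars(text):
    # Replace each %NAME% token with the value of the matching
    # environment variable; unknown names are left untouched.
    def replace(match):
        name = match.group(1)
        return os.environ.get(name, match.group(0))
    return re.sub(r"%([^%]+)%", replace, text)

os.environ["CLASSPATH"] = "/opt/app/lib"  # illustrative value
print(expand_windows_vars("java cp %CLASSPATH% TriggerApp"))
# prints: java cp /opt/app/lib TriggerApp
```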
7.0.1.5 Changes to the usage of MQGMO_CONVERT
in WebSphere MQ v7.0 classes for JMS
---------------------------------------------
APAR IC72897 changes the way in which WebSphere MQ V7.0 classes
for JMS request messages from the queue manager.
Prior to 7.0.1.5, the WebSphere MQ V7.0 classes for JMS requested
messages from the queue manager using the WebSphere MQ API option
MQGMO_CONVERT. This option requests that the queue manager
performs message data conversion before sending the message to
the receiving JMS application.
The WebSphere MQ V6.0 classes for JMS did not use this option.
This change of JMS client behaviour can affect JMS applications
when migrating from V6.0 to V7.0.
The code change associated with this APAR reverts the default
action of the V7.0 classes to the V6.0 behaviour. So, once
fix pack 7.0.1.5 or higher is installed, WebSphere MQ V7.0 classes
for JMS applications will no longer use MQGMO_CONVERT, resulting
in the queue manager performing no message data conversion when
messages are requested.
New properties have also been added as part of this APAR,
to allow the application to choose whether queue manager data
conversion is requested or not. These properties can be
configured either programmatically using the setter method on the
com.ibm.mq.jms.MQDestination class, administratively via a property
to be set in the JNDI namespace for the destination, or as a system
wide environment variable within the JVM using the property
"com.ibm.msg.client.wmq.receiveConversionCCSID".
A full description of this APAR, including how to use the new
properties, and the settings to use to retain the behaviour of the V7.0
classes prior to this fix pack, is available at:
7.0.1.4 Behaviour of the UNCOM output field for DISPLAY QSTATUS
---------------------------------------------------------------
APAR IZ76189 has changed the behaviour of the DISPLAY QSTATUS command
in runmqsc so that the behaviour of UNCOM now matches the description
under the runmqsc Information Center pages.
The runmqsc interface will now display the number of uncommitted
messages rather than simply YES or NO.
You can now use UNCOM in a filter: for example, WHERE(UNCOM LT 10).
It may be necessary to modify your scripts that contain DISPLAY QSTATUS
commands to runmqsc, if they rely on this value being YES or NO.
7.0.1.4 Improvement to dead-letter handling within the
WebSphere MQ v7 classes for JMS.
------------------------------------------------------
APAR IZ78515 has changed the default behavior for dead-letter handling
within the WebSphere MQ v7 classes for JMS.
When using the WebSphere MQ v7 classes for Java Message Service (JMS)
before version 7.0.1.4, poison messages that cannot be sent to the
named backout queue were converted to type JMSBytesMessage before being
sent to the queue manager's dead-letter queue (for example,
SYSTEM.DEAD.LETTER.QUEUE).
This had been a change in behavior from WebSphere MQ v6. In v6, the
JMS classes put the messages to the dead-letter-queue in their original
format, without converting them to type JMSBytesMessage.
APAR IZ78515 changes the behavior so that poison messages are now put
to the dead-letter queue without doing any conversion. This is the
correct behavior for this function, and matches WebSphere MQ v6.
7.0.1.4 New information in WebSphere MQ Explorer : some non-English
text missing.
-------------------------------------------------------------------
Some of the newest text (added in 7.0.1.4) is missing from non-English
versions of the WebSphere MQ Explorer.
(a) The MQ Explorer authorization functionality now contains a new
panel which allows you to "Add Role Based Authorities". There are two
"More Information" links on this screen. At V7.0.1.4, the lower of the
two links doesn't give any information unless you are running the MQ
Explorer in English.
(b) A number of items of recently introduced SSL/TLS functionality have
been added into the MQ Explorer at version 7.0.1.4. These items relate
to channel and queue manager configuration. Specifically, they allow:
- Configuration of SHA-2 CipherSpecs from the MQ Explorer.
- Specification of an HTTP Proxy Server for OCSP use.
- Configurable change to the way certificates are validated which is
useful to a small number of users.
When running the MQ Explorer in English the information in these panels
has been updated to reflect the new SSL/TLS values; in other languages
some of this new information is not displayed, though the main panels
themselves are properly translated.
7.0.1.0+ 64-bit GSKit 7 not updated with MQ fix pack install on Windows
-----------------------------------------------------------------------
Due to a known limitation in the install code, the 64-bit version of
GSKit 7 on Windows is not upgraded when these WebSphere MQ fix packs
are installed.
This means that 64-bit MQ client applications using SSL/TLS connections
on Windows are not running at the latest GSKit fix level.
HOWEVER, the 32-bit GSKit 7 installation WILL be upgraded when the fix
pack is installed. This means that queue managers, server-side
channels and 32-bit SSL/TLS client applications will still be able to
use the newer GSKit fix levels when WebSphere MQ maintenance is applied.
Java and JMS clients do not use GSKit for their SSL/TLS functionality,
and so are therefore unaffected by this issue.
7.0.1.3+ Clients using JMS connecting to 7.0.1.2 Servers
--------------------------------------------------------
If upgrading to 7.0.1.3 or a more recent fix pack on an installation
where WebSphere MQ classes for Java Message Service (JMS) are in use,
please read the following.
When using the WebSphere MQ classes for Java Message Service (JMS)
from WebSphere MQ 7.0.1.3 or higher to connect to WebSphere MQ 7.0.1.2
queue managers, there can be a problem when stopping the
SVRCONN channel instance.
For example, you type:
STOP CHANNEL(MY.SVRCONN.CHANNEL)
Stopping a channel being used by the Java application can result in the
channel status remaining in a STOPPING state. When the channel is restarted,
the channel status moves to STARTED and another channel instance is created.
This issue has been resolved by APAR IZ77323, fixed in WebSphere MQ 7.0.1.3.
Therefore it is advisable to upgrade the server installation of WebSphere MQ
to 7.0.1.3 or higher before the clients that connect to it, if your clients
are using JMS.
Additional error cases now found by Explorer security tests
-----------------------------------------------------------
After you install 7.0.1.3 or a more recent fix pack, if you run
the Explorer Default Tests you will probably get error messages in
the Test Results, even if your results were error-free previously.
This is because the Default Tests now test aspects of your configuration
which were not previously tested.
The most common errors arise because your inbound system and default channels
(ones with names starting with "SYSTEM") have not been adequately protected
against unwanted access.
Another common cause of new errors is that your SSL/TLS system files allow
unnecessarily high levels of access. You can reconfigure your system to
remove error Test Results, and, where appropriate, warning Test Results.
Advice on how to do this is given in the "Further Information" associated with
the Test Result.
You may wish to use the same SSL/TLS system files to hold both client and queue
manager keys and certificates. However, this is an insecure configuration and
is discouraged. In general the access control permissions required for such a
configuration are incompatible with the new SSL/TLS system file test. If you
persist with sharing SSL/TLS system files between the client and queue manager,
but want error-free Explorer Test Results, you can configure your own set of
standard tests using the Explorer "Run Custom Test Configuration..." facility.
Information relating to maintaining multi-instance queue managers
-----------------------------------------------------------------
Guidance can be found here for applying fix packs where multi-instance queue
managers are in use:
In general, a multi-instance queue manager can have instances running at a
mixture of 7.0.x.y and 7.0.a.b. Exceptions to this are listed below. Other
than these exceptions, it is the general rule that the standby instance can
be on an installation at a newer or older fix pack level than the active
instance. This enables upgrading or rolling back of maintenance with the
minimum of down time.
The following is the list of exceptions:
Windows: 7.0.1.0 and >= 7.0.1.1 cannot be used in combination.
i5/OS: 7.0.1.0 did not support multi-instance queue managers, so it cannot
        be used for the standby or active instance.
Migration from WebSphere MQ Version 6.0.2.3 or earlier on Windows
-----------------------------------------------------------------
When you migrate from WebSphere MQ Version 6.0.2.3 or earlier to Version 7.0.1,
the "exits" directory is emptied.
This is normally c:\Program Files\IBM\WebSphere MQ\exits or c:\Program
Files\IBM\WebSphere MQ (x86)\exits, but you might have installed to a different
directory.
This issue only affects Windows installations. It has been resolved, but the
fix requires maintenance to be applied to the version 6 installation prior to
migration; see APAR IC48397 for details.
Either apply service patch 6.0.2.4 or later, or back up your "exits" directory
before migrating.
If you have a previous installation of WebSphere MQ Version 7.0.1
-----------------------------------------------------------------
If you have a previous installation of WebSphere MQ Version 7.0.1, for example a
Beta version, you might experience a problem when you launch the WebSphere MQ
Explorer. If you find that Eclipse starts but the WebSphere MQ perspective does
not open properly, perform the following actions, as necessary.
In the Explorer, click the Window menu and select "Restart perspective".
If this does not work, close the Explorer and issue the command
strmqcfg -i
at the command line, then start the Explorer again.
If this does not work, close the Explorer and check there is no Eclipse
process running. This procedure will remove Explorer data such as remote
queue manager definitions and custom schemes and filters, so make a note
of any such data you want to keep. Delete the following folders:
On Windows:
C:\Documents and Settings\<userid>\Application Data\IBM\MQ Explorer
C:\Documents and Settings\<userid>\.eclipse
On Linux:
/home/<userid>/.mqdata
/home/<userid>/.eclipse
Issue the command
strmqcfg -i
at the command line, then start the Explorer again.
Migration from WebSphere MQ Version 6 to WebSphere MQ Version 7 on Windows
--------------------------------------------------------------------------
When migrating WebSphere MQ from v6 to v7, if a queue manager has its
startup property set to Automatic, the property is changed to
Interactive (manual) during the migration.
To set the queue manager startup property back to Automatic, use the MQ
command: amqmdain auto QMgrName.
Migration from WebSphere MQ Version 6 earlier than CSD 6.0.2.3
--------------------------------------------------------------
The first time the WebSphere MQ Explorer is started after migration, error
message AMQ4473 might be displayed and an FDC file written stating that the
WMQ_Schemes.xml file used to save schemes was found to be in an invalid format,
and that all user-defined schemes must be re-created.
Before CSD 6.0.2.3 schemes were not saved correctly. If you use schemes, you
will have experienced this problem and upgraded to a later CSD level. If you
have not created your own schemes and are using an earlier version of WebSphere
MQ you will see this message on starting WebSphere MQ Explorer. However,
MQ Explorer continues, creating a new default set of schemes, so no action is
required.
Migration: save JMSAdmin.config
-------------------------------
Save the file JMSAdmin.config (in the <install_root>/java/bin directory) to
another location before upgrading to WebSphere MQ v.7.0 or it will be
overwritten.
Support for WebSphere MQ on Windows Vista Japanese Edition
----------------------------------------------------------
WebSphere MQ does not support the new Japanese standard, JIS X 0213:2004.
Windows Vista Japanese Edition formally supports JIS X 0213:2004 characters.
These JIS X 0213:2004 characters include Unicode CJK Ideographs Extension-B
characters (UTF-16 surrogate pair (four bytes) characters).
Installation verification test program for JCA with WAS CE
----------------------------------------------------------
For the JCA IVT to work on WebSphere Application Server Community Edition
(WAS CE) there must be a connection factory with a name of "qcf" and a
queue with a message-destination-name of "jmstestQ" defined in the
geronimo-rar.xml when you deploy the RAR file.
For the wmq.jmsra.ivt.ear file to deploy on version 2.0.1 or later of WAS CE
the following xml files within the EAR file have to be updated as below:
geronimo-application.xml
xmlns:
<dep:environment xmlns:
openejb-jar.xml
<openejb-jar xmlns=""
xmlns:nam=""
xmlns:sec=""
xmlns:sys=""
xmlns:
Clustering of topics
--------------------
Though clustering of topics is generally supported, do not cluster
SYSTEM.BASE.TOPIC or SYSTEM.DEFAULT.TOPIC. The CLUSTER parameter of these topics
must be blank.
Topics in overlapping clusters
------------------------------
If you define two or more topics, with one topic string being a more qualified
version of the other (for example SPORTS and SPORTS/FOOTBALL), with the topics
in different but overlapping clusters, publications can be delivered to
unexpected recipient queue managers. For example:
- Queue manager qmAB is in both clusters ClusterA and ClusterB
- SPORTS is a cluster topic in ClusterA
- SPORTS/FOOTBALL is a cluster topic in ClusterB
Subscribers to SPORTS/FOOTBALL on queue managers which are only in ClusterA also
receive publications from queue manager qmAB.
Similarly, if a cluster topic and a local topic share a name, unexpected
publications can be received. For example:
- SPORTS is a cluster topic in ClusterA
- Queue manager qmC in ClusterA defines a topic SPORTS with no cluster
Subscribers for SPORTS on queue managers in ClusterA receive publications from qmC.
WebSphere MQ Explorer fails using Firefox on Linux
--------------------------------------------------
The WebSphere MQ Explorer might fail with the message "JVM terminated" when
attempting to launch the Help Center. There is a known problem when running the
Standard Widget Toolkit (SWT) browser inside Eclipse on some Linux distributions
when the default browser is Firefox. To avoid this problem, set the value of the
MOZILLA_FIVE_HOME environment variable to the folder containing your Firefox
installation. For example, if Firefox is installed in /usr/lib/firefox-3.0.10,
use the command:
export MOZILLA_FIVE_HOME=/usr/lib/firefox-3.0.10
or
setenv MOZILLA_FIVE_HOME /usr/lib/firefox-3.0.10
depending on the shell you are using.
(This requirement is noted in the SWT FAQs section of the Eclipse Web site:)
File Transfer Application help system not automatically available on Linux
--------------------------------------------------------------------------
The Help in the GUI is unavailable, and an error is displayed when the File
Transfer Application is started. You can find the HTML help files in
/opt/mqm/eclipse/fta-help/plugins/com.ibm.mq.fta.doc_7.0.1.0/doc.zip
Multi-instance queue managers are not supported on IBM i
--------------------------------------------------------
Multi-instance queue managers are introduced with WebSphere MQ Version 7.0.1.
The Information Center states that these are available on all platforms except
z/OS. However, they are not available on IBM i (formerly i5/OS).
"Java Messaging and Web Services" component now includes XMS
------------------------------------------------------------
On Windows the component formerly known as "Java Messaging and Web Services" is
now called "Java and .NET Messaging and Web Services". As part of this change
additional files are installed with this component to enable use of the Message
Service Client for .NET, also known as XMS. If you upgrade a Java-only
installation you will also install the Message Service Client for .NET. For more
information about XMS messaging see the WebSphere MQ information center.
migmbbrk might not migrate ACLS correctly when subscription points are used
---------------------------------------------------------------------------
The WebSphere Message Broker publish/subscribe state migration tool (migmbbrk)
might not migrate ACLS correctly when subscription points are used. Consider
the following scenario:
1. A subscription to a WebSphere Message Broker broker references the topic
tree a/b/c.
2. A publication to a/b/c uses a subscription point called SB1
3. The topic string a/b has an ACL that grants publish access to a user, USER1.
The migration to the queue manager creates two topic objects and a setmqaut
command. It emulates subscription points by creating a topic object for SB1
with the topic string SB1/a/b/c. It also creates a topic object for a/b with an
ACL granting publish access to USER1. The setmqaut command grants publish
access to USER1 for the topic a/b.
As a result of the migration the topic tree has two branches, a/b and SB1/a/b/c.
The ACL is defined for the a/b branch but not for the SB1/a/b/c branch.
As a consequence publish access permission is granted to USER1 on a/b, but not
granted to USER1 for the topic string SB1/a/b/c.
If your topic tree includes both subscription points and user-defined ACLs,
review both to see whether the ACLs that are defined in the broker and then
migrated to the queue manager need to be manually replicated for the topic
trees that lie below subscription points.
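For example, replicating the publish permission onto the subscription-point
branch might use a setmqaut command of the following form, where QM1 is an
illustrative queue manager name and SB1.AB.TOPIC is an assumed topic object
defined with the topic string SB1/a/b:

```
setmqaut -m QM1 -n SB1.AB.TOPIC -t topic -p USER1 +pub
```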
Simplification of publish/subscribe authorities
-----------------------------------------------
The way authorities are determined at the top of the topic tree has been
simplified. If authorities are defined on the SYSTEM.BROKER.DEFAULT.STREAM or
SYSTEM.BROKER.DEFAULT.SUBPOINT topic objects (and those topic objects still have
empty topic strings), those authorities will no longer take effect.
In order to apply topic authorities at the top of the topic tree, you should
apply the authorities to the SYSTEM.BASE.TOPIC object.
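For example, to grant a user publish and subscribe authority at the top of
the topic tree (the queue manager name QM1 and user name USER1 are
illustrative):

```
setmqaut -m QM1 -n SYSTEM.BASE.TOPIC -t topic -p USER1 +pub +sub
```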
Use of HP Hotspot JVM
---------------------
If you use an HP Hotspot JVM, set the Java System property -XX:+UseGetTimeOfDay.
If you do not set this property, your application might not be aware of system
time changes.
WebSphere MQ Explorer - English messages in non-English environment
-------------------------------------------------------------------
When WebSphere MQ v7.0.1.10 is installed in a non-English environment, some
messages in WebSphere MQ Explorer might still appear in English and might not
display the latest translation. This is caused by Eclipse Bug 201489:
To install the Eclipse fix for this issue, first run WebSphere MQ Explorer in
Eclipse Workbench mode. To select Eclipse Workbench mode,
click Window > Preferences > WebSphere MQ Explorer > in an Eclipse Workbench.
Restart WebSphere MQ Explorer for this to take effect.
On Linux platforms, in order to be able to view and install updates, run
WebSphere MQ Explorer as root.
Launch the update manager by clicking Help > Software Updates
Select "RCP Patch (bug:201489)" from the available updates and follow the
instructions to install the fix.
To return WebSphere MQ Explorer to stand-alone mode if required, select the
appropriate option from the WebSphere MQ Explorer preferences page.
WebSphere Application Server support on z/OS
--------------------------------------------
Before you deploy WebSphere MQ v7.0.1.10 on or within a WebSphere Application
Server environment on z/OS, read the technotes below that are relevant to
the versions of WebSphere Application Server you are using:
Version Link
-------- -------------------------------------------------------------
6.0.2
6.1
7.0
XMS assemblies left in Global Assemblies Cache after uninstall
--------------------------------------------------------------
On Windows, uninstalling WebSphere MQ after installing WebSphere MQ 7.0.1
through the fix pack might result in some assemblies (IBM.XMS.*.dll) being left in
the .NET Global Assembly Cache (GAC).
This will occur if the .NET framework is available on the machine and the
"Java Messaging and Web Services" feature was selected in the original WebSphere
MQ v7 installation (selected by default). This behaviour can be prevented by
either rolling back the WebSphere MQ v 7.0.1 refresh pack or by running
"amqiRegsiterdotNet.cmd /u" (available in the "<WMQInstallRoot>\bin" directory)
before uninstalling WebSphere MQ.
Other changes
=============
Updates in the XA sample makefile
---------------------------------
Updates are made to the xaswit.mak file, so that it now contains up-to-date
references to database product install directories.
The default directories used in the sample makefile are changed to be more
up-to-date.
If you need to rebuild a switch load file, and need to continue to use old
database product install directories or versions, please check and amend the
xaswit.mak file so that it contains the correct information for your target
machine.
Do not rebuild your switch load file(s), unless it is necessary.
You will ONLY need to rebuild a switch load file IF the install directory of
your database product changes - e.g. after installing a new version of the
database product.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++ ++
++ Section 2 : How to obtain and install WebSphere MQ version 7.0.1.11 as a ++
++ fix pack ++
++ ++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Product/Component Release: 7.0
Update Name: WebSphere MQ fix pack 7.0.1.11
2.1 Download location
---------------------
Download WebSphere MQ fix pack 7.0.1.11 from the following location:
2.2 Prerequisites and corequisites
----------------------------------
Before downloading WebSphere MQ V7.0.1.11, visit the WebSphere MQ for
UNIX(R) and Windows(R) product support Web site:

APARs describe defects that have been addressed in a fix pack. For a
list of the APARs addressed by this fix pack, see the product support
Web site.

If you have a WebSphere MQ v7.0.1.10 product installed and have
obtained special fixes, contact IBM(R) support to determine whether you
need an updated version of the fixes before you install WebSphere MQ
v7.0.1.11. This helps to ensure that your system is in a consistent
state and that no special fixes are lost.
2.3 Installing
--------------
Before proceeding with the installation process, view the online
version of the readme file to check whether information has changed since
the readme file was downloaded:
2.3.1 Prior to installation
---------------------------
Before installing WebSphere MQ v7.0.1.11, perform the following steps:
1. Read all of this readme. Note particularly the additional steps
that are necessary if you have installed GSKit in order to use SSL
channels with WebSphere MQ.
2. Download the fix pack to an appropriate location.
3. Uncompress the fix pack.
4. Stop all WebSphere MQ processes and applications.

Ensure that the path of the directory into which the fix pack is
uncompressed does not contain spaces. For example, it should not resemble
the following:
/home/WMQ FixPack/FP1/ ...
2.3.6 Installation
------------------
For more information about how to install the fix pack see
the appropriate Quick Beginnings section of the WebSphere MQ
Information Center at:
AIX:
HP-UX:
Linux:
Solaris:
Windows:
2.4 Post installation
---------------------
2.4.1 Perform the necessary tasks after installation
----------------------------------------------------
After installing WebSphere MQ v7.0.1.11, perform the following steps:
1. Verify the installation. Refer to the Quick Beginnings section of
the WebSphere MQ Information Center for the appropriate platform.
2. Restart the queue managers
After applying a fix pack, it is not mandatory to recompile
applications.
2.4.2 More information about installations of GSKit and JRE
------------------------------------------------------------
For information regarding the levels of GSKit and JRE included with this product, see the
following link:
2.4.3 Installing GSKit version 7
--------------------------------
These instructions are applicable to GSKit version 7 only.
2.4.3.1 AIX
Install the gskta.rte and gsksa.rte packages from the installation
media using the installp command, for example, assuming the media
containing the fix pack is the current directory:
installp -ac -d. gskta.rte gsksa.rte
The level of GSKit provided with this fix pack, 7.0.4.45, requires the
following system patches (listed by version of AIX) to be installed
before GSKit is installed.
AIX 5.2.0:
xlC.aix50.rte.6.0.0.3 or later
AIX 5.3.0:
xlC.aix50.rte.6.0.0.3 or later
If running Technical Level 5, bos.rte.libc:5.3.0.53 or greater is required
2.4.3.2 HP-UX
The GSKit packages are included in the same depot as the WebSphere MQ
fix pack. Selection of the MQSERIES filesets for installation will
automatically select the appropriate GSKit packages.
HP-UX for PA-RISC:
The level of GSKit provided with this fix pack, 7.0.4.45, requires the
following system patches (listed by version of HP-UX) to be installed
before GSKit will install.
32-bit HP-UX 11i:
PHSS_26946 or later,
PHSS_33033 or later,
PHCO_34275 or later
64-bit HP-UX 11i:
PHSS_26946 or later,
PHSS_33033 or later,
PHCO_34275 or later
2.4.3.3 Linux
Before installing the WebSphere MQ fix pack, upgrade the GSKit packages
using the rpm -U command as follows (xxxx represents the architecture
string for the platform):
For a system where a 32-bit version of GSKit is installed:
rpm -U gsk7bas-7.0.4.45.xxxx.rpm
and
For a system where a 64-bit version of GSKit is installed:
rpm -U gsk7bas64-7.0.4.45.xxxx.rpm
Ensure that, on 64-bit platforms, BOTH packages are installed. This is
because both 64-bit and 32-bit processes may need to load GSKit
libraries.
The level of GSKit provided with this fix pack, 7.0.4.45, requires the
following system patches (listed by version/flavour of Linux) to be
installed before GSKit will install.
Linux on PPC32 / PPC64:
RHEL v5.0 requires compat-libstdc++-33-3.2.3-61 or later
Linux on x86:
Red Hat Enterprise Linux 4.0 ES requires compat-libstdc++-33-3.2.3-47.3
or later
Red Hat Enterprise Linux 5.0 ES requires compat-libstdc++- 33-3.2.3-61
or later
Linux on s390:
Red Hat Enterprise Linux 4.0 AS requires compat-libstdc++-33-3.2.3-47.3
or later
RHEL 5.0 requires compat-libstdc++-33-3.2.3-61 or later
Linux on x86_64:
RHEL 4.0 WS/AS/ES requires compat-libstdc++-33-3.2.3-47.3 or later
RHEL 5.0 ES requires compat-libstdc++-33-3.2.3-61 or later
Linux on s390x:
Red Hat Enterprise Linux 4.0 AS compat-libstdc++- 33-3.2.3-47.3 or
later
RHEL 5.0 compatlibstdc++-33-3.2.3-61 or later
2.4.3.4 Solaris
Shut down all WebSphere MQ applications and any other products that are
using GSKit. Before installing the WebSphere MQ fix pack uninstall the
old level of the GSKit packages using the commands:
pkgrm gsk7bas
pkgrm gsk7bas64
During the uninstallation process you will be warned that other
products have a dependency on the GSKit packages, reply 'Yes' to
continue the uninstallation process.
When installing the WebSphere MQ fix pack the new GSKit packages
gsk7bas and gsk7bas64 will be listed with the WebSphere MQ package.
If the system uses SSL channels, or will use SSL channels, these should
BOTH be selected for installation.
This is because both 64-bit and 32-bit processes may need to load GSKit
libraries.
The level of GSKit provided with this fix pack, 7.0.4.45, requires the
following system patches (listed by version of Solaris) to be installed
before GSKit is installed.
Solaris v8 for SPARC(for both 32 and 64 bit GSkit binaries):
108434-14 or later,
111327-05 or later,
108991 or later,
108993-31 or later,
108528-29 or later,
113648-03 or later,
116602-01 or later,
111317-05 or later,
111023-03 or later,
115827-01 or later
Solaris v9 for SPARC(for both 32 and 64 bit GSkit binaries):
111711-08 or later
2.4.3.5 Windows
On Windows the new version will be installed automatically when the
fix pack is installed and will be removed if the fix pack
is removed.
2.4.4 Installing GSKit version 8
--------------------------------
For information about installing GSKit version 8, refer to the Quick Beginnings
section of the WebSphere MQ Information Center for the appropriate platform:
AIX:
HP-UX:
Linux:
Solaris:
Windows:
2.4.6 Troubleshoot Installation from Support site
-------------------------------------------------
Before contacting IBM you are recommended to view the web site which has
specific advice on problem solving and data collection for WebSphere
MQ.
2.5 Uninstalling
----------------
For more information about how to remove the fix pack see
the appropriate Quick Beginnings section of the WebSphere MQ
Information Center:
AIX:
HP-UX:
Linux:
Solaris:
Windows:
2.6 List of fixes
-----------------
Fixes included in WebSphere MQ v7.0.1.11 :
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++ ++
++ Section 3 : IBM Message Service Client for .NET V2.0.0.11 ++
++ ++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
3.1 Changes in this release
---------------------------
This version of XMS .NET has been tested for interaction with
IBM WebSphere MQ 7.0.1.11.
In addition, several fixes have been applied to resolve issues highlighted
from continued testing and customer feedback.
If Microsoft .NET Framework Version 4.0 is installed on the machine, you might need to
Create or set the HKEY_LOCAL_COMPUTER\SOFTWARE\Microsoft\.NETFramework registry key as follows:
"OnlyUseLatestCLR"=dword:00000001
3.2:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++ ++
++ Section 4 : Copyright, Notices and trademarks ++
++ ++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
4.1 Copyright and trademark information
---------------------------------------
4.
THIRD-PARTY LICENSE TERMS AND CONDITIONS, NOTICES AND INFORMATION
The:
AIX FFST i5/OS IBM MQSeries SupportPac
WebSphere z/OS
Microsoft and Windows are trademarks of Microsoft Corporation
in the United States, other countries, or both.
Java and all Java-based trademarks are trademarks of Oracle.
Product Alias/Synonym
MQ WMQ
Document information
More support for:
WebSphere MQ
APAR / Maintenance
Software version: 7.0.1, 7.0.1.1, 7.0.1.2, 7.0.1.3, 7.0.1.4, 7.0.1.5, 7.0.1.6, 7.0.1.7, 7.0.1.8, 7.0.1.9, 7.0.1.10
Operating system(s): AIX, HP-UX, IBM i, Linux, Solaris, Windows
Reference #: 7039552
Modified date: 20 March 2014 | http://www-01.ibm.com/support/docview.wss?uid=swg27039552 | CC-MAIN-2017-13 | refinedweb | 6,624 | 56.45 |
This is the mail archive of the archer@sourceware.org mailing list for the Archer project.
Sorry for the delay. I'm now trying to hose down the various hornets' nests I stirred up in DWARFland. > So after a few (really, many) reads of this email I think I can > summarize the issues and solutions discussed there. I just wanted to > make sure I have a proper understanding of the issue before filing a gcc > feature request. So, Is this a correct summary: Ok. I don't think I stated an actual conclusion, just tried to air all the nuances needing consideration. Perhaps appropriate conclusions were implied by the confluence of nuances, but I did not quite assert any. > The goal is the help gdb find the proper location for variables where > declarations and definitions are separated over CU's or so's. Yes, that's the problem that we started discussing. In my ramblings, I extended it to consider finding the proper code address (or function descriptor, as appropriate) for functions too (the cases with complexity analogous to the examples we've discussed with variables being C++ methods and namespace-qualified functions). > - It requires a search of all other CU's/so' to locate the definition. > Which is inefficient "It requires" sounds like this is the only option today. That's not so. I think you might be conflating two different things. One option is to search all other CUs (in all objects) to locate a DIE that is both a defining declaration and matches the original declaration DIE of interest. That is inefficient because it's an exhaustive-search kind of method. It's incomplete for all cases where the definition you are looking for does not have a DWARF CU you can find (due to stripping or due to lack of -g at compilation, etc). Another option today is to glean an ELF symbol name by one method or another, and then look for that. This has two components: coming up with the symbol name, and searching for it. 
The symbol search portion is presumed to be more efficient than searching through DWARF, though its largest-granularity scaling problem is the same one of searching across all the ELF objects. > but also inaccurate since > > - The scope of the declaration can be different from that of the > definition (e.g. class members). That issue per se does not render the grovel-all-CUs method inaccurate, just more complex than you might think at first blush. In each CU, you have to notice each matching non-defining declaration (which does indeed have the DIE topology matching your original declaration), and check for defining declarations whose DW_AT_specification points to the match. This is just a detail of how it is both complex and costly to check all CUs for a DIE that's an appropriate defining declaration. > If DW_AT_MIPS_linkage_name is > available it can be used to resolve this, however The "glean an ELF symbol name" portion can be done in two ways. One is DW_AT_MIPS_linkage_name, which is trivially simple to code for in the debugger and trivially cheap to extract. The other is to apply the language/ABI-specific symbol name mangling algorithm to the DIE topology of your original declaration DIE of interest. If DW_AT_MIPS_linkage_name is available, it supplies the same answer(*) that you get via mangling based on DIE topology. It's not that it "resolves inaccuracy", it's just that it yields from a very simple and cheap procedure (looking at the attribute) the same answer that the much more complex procedure should yield. (*) Conversely, I had the impression from Keith that GCC (at least in the past and maybe still today) emitted the wrong mangled name for DW_AT_MIPS_linkage_name sometimes. That sort of boggles the mind, but apparently is an issue of potential concern weighing against using DW_AT_MIPS_linkage_name. > - if the definition is in a stripped DSO there is indeed a definition > (ELF) but nowhere is there a DW_AT_location pointing to it. 
Also, That is true but is not a "however" about using DW_AT_MIPS_linkage_name. Nor is it a "however" about NOT using DW_AT_MIPS_linkage_name. Rather, it is a "however" about using CU grovelling to find a definition rather than gleaning an ELF symbol name. If you rely on CU grovelling, you of course only grovel the DWARF CUs that you have, which might not include the definition. > - it is possible to have two names defined in two separate so's with the > same linkage name. eg: Yes. I gave the concrete example for this situation, but I consider it part of the same point that you can also have two symbols with the same name inside one object. For that to happen, either one or both will be local or hidden symbols, or two global symbols will be in different symbol version sets. To be fair, this could be considered an entirely orthogonal issue. It applies here no different than it does to very simple non-mangled symbol names (e.g. from C). If at any point you glean a symbol name and then look it up by directly name, you can have multiple ambiguous matches. However, if you glean a specific ELF symbol--not the name, but a particular symbol index in a particular ELF symbol table--then you can disambiguate (with potentially very complex effort, but you in theory have enough information). In the vast majority of cases where you have a DWARF CU with a non-defining declaration to start from, you should be able to glean the particular ELF symbol in that object to use. In the object containing the non-defining declaration itself there will almost always be only one ELF symbol by that name. If it's local or has non-default visibility, you can use it right there--it's the defining symbol you're looking for. If it's global, then it has a symbol version association that you can use to drive your ELF symbol search unambiguously. > Proposed solution: > > Teach the compiler to generate a DW_AT_location for a non defining > declaration that is applicable in that die's scope. 
That location > expression would be parallel to the assembly generated for the symbol I only sort of proposed this, and it's not a complete solution. > The following part I don't quite understand: > > > We could certainly teach GCC to do this. > > It would then be telling us more pieces of direct truth about the code. > > Would that not be the best thing ever? > > Well, almost. > > > > First, what about a defining declaration in a PIC CU? [...] > Why is there a need for second artificial location describing die ? As I > understand it declarationhood is specified by the die's nesting in the > die hierarchy not its DW_AT_location. In other words, what is missing in > the current way gcc specifies locations for defining declarations ? Declarationhood per se (or perhaps we should say "non-definingness") is specified by the presence of DW_AT_declaration, not by DIE toplogy. The issue is that in PIC code, what's a defining declaration in the source might actually be acting as a non-defining declaration at runtime. Every defining declaration serves two purposes. The first is to describe the declaration. Just like a non-defining declaration, this wants to tell the debugger what using this particular name in this particular context (i.e. containing DIE) means in the source program. That's the frame of mind you want when doing things like expression evaluation in a given context. This corresponds to how assembly code is generated in that context to find the address of the entity described (data address or target PC/function descriptor). The second purpose is to describe the definition. This wants to tell the debugger what piece of memory this definition is providing. That's the frame of mind you want when resolving someone else's non-defining declaration, or when trying to examine initializer values before a program is running. This corresponds to how assembly code is generated to create the definition and (perhaps) initialize it. 
In code generated today for all defining declarations, we only have a description of the definition. In non-PIC code, that suffices to describe the declaration, since it's always resolved to that selfsame definition. In PIC code, that declaration is resolved like non-defining declarations and may or may not wind up matching this same definition. Thus there is the idea for PIC code generation to emit both a declaration DIE and a definition DIE for each defining declaration. When looking to evaluate the named variable in that context, you'd use the one. When looking for a definition, you'd use the other. > This summary does not include the part starting with "Before dynamic > linker startup" to the end of the email. Mainly because I am assuming > that the main use case is after dynamic linker startup. Well, I have a few problems with assuming that. Firstly, it's just not the way to do business. If we're going to contemplate changing or refining the contract between compiler and debugger in subtle ways, we don't do it lightly and we don't consider just one use case and go and change things purely to satisfy that purpose. We need to thoroughly consider what is most correct for each case we know of, and understand what methods do or don't achieve that. We may very well decide to trade off better support for some cases against less perfect support for cases deemed less common or important. But we'll do that explicitly after understanding what we're giving up and what we're getting. Secondly, is it really the main use case? Well, maybe it is for variables. That is all you actually asked about, but I insist on answering about what you need to know, not just what you asked. For variables, what it doesn't cover is printing initial values (which ordinarily works today) and setting watchpoints. The other half of the problem is functions (including methods). 
I'll grant that they are not quite as central in debugger expression evaluation as variables, but they're important there. Moreover, they are key to a use case every bit as important to the debugging experience as expression evaluation: setting breakpoints. Finally, after the dynamic linker details, and where I ranted a little about the functions, near the end is where I came closest to drawing an actual conclusion. That putative conclusion is more or less that all the preceding new ideas are sufficiently incomplete in their own ways that no such proposals really warrant pursuit and we might as well admit we are stuck with mangled symbols and faking the ELF dance as best we can. If that's the conclusion, then the only proposals are either to have a reliable linkage_name attribute and rely on it, or to drop it as useless and expect to construct a mangled name from DIE topology. I still wouldn't say I have come to that conclusion quite yet. I described (almost) everything I understand about the possibilities and constraints. I was hoping for some other folks to gain that understanding and share their opinions about how it all fits together. Thanks, Roland | https://www.sourceware.org/ml/archer/2010-q2/msg00014.html | CC-MAIN-2017-47 | refinedweb | 1,876 | 61.06 |
Converting a sketch from 1.5.x to 2.0.x
2.0.x allows configuration (the old MyConfig stuff) to be done directly in the sketch using simple C #defines. You can still make modifications in MyConfig if you want to override the default values once and for all.
It is important to do these defines before including MySensors.h, otherwise they will go unnoticed when the library evaluates which parts to include.
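For example, a minimal configuration block with the correct ordering (the particular defines chosen here are just an example):

```cpp
// These must come BEFORE the include, or the library will not see them
#define MY_DEBUG        // enable debug output
#define MY_RADIO_NRF24  // pick ONE transport

#include <MySensors.h>
```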
The most common thing you need to do is to decide which radio to use (or not to use). We currently support three different transport layers. Only activate ONE transport per node.
// Activate one of these #define MY_RADIO_NRF24 #define MY_RADIO_RFM69 #define MY_RS485
In addition to the transport you can also allow the node to be a gateway (communicating with the controller). We currently offer a few variants here as well. Note that ESP8266 is always considered to be a gateway node (but it can still communicate with a sub-network of non-ESP sensor nodes). If you want to enable gateway functionality, enable one of these:
// Enable serial gateway #define MY_GATEWAY_SERIAL // Enable gateway for a W5100 ethernet module #define MY_GATEWAY_W5100 // Enable gateway for an ENC28J60 ethernet module #define MY_GATEWAY_ENC28J60 // Enable an MQTT client gateway for W5100 or ESP node #define MY_GATEWAY_MQTT_CLIENT
Each type of gateway has additional configuration available (ip settings etc). Please look at the prepared example for more details.
To upgrade an old 1.5 sketch to 2.0 there are just a few steps to go through.
Remove the
MySensors gw; class initialization and
gw.begin()-call. It's no longer needed. Use #defines to set radio channels or other settings. They are all documented in MyConfig.h.
The library now calls process() automatically, so you must remove it from your main loop().
Remove the
gw. part before any library call. It's no longer needed, as library functions are available in the main scope. For instance,
gw.send(msg) should just be
send(msg).
The controller can now re-request presentation information from a node. This means we have to move all node presentation to a new special function
presentation() {}. This will be called at node startup or when the controller requests a new presentation.
So if your old node had this:
void setup() { gw.sendSketchInfo("Distance Sensor", "1.0"); gw.present(CHILD_ID, S_DISTANCE); }
Do like this instead:
void presentation() { sendSketchInfo("Distance Sensor", "1.0"); present(CHILD_ID, S_DISTANCE); }
If you need to do initialization before the MySensors library starts up, define a before() function:
void before() { // This will execute before MySensors starts up }
You can still use setup() which is executed AFTER mysensors has been initialised.
void setup() { }
To handle received messages, define the following function in your sketch
void receive(const MyMessage &message) { // Handle incoming message }
If your node requests time using
requestTime(). The following function is used to pick up the response.
void receiveTime(unsigned long ts) { }
Here follows a small sample of the defines you can use in your sketch
#define MY_DEBUG // Enables debug messages in the serial log #define MY_REPEATER_FEATURE // Enables repeater functionality for a radio node #define MY_BAUD_RATE 9600 // Sets the serial baud rate for console and serial gateway #define MY_NODE_ID 42 // Sets a static id for a node #define MY_PARENT_NODE_ID xx // Sets a static parent node id #define MY_OTA_FIRMWARE_FEATURE // Enables OTA firmware updates #define MY_RF24_CE_PIN 9 // Radio specific settings for RF24 #define MY_RF24_CS_PIN 10 // Radio specific settings for RF24 (you'll find similar config for RFM69) #define MY_INCLUSION_MODE_FEATURE // Enables inclusion mode (for a gateway) #define MY_INCLUSION_BUTTON_FEATURE // Enables inclusion mode button (for a gateway) #define MY_LEDS_BLINKING_FEATURE // Enables transmission led feature for a node or gateway #define MY_SIGNING_ATSHA204 // Enables hardware signing using ATSHA204 #define MY_SIGNING_SOFT // Enables software signing
Security related settings (HMAC and AES keys among other things) are now configured using the SecurityPersonalizer sketch. Secrets are stored in eeprom for all software based security features (including rfm69 encryption).
There are many more things you can tweak using defines and we've tried to include most of them in relevant examples. The MyConfig.h/keywords.txt shows you the full list of defines available.
Hi,
How does this work when your gateway also has local sensors? I tried to merge the Humidity sensor (DHT11) with the SerialGateway, but it seems that presentation is not called after uploading the sketch. This is what I have:
void setup() { // Setup locally attached sensors dht.setup(HUMIDITY_SENSOR_DIGITAL_PIN); metric = getConfig().isMetric; } void presentation() { // Present locally attached sensors sendSketchInfo("My Gateway", "1.0"); present(CHILD_ID_HUM, S_HUM); present(CHILD_ID_TEMP, S_TEMP); }
And the output is:
0;255;3;0;14;Gateway startup complete. 0;255;3;0;9;No registration required 0;255;3;0;9;Init complete, id=0, parent=0, distance=0, registration=1 0;1;1;0;0;28.0 0;0;1;0;1;59.0
If I put the "present" lines at the setup() then I got the expected lines:
0;255;3;0;14;Gateway startup complete. 0;255;3;0;11;My Gateway 0;255;3;0;12;1.0 0;0;0;0;7; 0;1;0;0;6; 0;255;3;0;9;No registration required 0;255;3;0;9;Init complete, id=0, parent=0, distance=0, registration=1 0;1;1;0;0;28.0 0;0;1;0;1;59.0
Is this expected or did I miss something? I'm trying to use this gateway with homeassistant and it seems that homeassistant expects those presentation lines. There's an issue with the use of node id "0" too, but I should leave that to another topic.
- rollercontainer
Maybe related to this problem?
My gateway does the same. No presentation, but sending data.
The API looks very clean and simple; I look forward to using it more.
Does Arduino/MySensors use C++ features (namespaces?) to prevent name collisions? In typical C projects I've seen the functions all beginning with some prefix (mysensors_).
@rollercontainer Where should the "if (presentation).." be? At the setup()?
@chrlyra there is no such thing as if(presentation)...
The presentation function implementation can be anywhere in your sketch.
@Yveaux rollercontainer pointed to this issue, where hek asked him to add two lines of code, the first one being "if (presentation)".
Great work ! This version looks a lot cleaner and the support for the RFM69 is more integrated. Thanks for that. One thing I'm missing and that's the specification of the encryption key for the RFM69. This was done in version 1.5.x by a user defined 16 bytes string. Now i see only a #define MY_RFM69_ENABLE_ENCRYPTION.
How and where is the key specified ? How does this make my setup unique ?
Thanks.
@Rolo6442u
If I remember well, it is handled by SecurityPersonalizer.ino. It is a sketch for signing options, and I think you can set/generate your RF AES key there. It stores it in eeprom. Then you upload the new sketch with the encryption define enabled. For me it's ok, as I use signing, which needs to use the personalizer.
I think this works like this, not sure as I have not enabled encryption yet. Am I right @Anticimex ?
- Anticimex Contest Winner
@Rolo6442u
@scalz is correct. Details are in the doxygen documentation for usage. Link is on the GitHub readme. Look under the signing module there.
@Anticimex
@scalz
Thanks, I got it working. By setting it to "soft" this sketch writes all keys to eeprom and will not look for a hardware signing module. The RFM69 uses the AES key for encryption. I defined my own key again in the sketch.
Nice solution !
- Anticimex Contest Winner
@Rolo6442u precisely. Glad to hear that the docs are helping
and thanks!
How can i call the process() inside a while in the loop() ?
Thanks
- nielsokker
I think the function "wait()" might be useful. It is like a sleep, but now it calls "process()"
@nielsokker said:
I think the function "wait()" might be useful. It is like a sleep, but now it calls "process()"
I was thinking of using it with wait(0) but I was wondering if there was any way to call the process directly.
- nielsokker
I'm not sure. I think the developers will know.
@pascalgauthier why do you need to call process? It's already done internally by the lib. Or is it for inside some long-running loop? You can still use process, but now it's _process()
Wait is not the same as sleep.
wait: waits for a time and calls _process
sleep: does not call _process; it sleeps, for a time if set
@scalz
Because I have a while() that increments motion detections in a 30 sec timeframe, and I would like to be sure that I'm not missing any cmd from the gateway. Do I need to define any additional library to use the _process() ?
- AWI Hero Member
@pascalgauthier I would recommend making your while loop non-blocking instead of hacking into a MySensors function.
@pascalgauthier
Yes, it's better non-blocking, and for non-blocking it's better to use "if"-style checks. To begin, I advise you to look at the "blink without delay" concept. Then learning a bit about how a state machine works etc. could be useful to you, but it's a bit more advanced, or not..
Little bit confused. reading through the release notes...
Deprecated variables: V_DIMMER (use V_PERCENTAGE), V_HEATER (use V_HVAC_FLOW_STATE), V_LIGHT (use V_STATUS)
Now back to the API page for 2.0...
S_DIMMER 4 Dimmable device of some kind V_STATUS (on/off), V_DIMMER (dimmer level 0-100), V_WATT
WHICH ONE TRUST?!
And if my loop statement consists only of
void loop() { gw.process(); }
Rewrite like
void loop() { }
or exclude loop completely?
@moskovskiy82 said:
Rewrite like
void loop() { }
That's correct
@moskovskiy82 said:
WHICH ONE TRUST?!
The release notes (as I added this statement)
Dear all, I'm a bit lost.
I upgraded the library in Arduino to 2.0.0 and now I am having big problems.
I downloaded the sketch for the relay from the MySensors website:
but I suppose this is still for 1.5
I tried to modify the sketch as suggested - I do not receive any compiling error, but the sketch simply does not do anything. If I use the "monitor" in Arduino IDE, I do not see anything 0_1469262004446_RelayActuator2.ino
There must be something fundamentally wrong in my code, can anyone help?
@Maurizio-Collu the examples for 2.0 are included in the library. They can be opened directly from the Examples menu in the Arduino IDE. It is also possible to fetch the examples from github, see For the relay example.
You have a few mistakes in your sketch. Download the V2 sketch that @mfalkvidd has pointed you to and compare the two side by side; you will soon see where you have gone wrong. Don't forget you will need to update your gateway to V2 as well: a version 1.5 gateway will not connect with a version 2 node.
Thanks a lot, I knew I was missing something basic. For the newbies like me, it would be good to mention this somewhere (if you have not done so already and I missed it).
In the Home Assistant website it is mentioned that they only support 1.4 and 1.5. I'm trying with 2.0.
Thanks Again
- martinhjelmare Plugin Developer
2.0 should work with home assistant, but all new features of 2.0 are not supported yet. WIP.
I'm trying to upgrade my gateway to 2.0, but every time I go to upload, I get a 'not in sync' error.
I'm using just a standard nano w/ NRF24.
Note, my hardware profile is set properly to nano, 328p and proper com.
I restart app, and switched to my APM hardware profile and loaded my new 2.0 sensor code into my new APM node (DHT11 + motion) and was able to upload no problem.
Is there something i'm missing for the gateway?
I don't understand what would cause it to stop being able to communicate,
Are you able to upload some other sketch to the nano?
Or has something bad happened to the FTDI chip on your nano perhaps?
Just for clarification. V2.0 gateway (MQTT client esp8266) - will it work with the 1.5/1.6 sensors? Or everything needs to be flashed as soon as possible?
@hek it's actually a clone nano with an FTDI board connected via serial inputs. I had this directly connected to my Vera, and communication with my old sensors was working just fine.
EDIT: Fixed. All in all, it was user error. I was trying to connect through FTDI adapter. I unplugged FTDI board, connected USB directly to nano clone, and upload worked fine.
Do I need to update mysensors plugin as well on Vera UI5?
@moskovskiy82 said:
Just for clarification. V2.0 gateway (MQTT client esp8266) - will it work with the 1.5/1.6 sensors?
You could try, but the recommendation is to update the nodes as well.
?
@martinhjelmare
Plus, when starting home assistant, I get this error
16-07-26 23:03:44 homeassistant.bootstrap: Error during setup of component mysensors
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/homeassistant/components/mysensors.py", line 62, in setup_gateway
socket.inet_aton(device)
OSError: illegal IP address string passed to inet_aton
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/homeassistant/bootstrap.py", line 150, in _setup_component
if not component.setup(hass, config):
File "/usr/local/lib/python3.4/dist-packages/homeassistant/components/mysensors.py", line 109, in setup
device, persistence_file, baud_rate, tcp_port)
File "/usr/local/lib/python3.4/dist-packages/homeassistant/components/mysensors.py", line 73, in setup_gateway
baud=baud_rate)
File "/home/pi/.homeassistant/deps/mysensors/mysensors.py", line 326, in init
persistence_file, protocol_version)
File "/home/pi/.homeassistant/deps/mysensors/mysensors.py", line 40, in init
self.const = _const
UnboundLocalError: local variable '_const' referenced before assignment
- martinhjelmare Plugin Developer
Regarding the error in home assistant, that's a bug, due to specifying a mysensors version other than
1.4 or
1.5. Use
1.5 and you should be fine. The bug is fixed in the dev branch of pymysensors and will be fixed in home assistant when real mysensors 2.0 support is merged. WIP.
Please post home assistant topics in the home assistant category under controllers. This is off topic in this thread.
- martinhjelmare Plugin Developer
@Maurizio-Collu said:
?
@tekka knows this best, and he wrote a post explaining some of those messages here:
@martinhjelmare Thanks a lot.
Is there any guide/manual where all these messages are explained?
Plus, is there anywhere written what all these terms mean?
TSP:MSG:SEND 255-255-0-0 s=255,c=3,t=3,pt=0,l=0,sg=0,ft=0,st=ok:
Kind Regards
@Maurizio-Collu do you mean except the link martinhjelmare provided?
@mfalkvidd Yes, basically this part (I'm sure it is explained somewhere, but I can't find where)
255-255-0-0 s=255,c=3,t=3,pt=0,l=0,sg=0,ft=0,st=ok:
Sorry for my newbieness...
@Maurizio-Collu Yes, the doc is in preparation, the meaning of these messages is described here (this is a PR that harmonizes the log message and will be pushed to 2.0.1).
In brief:
TSP:MSG:SEND 255-255-0-0 s=255,c=3,t=3,pt=0,l=0,sg=0,ft=0,st=ok:
TSP:MSG:SEND refers to the message sending function in the transport state machine
255-255-0-0 is the routing information, i.e. sender-last-next-destination
s=255,c=3,t=3,pt=0,l=0,sg=0,ft=0,st=ok:
- s=sensor ID (255 = internal)
- c=command (3 = C_INTERNAL)
- t=type (3 = I_ID_REQUEST)
- pt=payload type (0 = P_STRING)
- l=message length (0)
- sg=signature flag (0 = not signed)
- ft=failed transmission counter (0 = no failed uplink transmission at this point)
- st=send status (OK)
- : actual message (empty)
=> Your node has no assigned ID (=255) and is requesting a new ID from the controller.
Please refer to the serial protocol or API for additional information.
Good luck!
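If you want to script against these debug lines (for instance to filter failed transmissions), the format above is regular enough to parse. Here is a small hypothetical Python helper — not part of MySensors itself — that splits a line into its routing and field parts:

```python
# Hypothetical helper, not part of MySensors: parse one debug line of the
# form "<sender>-<last>-<next>-<dest> s=..,c=..,..,st=..:<payload>".
def parse_msg(line):
    route, rest = line.split(" ", 1)
    sender, last, next_hop, dest = (int(n) for n in route.split("-"))
    fields_part, _, payload = rest.partition(":")
    fields = dict(pair.split("=") for pair in fields_part.split(","))
    return {"sender": sender, "last": last, "next": next_hop,
            "dest": dest, "payload": payload, **fields}

msg = parse_msg("255-255-0-0 s=255,c=3,t=3,pt=0,l=0,sg=0,ft=0,st=ok:")
# msg["c"] == "3" (C_INTERNAL); msg["t"] == "3" (I_ID_REQUEST); empty payload
```

The field keys come straight from the message (`s`, `c`, `t`, `pt`, `l`, `sg`, `ft`, `st`), so decoding them to names like C_INTERNAL is left to a lookup table of your own.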
Hi,
I was trying to implement the irrigation controller but realized that it was not migrated to MySensors 2.0.
I followed the instructions provided in the first post and arrived at a version that compiles with no errors. Nonetheless, as I am a newbie, I would appreciate it if someone could review some changes that I am not sure about. Those changes are related to the original sketch, which has direct calls to process() in several while statements. First I tried to change process() to _process(), but several errors appeared, so finally I removed all the calls to process() — but I am not sure about that. I have the modified code here, IrrigationController.ino, in which I commented the lines added / deleted / modified. Any help / direction would be greatly appreciated.
The sketches that require external libraries are not included in the V2 install anymore but they are available here in the new V2 format. You will find the irrigation sketch there.
@Boots33 Great! Thank you for the quick response and the URL. I took that version. Just wanted to let you know that to compile it successfully I needed to remove the #include <LiquidCrystal.h>, and probably an update to that sketch is needed in the GitHub repository.
Thank you again!
@pndgt0 That may be because you do not have the LiquidCrystal library installed. They have the external libraries there as well
I built a 2.0 MQTT gateway and am experimenting with a sensor using the 1.5 library. I can see incoming data in the serial monitor, but the data is a bit different than when I was using the 1.5 MQTT gateway. In openhab I used this:
{mqtt="<[mysensor:MyMQTT/3/2/1/V_TRIPPED:state:CLOSED:1],<[mysensor:MyMQTT/3/2/1/V_TRIPPED:state:OPEN:0]"}
But now I must skip the V_Tripped part to make it work like this:
{mqtt="<[mysensor:mygateway1-out/3/2/1/0/16:state:OPEN:1],<[mysensor:mygateway1-out/3/2/1/0/16:state:CLOSED:0]"}
Is this how it is supposed to be, or do I have to make the sensor node 2.0 compatible first? I thought the payload is saved in the V_TRIPPED variable, but it doesn't seem to work in the new 2.0 library. Please advise.
edit:
Ok, dug a bit deeper: if I understand it right, I don't need "V_TRIPPED" or any other value in my controller's code (which is openhab actually) anymore, but still in my sensor node code of course. So the gateway will transform "V_TRIPPED" into sub-type "16". Correct? So all I'll have to change is my openhab code, right?
@siod correct. The conversion of value code to text has been removed in the 2.0.0 implementation, so now the raw values codes are reported in the topic. | https://forum.mysensors.org/topic/4276/converting-a-sketch-from-1-5-x-to-2-0-x/28?lang=en-US | CC-MAIN-2020-24 | refinedweb | 3,211 | 66.94 |
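To make that mapping concrete, here is a hedged Python sketch (the helper name and dict are my own; the codes come from the MySensors serial API, where V_TRIPPED is sub-type 16) that rebuilds a 2.0-style out-topic from symbolic names:

```python
# Hedged sketch: the topic layout is prefix/node-id/child-sensor-id/command/ack/type,
# and V_TRIPPED is sub-type code 16 in the MySensors serial API.
V_TYPES = {"V_STATUS": 2, "V_ARMED": 15, "V_TRIPPED": 16}

def out_topic(node, child, cmd, ack, vtype, prefix="mygateway1-out"):
    return "{}/{}/{}/{}/{}/{}".format(prefix, node, child, cmd, ack, V_TYPES[vtype])

out_topic(3, 2, 1, 0, "V_TRIPPED")  # -> 'mygateway1-out/3/2/1/0/16'
```

This matches the openhab binding string above: node 3, child sensor 2, command 1 (set), ack 0, value type 16.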
types(5) types(5)
NAME
types - primitive system data types
SYNOPSIS
#include <sys/types.h>
DESCRIPTION
Remarks
The example given on this page is a typical version. The type names
are in general expected to be present, although exceptions (if any)
may be described in DEPENDENCIES. In most cases the fundamental type
which implements each typedef is implementation dependent as long as
source code which uses those typedefs need not be changed. In some
cases the typedef is actually a shorthand for a commonly used type,
and will not vary.
The data types defined in the include file are used in HP-UX system
code; some data of these types are accessible to user code:
typedef struct { int r[1]; } *physadr;
typedef char *caddr_t;
typedef unsigned int uint;
typedef unsigned short ushort;
typedef unsigned long ino_t;
typedef short cnt_t;
typedef long time_t;
typedef long dev_t;
typedef long off_t;
typedef long paddr_t;
typedef long key_t;
typedef int32_t pid_t;
typedef long uid_t;
typedef long gid_t;
typedef long blkcnt_t;
Note that the defined names above are standardized, but the actual
type to which they are defined may vary between HP-UX implementations.
The meanings of the types are:
physadr used as a pointer to memory; the pointer is aligned
to follow hardware-dependent instruction addressing
conventions.
caddr_t used as an untyped pointer or a pointer to untyped
memory.
uint shorthand for unsigned integer.
ushort shorthand for unsigned short.
Hewlett-Packard Company - 1 - HP-UX 11i Version 2: August 2003
ino_t used to specify I-numbers. All native file systems
(including HFS and VxFS 3.5) through HP-UX 11i, use
values that will fit within 32-bits. Some remote NFS
servers may use larger values, which will be
truncated without error for 32-bit applications and
may not result in unique values.
cnt_t used in some implementations to hold reference counts
for some kernel data structures.
time_t time encoded in seconds since 00:00:00 GMT, January
1, 1970.
dev_t specifies kind and unit number of a device, encoded
in two parts known as major and minor.
off_t offset measured in bytes from the beginning of a
file. If a 32-bit application is compiled with
-D_FILE_OFFSET_BITS=64 or -D_LARGEFILE64_SOURCE,
off_t will become an int64_t.
paddr_t used as an integer type which is properly sized to
hold a pointer.
key_t the type of a key used to obtain a message queue,
semaphore, or shared memory identifier, see
stdipc(3C).
pid_t used to specify process and process group
identifiers.
uid_t used to specify user identifiers.
 gid_t                   used to specify group identifiers.
blkcnt_t disk quota or transfer size measured in blocks. If a
32-bit application is compiled with
-D_FILE_OFFSET_BITS=64 or -D_LARGEFILE64_SOURCE,
blkcnt_t will become an int64_t.
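Several of these typedefs surface directly through the stat() interface. As an illustration only (not part of this manual page, and assuming a Unix-like system with Python available), the values can be inspected like this:

```python
# Illustration only (not from this manual page; assumes a Unix-like
# system): several of the primitive types above surface through os.stat().
import os
import time

st = os.stat("/")
inode = st.st_ino                                          # ino_t: the file's I-number
major, minor = os.major(st.st_dev), os.minor(st.st_dev)    # dev_t: major/minor parts
assert os.makedev(major, minor) == st.st_dev               # repack the encoding
owner, group = st.st_uid, st.st_gid                        # uid_t and gid_t
blocks = st.st_blocks                                      # blkcnt_t: 512-byte blocks
modified = time.ctime(st.st_mtime)                         # time_t: seconds since 1970-01-01 GMT
```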
STANDARDS CONFORMANCE
<sys/types.h>: AES, SVID3, XPG2, XPG3, XPG4, FIPS 151-2, POSIX.1
Hi Bruce,

On Mon, Jul 24, 2017 at 10:33:25AM -0400, bruce wrote:
> Hi.
>
> I've seen sites discuss decorators, as functions that "wrap" and
> return functions.
>
> But, I'm sooo confuzed! My real question though, can a decorator have
> multiple internal functions? All the examples I've seen so far have a
> single internal function.

Yes, a decorator can have multiple internal functions. A decorator is just a function, and it can contain anything a function contains. What makes it specifically a decorator is what you use it for.

Let's step back and cover a basic: nested functions.

def function(x):
    def add_one():
        return x + 1
    def times_two():
        return x*2
    return (add_one(), times_two())

function(10)

Can you predict what the result of that will be? I hope you can predict that it will return (11, 20). Can you see why?

That sort of nested function isn't very interesting, and you won't see much code doing that. But it demonstrates that a function can contain multiple inner functions.

Now let's look at something that is often called a *factory function* -- a function which creates and returns a new function.

def factory(a):
    def inner(x):
        return a + x
    return inner  # No parentheses!

add_one = factory(1)  # create a new function & assign it to add_one
add_two = factory(2)

add_one(100)  # returns 101
add_two(100)  # returns 102

How this works isn't important (the technical term is "a closure") but the important factor is this: factory() creates a new function, and returns it. That function can then be used like any other function created with "def".

Functions can not only *return* functions as their return result, but they can take functions as arguments. We say that "functions are first class values" -- in Python, functions are just another kind of data, like ints, floats, strings, lists, dicts and more.

The most common examples of passing a function as input to another function include:

- builtin functions map() and reduce();
- the key argument to sorted() and list.sort();
- GUI libraries that take callback functions;
- and of course, decorators.

So let's start to create our first decorator. A decorator takes a function as argument, "decorates" it in some way (usually by adding some sort of extra functionality), and then returns the decorated function. Here's a simple example: a decorator that makes the other function print a message.

def decorate(func):
    def inner(arg):
        print('received argument %r' % arg)
        return func(arg)
    return inner

Now let's set up a couple of functions:

def add_five(x):
    return x+5

def add_ten(x):
    return x+10

And decorate them:

add_five = decorate(add_five)
add_ten = decorate(add_ten)

Try predicting what

result = add_five(100)

will print, and what the result will be. Likewise for:

result = add_ten(50)

Try running the code and see if you are correct.

The way I decorated the functions above is a little bit clumsy, so Python has a special syntax to make it easier: the "@decorate" syntax.

@decorate
def times_three(x):
    return x*3

result = times_three(5)

Can you predict what that will do?

There's more to decorators than that, but hopefully that will demonstrate some of the basic concepts. Feel free to ask any more questions on the mailing list, and we will answer if we can.

-- Steve
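Tying this back to Bruce's original question, here is one additional sketch (mine, not from the list post) of a single decorator that really does contain multiple internal functions, with the wrapper calling its sibling helper:

```python
# One possible answer to the original question (my own sketch, not from
# the post): a decorator containing two internal functions, where the
# returned wrapper calls the other helper.
def decorate(func):
    def announce(arg):
        print('calling %s with %r' % (func.__name__, arg))
    def inner(arg):
        announce(arg)      # the wrapper uses its sibling internal function
        return func(arg)
    return inner           # only the wrapper is returned

@decorate
def times_three(x):
    return x*3

result = times_three(5)    # prints: calling times_three with 5
# result == 15
```

Only `inner` is returned; `announce` lives on inside the closure, which is exactly why a decorator may contain as many internal functions as it likes.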
stuck in bitwise operators
Aj Mathia
Ranch Hand
Joined: Apr 11, 2003
Posts: 478
posted Jul 23, 2004 01:20:00
Hi All,
I am trying to create a file which i will later send to a cobol based machine . so i have to compress numbers in comp3 format.
to achieve this i need to represent 2 numbers in one byte.
eg to send the number 12 what i do is
0x0010 | 0x0002 which returns 0x0012
etc and write the result in a file
in all cases from 00 to 79 it works fine, but for 80, 82-89, 91-99
for all these mentioned numbers i am getting the result of the OR operation as 0x003f
ie 0x0080 | 0x0000 returns a 0x003f
can anyone tell me what simple stuff i am missing
and also if anyone has any java utility or approach for decimal to COMP-3 conversions
Thanks
You think you know me .... You will never know me ... You know only what I let you know ... You are just a puppet ...
--CMG
Stan James
(instanceof Sidekick)
Ranch Hand
Joined: Jan 29, 2003
Posts: 8791
posted Jul 23, 2004 08:56:00
Try a forum search on COMP3 or packed decimal. We had a thread on this earlier in the year. I may still have some code on another PC that I can bring back.
A good question is never answered. It is not a bolt to be tightened into place but a seed to be planted and to bear more seed toward the hope of greening the landscape of the idea. John Ciardi
Dirk Schreckmann
Sheriff
Joined: Dec 10, 2001
Posts: 7023
posted Jul 23, 2004 11:22:00
Ajay,
Dirk Schreckmann posted Jul 23, 2004 11:25:00
i am getting the result of the OR operation as 0x003f
ie 0x0080 | 0x0000 returns a 0x003f
can anyone tell me what simple stuff i am missing
What code are you using to create such a result?
Aj Mathia posted Jul 23, 2004 22:00:00
Hi Dirk,
Firstly i have changed the display name to include my surname as well
was that the problem? coz i had a quick read through the naming policy and could not figure out any specific problem as per my display name provided
if Ajay is offensive then im sorry but thats my name
ok coming to the problem
i have made a method as follows
public void convertToComp3(int num1, int num2){
    int[] a = {0x0000,0x0010,0x0020,0x0030,
               0x0040,0x0050,0x0060,0x0070,0x0080,0x0090};
    int[] b = {0x0000,0x0001,0x0002,0x0003,
               0x0004,0x0005,0x0006,0x0007,0x0008,0x0009};
    num1 = a[num1];
    num2 = b[num2];
    // open a file in append mode
    int result = (num1 | num2);
    // write(result);
}
// calling method (the original post gave this as pseudocode)
for (int i = 0; i <= 9; i++) {
    for (int j = 0; j <= 9; j++) {
        convertToComp3(i, j);
    }
}
now when i open the created file in a hex editor or as a hex file
i have entries
0000
0001 etc till 0079, then instead of 0080 i have 003f; this is from 80 to 99 except for 2 entries that i mentioned above, the 80 and the other one
i changed the int a[]={0x000F, 0x001F, 0x002F .....} and b[]={0x00f0, 0x00f1,....}
and the step result = (num1 & num2) but got the same result.
hope i am clear in what i explained
Thanks
Dirk Schreckmann posted Jul 23, 2004 22:09:00
i have changed the display name to include my surname as well
was that the problem? coz i had a quick read through the naming policy and could not figure out any specific problem as per my display name
Yes. We require display names of the pattern first name + SPACE + a last name. I agree that the naming policy document doesn't currently make that immediately clear. I'll see about changing it.
Thanks.
Dirk Schreckmann posted Jul 23, 2004 22:39:00
I just wrote a program following your posted logic, and it worked fine.
Perhaps you're doing something while writing the results to the file that isn't working as expected. What does that code look like?
(I don't think it's relevant, but note that I don't have a hex editor, so I'm merely looking at regular text results.)
[ July 23, 2004: Message edited by: Dirk Schreckmann ]
Stan James (instanceof Sidekick) posted Jul 24, 2004 12:50:00
Here's what I had from last time this came up ... I hope the test will show how the pd class can be used ...
package com.saxman.sandbox;

import junit.framework.*;

public class PackedDecimalTester extends TestCase {

    public void testZero() {
        PackedDecimal pd = new PackedDecimal(0, 4);
        assertEquals(0, pd.toInt());
        byte[] bytes = pd.toByteArray();
        assertEquals(0x0F, bytes[3]);
    }

    public void testTwo() {
        PackedDecimal pd = new PackedDecimal(2, 4);
        assertEquals(2, pd.toInt());
        byte[] bytes = pd.toByteArray();
        assertEquals(0x2F, bytes[3]);
    }

    public void testTwoDigits() {
        PackedDecimal pd = new PackedDecimal(21, 4);
        assertEquals(21, pd.toInt());
        byte[] bytes = pd.toByteArray();
        assertEquals(0x1F, bytes[3]);
        assertEquals(0x02, bytes[2]);
    }

    public void testThreeDigits() {
        PackedDecimal pd = new PackedDecimal(321, 4);
        assertEquals(321, pd.toInt());
        byte[] bytes = pd.toByteArray();
        assertEquals(0x1F, bytes[3]);
        assertEquals(0x32, bytes[2]);
    }

    public void testFourDigits() {
        PackedDecimal pd = new PackedDecimal(4321, 4);
        assertEquals(4321, pd.toInt());
        byte[] bytes = pd.toByteArray();
        assertEquals(0x1F, bytes[3]);
        assertEquals(0x32, bytes[2]);
        assertEquals(0x04, bytes[1]);
    }

    public void testSevenDigits() {
        PackedDecimal pd = new PackedDecimal(7654321, 4);
        assertEquals(7654321, pd.toInt());
        byte[] bytes = pd.toByteArray();
        assertEquals(0x1F, bytes[3]);
        assertEquals(0x32, bytes[2]);
        assertEquals(0x54, bytes[1]);
        assertEquals(0x76, bytes[0]);
    }

    public void testNegative() {
        PackedDecimal pd = new PackedDecimal(-4321, 4);
        assertEquals(-4321, pd.toInt());
        byte[] bytes = pd.toByteArray();
        assertEquals(0x1C, bytes[3]);
        assertEquals(0x32, bytes[2]);
        assertEquals(0x04, bytes[1]);
    }

    public void testOneByte() {
        byte[] lBytes = new byte[] { 0, 0, 0, 0x1f };
        PackedDecimal pd = new PackedDecimal(lBytes);
        assertEquals(1, pd.toInt());
    }

    public void testFourBytes() {
        byte[] lBytes = new byte[] { 0x76, 0x54, 0x32, 0x1f };
        PackedDecimal pd = new PackedDecimal(lBytes);
        assertEquals(7654321, pd.toInt());
    }

    public void testNegativeBytes() {
        byte[] lBytes = new byte[] { 0x76, 0x54, 0x32, 0x1c };
        PackedDecimal pd = new PackedDecimal(lBytes);
        assertEquals(-7654321, pd.toInt());
    }

    //=============================== Housekeeping ===============================

    public PackedDecimalTester(java.lang.String testName) {
        super(testName);
    }

    public static void main(java.lang.String[] args) {
        junit.textui.TestRunner.run(suite());
    }

    public static Test suite() {
        TestSuite suite = new TestSuite(PackedDecimalTester.class);
        return suite;
    }

    public void setUp() { }

    public void tearDown() { }
}
and
package com.saxman.sandbox;

/**
 * Converts between integer and an array of bytes in IBM mainframe packed decimal
 * format. The number of bytes required to store an integer is (digits + 1) / 2.
 * For example, a 7 digit number can be stored in 4 bytes. Each pair of digits is
 * packed into the two nibbles of one byte. The last nibble contains the sign,
 * 0F for positive and 0C for negative. For example 7654321 becomes 0x76 0x54
 * 0x32 0x1F.
 *
 * This class is immutable. Once constructed you can extract the value as an int
 * or an array of bytes, but you cannot change the value. Someone should implement
 * equals() and hashcode() to make this thing truly useful.
 */
public class PackedDecimal {

    private byte[] mBytes;
    private int mInt;

    private static final byte POSITIVE = 0x0f;
    private static final byte NEGATIVE = 0x0c;

    public PackedDecimal(int aInt, int aByteCount) {
        mInt = aInt;
        mBytes = new byte[aByteCount];
        int lByteIx = aByteCount - 1;
        // right nibble of first byte is the sign
        if (aInt >= 0)
            mBytes[lByteIx] = POSITIVE;
        else {
            mBytes[lByteIx] = NEGATIVE;
            aInt = -aInt;
        }
        // left nibble of the first byte is the ones
        int lDigit = aInt % 10;
        mBytes[lByteIx] |= (lDigit << 4);
        while (lByteIx > 0) {
            // next byte over
            lByteIx--;
            // right nibble
            aInt = aInt / 10;
            lDigit = aInt % 10;
            mBytes[lByteIx] |= (lDigit);
            // left nibble
            aInt = aInt / 10;
            lDigit = aInt % 10;
            mBytes[lByteIx] |= (lDigit << 4);
        }
    }

    public PackedDecimal(byte[] aBytes) {
        mBytes = aBytes;
        mInt = 0;
        int lByteIx = aBytes.length - 1;
        mInt += aBytes[lByteIx] >> 4;
        int lFactor = 10;
        while (lByteIx > 0) {
            lByteIx--;
            mInt += ((aBytes[lByteIx] & 0x0f) * lFactor);
            lFactor *= 10;
            mInt += ((aBytes[lByteIx] >> 4) * lFactor);
            lFactor *= 10;
        }
        if ((aBytes[aBytes.length - 1] & 0x0f) == NEGATIVE)
            mInt *= -1;
    }

    public int toInt() {
        return mInt;
    }

    public byte[] toByteArray() {
        return mBytes;
    }
}
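For readers who want to cross-check the nibble layout Stan's class produces, here is a short Python sketch (my own hypothetical helper, not from this thread) that implements the same packing rules; its results match the expectations in the JUnit tests above:

```python
# Hypothetical helper (mine, not from the thread): pack an int into IBM
# COMP-3 / packed decimal. Two digits per byte; the low nibble of the last
# byte is the sign: 0x0F positive, 0x0C negative.
# `value` must fit in 2 * byte_count - 1 digits.
def pack_comp3(value, byte_count):
    sign = 0x0F if value >= 0 else 0x0C
    digits = str(abs(value)).rjust(2 * byte_count - 1, "0")
    nibbles = [int(d) for d in digits] + [sign]
    return bytes(hi << 4 | lo for hi, lo in zip(nibbles[0::2], nibbles[1::2]))

pack_comp3(7654321, 4)  # -> b'\x76\x54\x32\x1f', as in testSevenDigits
pack_comp3(-4321, 4)    # -> b'\x00\x04\x32\x1c', as in testNegative
```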
Aj Mathia posted Jul 25, 2004 19:42:00
Hi Dirk, Stan,
thanks Stan for your swift reply.
I had a quick scan through the code you gave me and it works fine till the point where my initial problem comes in.
I am quite certain that it is because int, byte etc act differently the moment the sign bit is set, ie for 0x0080, 80 is represented as 1000 0000 in int,
and byte does not accommodate this number either.
Stan in the code you gave me if i try
PackedDecimal pd = new PackedDecimal(7779821,4 ); or
PackedDecimal pd = new PackedDecimal(9874321,4 );
the 98 is not handled right.
similarly
byte[] lBytes = new byte[] { 0x80, 0x54, 0x32, 0x1c };
PackedDecimal pd = new PackedDecimal(lBytes);
throws a compilation error as 0x80 is out of range for byte.
any tips as to how i can handle this
Dirk my file writing routine is very basic as this is the first time i am writing to a file
try {
    BufferedWriter out = new BufferedWriter(new FileWriter("am.txt", true));
    out.write(myByte);
    out.close();
}
catch (IOException e) {
    e.printStackTrace();
}
i also tried out.write(128) ie 128 hex value is 80
i tried int a=0x0100;
a= a>>1;
out.write(a);
but in all the cases the result is 0x3f
to see the hex values i use textpad and when i open am.txt (the output file)
i select the option view binary.
i really appreciate your responses
Thanks
Aj Mathia posted Jul 26, 2004 00:32:00
Hi guys
I figured out where i was going wrong.
the problem was i was using the BufferedWriter to write to the file.
and there is no character associated with hex 80 or 82, 83 .... 99
so it by default puts in a ? ie 0x3f
when i changed the file approach to DataOutputStream it puts the hex values just fine
Thanks a lot for your help till now
Ajay
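The failure mode Ajay hit is not Java-specific: any character-oriented writer that maps unencodable characters to '?' will turn bytes like 0x80–0x99 into 0x3F, because '?' is 0x3F. A hedged illustration in Python (a hypothetical script, not from this thread):

```python
# Illustration (hypothetical script, not from the thread): '?' is 0x3F,
# the usual replacement character when an encoding cannot represent a byte.
data = bytes([0x00, 0x79, 0x80, 0x99])

# Character-stream behaviour: the two high bytes become '?' (0x3F).
mangled = data.decode("latin-1").encode("ascii", errors="replace")
assert mangled == b"\x00\x79??"

# Byte-stream behaviour (the DataOutputStream equivalent): every value survives.
with open("am.bin", "wb") as f:
    f.write(data)
with open("am.bin", "rb") as f:
    assert f.read() == data
```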
My scenario is simple - I am copying script samples from the Mercurial online book and pasting them in a Windows command prompt. The problem is that the samples in the book use single quoted strings. When a single quoted string is passed on the Windows command prompt, the latter does not recognize that everything between the single quotes belongs to one string.
For example, the following command:
hg commit -m 'Initial commit'
cannot be pasted as is in a command prompt, because the latter treats 'Initial commit' as two strings - 'Initial and commit'. I have to edit the command after paste and it is annoying.
Is it possible to instruct the Windows command prompt to treat single quotes similarly to the double one?
EDIT
Following the reply by JdeBP I have done a little research. Here is the summary:
Mercurial entry point looks like so (it is a python program):
def run():
    "run the command in sys.argv"
    sys.exit(dispatch(request(sys.argv[1:])))
So, I have created a tiny python program to mimic the command line processing used by mercurial:
import sys
print sys.argv[1:]
Here is the Unix console log:
[hg@Quake ~]$ python 1.py "1 2 3"
['1 2 3']
[hg@Quake ~]$ python 1.py '1 2 3'
['1 2 3']
[hg@Quake ~]$ python 1.py 1 2 3
['1', '2', '3']
[hg@Quake ~]$
And here is the respective Windows console log:
C:\Work>python 1.py "1 2 3"
['1 2 3']
C:\Work>python 1.py '1 2 3'
["'1", '2', "3'"]
C:\Work>python 1.py 1 2 3
['1', '2', '3']
C:\Work>
One can clearly see that Windows does not treat single quotes as double quotes. And this is the essence of my question.
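If you regularly paste POSIX-quoted examples into a Windows shell, one practical workaround is to re-quote the command before running it: split it with POSIX rules (where single quotes group words), then rebuild it with Windows double-quote rules. A sketch using only the Python standard library:

```python
# Sketch: translate a POSIX-quoted command line into Windows-style quoting,
# using only the standard library.
import shlex
import subprocess

posix_cmd = "hg commit -m 'Initial commit'"
words = shlex.split(posix_cmd)            # POSIX rules: single quotes group words
# words == ['hg', 'commit', '-m', 'Initial commit']
win_cmd = subprocess.list2cmdline(words)  # Windows rules: double quotes only
# win_cmd == 'hg commit -m "Initial commit"'
```

`subprocess.list2cmdline` follows the Microsoft C runtime's quoting convention, which is the same convention the Python interpreter on Windows uses when it builds `sys.argv`.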
The quoting character can't be changed in the command.com prompt. You can, however, use PowerShell which accepts both single and double quotes as quoting characters. They function the same as in Unix shells. I.e., single quotes do not expand variables while double quotes will.
You might still run into problems with quotes inside quotes. For example, I have strawberry perl installed on my Windows computer. When I run perl -e 'print time, "\n" ' in PowerShell, I see output such as 1321375663SCALAR(0x15731d4). I have to escape the double quotes for it to work as expected: perl -e 'print time, \"\n\" '
Curiously using single quote instead of double quote was working fine for me with Windows 7 but no more with Windows 8.1 ....
Ex: echo "toto" | sed 's/o/i/' => titi (windows 7)
echo "toto" | sed 's/o/i/' => unknown option for 's'
I'm quite sure that you can't edit the way that DOS parses commands. It's inherent in its base programming.
The only solution I can think of to speed things up, is keeping a Notepad window open and running a 'Find and Replace' -- replacing all single quotes with with double quotes. And then copy-pasting into DOS from there.
First, a command prompt is not a command interpreter. (A command prompt is the thing displayed by a command interpreter.) Second, your command interpreter, the prompts that it issues, and Win32 consoles, have nothing at all to do with this.
In Win32 programs, splitting up the command line into "words" — the NUL-terminated multi-byte character strings that programs in the C and C++ languages see as the argument array passed to main() — is the province of the runtime libraries of those programs. On Unices and Linux, the shell does the word splitting, because the operating system actually works in terms of an argument string array. This is not the case for Win32. On Win32, the operating system itself operates in terms of a command tail: a single long string that still contains all of the quotation marks that one originally typed on the command line. (There is some processing done to this command tail by a command interpreter before it is passed to the target program, but it isn't related to word splitting.)
In your case, the runtime library for your hg program is being delivered this command tail:

commit -m 'Initial commit'
The runtime library that that program was compiled with doesn't know that you meant a single quotation mark to be a whitespace quoting character, because that isn't the convention. The convention deals only in double quotation marks (and backslashes before double quotation marks).
This convention is built into the runtime library that was provided with the compiler used to create the program in the first place. If you want to change the convention, you'll have to re-link every individual program that you want to run this way with a special runtime library of your own making that recognizes single quotation marks as well. Clearly this is impractical (unless these are all Cygwin programs).
A far more practical approach is to do what you're already doing: recognize that Windows isn't Unix, and adjust the examples accordingly before using them.