Dataset columns:
text: string (length 454 to 608k)
url: string (length 17 to 896)
dump: string (91 classes)
source: string (1 class)
word_count: int64 (101 to 114k)
flesch_reading_ease: float64 (50 to 104)
In SnmpPduTrap, the following code results in a class cast exception:

public SnmpVarBind[] toVarBindArray( ) {
    return ((SnmpVarBind[])m_variables.toArray());
}

The problem is that the toArray method returns a type of Object[], even if all of the objects in the array are SnmpVarBinds. The solution is to pass a parameter to the toArray method, telling it what type should be used:

public SnmpVarBind[] toVarBindArray( ) {
    return ((SnmpVarBind[])m_variables.toArray(new SnmpVarBind[0]));
}

Regards, Jim

FYI ... Is it possible that the problem mentioned below is solved? Thanks, Rita Costa

-----Original Message-----
From: discuss-admin@... [mailto:discuss-admin@...] On Behalf Of Tarus Balog
Sent: Tuesday, 15 April 2003 13:40
To: discuss@...
Subject: Re: [opennms-discuss] Interface Names

Rita said:
> I have a router Cisco that is sending Traps for the OpenNMS but, the
> event arrives with a configuration that hardly will be understood by the
> operator. "Agent Down Interface (linkDown Trap)
> enterprise:.1.3.6.1.4.1.9.1.108 (1.3.6.1.4.1.9.1.108) on interface 160".
> It would like to know as to make to substitute this information for the
> following one: Down interface in router "XX", interface s4/5 (for
> example). Is that possible editing eventconf.xml?

Not at the moment, although it has been a requested feature (but no one has entered it into bugzilla).

-T
--
Tarus Balog, Consultant
Sortova Consulting Group, +1-919-696-7625
tarus@...

_______________________________________________
discuss mailing list (discuss@...)
To subscribe, unsubscribe, or change your list options, go to:

On Sat, Mar 22, 2003 at 07:21:27PM -0500, Tarus Balog wrote:
> Now, since Solaris SPARC systems are BIG ENDIAN, setting ntohll, etc. to
> null works just fine. However, since Solaris Intel is LITTLE ENDIAN, we
> need a byte swap macro (__bswap_64 is not defined on Solaris Intel) -
> similar to what NXSwapHostLongLongToBig(_x_) does on Darwin (Mac OS X).

Grab __swap64gen from:

- deej
--
Daniel (DJ) Gregor, <dj@...>

On Tue, Mar 18, 2003 at 10:44:12AM -0700, Ian Wallace wrote:
> It's different I guess on FreeBSD but we'll cross that bridge when we
> get to it, and I have no clue about in Darwin if they have it.

It looks like the BSD systems (and Linux 2.4) have SO_TIMESTAMP, which seems to include the timestamp with the received packet (so you don't have to make another syscall).

- deej
--
Daniel (DJ) Gregor, <dj@...>

These are trivial changes which only correct spelling in the javadoc of one file. As I read through the code others may follow.

cd opennms/src/core
patch -p0 -u < Fiber.patch
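The thread above asks for a byte-swap macro but never shows one. Purely as an illustration (not the actual OpenNMS patch), a portable fallback in the spirit of BSD's __swap64gen could look like the following C sketch; the function name and the ntohll mapping are made up for this example.

#include <stdint.h>

/* Generic 64-bit byte swap: works on any host, no compiler builtins needed. */
static inline uint64_t swap64_generic(uint64_t x)
{
    return ((x & 0x00000000000000ffULL) << 56) |
           ((x & 0x000000000000ff00ULL) << 40) |
           ((x & 0x0000000000ff0000ULL) << 24) |
           ((x & 0x00000000ff000000ULL) <<  8) |
           ((x & 0x000000ff00000000ULL) >>  8) |
           ((x & 0x0000ff0000000000ULL) >> 24) |
           ((x & 0x00ff000000000000ULL) >> 40) |
           ((x & 0xff00000000000000ULL) >> 56);
}

/* On big-endian hosts (Solaris SPARC) ntohll can be a no-op; on little-endian
   hosts (Solaris Intel) it would map to the swap above, e.g.
   #define ntohll(x) swap64_generic(x) */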
http://sourceforge.net/p/opennms/mailman/opennms-devel/?viewmonth=200304
CC-MAIN-2014-23
refinedweb
440
65.12
Scale-out NAS is becoming popular, with most major vendors offering these types of products. As a reminder, scale-out NAS systems will increase performance and capacity at the same time – although you don’t have to scale the systems in the same ratio. You can add controllers for performance, storage for capacity, or both. There are several things to look at when considering a scale-out NAS system.

1. Is a single namespace provided across all the nodes (also called controllers or heads) so that a file system can be spread across the nodes but the user does not need to take any special action for accessing a file? There are different ways that a single namespace can be implemented, and some may be better than others. Mounting or sharing a file system on a scale-out NAS system should require no more effort than if it was on a single-node system.

2. Does the management software manage across all nodes as an aggregate but still allow individual node communication to detect problems in the system?

3. Is there load balancing across nodes? The load balancing can be automatic when files are stored to distribute data across the different nodes. Will data automatically be redistributed across nodes (in the background) for capacity or load balancing?

4. Can it scale independently? In other words, can you scale nodes for more performance and the underlying storage for more capacity? This provides the greatest flexibility in usage. If the answer is yes, then how many nodes can the system scale to include? And how much capacity (including storage controllers) can it scale to?

5. Is there a back channel for communication between nodes? This requires another communication path between nodes rather than using the same path clients may be using to access data. Examples of this may be an InfiniBand connection between nodes or a 10-Gigabit Ethernet connection. Usually there would be a pair of back channels for availability.

6. Are there any features that are not included that would normally be part of standard NAS systems? A few to consider are: snapshots, remote replication, NDMP support, NFS and native CIFS support, security controls such as Active Directory, LDAP and file locking for shared access between CIFS and NFS, anti-virus software support and quotas.

7. Does the scale-out NAS support both small and large files? Some of the distributed file systems used for scale-out NAS come from the high-performance computing area where the optimization was around large files. It is important to understand whether the system supports small files and large files equally.

This list is a first level look with more detailed differences to be explained in an upcoming Evaluator Group article. (Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm)
https://itknowledgeexchange.techtarget.com/storage-soup/page/109/
CC-MAIN-2018-17
refinedweb
478
54.63
Saturday 26 May 2012 A bet-hedging sort of report into the UK's economy from the IMF today, which largely supports George Osborne's deficit reduction plan, but will also give some encouragement to his detractors. By way of a summary, here are the parts that might satisfy Osborne himself, as well as Vince Cable, Ed Balls and Mervyn King: The passage that the Chancellor will flash around Westminster comes on the very second page of the IMF document. "Strong fiscal consolidation is under way," it reads, "and remains essential to achieve a more sustainable budgetary position, thus reducing fiscal risks." And the endorsements for the Chancellor's deficit reduction plan continue inside, not least in the claims that the public finances were treading a "clearly unsustainable path" before last year's Emergency Budget, and that "the overall focus on expenditure reduction appears appropriate, as cyclically adjusted spending rose by 9 percent of GDP over the last decade." As for the wider economic picture, there is a good dose of optimism in the IMF's forecasts: "[Our] central scenario is that rebalancing occurs and financial sector health continues to improve," they say. "In this scenario, [we] project growth — led by investment and net exports — to gradually rise from 1.5 per cent in 2011 to around 2.5 per cent over the medium term." And although they add that this central scenario might be upset by external shocks, such as the eurozone crisis, the overall emphasis is on a return to relative health within the next couple of years. If Corporal Cable were scrabbling around for more ammunition in his war against the banks, then he might reach for the IMF's observation that "credit availability has improved for large companies, but remains restrictive for smaller businesses and for CRE companies." But I imagine he'll be even more pleased with the organisation's call for further quantitative easing should the recovery go awry, as recently recommend by one Vince Cable. In a blog post to accompany today's report, the IMF's mission chief to the UK writes." Ed Balls will feast enthusiastically on one point within the IMF document: that VAT hikes hurt growth. Not that the monetary-funders put it quite like that, of course. But they do come close to it in their suggestion that, "[The recent] slowdown [in growth] partly reflects intensifying fiscal consolidation — most notably with a 2.5 percentage point VAT hike on January 1, 2011 — and weak consumer confidence in the wake of spiking import prices and a soft housing market." And then there's the line that, "Unlike in other countries, indirect tax hikes have been a key driver of UK inflation. They include VAT hikes of 2.5 percentage points each in January 2010 and January 2011" But aside from VAT, Balls might also take hypocritical glee from the IMF's prediction that Osborne will struggle to meet his deficit reduction targets on time. According to their forecasts, the UK's structural deficit will be eradicated during 2015-16, over a year after Osborne's plan. Presumably, this is because: "As consolidation becomes more reliant on structural spending cuts going forward, implementation challenges may rise" — a crucial point that we have raised on Coffee House before now. (The New Statesman's George Eaton has more on this here) Mervyn King's argument on inflation has effectively been that we shouldn't worry about it unduly; that it's down to a range of temporary factors, and should simmer down towards target levels next year. 
And, happily enough, the IMF's argument on inflation is that we shouldn't worry about it unduly; that it's down to a range of temporary factors, and should simmer down towards target levels next year. Or as they put it in today's report: "The inflation overshoot is driven largely by transitory factors, and hence maintaining the current scale of monetary stimulus is appropriate given fiscal adjustment and subdued wage growth." And they add, with regards to interest rates: "The current monetary stance remains appropriate for now. The BoE['s] highly accommodative stance is appropriate given the central scenario projection that inflation will return to target within a reasonable timeframe, the uncertainty regarding the strength of recovery, and the need to offset significant disinflationary impulses from fiscal policy." Whether that vindicates the Bank, we shall see. They have, after all, had a horrible tendency to underestimate our country's inflationary problems in recent years.

More articles from: Peter Hoskin

tb, August 2nd, 2011 2:54pm: "They have, after all, had a horrible tendency to underestimate our country's inflationary problems in recent years." Except when they're managing they're own pensions...

Tulkinghorn, August 2nd, 2011 2:58pm: Speaking as a small businessman, I resent the profits reported by Barclays who refused me, a perfectly dreditworthy citizen, a modest loan. The IMF are quite correct Perry, -

Heartless, Hard, Romantic, August 2nd, 2011 3:01pm: Never mind the IMF - why was a piece about Syria pulled just now? Could it 'of' been because Daniel, the darling of the EUSSR, had put it up? Oh yes, - and on the IMF,- if Bruvva Brown wanted a job there it surely has a question mark hanging over it.

Sir Eveard Digby, August 2nd, 2011 3:03pm: In January 2011, the WSJ reported: ' Neither factor, of course, are of much help to the man in the street. I wonder where it will end? Perhaps 100% inflation would please Mr King as the country would be competitive....like the Weimar Republic.

Fernando, August 2nd, 2011 3:14pm: Poor old Cable: his every word scrutinised for heretical thoughts. "I imagine he'll be even more pleased with the organisation's call for further quantitative easing should the recovery go awry, as recently recommend by one Vince Cable." Cable was merely restating government policy. Osborne made it clear when he took office that QE was the preferred response if growth proved to be illusive, something he explicitly reinforced in October of last year.

PayDirt, August 2nd, 2011 4:51pm: "bet hedging" report sounds pretty useless to me, perhaps I'll be more enlightened when Radio 4 broadcast the views of the great and the good of our Economists tomorrow evening: 8pm...

Ian Walker, August 2nd, 2011 5:01pm: Tulkinghorn: me too. I met with my Barclays "small business adviser" complete with 3 years of solvent accounts and a solid five year business plan with low medium and high growth cash forecasts. All very amicable, hands shaken etc. Then it went to some anonymous 'credit team' who dismissed the application out of hand. Complete waste of my time. When the banks say that they are "lending to small businesses" what they really mean is that they are offering business credit cards with 30% interest rates.

ndm, August 2nd, 2011 5:40pm: The IMF report states on page 2 (no less) that: -- Strong fiscal consolidation is underway and remains essential to achieve a more sustainable budgetary position, thus reducing fiscal risks. Has there ever been an IMF report that did not either wish for this or applaud this in page 2. This should be viewed more as an example of blinkered thinking at the IMF than as a genuine sign of success.

disenfranchised, August 2nd, 2011 9:48pm: "the inflation overshoot is driven largely by TRANSITORY factors, and hence maintaining the current scale of monetary stimulus is appropriate given fiscal adjustment and subdued wage growth". i'd love to know how long the imf's TRANSITORY factors are going to persist. six months? a year? longer? i say longer, in which case the word TRANSITORY was a misleading word to use. but there are a lot of misleading words coming out of the imf, just as there are a lot of them coming out of westminster and brussels. as if it isn't bad enough having to contend with a ruined country, without having to endure all these misleading words.....
http://www.spectator.co.uk/business-and-investments/blog/7138938/the-imf-manages-to-please-everyone.thtml
crawl-003
refinedweb
1,358
57.1
Masonite takes security seriously and wants full transparency with security vulnerabilities, so we document each security release in detail, along with how to patch or fix it. There were 2 issues involved with this release. The first is that input data was not being properly sanitized, so there was an XSS vulnerability if the developer returned that input directly back from a controller. The other issue was that there were no filters on what could be uploaded with the upload feature. The disk driver would allow any file type to be uploaded, including exe, jar and app files. There was no reported exploitation of either issue; both were proactively caught through analyzing the code for possible vulnerabilities. The fix for the input issue was simply to escape the input before it is stored in the request dictionary. We used the html core module that ships with Python and created a helper function to be used elsewhere. The fix for the second issue, file types, was to limit all uploads to images unless explicitly stated otherwise through the use of the accept method. The patch is to simply upgrade to 2.1.3 and explicitly state which file types you need to upload if they are not images, like so:

def show(self, upload: Upload):
    upload.accept('exe', 'jar').store('...')

The decision to accept those file types is up to the developer building the application and should be used sparingly.
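The release notes mention a helper built on Python's html module but do not show it. The snippet below is only a minimal sketch of what such a helper could look like; the function name clean_request_input and the recursive handling of dictionaries and lists are assumptions for illustration, not Masonite's actual internal API.

import html

def clean_request_input(value):
    # Recursively HTML-escape strings in request data before it is stored
    # in the request dictionary (hypothetical helper, not Masonite's code).
    if isinstance(value, str):
        return html.escape(value)
    if isinstance(value, dict):
        return {key: clean_request_input(item) for key, item in value.items()}
    if isinstance(value, list):
        return [clean_request_input(item) for item in value]
    return value

# Example: clean_request_input("<script>alert(1)</script>")
# returns "&lt;script&gt;alert(1)&lt;/script&gt;", which is safe to echo back.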
https://docs.masoniteproject.com/v/v2.2/security/releases
CC-MAIN-2020-16
refinedweb
239
53.41
Beware! A new Python is approaching!

Overview

The most fascinating feature in 3.10 is Structural Pattern Matching. It might look like a simple switch statement, but it's actually so much more than that. The second important set of changes improves Python's interaction with static type checkers. Included is the much-awaited support for using X | Y syntax for Union types and a couple of other typing-related changes. A bit of a surprise was one change of plans after the latest alpha release: Postponed evaluation of annotations (PEP 563) was supposed to be the default in 3.10, but that change got postponed to Python 3.11. There is also a bunch of other changes which won't be covered here. See the whole list of changes in the Python 3.10 change log.

Structural Pattern Matching

Structural pattern matching is a quite powerful new feature. It allows detecting the shape and type of a data structure and assigning selected properties of it to variables. Its form is twofold: First, there is a match statement, which states what will be matched. Then follows one or more case statements. Each case statement specifies a pattern and a block of actions to take if the pattern matches. Let's check a few examples to clarify.

Pattern Matching example 1

Suppose that we're building an application that reads commands from the user as text and we need to act based on the form of the text that the user entered. That kind of logic can be implemented very easily with pattern matching, as you can see in the analyze_input_from_user function below.

import re

def analyze_input_from_user():
    text = input("Enter some text: ")
    match text.split():
        case []:
            print("Got nothing")
        case [("add" | "subtract") as cmd, a, b]:
            print(f"Got operation {cmd} with parameters {a} and {b}")
        case [x] if x.isdigit():
            number = int(x)
            print(f"Got a single integer number: {number}")
        case [x] if re.match(r"^[+-]?\d+(.\d*)?$", x):
            number = float(x)
            print(f"Got a single floating point number: {number:.3f}")
        case [word]:
            # Must be non-numeric, since numerics are matched above
            print(f"Got a single non-numeric word: {word}")
        case _:
            print("Got something else")

To implement similar functionality with Python 3.9 or older, the code gets a few lines longer and somewhat harder to read, because the expected patterns of the user input are no longer as visual.

def analyze_input_from_user_the_old_way():
    text = input("Enter some text: ")
    words = text.split()
    if len(words) == 0:
        print("Got nothing")
    elif len(words) == 3 and words[0] in ["add", "subtract"]:
        (cmd, a, b) = words
        print(f"Got operation {cmd} with parameters {a} and {b}")
    elif len(words) == 1 and words[0].isdigit():
        number = int(words[0])
        print(f"Got a single integer number: {number}")
    elif len(words) == 1 and re.match(r"^[+-]?\d+(.\d*)?$", words[0]):
        number = float(words[0])
        print(f"Got a single floating point number: {number:.3f}")
    elif len(words) == 1:
        word = words[0]
        print(f"Got a single non-numeric word: {word}")
    else:
        print("Got something else")

Pattern matching example 2

Here's another use case where pattern matching could be beneficial. Let's say that we receive a list of items from an API which combines several kinds of issues into a single list. We have to process them based on the form of each item.
Our data and calling code could look like this:

from datetime import date

issues = [
    {
        'issuer': "Mr Praline",
        'date': date(1969, 12, 7),
        'type': 'complaint',
        'title': "a Dead Parrot",
    },
    {
        'issuer': "A man",
        'date': date(1972, 11, 2),
        'type': 'request',
        'subject': "an argument",
        'location': "the Argument Clinic",
    },
    "Now something completely different",
]

def print_info_about_issues():
    for issue in issues:
        print_info_about_issue(issue)

Processing of each issue could utilize a dictionary kind of pattern. See print_info_about_issue.

def print_info_about_issue(issue):
    match issue:
        case {'type': 'complaint', 'date': d, 'title': t, 'issuer': i}:
            print(f"{i} complained about {t} on {d}.")
        case {
            'type': 'request',
            'date': date(year=y),
            'subject': s,
            'location': l,
            'issuer': i,
        }:
            print(f"{i} requested to have {s} on {y} at {l}.")
        case str(text):
            print(f"Textual issue: {text}")

That is a quite readable way to represent what kind of objects we expect to process. Compare this to the old way of doing the same thing in print_info_about_issue_the_old_way.

def print_info_about_issue_the_old_way(issue):
    if isinstance(issue, dict) and (
            issue.get('type') == 'complaint'
            and 'date' in issue and 'title' in issue and 'issuer' in issue):
        d = issue['date']
        t = issue['title']
        i = issue['issuer']
        print(f"{i} complained about {t} on {d}.")
    elif isinstance(issue, dict) and (
            issue.get('type') == 'request'
            and isinstance(issue.get('date'), date)
            and 'subject' in issue and 'location' in issue and 'issuer' in issue):
        y = issue['date'].year
        s = issue['subject']
        l = issue['location']
        i = issue['issuer']
        print(f"{i} requested to have {s} on {y} at {l}.")
    elif isinstance(issue, str):
        print(f"Textual issue: {issue}")

Tool support

The implementation of pattern matching uses so-called soft keywords for the "match" and "case" statements because those words are already used as identifiers. Making them hard keywords would break much old code using "match" as an identifier (e.g. re.match). However, this soft keyword implementation makes things harder for tools that process Python source code, since it is no longer possible to use a simple LL(1) parser to parse the source code; a more sophisticated parser is needed. This means that there's some work to do for maintainers of IDEs, linters, code formatters etc. Currently it seems that at least PyCharm, Jedi, Flake8 and Black struggle with the new syntax. Here are some links to follow their progress on this issue:
- PyCharm:
- Jedi:
- Flake8 (pyflakes):
- Black:

More pattern matching

If you want to learn more about the structural pattern matching feature, check out the tutorial in PEP-636. And to get even deeper, there's also the specification in PEP-634 and the motivation and rationale in PEP-635.

Typing related improvements

Writing Union types as X | Y

Type specifications for Unions are now easier to write, since this new syntax doesn't need an import and is shorter to type. Compare these two:

from decimal import Decimal

def format_euros(value: float | Decimal | None) -> str | None:
    if value is None:
        return None
    return '{:.2f} €'.format(value)

from decimal import Decimal
from typing import Optional, Union

def format_euros_old(value: Optional[Union[float, Decimal]]) -> Optional[str]:
    if value is None:
        return None
    return '{:.2f} €'.format(value)

Clearly the readability is better in the new version compared to the old one. And as you can see, the Optional is also no longer needed, since Optional[X] can be written as X | None.
This might become the preferred spelling for writing type specifiers of values that can be None. It's also possible to use the new syntax for isinstance and issubclass:

assert isinstance(3.5, float | None)
assert issubclass(bool, int | str)  # since bool is subclass of int
assert not isinstance("3.10", float | int | bytes)

Exact details are specified in PEP-604.

Parameter Specification Variables

This new feature is useful for creating type safe decorators or other code that modifies the signature of an existing function. The new tool for this is called ParamSpec. Check this example to see it in action:

import time
from typing import Callable, ParamSpec, TypeVar

P = ParamSpec("P")
R = TypeVar("R")

measured_times: dict[str, float] = {}

def time_measured(f: Callable[P, R]) -> Callable[P, R]:
    def inner(*args: P.args, **kwargs: P.kwargs) -> R:
        t0 = time.perf_counter()
        result = f(*args, **kwargs)
        t1 = time.perf_counter()
        measured_times[f.__name__] = t1 - t0
        return result
    return inner

@time_measured
def repeated_string(a: str, n: int) -> str:
    return a * n

repeated_string("ABC", 123)  # This is OK
repeated_string(123, "ABC")  # This should be rejected by type checker

However, this feature is not yet very well supported by the type checkers, so it might take a while before it can be utilized. See for updated status information. The full specification is in PEP-612.

Explicit Type Aliases

Type aliases are very useful to shorten the type signatures of functions or variables when some commonly used type is very complex or otherwise too long to be used nicely. Sometimes there was just a little problem with them though: If you needed to use a forward reference in the type alias, because some part of the type is not yet defined, then the alias would have to be defined as a string. But such a string cannot be distinguished from any other global string variable, and so a type checker might not allow using it as a type. For example, check this small snippet:

TreeMap = "dict[str, Tree]"

class Tree:
    def __init__(self, subtrees: TreeMap) -> None:
        self.subtrees = subtrees

The snippet makes Mypy report the following error:

Variable "typealiases.TreeMap" is not valid as a type

Pyright reports the following two errors for it:

Illegal type annotation: variable not allowed unless it is a type alias (reportGeneralTypeIssues)
Expected class type but received Literal['dict[str, Tree]'] (reportGeneralTypeIssues)

The issue can be fixed with TypeAlias:

from typing_extensions import TypeAlias

TreeMap: TypeAlias = "dict[str, Tree]"

class Tree:
    def __init__(self, subtrees: TreeMap) -> None:
        self.subtrees = subtrees

That won't yield any errors with Pyright or Pyre, but Mypy doesn't have PEP-613 support yet so it still complains. Also, the import still has to be done from typing_extensions even though TypeAlias was added to the typing module in Python 3.10. This is probably just a small update to those type checkers that already support explicit type aliases. The full specification is in PEP-613. The status of its implementation in the type checkers can be followed from.

Postponed evaluation of annotations

Making postponed evaluation of annotations the default (PEP 563) was postponed to Python 3.11. So you'll still need to use from __future__ import annotations in Python 3.10 for that functionality, but be mindful that there are some other planned changes to this whole annotation evaluation subject in PEP-649, and it might even be that from __future__ import annotations will be deprecated rather than becoming the default.
See the Steering Council's message for details.

Should I already update?

It's still an early beta release, so of course it's not recommended to use it in production. However, it's wise to add 3.10 to the CI matrices of your projects already, to be able to see pending problems and possibly resolve them before the final release in October. Especially if you're a maintainer of a popular library, make sure that your library supports 3.10 and states so in its metadata so that it will appear green on pyreadiness.
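As an illustration of that advice, here is a minimal sketch of a CI test matrix that includes 3.10, assuming a GitHub Actions workflow; the file name, job layout and install/test commands are made up for this example. Note that the version has to be quoted so YAML does not read 3.10 as the float 3.1, and while 3.10 is still in beta a pre-release spec such as "3.10-dev" may be needed instead.

# .github/workflows/test.yml (hypothetical)
name: tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.8", "3.9", "3.10"]
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install . pytest
      - run: pytest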
https://anders-innovations.medium.com/beware-a-new-python-is-approaching-a5e2939084b5?responsesOpen=true&source=user_profile---------1----------------------------
CC-MAIN-2021-39
refinedweb
1,771
61.56
On StackOverflow, there's a question about the most efficient way to compare two integers and produce a result suitable for a comparison function, where a negative value means that the first value is smaller than the second, a positive value means that the first value is greater than the second, and zero means that they are equal. There was much microbenchmarking of various options, ranging from the straightforward int compare1(int a, int b) { if (a < b) return -1; if (a > b) return 1; return 0; } to the clever int compare2(int a, int b) { return (a > b) - (a < b); } to the hybrid int compare3(int a, int b) { return (a < b) ? -1 : (a > b); } to inline assembly int compare4(int a, int b) { __asm__ __volatile__ ( "sub %1, %0 \n\t" "jno 1f \n\t" "cmc \n\t" "rcr %0 \n\t" "1: " : "+r"(a) : "r"(b) : "cc"); return a; } The benchmark pitted the comparison functions against each other by comparing random pairs of numbers and adding up the results to prevent the code from being optimized out. But here's the thing: Adding up the results is completely unrealistic. There are no meaningful semantics that could be applied to a sum of numbers for which only the sign is significant. No program that uses a comparison function will add the results. The only thing you can do with the result is compare it against zero and take one of three actions based on the sign. Adding up all the results means that you're not using the function in a realistic way, which means that your benchmark isn't realistic. Let's try to fix that. Here's my alternative test: // Looks for "key" in sorted range [first, last) using the // specified comparison function. Returns iterator to found item, // or last if not found. template<typename It, typename T, typename Comp> It binarySearch(It first, It last, const T& key, Comp compare) { // invariant: if key exists, it is in the range [first, first+length) // This binary search avoids the integer overflow problem // by operating on lengths rather than ranges. auto length = last - first; while (length > 0) { auto step = length / 2; It it = first + step; auto result = compare(*it, key); if (result < 0) { first = it + 1; length -= step + 1; } else if (result == 0) { return it; } else { length = step; } } return last; } int main(int argc, char **argv) { // initialize the array with sorted even numbers int a[8192]; for (int i = 0; i < 8192; i++) a[i] = i * 2; for (int iterations = 0; iterations < 1000; iterations++) { int correct = 0; for (int j = -1; j < 16383; j++) { auto it = binarySearch(a, a+8192, j, COMPARE); if (j < 0 || j > 16382 || j % 2) correct += it == a+8192; else correct += it == a + (j / 2); } // if correct != 16384, then we have a bug somewhere if (correct != 16384) return 1; } return 0; } Let's look at the code generation for the various comparison functions. I used gcc.godbolt.org with x86-64 gcc 7.2 and optimization -O3. If we try compare1, then the binary search looks like this: ; on entry, esi is the value to search for lea rdi, [rsp-120] ; rdi = first mov edx, 8192 ; edx = length jmp .L9 .L25: ; was greater than mov rdx, rax ; length = step test rdx, rdx ; while (length > 0) jle .L19 .L9: mov rax, rdx ; sar rax, 1 ; eax = step = length / 2 lea rcx, [rdi+rax*4] ; it = first + step ; result = compare(*it, key), and then test the result cmp dword ptr [rcx], esi ; compare(*it, key) jl .L11 ; if less than jne .L25 ; if not equal (therefore if greater than) ... 
return value in rcx ; if equal, answer is in rcx .L11: ; was less than add rax, 1 ; step + 1 lea rdi, [rcx+4] ; first = it + 1 sub rdx, rax ; length -= step + 1 test rdx, rdx ; while (length > 0) jg .L9 .L19: lea rcx, [rsp+32648] ; rcx = last ... return value in rcx Exercise: Why is rsp - 120 the start of the array? Observe that despite using the lamest, least-optimized comparison function, we got the comparison-and-test code that is much what we would have written if we had done it in assembly language ourselves: We compare the two values, and then follow up with two branches based on the same shared flags. The comparison is still there, but the calculation and testing of the return value are gone. In other words, not only was compare1 optimized down to one cmp instruction, but it also managed to delete instructions from the binarySearch function too. It had a net cost of negative instructions! What happened here? How did the compiler manage to optimize out all our code and leave us with the shortest possible assembly language equivalent? Simple: First, the compiler did some constant propagation. After inlining the compare1 function, the compiler saw this: int result; if (*it < key) result = -1; else if (*it > key) result = 1; else result = 0; if (result < 0) { ... less than ... } else if (result == 0) { ... equal to ... } else { ... greater than ... } The compiler realized that it already knew whether constants were greater than, less than, or equal to zero, so it could remove the test against result and jump straight to the answer: int result; if (*it < key) { result = -1; goto less_than; } else if (*it > key) { result = 1; goto greater_than; } else { result = 0; goto equal_to; } if (result < 0) { less_than: ... less than ... } else if (result == 0) { equal_to: ... equal to ... } else { greater_than: ... greater than ... } And then it saw that all of the tests against result were unreachable code, so it deleted them. int result; if (*it < key) { result = -1; goto less_than; } else if (*it > key) { result = 1; goto greater_than; } else { result = 0; goto equal_to; } less_than: ... less than ... goto done; equal_to: ... equal to ... goto done; greater_than: ... greater than ... done: That then left result as a write-only variable, so it too could be deleted: if (*it < key) { goto less_than; } else if (*it > key) { goto greater_than; } else { goto equal_to; } less_than: ... less than ... goto done; equal_to: ... equal to ... goto done; greater_than: ... greater than ... done: Which is equivalent to the code we wanted all along: if (*it < key) { ... less than ... } else if (*it > key) { ... greater than ... } else { ... equal to ... } The last optimization is realizing that the test in the else if could use the flags left over by the if, so all that was left was the conditional jump. Some very straightforward optimizations took our very unoptimized (but easy-to-analyze) code and turned it into something much more efficient. On the other hand, let's look at what happens with, say, the second comparison function: ; on entry, edi is the value to search for lea r9, [rsp-120] ; r9 = first mov ecx, 8192 ; ecx = length jmp .L9 .L11: ; test eax, eax ; result == 0? 
je .L10 ; Y: found it ; was greater than mov rcx, rdx ; length = step test rcx, rcx ; while (length > 0) jle .L19 .L9: mov rdx, rcx xor eax, eax ; return value of compare2 sar rdx, 1 ; rdx = step = length / 2 lea r8, [r9+rdx*4] ; it = first + step ; result = compare(*it, key), and then test the result mov esi, dword ptr [r8] ; esi = *it cmp esi, edi ; compare *it with key setl sil ; sil = 1 if less than setg al ; al = 1 if greater than ; eax = 1 if greater than movzx esi, sil ; esi = 1 if less than sub eax, esi ; result = (greater than) - (less than) cmp eax, -1 ; less than zero? jne .L11 ; N: Try zero or positive ; was less than add rdx, 1 ; step + 1 lea r9, [r8+4] ; first = it + 1 sub rcx, rdx ; length -= step + 1 test rcx, rcx ; while (length > 0) jg .L9 .L19: lea r8, [rsp+32648] ; r8 = last .L10: ... return value in r8 The second comparison function compare2 uses the relational comparison operators to generate exactly 0 or 1. This is a clever way of generating -1, 0, or +1, but unfortunately, that was not our goal in the grand scheme of things. It was merely a step toward that goal. The way that compare2 calculates the result is too complicated for the optimizer to understand, so it just does its best at calculating the formal return value from compare2 and testing its sign. (The compiler does realize that the only possible negative value is -1, but that's not enough insight to let it optimize the entire expression away.) If we try compare3, we get this: ; on entry, esi is the value to search for lea rdi, [rsp-120] ; rdi = first mov ecx, 8192 ; ecx = length jmp .L12 .L28: ; was greater than mov rcx, rax ; length = step .L12: mov rax, rcx sar rax, 1 ; rax = step = length / 2 lea rdx, [rdi+rax*4] ; it = first + step ; result = compare(*it, key), and then test the result cmp dword ptr [rdx], esi ; compare(*it, key) jl .L14 ; if less than jle .L13 ; if less than or equal (therefore equal) ; "length" is in eax now .L15: ; was greater than test eax, eax ; length == 0? jg .L28 ; N: continue looping lea rdx, [rsp+32648] ; rdx = last .L13: ... return value in rdx .L14: ; was less than add rax, 1 ; step + 1 lea rdi, [rdx+4] ; first = it + 1 sub rcx, rax ; length -= step + 1 mov rax, rcx ; rax = length jmp .L15 The compiler was able to understand this version of the comparison function: It observed that if a < b, then the result of compare3 is always negative, so it jumped straight to the less-than case. Otherwise, it observed that the result was zero if a is not greater than b and jumped straight to that case too. The compiler did have some room for improvement with the placement of the basic blocks, since there is an unconditional jump in the inner loop, but overall it did a pretty good job. The last case is the inline assembly with compare4. As you might expect, the compiler had the most trouble with this. ; on entry, edi is the value to search for lea r8, [rsp-120] ; r8 = first mov ecx, 8192 ; ecx = length jmp .L12 .L14: ; zero or positive je .L13 ; zero - done ; was greater than mov rcx, rdx ; length = step test rcx, rcx ; while (length > 0) jle .L22 .L12: mov rdx, rcx sar rdx, 1 ; rdx = step = length / 2 lea rsi, [r8+rdx*4] ; it = first + step ; result = compare(*it, key), and then test the result mov eax, dword ptr [rsi] ; eax = *it sub eax, edi jno 1f cmc rcr eax, 1 1: test eax, eax ; less than zero? 
jne .L14 ; N: Try zero or positive ; was less than add rdx, 1 ; step + 1 lea r8, [rsi+4] ; first = it + 1 sub rcx, rdx ; length -= step + 1 test rcx, rcx ; while (length > 0) jg .L12 .L22: lea rsi, [rsp+32648] ; rsi = last .L13: ... return value in rsi This is pretty much the same as compare2: The compiler has no insight at all into the inline assembly, so it just dumps it into the code like a black box, and then once control exits the black box, it checks the sign in a fairly efficient way. But it had no real optimization opportunities because you can't really optimize inline assembly. The conclusion of all this is that optimizing the instruction count in your finely-tuned comparison function is a fun little exercise, but it doesn't necessarily translate into real-world improvements. In our case, we focused on optimizing the code that encodes the result of the comparison without regard for how the caller is going to decode that result. The contract between the two functions is that one function needs to package some result, and the other function needs to unpack it. But we discovered that the more obtusely we wrote the code for the packing side, the less likely the compiler would be able to see how to optimize out the entire hassle of packing and unpacking in the first place. In the specific case of comparison functions, it means that you may want to return +1, 0, and -1 explicitly rather than calculating those values in a fancy way, because it turns out compilers are really good at optimizing "compare a constant with zero". You have to see how your attempted optimizations fit into the bigger picture because you may have hyper-optimized one part of the solution to the point that it prevents deeper optimizations in other parts of the solution. Bonus chatter: If the comparison function is not inlined, then all of these optimization opportunities disappear. But I personally wouldn't worry about it too much, because if the comparison function is not inlined, then the entire operation is going to be dominated by the function call overhead: Setting up the registers for the call, making the call, returning from the call, testing the result, and most importantly, the lost register optimization opportunities not only because the compiler loses opportunities to enregister values across the call, but also because the compiler has to protect against the possibility that the comparison function will mutate global state and consequently create aliasing issues. Once again, turns out readable code means good (and efficient) code. While I have no insight on how compilers evolved (nor I have used any pre-2010 compilers), this kind of an optimization might have made sense 10 years ago, back when optimizers weren’t as smart as nowadays? Seems to me the obvious way to code this is: int compare0(int a, int b) { return (a-b); } Awfully long discussion that boils down to Dr. Knuth’s long-standing axiom that the root of all evil is premature optimization. And then it returns the wrong value for compare0(-1610612736, 1610612736). Given that intcan be a short int, the call is wrong to begin with. Really, the function is safe if you know of the range limitations. Okay, let me rephrase it then: It returns the wrong value for compare0(INT_MIN, INT_MAX). I wonder how the corrected version int64_t compare64(int32_t a, int32_t b) { return int64_t(a) – int64_t(b); } compares to the others, especially if compiled as x64 instead of x86. It probably doesn’t work. The prototype returns int. 
I can make it execute in zero cycles if it doesn’t have to work. No need to ask. Type it into gcc.godbolt.org and find out! (Note that gcc.godbolt.org doesn’t support 32-bit compilers so the extra cost of 64-bit arithmetic on 32-bit systems is not visible there.) I can, but I’m afraid I don’t know how to properly interpret the results because I don’t speak x86/x64 assembly. (Joshua, the prototype was templated and the return value was assigned into an ‘auto’ variable, so it wouldn’t be truncated either. It should work.) If you look at the assembly results, you can hover over the instructions (or right-click and choose “View asm doc”) and you’ll get a brief synopsis of what the instruction does. Also, you can click on “Show optimization output (clang only)” to see what choices the optimizer made for each line of code (gcc has a similar option.) If you grok ARM(32/64), AVR, MIPS(32/64), MSP430, or PPC better, those compilers are available too, as well as a couple of other compilers (MSVC++, elicc, icc, and Zapcc). It’s pretty cool: I wish I had this when I first started out learning programming! I had a friend in college who wrote, as part of his first programming course, a version of a factorial program. If the caller passed in a number greater than around, or whatever would result in integer overflow, it would return 0 and print out “I only work with small, non-negative integers.” “premature optimization is the root of all evil” Is very out of context; full quote%.” “@Raymond I’m forced to agree.” Heh. The dangers of being too clever. As compilers get smarter, the sins of “premature optimization” have worse and worse consequences. Always write the code to be understood by some human being that comes after you and has to maintain it. Seriously. Let the compiler did its job and you do -your- job: writing maintainable code. Somebody once had a quote about “Premature Optimization”. I’m not sure who it was, maybe someone famous. /s Also: we tend to spend many times more hours reading our code rather than writing it. If you optimize for readability, chances are you have a simpler bit of code, and the compiler will be able to do more optimizations. If you write the world’s most clever code, chances are the poor intern who tries to fix a bug six years later will have trouble, and the compiler won’t like it either. Also: I had not seen this compiler explorer website before ( ). This is the greatest thing since compiled bread. If only I still wrote c++! I sometimes regret switching to C# and leaving the world of pointer math and insane templates behind. He recently did a talk about it @ cppcon Premature optimisation is subjective, one person thought it was the right and someone else said it was done too early. We assume the person who comes along and says it was premature optimisation, but maybe at the time it was the right thing to do. Or it’s not, who knows. I’ve been optimising some 30 year old 6502 code, does that count as premature or not? I always see the term premature optimisation as quite objective. What it means is an attempt at manual optimisation before you know if what you are intending to optimise is really the bottle neck. So any time you look at code and think “that looks slow” then it is likely premature. So basically if you have tested and timed and after all this you find that you are in a certain code path a lot and that is causing you a lot of slow down, then that is the code that you should optimise. Agreed – everything is subjective until you measure it! 
Shorthand: Write code that is easy to read until you have a measurement. > But it had no real optimization opportunities because you can’t really optimize inline assembly. At first I found this confusing, but then I realized why: Inline assembly cannot be decompiled into an intermediate representation, because it does not follow the semantics of the C abstract machine. Even assuming you have a really good decompiler available, you cannot apply as-if rule transformations to decompiled inline assembly. You might end up altering the meaning of the assembly in ways that the C standard does not consider “observable behavior,” but which nonetheless leave the CPU or other components in a different state from what the programmer expected. While this might be legal because asm is implementation-defined, it is obviously unhelpful to the point that the average user would call it a bug. So the compiler is obligated to treat the assembly as a black box, as Raymond says. It is possible to optimize inline assembler but there’s not much reason a C/C++ compiler would provide that functionality. The inline assembler feature exists solely to allow the programmer to specify an instruction sequence exactly and expects the compiler not to change it. Correct. The only purpose left for asm blocks is to give the compiler information it cannot possibly have. About half the asm blocks I see don’t have any assembler code in them, just compiler hints (hey compiler, this empty asm block modifies this local variable). The whole point of inline assembly is for when you want to do something the compiler doesn’t allow. Perhaps you are using an assembly instruction the compiler doesn’t allow. For example: CPUID, when it first came out. Or perhaps you have some heavily-optimized code that you’ve written taking into account caches and things that the compiler isn’t smart enough to optimize on its own. Or perhaps you’re doing something really weird where you need exact control over the instructions. For example, a popular open-source kernel has a copy-from-userspace macro that uses inline assembly, and emits special exception-handling instructions so that a segfault during the copy can be handled specially. Another example: detecting 386 vs 486 processor by emitting instructions that get processed differently on the two processors. A third example: An open source memory debugging tool runs a program in a sort-of-VM, with JIT compilation to add checks to all memory accesses, and it has a magic sequence of instructions that do nothing useful on a normal CPU but that precise sequence of instructions are detected by the JIT and allow you to pass information to the memory debugger. In ALL of those cases, there’s nothing sensible or helpful that the compiler’s optimizer can do, and if it touches the code at all then it’s likely to break it. So compilers do not try to “optimize” inline assembly, and this is the correct behaviour. In the cases where the compiler’s optimizer could do something useful, such as writing SSE code, then the opcodes are understood and supported by the compiler. In those cases, if you want the compiler to optimize for you, you could and should be using compiler intrinsics instead. Good post! As you point out at the end, while modern compilers do a wonderful job of optimization, far beyond what most programmers would come up with given limited time and imagination, there are still reasons for programmers to look at what the compiler generated in order to do further optimization but the task has changed. 
Instead of trying to improve on the compiler’s code generation, the programmer should look for things that the source code implicitly asks to be computed and remove those that are not needed to achieve the program’s purpose. Perhaps some future compiler could output suggestions along this line. I tried pasting your code into Compiler Explorer, but got different disassembly. Did you use any options other than -O3? Hm, the output is slightly different for compare1now. The conclusions still stand, though. This discussion completely ignores what happens once the CPU gets hold of it. CPU effectively runs another optimizer on the code , but at run time. I was surprised that you didnt actually time the execution, rather than just reading the generated code. I totally agree that people are fixated on ‘less lines means faster’. The object code for the different methods is very different. You can assume that three instructions will execute faster than nine instructions, so compare1 and compare3 will be faster without doubt. This can be false if something goes wrong (i.e., unexpected cache fail or pipeline flush), but in that case, any optimization is futile. You can’t. You once could but not these days. Modern processors do a lot of runtime optimizations. Three instructions might generally show worse results than nine equivalent instructions due to pipelining issues (instructions pairing, for example). Besides, I once saw a (synthetic) test where a floating-point instruction in a separate procedure turned out to be faster when timed than the same instruction inlined (the call cost turned out to be negative) with processors of one major CPU vendor and slightly slower on processors of another one, even though the name of the second vendor was shorter than that of the first :-) >You can assume that three instructions will execute faster than nine instructions You can assume that, but that might not be a correct assumption. It would be nice if you could see the timing for each different cpu, but it’s a huge amount of work. I’d agree in case of a smaller difference (i.e., four/five instructions versus three). But in the case of nine to three, you’d have to go to the most extreme case for the nine instructions to be faster (nine simple register-to-register instructions without memory access which translate into a single micro-op each versus three register-to-memory with complex addressing modes which can translate to three or four micro-ops each). Both compare1 and compare3 are a memory (register indirect, without indexing) to register compare (two micro-ops) and two conditional branches (one micro-op each). A total of four micro-ops. I can’t imagine how a processor can optimize at run time nine instructions in less than four micro-ops, when the compiler hasn’t been able to do the same at compile time. You can argue about out-of-order execution, pipeline stalls and the like. But when you take into account that compare1 is a strict subset of compare2 (in other words, compare2 contains all three compare1 instructions plus six additional ones), it’s difficult to imagine a case where the longer sequence is less probable to get stalled. You can most definitely not assume that. Comparisions actually being one of the most common cases nowadays since conditional jumps can really screw with branch prediction if the pattern isn’t, well, predictable. 
In this case I don’t think it really matters unless you can do the entire operation the comparision is used for without conditional branching, but there are some cases where it definitely does. For example, binary searches perform much worse than you’d expect from just looking at the instructions themselves, since by definition they branch in the worst possible pattern (0.5/0.5 taken/not taken). A linear search is actually faster for N < surprisingly large number – rule of thumb says around 8 or 10 depending on who you ask. And for some other, more traditional, examples: x86 has quite a few 'big' instructions that are actually slower than just doing the same thing on your own. The real classic here being LOOP, which no performance-aware assembly language programmer or compiler has ever used. Even last time I benched it on a modern system it was significantly slower than the equivalent DEC + JNZ (roughly 2x). I suppose the modern CPUs could very well optimize it to be just as fast but there's no need to because of this. (And DEC is in turn slower than SUB reg,1 on modern CPUs because of a historical design mistake which gives DEC a dependency on a flag which SUB lacks…) For future posts you can use for quick comparisons The -120 comes from GCC taking advantage of the 128-byte red zone. The AMD64 ABI guarantees that the 128 bytes beyond whatever rsp points to will never get modified by interrupt handlers, so it’s okay to grow the stack pointer by slightly less than you actually need and just make the array start in that memory region. “with x86-64 gcc 7.2 and optimization -O3.” wait, what? godbolt supports msvc too, so why did you show us the result of gcc???? Yeesh, I poke my head out of the bubble and you tell me to get back in the bubble. It’s not like this issue applies only to Microsoft compilers. I agree. Yeesh. My faux surprise was a fishing attempt in case there was some interesting news that you wouldn’t answer a direct question about. You can stay out of the bubble :D I can already see the clickbait headline : “Breaking news – MSVC might be replaced by gcc, a Microsoft lead architect says.” (Oh God, what have I done…) Why not? Almost every compiler does things differently, and it’s instructive to see what the differences are (and why they’re done that way.) For example, it appears that clang prefers cmov instructions, which is a potential speed boost but is a problem if you have to support pre-P6 processors. My immediate thought was “if it isn’t inlined, the cost is dwarfed by stack operations.” I’m not specifically (C++ or C99, and let’s be honest, that’s the level of language we’re talking here) bothered about offering up a comparison function that “possibly” mutates global state. Well, I would be, if the compiler in question doesn’t respect the const qualifier. But then I’d have other worries… I love this post. I like the argument that “if you’re going to think about optimisation, think very carefully about it,” which is actually Knuth’s corollary. May I also recommend Joe Duffy’s thoughts on the matter to your readers? joeduffyblog.com/2010/09/06/the-premature-optimization-is-evil-myth/
https://blogs.msdn.microsoft.com/oldnewthing/20171117-00/?p=97416
CC-MAIN-2018-13
refinedweb
4,753
67.28
A list will keep its elements in the same order that they were added, but searching for elements takes linear time (i.e. time proportional to the length of the list). Removing an element from a list is a constant-time operation if you have an iterator that refers to the element in question, but if you don’t have such an iterator then the element must first be found and this will also take linear time. In the case of a set, searching and removing is faster (logarithmic, and in the case of the SGI extension hash_set, amortized linear). However, sets don’t remember the order that the elements were added in. If you want to have an ordered list of elements, but you also want to be able to find given elements quickly, then you can combine a list with a map to index it. This article describes a brief template class called hashed_list, which uses the SGI extension hash_map and the STL class list; it could also use the STL class map with little modification. The template class described here does not support all STL semantics (for example, you can’t get a nonconst iterator out of it); this article is meant to demonstrate the concept. Note that hash_map will assume that all the elements are unique. Generalising this class to support non-unique elements is left as an exercise to the reader (besides using multimap, you need to think about the erase() method that is defined below). The general idea The objects are stored in a list, and also in a map. The map will map objects to list iterators. Whenever an object is added to the list, an iterator that refers to that object is taken and stored in the map under that object. Then when objects need to be counted or deleted, the map is used to quickly retrieve the list iterator, so that the list does not need to be searched linearly. This relies on the fact that iterators in a list will remain valid even if other parts of the list are modified. The only thing that invalidates a list iterator is deleting the object that it points to. Hence it is acceptable to store list iterators for later reference. The code First, we include the two container classes that we will be using: #include <list> #include <hash_map> using namespace std; Now for the hashed_list class itself. Since hash_map is a template class with three arguments (the object type, the hash function, and the equality function), we need to support the same three arguments if we’re making a general template: template<class object, class hashFunc, class equal> class hashed_list { I like to start with some typedefs to save typing later. We will need iterators and const iterators to the list, and also a type for the map: typedef list<object>::iterator iterator; typedef list<object>::const_iterator const_iterator; typedef hash_map<object,iterator, hashFunc,equal> mapType; And the data itself. For brevity I’ll be storing it directly and I won’t write an explicit constructor, so we don’t have to worry about the allocation. list<object> theList; mapType theMap; Now for some methods. Obtaining iterators is simply a matter of getting them from the list; we only allow const iterators because otherwise we’d have to write lots more code to deal with what happens when the user changes things via the iterators. Similarly, obtaining a count of objects just involves getting it from the map (since hash_map does not allow duplicate keys, the result will be 0 or 1). 
public: const_iterator begin() const { return theList.begin(); } const_iterator end() { return theList.end(); } mapType::size_type count(const object& k) const { return theMap.count(k); } Now if we want to add an object to the list, we must also add it to the map. Since the list’s insert() method returns an iterator that refers to the object we just added, we can give its return value to the map, so our add() method is one statement: void add(const object &o) { theMap[o] = theList.insert(theList.end(),o); } To check that the “objects must be unique” constraint is not being violated, you might wish to add assert(theMap.count(o)==0); to the beginning of the above method (and #include <cassert>). The following method will erase a given object. First, a map iterator is obtained; this is dereferenced to get the list iterator and erase it from the list, and then the map iterator is used to erase it from the map. That way, only one lookup operation is needed in the map (when the iterator is found); it is not necessary to look up the object twice (once to reference it, again to erase it). void erase(const object &o) { mapType::iterator i=theMap.find(o); theList.erase((*i).second); theMap.erase(i); } For readability you could also write it like this, but it will be less efficient if the lookup takes longer (even in a hashtable it can take a while in the worst case): void slower_erase(const object &o) { theList.erase(theMap[o]); theMap.erase(o); } Finally, finish the class: }; To test it, you might want to write something like this: #include <string.h> struct strEqual { bool operator()(const char* s1, const char* s2) const { return (s1==s2 || !strcmp(s1,s2)); // (although strcmp() might already have the s1==s2 check) } }; typedef hashed_list<const char*, hash<const char*>, strEqual> TestType; int main() { TestType t; t.add("one"); t.add("two"); t.add("three"); cout << t.count("two") << endl; // should be 1 t.erase("two"); cout << t.count("two") << endl; // should be 0 copy( t.begin(), t.end(), ostream_iterator<const char*>(cout," ")); cout << endl; } Conclusion The above code will increase the speed of finding items in lists, particularly when they are long. This is at the expense of consuming more memory (because of the map) and making the maintenance of the list slightly slower (because the map needs to be maintained with it). If hash_map is being used then the overhead of maintaining the map is (in the amortized case) a constant for each maintenance operation, which may or may not be acceptable depending on the application and on what proportion of its operations are maintenance. Silas S Brown ssb22@cam.ac.uk
https://accu.org/index.php/journals/2011
CC-MAIN-2017-43
refinedweb
1,052
58.92
Introduction to #else in C

The following article provides an outline of #else in C. #else is a preprocessor directive that supplies the statements to be used when the condition given to an #if, #ifdef or #ifndef directive evaluates to false. It is handled by the preprocessor, which runs automatically before actual compilation starts: before a C program is compiled, its source code is processed, and this step is called preprocessing. All the commands understood by the preprocessor are known as preprocessor directives, and every preprocessor directive begins with a # sign.

Syntax of #else in C

The preprocessor transforms the source code written by the programmer before its actual compilation is done. These transformations are purely lexical, which means the output of the preprocessor is still text.

#if condition
// statements to be kept when the condition is true
#else
// statements to be kept when the condition in #if is false
#endif

Example:

Code:

#if 4 > 5
printf("Statements inside if block");
#else
printf("Statements inside else block");
#endif

Here # specifies that this is a preprocessor directive, handled by the preprocessor before the code is sent to the compiler. Macros defined in the program with the #define directive can also be used in the condition of the #if directive.

How does the #else Directive work in C?

Preprocessor directives are processed in the source code before it enters the compiler (#undef is another such command). The various preprocessor directives can be grouped into 4 main categories:

- Macros
- File Inclusion
- Conditional Compilation
- Other Directives

The source code written by the user is first sent to the preprocessor, which generates an expanded source file with the same name as the program. This expanded file is then compiled into object code, and once the object code is linked with the library functions being used, an executable file is generated.

The #else directive provides the alternate statements to be used when the condition given to #if, #ifdef or #ifndef fails; whenever the condition is false, the preprocessor keeps the #else block instead.

There are certain rules to follow when declaring the conditional expression:

- The expression must be of integral type. It can include integer constants, character constants, and the defined operator.
- The sizeof and cast operators cannot be used in the expression.
- All integer types such as int, long or unsigned long are treated in the same manner.
- The expression cannot query the environment on which the program will run.

After the #if or #elif directives, the #else block comes into action, and every #if.. #elif.. #else block must be ended with an #endif directive, which tells the preprocessor that the conditional block is over.

Examples of #else in C

Given below are the examples mentioned:

Example #1

In this example we use the #if directive to choose which statement is kept in the program; if the condition is false, the statement in the #else block is compiled instead. Note that #if is evaluated by the preprocessor, before the program runs, so it can only compare constants and macros (such as the LIMIT and NUMBER macros defined with #define below), not values read at run time.

Code:

#include <stdio.h>
#define LIMIT 5
#define NUMBER 3
int main()
{
#if NUMBER < LIMIT
    printf("Number is less than the limit \n");
#else
    printf("Number is greater than or equal to the limit \n");
#endif
    return 0;
}

Output: Number is less than the limit

Example #2

In this example we print a grade based on the MARKS macro defined with the #define directive. The #ifdef directive first checks that the macro is defined, and the #if/#elif/#else chain then compares its value.

Code:

#include <stdio.h>
#define MARKS 50
int main()
{
#ifdef MARKS
    printf("MARKS macro has been defined \n");
#endif
#if MARKS > 90
    printf("Student has scored GRADE A");
#elif MARKS > 60
    printf("Student has scored GRADE B");
#else
    printf("Student has scored GRADE C");
#endif
    return 0;
}

Output: MARKS macro has been defined / Student has scored GRADE C

Conclusion

While working with preprocessor directives in a large C program, conditional compilation can be expressed with the #ifdef, #if or #ifndef directives. The #else directive then provides the block to be used when the condition in the preceding directive is false.

Recommended Articles

This is a guide to #else in C. Here we discussed the introduction to #else in C and how the #else directive works, along with programming examples.
https://www.educba.com/hash-else-in-c/?source=leftnav
CC-MAIN-2021-31
refinedweb
892
50.77
Django Testing Admin

The admin section of Django is part of your site too. Why should it not be tested? Every part of your site should be able to be tested.

Authenticating

Create a file called test/test_admin.py in your test folder, which must be a module (i.e. it has an __init__.py).

Create the test class, create a superuser and log that user in:

class PasswordChangeTests(TestCase):
    '''Check that changing the password on the admin side works'''

    def setUp(self):
        self.super_user = get_user_model().objects.create_superuser(
            email='testsuper@testsuper.co.za',
            password='1234test'
        )
        self.client.login(
            username='testsuper@testsuper.co.za',
            password='1234test'
        )

Now, how do we get anywhere? We need to know the names of the URLs to reverse; they are in the docs, but we can also find them in the django.contrib.auth package.

Use the reverse method to test the response:

    def test_password_change_link_exists(self):
        '''Test that the user change page contains a password change link'''
        response = self.client.get(
            reverse(
                'admin:users_user_change',
                args=(self.super_user.id,)
            )
        )
        self.assertContains(response, 'Change user')
        self.assertContains(
            response,
            "Raw passwords are not stored, "
            "so there is no way to see this user's password,"
            " but you can change the password using this form."
        )

The imports needed are:

from django.test import TestCase
from django.contrib.auth import get_user_model
from django.core.urlresolvers import reverse
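A further check one could add under the same setUp (this extra test is an illustration, not part of the original post) is that the admin index itself loads for the logged-in superuser:

    def test_admin_index_loads(self):
        '''The admin index should respond for the logged-in superuser'''
        response = self.client.get(reverse('admin:index'))
        self.assertEqual(response.status_code, 200)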
https://fixes.co.za/django/django-testing-admin/
CC-MAIN-2020-50
refinedweb
220
51.44
I have installed the Esri ArcGIS SDK (C++) samples (I got it as a developer). The libraries used by the samples are recognized by Qt Creator as expected (I have been running the samples and everything went well in Qt). I tried to use different classes that are part of the SDK C++ for Qt and ran into problems with the header files not being recognized. For example, when I include WmsDynamicMapServiceLayer as below: #include "WmsDynamicMapServiceLayer.h"...........( Cannot open include file: 'WmsDynamicMapServiceLayer.h'). The same happens with other classes too. I looked into the include directory (C:\Program Files (x86)\ArcGIS SDKs\Qt100.0\sdk\include) and could not find the header mentioned above, nor many other headers I would like to use. Do I need to install something else to have all the header files for the SDK C++ for Qt reference classes available?

I have a similar issue with the ArcGIS Runtime SDK version 10.2.6: there are no header files in the SDK. Version 100.0 has include files, but the header files have changed drastically, which I assume is why Mario cannot find what he needs. The funny thing is the API documentation lists all the header files I need, but they are not in the SDK. Is there a version of 10.2.6 that has include files?
https://community.esri.com/thread/193077-arcgis-sdk-c-developer-some-header-s-files-are-not-been-recognised
CC-MAIN-2018-43
refinedweb
221
66.03
#include <FXBitmapView.h>

Inheritance diagram for FX::FXBitmapView.

Thus, a single bitmap image can be displayed inside multiple bitmap view widgets.

Member function summaries:
- Construct a scroll window.
- Destroy (virtual).
- Create server-side resources. Reimplemented from FX::FXComposite.
- Detach server-side resources.
- Perform layout immediately. Reimplemented from FX::FXScrollArea.
- Image view widget can receive focus. Reimplemented from FX::FXWindow.
- Return the width of the contents.
- Return the height of the contents.
- Change image; return image (inline).
- Set on color; get on color.
- Set off color; get off color.
- Set the current alignment; get the current alignment.
- Save list to a stream; load list from a stream.
http://www.fox-toolkit.org/ref16/classFX_1_1FXBitmapView.html
CC-MAIN-2017-43
refinedweb
110
58.04
- Author: AndrewIngram
- Posted: February 6, 2009
- Language: Python
- Version: 1.0
- Tags: soap soaplib wsdl web-services
- Score: 1 (after 1 rating)

This snippet is a replacement views.py for SOAP views with on-demand WSDL generation. It iterates over your installed apps looking for web_service.py in each one; any methods decorated with @soapmethod within web_service.py will automatically be imported into the local namespace, making them visible in the WSDL. It will blindly override local objects of the same name, so it's not very safe (it could do with some more error checks), but it works very well.
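The snippet code itself is not reproduced on this page, but the mechanism it describes can be sketched roughly as follows. This is only an illustration of the idea, not the actual snippet: the web_service module name matches the description above, while the _is_soap_method marker attribute is an assumption made for the sketch (the real @soapmethod decorator from soaplib may tag functions differently).

from django.conf import settings

def collect_soap_methods():
    '''Gather soap-exposed callables from each installed app's web_service.py.'''
    found = {}
    for app in settings.INSTALLED_APPS:
        try:
            module = __import__(app + '.web_service', fromlist=['web_service'])
        except ImportError:
            continue  # this app has no web_service.py, skip it
        for name in dir(module):
            obj = getattr(module, name)
            # assume decorated methods carry some marker attribute
            if callable(obj) and getattr(obj, '_is_soap_method', False):
                found[name] = obj
    return found

# The snippet then pulls these into the local namespace so that the
# WSDL generator can see them -- which is why, as the author notes,
# name clashes are overridden blindly.
globals().update(collect_soap_methods())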
https://djangosnippets.org/snippets/1311/
CC-MAIN-2016-36
refinedweb
174
63.29
A Practical Intro to Streaming MapReduce Processing A Practical Intro to Streaming MapReduce Processing Join the DZone community and get the full member experience.Join For Free The Architect’s Guide to Big Data Application Performance. Get the Guide. In this article I’ll introduce the concept of Streaming MapReduce processing using GridGain and Scala. The choice of Scala is simply due to the fact that it provides for very concise notation and GridGain provides very effective DSL for Scala. Rest assured you can equally follow this post in Java or Groovy just as well. The concept of streaming processing (and Streaming MapReduce in particular) can be basically defined as continues distributed processing of continuously incoming data streams. The obvious difference between other forms of distributed processing is that input data cannot be fully sized (or known) before the processing starts, and the incoming data appears to be “endless” from the point of view of the processing application. Typical examples of streaming processing would be processing incoming web event-level logs, twitter firehose, trade-level information in financial systems, facebook updates, RFID chips updates, etc. Another interesting observation is that streaming processing is almost always real time. The important point here is that streaming nature of input data necessitates the real time characteristic of processing. If your processing lags behind the volume of incoming live data – you will inevitably run out of space to buffer the incoming data and system will crash. I will provide two code examples to highlight streaming MapReduce processing with GridGain: - First is a very simply canonical MapReduce application that I’ll use to illustrate the basics of GridGain. - Second is a bit more involved and will demonstrate how you can write a start-to-end streaming MapReduce application (from ingestion to querying). Example 1 Let’s start with GridGain. GridGain is Java-based middleware for in-memory processing of big data in a distributed environment. It is based on high performance in-memory data platform that integrates world’s fastest MapReduce implementation with In-Memory Data Grid technology delivering easy to use and easy to scale software. For the first example we’ll develop an application that will take the string as an argument and will calculate number of non-space characters in it. It will accomplish it by splitting the argument string into individual words, and calculating the number of characters in each word on remote nodes that are currently available in the grid. In the end – it will aggregate the lengths of all words into the final result. This is a standard “HelloWorld” example in the word of distributed programming. First off, we need to create the cluster to work with. If you download GridGain and unzip it – all you need to do is to run a node start script passing it a path to XML configuration file 'bin/ggstart.sh examples/config/spring-cache-popularcounts.xml' to start a node: Note that you can start as many local nodes as you need – just run this script as many times. Note also that you can start standalone nodes from Visor – GridGain DevOps Console (discussion on that is outside of this blog). Once you started all the nodes (let’s say 2) you’ll notice that all nodes started and discovered each other automatically with no drama. You have exactly zero configuration to worry about and everything works completely out-of-the-box. Now that we have the cluster running let’s write the code. 
Open your favorite IDEA or text editor and type this: import org.gridgain.scalar.scalar import scalar._ object Main extends App { scalar("examples/config/spring-cache-popularcounts.xml") { println("Non-space chars: " + grid$.spreadReduce( for (w <- input.split(" ")) yield () => w.length)(_.sum)) } } Depending on what build system you use (SBT, Ant, IDEA, Eclipse, etc.) you just need to include the libs from GridGain (main JAR + JARs in '/libs' subfolder) – and compile. If everything compiles – just RUN it passing it some input string. That’s all there is to it: Let me quickly explain what’s happening here (it will apply to the following example as well): - First we use scalar “keyword” passing it a path to configuration XML file to startup a node from within our Scala app. - grid$ denotes a global projection on all node in the cluster (GridGain employes functional API in its core). Projection provides a monadic set of operations available on any arbitraty set of GridGain nodes. - We use method spreadReduce(...) on projection that takes two curried arguments: - set of closures to spread-execute on the cluster, and - reduction function that will be used to aggregate the remote results. - When spreadReduce(...) completes (and it’s a synch call among synch and async options) – it returns the non-space count of characters. Now – let me ask you a question… Did you notice any deployment steps, any Ant, Maven, any copying of JAR or any redeploying after we’ve changed the code? The answer is no. GridGain provides pretty unique zero deployment technology that allows for complete on-demand class deployment throughout the cluster – leaving you the developer to simply write the code and run your applications as you would do locally. Pretty nifty, isn’t it? Example 2 Ok, now that we tried something very simple and trivial let’s develop a full featured streaming MapReduce app using what we’ve learned so far. We’ll adopt a canonical example from Hadoop: we’ll ingest number of books into in-memory data grid, and will find 10 most frequent words from those books. The way we’ll be doing it is via streaming MapReduce: while we are loading books into memory we will be continuously querying the data grid for 10 most frequent words. As data gets loaded the results will change, and when all books are fully loaded we’ll get our correct (and final) tally of 10 most frequent words. - We’ll show both programmatic ingestion and querying in one application (no need to pre-copy any stuff into anything like HDFS), and - We’ll develop this application in true streaming fashion, i.e. we won’t wait until all data is loaded and we’ll start querying concurrently before all data is loaded Here’s the full source code: import org.gridgain.scalar.scalar import scalar._ import org.gridgain.grid.typedef.X import java.io.File import io.Source import java.util.Timer import actors.threadpool._ object ScalarPopularWordsRealTimeExample extends App { private final val WORDS_CNT = 10 private final val BOOK_PATH = "examples/java/org/gridgain/examples/realtime/books" type JINT = java.lang.Integer val dir = new File(X.getSystemOrEnv("GRIDGAIN_HOME"), BOOK_PATH) if (!dir.exists) println("Input directory does not exist: " + dir.getAbsolutePath) else scalar("examples/config/spring-cache-popularcounts.xml") { val pool = Executors.newFixedThreadPool(dir.list.length) val timer = new Timer("words-query-worker") try { timer.schedule(timerTask(() => query(WORDS_CNT)), 3000, 3000) // Populate cache & force one more run to get the final counts. 
ingest(pool, dir) query(WORDS_CNT) // Clean up after ourselves. grid$.projectionForCaches(null).bcastRun( () => grid$.cache().clearAll()) } finally { timer.cancel() pool.shutdownNow() } } def ingest(pool: ExecutorService, dir: File) { val ldr = dataLoader$[String, Int](null, 2048, 8, 128) // For every book, allocate a new thread from the pool and start // populating cache with words and their counts. try { (for (book <- dir.list()) yield pool.submit(() => Source.fromFile(new File(dir, book), "ISO-8859-1"). getLines().foreach( line => for (w <- line.split("[^a-zA-Z0-9]") if !w.isEmpty) ldr.addData(w, (i: Int) => if (i == null) 1 else i + 1) ))).foreach(_.get) } finally ldr.close(false) // Wait for data loader to complete. } def query(cnt: Int) { cache$[String, JINT].get.sql(grid$.projectionForCaches(null), "length(_key) > 3 order by _val desc limit " + cnt). toIndexedSeq.sortBy[JINT](_._2).reverse.take(cnt).foreach(println _) println("------------------") } } Few notes about the code in general: - We use the books that are shipped with GridGain’s examples - We are passing specific XML configuration file for 'scalar' keyword (it configures TCP discovery and partitioned cache with one backup) - We use a simple timer to run a query every 3 seconds while we are loading the books - After everything is done – we are cleaning after ourselves (so that you can run this app multiple times without leaving garbage in the data grid) Notes about ingest(...) and query(...) method: - We use GridGain’s data loader in ingest(...) method that provides advanced back-pressure management for asynchronous bulk load distributed operations - We use method sql(...) on cache projection (cache projections provide monadic set of data grid operations) to issue a simple distributed SQL query - In GridGain you can omit “select * from table” in most cases, and just supply a where clause That’s all there is to it. Compile it (as always, no deployment or redeployment is necessary) and run it. You will see print out of 10 most frequent words every 3 second while books are being read on and put into the data grid: Final Thoughts In about 50 lines of code we’ve put together both ingestion and querying streaming MapReduce app. We’ve run it on the local cluster – and it will run just the same way on 3, 10, 100s or 1000s of nodes deployed anywhere in the world (as long as we have some way to connect to them). Keep in mind that this is obviously a very simply (almost trivialized) example of streaming MapReduce. Yet with additional few lines of code you can replace book with, let’s say, Twitter firehose keyed by hashtags, and print outs with updates to your social dashboard – and you get a pretty useful system tracking most popular Twitter hashtags in real time in a few hundred lines of code – while automatically scaling to 100s terabytes of data being processed on 1000s of nodes. Learn how taking a DataOps approach will help you speed up processes and increase data quality by providing streamlined analytics pipelines via automation and testing. Learn More. }}
https://dzone.com/articles/practical-intro-streaming
CC-MAIN-2019-04
refinedweb
1,666
54.42
- Ian Gillespie - Sep 29, first portion will cover the basics of setting up an app through the Splunk Web Framework, which will result in the creation of a custom input field and table. It’s pretty simple to create a table in Splunk. By default, Splunk needs to refetch the data in order to filter it down. However, what if you had a set of data and you wanted to easily filter that table in real-time? Let's say you have a predefined list of subnets in a lookup. You shouldn't have to refetch the data to find a match, if you're searching for something specific, especially since the data isn't changing frequently enough. In this case, having something that filters in real-time would be much more effective. I am going to take you through step-by-step how to do just that. Due to the amount of content we will be covering, this tutorial will be split into two separate posts. The first portion will cover the basics of setting up an app through the Splunk Web Framework, which will result in the creation of a custom input field and table. The second will cover how to add the filtering functionality to what we have built in the first. Oh, and if you enjoy a more visual route, there are related screencasts split across three videos. Already familiar with the Splunk Web Framework? You will probably be alright skimming through this first part. Caution: There's some heavy coding ahead, specifically in regards to JavaScript. I will do my best to guide you through each step. Download the zipped db_exploits.csv file. This contains a list of database exploits () and we will use this data to populate our lookup. Once you have this downloaded, go into Splunk and create a new lookup table from this .csv file. We’ll be referencing it in our search as| inputlookup db_exploits.csv Feel free to also download working examples of the app: Since we are using the built-in Splunk Web Framework, we are going to create our app from the command line at $SPLUNK_HOME/etc/apps/framework and run: ./splunkdj createapp <appname> #name whatever you like It will then ask for your username and password and then prompt you with: The <appname> app was created at '$SPLUNK_HOME/etc/apps/<appname>'. Please restart Splunk. Once you restart, go to<appname>/home/ and you should see something like this: Note: If you don’t restart Splunk you will get a 404 error. In the console go back to$SPLUNK_HOME/etc/apps/<appname>/ and you will see a directory structure like this: Everything we will be doing will be happening inside the Django directory. Go to django/<appname>/ and you will see a directory structure like this: The three important directories we will be dealing with are: First, let’s take a look at the default Django template inside of the templates directory called home.html: {% extends "splunkdj:base_with_account_bar.html" %} {% load splunkmvc %} {% block title %}{{app_name}} Home Page{% endblock title %} {% block css %} {% endblock css %} {% block content %} Template message: {{message}} You should also look in the JavaScript console... {% endblock content%} {% block js %} {% endblock js %} Here's an outline of what each section in this template is for: Before continuing, we're going to add in a new block called {% block managers %}. This is where we will be keeping our search manager, which will call our Database Exploit lookup and populate our table. 
Right after the {% endblock content %} add: {% block managers %} {% searchmanager id="dbe" search="| inputlookup db_exploits.csv | rename date as Date, description as Description, file as File, platform as Platform, port as Port, type as Type | table Date, Description, File, Platform, Port, Type | sort -Date | head 500" preview=True cache=True %} {% endblock managers %} We provide an id of 'dbe' in order to reference this search later, when we add our table template tag. This is so it knows where to pull it’s data from. We are using a lookup called db_exploits.csv to populate our search and the search itself is pretty straightforward. I’m also limiting this to 500 results, because if we try to filter a ridiculous amount it could return 10,000 results and be way too performance heavy on the browser. Let's first create our input field JavaScript template. This will be a simple input field, so there's no need to use one of Splunk’s built in form elements. Also, because I want the input field to pass it’s value to the table, I will be using a Backbone View that will utilize this template. If you’ve never used Backbone before, don’t worry, it should all make sense once you see how it fits together. Just know that Splunk Web Framework's JavaScript components use Backbone as their core, so it makes sense for us to do the same. For now, we will need a template to reference, which will be added into our{% block js %}. This template will be referenced inside our filterinput.js file, and will allow us to attach the functionality we define there to the JavaScript template we define in our home.html file. Go ahead and remove the default tags inside the {% block js %} inside the Django template located at appname>/django/<appname>/templates/ and add the following and save the file: <script type="text/template" id="filterFieldTemplate"> <input type="text" class="form-control" id="filterField" placeholder="Filter table" /> </script> At this point, we need to create custom template tags for our table and input. Go to <appname>/django/<appname>/templatetags/ and create two new files called filterinput.py and filtertable.py. Go into filtertable.py and add the following: from django import template from splunkdj.templatetags.tagutils import component_context register = template.Library() @register.inclusion_tag('splunkdj:components/component.html', takes_context=True) def filtertable(context, id, *args, **kwargs): # The template tag return component_context( context, "filtertable", # The custom view's CSS class name id, "view", "<appname>/filtertable", # Path to the JavaScript class/file for the view kwargs ) Here we are creating our filtertable tag. The name itself is derived from the method name. If we called this method foobizbaz then in our .html template we would define it as {% foobizbaz %}. The path to the javascript file links to the location of <appname>/django/<appname>/static, which can be confusing since it's just <appname>/filtertable. Keep in mind that directory is where the file actually exists. 
Add the following to the filterinput.py file: from django import template from splunkdj.templatetags.tagutils import component_context register = template.Library() @register.inclusion_tag('splunkdj:components/component.html', takes_context=True) def filterinput(context, id, *args, **kwargs): # The template tag return component_context( context, "filterinput", # The custom view's CSS class name id, "view", "mynewapp/filterinput", # Path to the JavaScript class/file for the view kwargs ) Go back to the home.html file located in <appname>/django/<appname>/templates/and first remove the following from the{% block content %} : <div> <div> <p>Template message: {{message}}</p> <p>You should also look in the JavaScript console...</p> </div> </div> then add inside the {% block content %}: <div id="dashboard"> <div> <h2>Database Exploit Filter Table</h2> </div> <!-- filterinput custom template tag --> {% filterinput id="filterinput" %} <!-- filtertable custom template tag --> {% filtertable id="filtertable" managerid="dbe" %} </div> The managerid in the filtertable tag references the search manager we added earlier, so it knows where to pull in the data from. At this point, it’s not going to know how to render these tags and, if we were to visit our page at, we would get a Django template error. This is because Django has no idea how to render these tags and this is where the JavaScript will come into play. When we use one of our custom tags, JavaScript will handle how that data should be rendered. Go to <appname>/django/<appname>/static/<appname>/ and create two new .js files called filterinput.js and filtertable.js. The basic structure of filtertable.js is as follows: return data; }, //creates the view createView: function() { return this; }, // Override this method to put the Splunk data into the view updateView: function(viz, data) { //returns back the first row of data var myResults = data[0]; //appends data to dom this.$el.html(myResults); } }); return FilterTable; }); Looking through filtertable.js, we are first using the define() method to define a new module. Then, we load in the necessary files including Underscore, splunkjs mvc and the SimpleSplunkView. The filter table extends the SimpleSplunkView inheriting all of its properties and providing us an easy way to handle the data that Splunk gives us from our search. The options() method tells Splunk to return "results," instead of the other option of a "preview." The formatData() method is where we will eventually be formatting the data into a table format. createView() creates the view, we won't be doing anything else with this method. updateView() updates when the table needs to be re-rendered by Splunk. Finally, we return FilterTable at the end. 
As for filterinput.js, we are going to add the following: define(function(require, exports, module) { var _ = require('underscore'); var Backbone = require("backbone"); var FilterTable = require('./filtertable'); var SimpleSplunkView = require('splunkjs/mvc/simplesplunkview'); /* Create a simple backbone view that will be used for our filter field */ var FilterInput = Backbone.View.extend({ el: '#filterinput', initialize: function() { //define the template we want to use for this -- //this is defined in the 'block js' of home.html this.template = _.template($('#filterFieldTemplate').html()); this.render(); }, //render the input render: function() { this.$el.html(this.template()); return this; } }); return FilterInput; }); There are some differences between filterinput.js file and filtertable.js file. As we won't need to be handling data sent by Splunk, it would be unnecessary to extend from SimpleSplunkView. Instead, I am using a Backbone View, which is what SimpleSplunkView also extends from. However, we don't need all the added Splunk specific functionality. Due to the fact we are using a Backbone View we need to load in Backbone directly up top. We are also adding a reference to our filtertable.js, since we will be connecting the two. First, el: '#filterinput' defined a pre-exisitng element in our html to attach this element to. If you remember #filterinput is defined in our template in the content block as {% filterinput id="filterinput" %}. In theinitialize() method I am defining the template for this input, which is the JavaScript template that was added to our home.html file earlier. It then calls the render() method and attaches the template to the DOM with . Now, before we can view this in the browser, a reference needs to be added to the JavaScript files in our Django template. Go back to your home.html template in >span<appname>/django/<appname>/templates/ and right below {% load splunkmvc %} in home.html you will want to add the following: <!-- load in filtertable.js file --> {% load filtertable %} <!-- load in filterinput.js file --> {% load filterinput %} If you go to view the page in the browser, you should see one line of data output below the input field: This is because in the updateView() method inside of filtertable.js we have var myResults = data[0] effectively pulling out the first line. What we need to do next is loop through our data, output all of the rows, and format them into an actual table. Add the following to the formatData() method in filtertable.js method (be sure to remove the original return data): var myDataString = ""; //format each row -- this uses the underscore method _.each to loop through the results _.each(data, function(row, index) { myDataString = myDataString + '<tr class=“body”><td>' + row[0] + '</td><td>' + row[1] + ' ' + row[2] + '</td><td>' + row[3] + '</td><td>' + row[4] + '</td></tr>'; }); //wrap the string in a <table> tag and give it some headers myDataString = "<table class='table table-striped' id='dbetable'><thead><tr><th>Updated</th><th>Description</th><th>Category</th><th>Port </th></tr></thead><tbody>" + myDataString + "</tbody></table"; return myDataString; Above, we first define a new empty string myDataString, followed by the Underscore method _.each() to loop through each row of data and wrap table rows and columns around them. At the end, we take the rows and wrap a <table /> tag around them so it formats it nicely. 
Also, in the updateView() method, replace what is currently there with since the data has already been formatted in the >spanformatData() method, there is nothing else we need to do here. In the end, your filtertable.js file should look like this: var myDataString = ""; //format each row _.each(data, function(row, index) { myDataString = myDataString + '<tr><td>' + row[0] + '</td><td>' + row[1] + ' ' + row[2] + '</td><td>' + row[3] + '</td><td>' + row[4] + '</td></tr>'; }); //wrap the string myDataString = "<table class='table table-striped' id='dbetable'><thead><tr>" + "<th>Updated</th><th>Description</th><th>Category</th><th>Port</th></tr>" + "</thead><tbody>" + myDataString + "</tbody></table"; return myDataString; }, //creates the view createView: function() { return this; }, // Override this method to put the Splunk data into the view updateView: function(viz, data) { this.$el.html(data); } }); return FilterTable; }); Now, save the file. If you go back to the browser and view the page (<appname>/home/), you should see the input field and the table. At this point, if you type anything into the input field, it won’t filter the table. This will be handled in the next part as we connect the input field to the table so it does its job and filters as we type. If you're looking for something different than the typical "one-size-fits-all" security mentality, you've come to the right place.
https://www.hurricanelabs.com/splunk-tutorials/learn-how-to-build-a-real-time-filtering-table-in-splunk-part-1
CC-MAIN-2019-43
refinedweb
2,287
63.29
was a day much awaited. It was the day the big meeting happened, during which each team discussed its storage needs and everyone hoped their prayers would be answered. Each team had an opportunity to voice its storage requirements - capacity, type, performance, availability, price, scale - and there was promise that everyone’s requirements would be met. As the meeting progressed, and each team voiced its opinions, the room got hotter with conflict and talk of how each team’s requirements would not satisfy another team’s. Just when the room was about to reach explosive levels, a knight in shining armor rode into the room bearing a flag with the word “Ceph”. There was silence before the commotion, but the commotion was different in nature. My imagination of this scenario: too far fetched? Maybe not. We’re going through an age of data explosion. There’s data constantly generated. According to Statista, as of last year, there were more than 2.51 billion active mobile social accounts globally. Not only is new data constantly generated, there are new types of data in the landscape - object, file and block. Given that there is ever increasing data and different types of data, this blog is about what Ceph is and how it can help alleviate common enterprise storage concerns. Ceph is open source. It can help avoid vendor lock-in. It is designed from scratch with no single point of failure and high availability. By design when any hardware component fails, the storage cluster is still accessible and functional. It is designed for performance scaling with capacity. It is one of few storage technologies to offer “unified storage” i.e. block, file and object storage. Let’s delve a little into the three kinds of storage: Block storage: emulates a physical drive. Your /dev/sda is the classic example of block storage. This data is split into evenly sized blocks, each with its own address. File storage: Ceph provides a traditional file system interface with POSIX semantics enabling users to use a hierarchy in organizing files and folders. It’s used as a backend for the OpenStack Manila project, offering a shared file system. The traditional equivalents are NFS and CIFS. Object storage: Object data consists of metadata and a globally unique identifier. Objects are stored in a flat namespace. Objects allow for ease of object expansion. If you have images on Facebook or files in Dropbox, you’ve used object storage. Let’s go over Ceph’s main architecture components. Starting from the bottom: Reliable Autonomous Distributed Object Store (RADOS): This is the backbone of the cluster. Librados: This library enables applications to access the object store. This library is even available in several languages to facilitate custom application integration. Application libraries Rados Gateway (RGW)- This is the Amazon Simple Storage (S3) / OpenStack Object Storage (Swift) interface with object versioning and multi-site federation and replication. Rados Block Device (Rbd) - This allows Block Device access to the RADOS. It allows for snapshotting, copy on write and multi-site replication for disaster recovery. CephFS - This is the POSIX-compliant distributed file system. Other - A custom application can be written that can interface directly with the Librados API layer to avoid software overhead. RADOS stands for “Reliable Autonomous Distributed Object Store”. This is a self-managing/ self-healing layer composed mainly of the two types of entities, OSDs and MONs. 
OSDs (or Object Storage Daemons) are the data storage elements in the RADOS layer. This tuple of a disk, file-system and object storage software daemon is referred to as the OSD. Ceph is designed for an infinite number of OSDs and you are free to study reference architectures on what has been done in production. OSDs serve stored data to clients. They peer intelligently for replication and recovery without the need of a central conductor. You can easily add or remove OSDs and the changes will ripple through the cluster to reach a healthy state by peering and replication. A best practice recommendation to storage administrators is to estimate the impact to the cluster when a change is made by ways of adding or removing OSDs. A monitor or MON node is responsible for helping reach a consensus in distributed decision making using the Paxos protocol. In Ceph, consistency is favored over availability. A majority of the configured monitors need to be available for the cluster to be functional. For example, if there are two monitors and one fails, only 50% of the monitors are available so the cluster would not function. But if there are three monitors, the cluster would survive one node’s failure and still be fully functional. Red Hat supports a minimum of three monitor nodes. A typical cluster would have a small odd number of monitors. If you’ve stayed awake this far into this blog, you’re probably wondering, “Where do objects actually live?” In Ceph, everything is natively stored as an object in the RADOS cluster. Everything is chopped up into little chunks. This chunk size can be set but it has a default of four megabytes. After being chopped up, the resulting objects are saved in the RADOS cluster. Retrieval is done in parallel and assembled together at the client. The cluster itself is sliced up into smaller units called “placement groups” or “PGs”. Maintenance in the cluster is done at the placement group level and not at the object level. A “pool” is a logical grouping of placement groups. The degree of replication can be set at the pool level. It can be even different for every pool. So an object lives in a pool and it is associated with one placement group. Depending on the properties of the pool, the placement group is associated with the number of OSDs as the replication count. eg. if for a replication count of three, each placement group with be associated with three OSDs. A primary OSD and two secondary OSDs. The primary OSD will serve data and peer with the secondary OSDs for data redundancy. In case the primary OSD goes down, a secondary OSD can be promoted to become the primary to serve data, allowing for high availability. When using multiple data pools for storing objects, both the number of PGs per pool and the number of PGs per OSD need to be balanced out. That number should provide a reasonably low variance per OSD for optimum performance. Having a link () to this calculator handy is a great idea when deciding on these numbers. Ceph’s fundamental data placement algorithm is called CRUSH. CRUSH stands for “Controlled Replication Under [Scalable Hashing].” Its salient features include: The ability to do data distribution in a reasonable time. It is pseudo random in nature. It’s deterministic in nature (i.e. functions called with the exact same arguments yield the same results on any component of the cluster). The client and the OSD is capable of calculating the exact location of any object. CRUSH is implemented with the help of a crush map. 
The main map contains a list of all available physical storage devices, information about the hierarchy of the hardware (OSD, host, chassis, etc.) and rules that map PGs to OSDs. The native interface to the storage cluster is via the Librados layer. The library has wrappers in several languages eg. ruby/erlang/php/c/c++/python to ease interfacing any application written in those languages. The three main application offerings are “radosgw”, “rbd” and “cephfs client”. Radosgw offers a web-like services gateway to offer an AWS S3 compatible interface and a Swift interface. If you have an application that communicates via an S3 interface to AWS, it enables switching to use Ceph just a redirection to the radosgw. Rados Block Device (Rbd) is perhaps Ceph’s most popular use case. It enables block-level access to a Ceph object store. It has support for snapshots and clones making it a wonderful replacement for expensive SANs. The librdb library is tasked with translating block commands(scsi commands) with sectors and length of data requests to object requests. Rbd finds heavy usage in OpenStack as an OpenStack Image Service (Glance) and OpenStack Block Storage (Cinder) back-end. The third type of storage Ceph offers is a POSIX-compliant shared file system called CephFS. Ceph has been designed with performance in mind from the get-go. It offers a feature known as journaling. Fast media, preferably solid state drives, could be dedicated to a journal. All writes are temporarily stored in the journal until the writes are flushed from memory to the backing storage. Then, the journal is marked clean and is ready to be overwritten. This can absorb burst write traffic, accelerate client ACKs, and create longer sequential write IO, which is more efficient for the backing storage. It's important to note that the journal contents are not read unless there's an unclean shutdown of the OSD process in which case the journal data is read back into memory and processed to backing storage. Thus the data flow is not journal to backing storage, but memory to backing storage. This is a common misconception. They increase the write throughput seen by the client significantly. Another optional feature is client side caching for librbd users. The latest feature the industry has its eyes set on for an upcoming release is called BlueStore (“Bl” as in Block and “ue” as pronounced in “New”). This is a new architecture to help optimize and further reduce overhead in the existing Ceph architecture. The journey of enterprise storage has come a really long way. The storage landscape has choices varying from traditional storage subsystems to open source solutions like Ceph and Gluster. Refraining from a personal preference for “open source first”, a key factor in making a choice should be the required workload characterization among others. With careful analysis of requirements and a systematic provisioning for functionality and performance, it’s hard to see how one could go wrong with Ceph as a choice. Ruchika Kharwar is a cloud success architect at Red Hat. She spends her time working with customers helping them take their proof of concept to production by enabling integration of various features and components with the ultimate goal of getting them the infrastructure they want.
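To make the earlier point about S3 compatibility concrete -- an application written against Amazon S3 can often be repointed at the RADOS Gateway just by changing its endpoint -- a hedged boto3 sketch might look like the following; the endpoint URL, port and credentials are placeholders, not values from this article:

import boto3

# Point a standard S3 client at a Ceph RADOS Gateway instead of AWS.
# Endpoint and keys below are placeholders for illustration only.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:7480",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"stored in Ceph")
print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())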
https://www.redhat.com/it/blog/ceph-alot
CC-MAIN-2018-26
refinedweb
1,721
56.76
Welcome to the seventh lesson ‘Inheritance’ of the Core Java Tutorial, which is a part of the Java Certification Training Course. In this lesson, we will talk about Inheritance, which is a core pillar of the object-oriented programming systems. At the end of this lesson on Inheritance, you should be able to: Define Inheritance Understand the use of Polymorphism Determine when casting is necessary Discuss “super” and “this” keywords Use abstract classes and interfaces Inheritance is a mechanism by which one class acquires all the properties and behaviors of the parent class. It can be used for method overriding and code reusability. Inheritance, in the true sense, is very similar to what we face in real life, We inherit whatever our ancestors have and that's the same concept. So rather than reinventing the wheel, inheritance helps us to reduce costs and effort of testing in application development. It is because when you create a class with implementation, you wouldn't want to make major changes to that class when your customers come to you for change requests in the court because you may break something that's already working. And there is a cost of retesting the entire class. So it's much better to use an extended class, or a subclass, or a child class which inherits from the base class. We can provide that extended feature or new functionality or change request in the inherited class. The syntax of Java Inheritance is as shown: class Subclass-name extends Superclass-name { //methods and fields } We have the name of the class, the extents keyword which provides inheritance in Java, and the name of the superclass. So the child class is called the subclass and the base class is called a superclass. Using inheritance feature in Java, you can create new classes that are built by reusing code already existing in the base classes. Polymorphism is a feature of an object that takes on different forms depending on the object on top of it. It is of two types: Method Overriding Method Overloading Let's take an example: public interface Veg{ } public class Animal{ } public class Cow extends Animal implements Veg{ } In the above example, Animal is the base class and cow is the child class. So cow is the specialization while Animal is a generalization. So we can say that the cow class is polymorphic in nature which means it takes on multiple forms. For example, the cow is also a type of animal since it extends from the base class animal and the cow is also vegetarian in its behavior since it implements the vegetarian interface. You can produce a new class based on an old one and also modify the existing behavior of the parent class. If a new method is defined in a subclass with name, return type, and argument list that matches the method in the parent class, then the method is said to overriding the old method. class House { void shelter(){ System.out.println(“We live in rooms"); } } class Villa extends House { public static void main(String args[ ]) { Villa obj = new Villa(); obj.shelter(); } } In the above example, the base class is class house. It has a method called void shelter with a simple print message ‘we live in rooms.’ Here we have an extended class called villa that extends the House class. This has the entry point of our program and then we are creating an object of this class. Then we have obj.shelter() command, which executes the shelter method that is inherited from the base class house. 
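The House/Villa listing above shows a subclass calling a method it inherits, but it does not yet redefine that method. As a small illustrative sketch of overriding as defined earlier (same name, return type and argument list redefined in the subclass; the message text here is made up for the example):

class House {
    void shelter() {
        System.out.println("We live in rooms");
    }
}

class Villa extends House {
    @Override
    void shelter() {   // same name, return type and arguments: this overrides the parent method
        System.out.println("We live in luxury rooms");
    }

    public static void main(String[] args) {
        House h = new Villa();   // reference of parent type, object of child type
        h.shelter();             // prints "We live in luxury rooms" -- the overriding version runs
    }
}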
Method overloading is a feature that allows a class to have two or more methods that have the same name but different arguments. So these three methods are expected to be in the same class in method overriding. The methods are expected to be in the child and the base class. For example, we create a class called taxcalculator and have a method in it called calculatetax(). Now we are writing the implementation of calculatetax() inside the base class since we have complete clarity over what that method should be doing. But post the budget, let's say the government decides of different rates for GST etc., for tax calculation. Rather than modifying the base method, we could have an extended calculator class called GST which extends from the base class. The extended class then provides a method called calculatetax(). So we don't change the name of the method. Now we have a method with the same name in the base and the child class but with different implementations. So, that is an example of method overriding. In method overloading, you have exactly the same method with the same name in the same class but with different signatures. By signatures, we mean that the argument list can be different. So either the number of parameters that you passed to each function can be different, or the data types of those parameters can be different, or the sequence of those data types or parameters passed can be different. Consider an example shown below: public void add(int a,int b) { } public void add(int a,int b, int c) { } public void add(float a,float b) { } As can be seen, all the three methods are ‘add’. In the first method, you can see it takes two integers, the second method takes three integers, while the third method takes two floats. Now, method overloading helps in encapsulation because it maintains the same interface name. Clients, or other classes that are using your API or making use of the class objects you've created, will always call your method named add. Based on the parameters that they pass, they would decide which add method gets called. Now, this helps when you already have declared an add method and there are some existing APIs or classes that are using your add method. Going forward, to make it simple, you do not want to give added function names so you can create another add method with different parameters. This can be done so that the newer clients, who are connecting to your code base, can call your new function whereas older clients can continue to call the older add method. So the name of the interface does not change. All clients continue to call add methods, but depending on the number of parameters or the data types they pass, they can decide whether they call the newer add method or the older one. Object type is the type of object that we create. The reference type is used to refer to an object. Let us understand them with an example: class Animal { } class Lion extends Animal { } public class Test { public static void main() { Animal l = new Lion(); // reference type is Animal and object is Lion } } In the above example, we have a class Animal, which is a base class. Then we have class Lion, which is a child class that inherits and extends from the Animal class. In the main, we create an object of the animal class and making it point to the child class lion. So this is a reference type. Here, ‘l’ is an object of the base class, which is pointing to a type of lion. 
So here you see that ‘Animal l’ is actually a reference type, and the object is Lion because this particular class object is referencing an object of type Lion. Reference data types are created using defined constructors of classes. They are declared to be of a specific type that cannot be changed. They are used to access objects. They have a default value of null. So if we just declare animal as an object, it'll be null, unless it points to the animal or to the specialized class of Lion. Example: When you pass objects using references to their parent class and you want to know what objects you have, you can use instanceof operator. Let us now understand instanceof operator using an example: public class Employee extends Object public class Manager extends Employee public class Engineer extends Employee Here, we have a class Employee, which extends from the base java type object. Then we have class Manager, which extends from employees. Then we have engineer class which further goes and extends from the Manager. So this is a multi-level inheritance where manager is a type of employee. There are some properties which are specific to the Manager. Engineer is also the manager, so he has the same properties that are defined by the manager, but some additional responsibilities. And, of course, he is also an employee. Now, given the context of that example, if you receive an object using a reference of type “Employee,” it might refer to a “manager” or an “engineer.” You can test it using the instanceof operator as shown: public void doSomething (Employee e) { if (e instanceof Manager) { //Process a Manager } else if (e instanceof Engineer) { //Process a Engineer } else { //Process any other type of Employee } So here is a function named “doSomething” and we use the instanceof operator to check if he is an instance of manager (process as per the manager functions) else if we can check if the object passed to this function is of base type Employee. Now we need to check whether that is a manager object or an Engineer object. If it is a Manager object, instanceof will return true. If it is an engineer object we can check with the instanceof operator to specify which type of object we're receiving and this is very important in our inheritance scenario. Casting is used to get the full access to the object that has been determined using the instanceof operator. For example, if you have received the information of a parent class and determined that the object is a specific subclass using instanceof operator and you require the full access to that object, you can use casting. Consider the code given below: public void doSomething (Employee e) { if (e instanceof Manager) { Manager m = (manager) e; System.out.println(“This is the manager of ” + m.getDepartment()); } //rest of operation } Here we're actually passing a reference of type employee. So the first line denotes a reference type which is not an object. Now we need to check whether ‘e’ is holding a reference to the type manager or to type engineer. If the mentioned if condition returns true, (indicating that this reference type ‘e’ that was passed to this function is holding a reference of object type manager), we will need to explicitly typecast this particular object to manager, so that we can store it as a Manager object. So we create the object of type Employee and make it point to manager. 
Now, this object, being accessed through a reference of type Employee, will only be able to reach the Employee portion of the data that is in memory for the object. The moment you make the reference point to a Manager, there is additional data and there are additional properties associated with the object; but since the variable is still a reference of the base type Employee, it cannot reach the content that is specific to Manager. It can only access the content that belongs to the Employee part of the object. Once you do a typecast and explicitly say that this object is to be treated as a Manager, it gets access to the properties that are specific to Manager. This is why typecasting is essential in inheritance: the parameter is essentially a reference of type Employee, and to extract the Manager-specific data out of it we do an explicit cast to Manager and store the result in a Manager variable. What happens if we don't perform the cast? If you skip the cast and try to call a Manager-specific method through the Employee reference, the compiler simply gives you an error. The 'super' and 'this' keywords are also very important when working with inheritance. The super keyword is a reference variable used to refer to the immediate parent class object. For example, suppose the Employee class is the base class, the Manager class inherits from it, and the parent declares a variable called x. To access that x from the child, we can simply write super.x; the super keyword gives us access to the context defined in the parent class. It can be used to refer to the immediate parent class's instance variables, and it can also be used to invoke the immediate parent class's methods and constructors. Let's take an example of the super keyword. The code is shown below:

class Animal {
    String color="black";
}
class Cat extends Animal{
    String color="white";
    void printColor(){
        System.out.println(color); //prints color of Cat class
        System.out.println(super.color); //prints color of Animal class
    }
}
class Super1{
    public static void main(String args[]){
        Cat c=new Cat();
        c.printColor();
    }
}

Here we have a class Animal with a String variable 'color'. Cat is the specialized child class that extends Animal, and it declares one additional member of its own: String color = "white"; In the printColor method we print the color. If you observe, both classes have data members with the same name, which conflicts. So when we write System.out.println(color); it prints the color defined in the Cat class, but when we write System.out.println(super.color); it prints the color defined in the base class Animal. So when we create an object of the Cat class and print the colors, we get white and then black. The this keyword is a reference variable that refers to the current object, the one on which the method was called. It can be used to refer to the current class's instance variables, and it can also be used to invoke the current class's methods and constructors. Consider the code given below:

class A2 {
    void m() {
        System.out.println(this); //prints same reference ID
    }
    public static void main(String args[]) {
        A2 obj=new A2();
        System.out.println(obj); //prints the reference ID
        obj.m();
    }
}

Here we have a class A2 whose method m() simply does System.out.println(this); it prints the 'this' reference, and this points to the instance on which m() was called.
In the main function, we create an object of the A2 class and simply print that object, which gives us the object's reference ID. Then we call obj.m(). It is the same object on which m() is invoked, so the this reference inside m() points to the object that called the method, which is nothing but obj; hence the same reference ID is printed again. Let's look at an example that uses both the super and this keywords.

class Person {
    String name;
    Person(String name) {
        this.name=name;
    }
}
class Emp extends Person {
    float salary;
    Emp(String name,float salary) {
        super(name); //reusing parent constructor
        this.salary=salary;
    }
    void display(){System.out.println(name+" "+salary); }
}
class Super{
    public static void main(String[] args) {
        Emp e1=new Emp("Vikram",45000f);
        e1.display();
    }
}

We have a class Person with a String variable name and a constructor (with the same name as the class) that takes a parameter and assigns it to the data member. Next we have class Emp, which extends the Person class and adds a float variable salary. Its constructor takes two parameters, name and salary, and passes the name parameter up to the base class: the moment we write super(name); the base class constructor is called and the value is assigned there. This matters because the name field does not actually belong to the Emp class; it belongs to the Person part of the object and is available inside an Emp object only through inheritance. When we create an Emp object, both name and salary are passed to the Emp constructor, and the name is then forwarded to the base class constructor using the super keyword - this is exactly the benefit of super. The statement this.salary = salary; is also a coding best practice: this.salary refers to the class member variable, which keeps it clearly separated from the constructor parameter of the same name and prevents you from mixing up parameters with the fields you have defined. An abstract class is a class that cannot be instantiated. It is a placeholder class which captures the core business logic identified at the initial stage of system design. We generally declare an abstract class when, at the start of the design, we do not yet have clarity on the concrete class implementations or the methods the application will need; those details are supplied later by extended classes. An abstract class is therefore almost always created as a base class. In this design, the abstract class does not supply concrete implementations: it declares abstract methods, which are then overridden in the child classes. Such a class can never be instantiated; it always has to be extended. Let's take an example. Say we are building an application to automate an electronics retail store. The store can sell many kinds of products, and during the initial system design we are never sure exactly which products it will sell, so we model the system around a generic Product class. But we definitely cannot walk over to a store and expect them to create an object of a generic product; we would always expect them to create an object of a specialized class. Hence, the moment we try to create an object of the Product class, we get an error - which maps to real life.
We cannot walk into a store and ask somebody to generate a bill or an invoice for a generic product. That is why, the moment we have clarity down the line that the store is going to sell a DVD player, we create a class DVD that inherits from the base Product class, and then we create objects of the DVD class. The base class is a skeletal class - just a placeholder or foundation class. An abstract class is like a foundation in the construction industry: we can't live in a foundation by itself, but it plays a crucial role because it holds the entire class hierarchy together and acts as the anchor class. Abstraction is another feature of the OOPS programming system and one of its pillars. It is the process of hiding implementation details and showing only the functionality to the user. For example, at an ATM we can withdraw cash, but we cannot see how the withdrawal happens or what steps are involved; the implementation of the withdrawal is hidden, and what we see is only the interface - the options that the ATM provides. The Java programming language enables a class designer to specify that a superclass declares a method without supplying an implementation. Such a method is called an abstract method. What is an abstract method in our example? Look at the Product class we are creating for this shopping store: it will not have definite method bodies inside it. Take a method called generateInvoice. We can't walk into a store and ask the executive to generate an invoice for a generic product - a generic product would not even have a price. Only when we have clarity that the store is going to sell a DVD player can we write the method inside the DVD class that generates the invoice, multiplying the price by the quantity to produce the bill. So when you declare the abstract class, since it is a placeholder class, it only has method declarations without bodies. The abstract class defines the business rules that govern the execution of the application - the rule here being that an invoice must be generated - but how the invoice is generated can only be defined in the child class DVD, because only there do we have clarity about the price and the quantity of the products the user has ordered. So abstraction is the process of hiding implementation and showing only what capabilities your class has, not how it implements them. A sample program depicting the use of an abstract class is shown below.

abstract class Shape {
    abstract void draw();
}
class Triangle extends Shape{
    void draw(){System.out.println("drawing triangle");}
}
class Rectangle extends Shape{
    void draw(){System.out.println("drawing Rectangle");}
    //In real scenario, a method is called by programmer or user
}
class TestAbstraction1{
    public static void main(String args[]){
        Shape s = new Rectangle(); //In real scenario, an object is provided through method; e.g., getShape() method
        s.draw();
    }
}

So when we look at an abstract class, it has abstract methods that carry only a declaration. You only know what the Product class should do - we know that it should generate an invoice - but we do not know how it should generate that invoice, because we do not even know yet whether the store will sell a DVD at all. A minimal sketch of this Product/DVD design follows below.
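Here is a small sketch of the store example described above, assuming a Product base class with an abstract generateInvoice method and a DVD child class. The field names, the sample price, and the idea of passing the price through the constructor are illustrative choices and not code from the original tutorial.

abstract class Product {
    // Business rule only: every product must be able to produce an invoice amount.
    abstract double generateInvoice(int quantity);
}

class DVD extends Product {
    private final double price;

    DVD(double price) {
        this.price = price;
    }

    // Only here do we know enough to implement the rule: price times quantity.
    @Override
    double generateInvoice(int quantity) {
        return price * quantity;
    }
}

class StoreDemo {
    public static void main(String[] args) {
        // Product p = new Product();  // compile error: cannot instantiate an abstract class
        Product p = new DVD(499.0);    // hypothetical price
        System.out.println(p.generateInvoice(2)); // prints 998.0
    }
}

Trying to instantiate Product directly fails to compile, which mirrors the "you cannot bill a generic product" rule in the text.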
An interface is a blueprint of a class; it holds static constants and abstract methods, and its methods are abstract by default. It is a mechanism to achieve abstraction and a form of multiple inheritance: a class can implement more than one interface, and an interface can also extend other interfaces. Now let's go back to the electronics store we were building. The store came to us and said they needed a software solution. As developers we created the Product class because we knew they were going to sell products, but we did not know what type of products they were going to sell. The store also told us that they needed an application which was scalable and would work not only today but five years down the line as well - today it handles generic products, but later it should also handle DVDs, MP3 players, TVs, and so on. That is the reason we took the design decision to create an abstract class called Product, where whatever we declare is just a generic business rule without a body. Once the application went live, the shopping store management reported that the software did not factor tax computation into billing or generating invoices: a new government rule says that certain products should be taxed. Now, how do we implement it? We cannot simply go and put a new abstract method in the base class - it is the placeholder class that everything else already extends, and changing it would force every existing child class to change and break the entire class hierarchy. So this extended business rule of tax computation, which came much later in the development cycle, is incorporated into an interface. Whichever class needs to pay taxes - DVD or TV, say - implements that interface, picks up the extended business rule, and overrides the method with its own implementation. Interfaces therefore carry only abstract business rules: they do not have methods with bodies, because they define a business rule as a contract that certain classes need to implement, override, and provide their own implementation for. So a class can extend another class, an interface can extend an interface, and a class can implement an interface. Consider the program shown below:

interface sample {
    void print();
}
class A implements sample{
    public void print(){System.out.println("Interface example");}
    public static void main(String args[]){
        A obj = new A();
        obj.print();
    }
}

A sketch of the tax-computation contract described above follows below.
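The late-arriving tax rule discussed above might be expressed as an interface along the following lines. The interface name Taxable, the method name calculateTax, and the rates are assumptions made for illustration; the tutorial itself does not show this code.

interface Taxable {
    // The contract: any taxable product must say how much tax it owes.
    double calculateTax(double invoiceAmount);
}

class TV implements Taxable {
    @Override
    public double calculateTax(double invoiceAmount) {
        return invoiceAmount * 0.12; // hypothetical tax rate for TVs
    }
}

class DVDPlayer implements Taxable {
    @Override
    public double calculateTax(double invoiceAmount) {
        return invoiceAmount * 0.18; // hypothetical tax rate for DVD players
    }
}

Classes that do not need the new rule are left untouched, which is why a requirement that arrives late in the development cycle does not break the existing Product hierarchy.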
Java 8 also added the ability to define concrete methods - static and default methods - inside interfaces. Default methods were introduced largely for backward compatibility, so that existing interfaces could evolve and take advantage of the lambda expression capability of Java 8. A static method in an interface looks much like a static method in a normal class, as shown:

public interface Test {
    ...
    public static boolean isNull(Object obj) {
        return obj == null;
    }
    ...
}

Static methods are allowed inside an interface, and the reason for adding them is to keep related utility methods in one place so that they can be used easily by subclasses, default methods, sub-interfaces, or users of the interface. A default method looks like a typical class method, but it is defined inside an interface and carries the default specifier. Here is an example of a default method: it does a simple iteration over a set of values, and the default keyword is what allows it to be declared inside an interface.

default boolean removeIf(Predicate<? super E> filter) {
    Objects.requireNonNull(filter);
    boolean removed = false;
    final Iterator<E> each = iterator();
    while (each.hasNext()) {
        if (filter.test(each.next())) {
            each.remove();
            removed = true;
        }
    }
    return removed;
}

Default methods can access everything that is defined within the interface or inherited by it, including:
- the this reference
- all abstract methods defined in this or super interfaces
- all default or static methods defined in this or super interfaces
- all static fields defined in this or super interfaces
The key takeaways from this lesson are: Inheritance is a mechanism by which one object acquires the properties and behaviors of a parent object. Polymorphism is the ability of a reference to take different forms depending on the actual object it refers to. Casting is used to get full access to an object whose concrete type has been determined using the instanceof operator. The super keyword is a reference variable used to refer to the immediate parent class object, and this is a reference variable that refers to the current object. Any class with one or more abstract methods must be declared abstract, and an interface in Java is a mechanism to achieve abstraction. With this, we come to an end of this lesson on Inheritance. The next lesson focuses on Exception Handling.
https://www.simplilearn.com/java-inheritance-tutorial
CC-MAIN-2021-17
refinedweb
4,544
61.97
Java String replace methods are used to replace part of the string with some other string. These string replace methods are sometimes very useful; for example replacing all occurrence of “colour” to “color” in a file. Table of Contents Java String replace Java String class has four methods to replace a substring. Two of these methods accept regular expressions to match and replace part of the string. public String replace(CharSequence target, CharSequence replacement): This method replaces each substring of this string with the replacement string and return it. Note that replacement happens from start of the string towards end of the string. This behaviour can be easily confirmed by below code snippet. String str1 = "aaaaa"; str1 = str1.replace("aa","x"); System.out.println(str1); //xxa public String replace(char oldChar, char newChar): This method is used to replace all occurrences of oldChar character to newChar character. public String replaceAll(String regex, String replacement): This is a very useful method because we can pass regex to match and replace with the replacement string. public String replaceFirst(String regex, String replacement): This string replacement method is similar to replaceAllexcept that it replaces only the first occurrence of the matched regex with the replacement string. Let’s look into java string replace methods with example code. Java String replace character example One of the popular use case for character replacement is to change a delimiter in a string. For example, below code snippet shows how to change pipe delimiter to comma in the given string. package com.journaldev.string; public class JavaStringReplaceChar { public static void main(String[] args) { String str = "Android|java|python|swift"; str = str.replace('|', ','); System.out.println(str); } } Java String replace() example Let’s look at java string replace method example to replace target string with another string. I will take user input from java scanner class for source, target and replacement strings. package com.journaldev.string; import java.util.Scanner; public class JavaStringReplace { public static void main(String[] args) { Scanner sc = new Scanner(System.in); System.out.println("Enter Source Term:"); String source = sc.nextLine(); System.out.println("Enter Search Term:"); String search = sc.nextLine(); System.out.println("Enter Replace Term:"); String replace = sc.nextLine(); String result = source.replace(search, replace); System.out.println("Result = " + result); sc.close(); } } Below image illustrates the output of one of the execution of above program. Java String replaceAll example If you notice above program output, target string should be an exact match for replacement. Sometimes it’s not possible because the input string may be different because of case. In this scenario we can use replaceAll method and pass regular expression for case insensitive replacement. Let’s look at a simple program where we will match and replace string with case insensitivity. 
package com.journaldev.string; import java.util.Scanner; public class JavaStringReplaceAll { public static void main(String[] args) { Scanner sc = new Scanner(System.in); System.out.println("Enter Source Term:"); String source = sc.nextLine(); System.out.println("Enter Search Term:"); String search = sc.nextLine(); search = "(?i)"+search; System.out.println("Enter Replace Term:"); String replace = sc.nextLine(); String result = source.replaceAll(search, replace); System.out.println("Result = "+result); sc.close(); } } Did you notice the prefix (?i) to the search term? This is to pass regex to match strings with case insensitive way. Below image shows the output where both “Android” and “android” terms are getting replaced with “Java” because we used replaceAll method. Java String replaceFirst example Java String replaceFirst is used to replace only the first matched regex string with the replacement string. Let’s look at a simple example of String replaceFirst method. package com.journaldev.string; public class JavaStringReplaceFirst { public static void main(String[] args) { String str = "Hello JournalDev Users"; str = str.replaceFirst("Hello", "Welcome"); System.out.println(str); String str1 = "HELLO Java String Tutorial"; str1 = str1.replaceFirst("(?i)"+"hello", "Welcome to"); System.out.println(str1); } } That’s all for java String replace methods with example code. Reference: String API Doc
https://www.journaldev.com/17988/java-string-replace
CC-MAIN-2021-17
refinedweb
663
51.44
One more step along the Wayland Yesterday’s lunchtime hacking was all about splitting the project into multiple files and getting it into git and onto Github – note that the mere fact of it being publically browsable does not imply that it will run, build, walk, make tea, perform any other useful function, or even forbear from exploding inside your computer and rendering the SSD to molten slag. Nor that I’m not still ashamed of it. It just keeps me slightly more honest. Today I implemented enough of pack-message to be able to recreate the initial client→compositor message that we observed weston-info send last week. Still taking extraordinary liberties with signed vs unsigned longs, and plase note that all this code will work only on little-endian machines (there are any big-endian machines left?). Lessons, puzzles Leiningen does not need you to list the source files in your repository individually: it finds them magically. I believed otherwise for a while, but it turned out (slightly embarrassingly) that I had a parenthesis i the wrong place. My working hypothesis is that it assumes there is one namespace for each file, and any reference to a namespace it doesn’t know about it can be satisfied by loading a file with that name. If I type (in-ns 'psadan.core) at the repl and that ns does not include a (:refer-clojure) form, I can’t use the symbols in clojure.core at the repl. I have not observed a similar issue wrt uses of clojure.core/foo in core.clj itself, just at the repl. atoms! An atom is dead simple, really – conceptually at least, if not also under the hood: it’s a wrapper for an object that lets you look inside with deref and lets you change what’s inside with swap!. For each connection we use an atom holding a mapping from object ids to the corresponding objects, which starts out holding the singleton object for wl_display and then needs to be updated each time we generate an object locally and each time we learn of a new object from the peer. (defn open-connection [name] (let [s (cx.ath.matthew.unix.UnixSocket. name) in (. s getInputStream) out (. s getOutputStream) wl-display (global-object-factory) ] {:socket s :input in :output out :display wl-display :objects (atom (assoc {} 1 wl-display)) })) (defn remember-object [conn id object] ;; (swap r fn args...) gets the current value of the atom inside r, ;; which for the sake of argument we shall call oldval, then sets the atom ;; to the result of calling (fn oldval args...) (swap! (:objects conn) assoc id object) object) (defn get-object [conn id] ;; @foo is another way to write (deref foo) (let [o (get @(:objects conn) id)] o)) I have probably not chosen the fastest possible way of building up the messages I plan to send, in terms of fiddling around sticking vectors of bytes together. Will worry about that later if it turns out to be a bottleneck (but suggestions are welcome). There was not a lot of Wayland learning this time. In the next round we shall be sending it the messages we have so lovingly composed from whole cloth and see if it reacts the same way as it did when the same bytes were sent from weston-info Syndicated 2013-03-12 13:57:03 from diary at Telent Netowrks
http://www.advogato.org/person/dan/diary/167.html
CC-MAIN-2016-30
refinedweb
571
57.3
Hi, we're hooking up TBB and are using it in various places, and that's all well and good. I'm trying to protect against future mem leaks and mem debugging by ensuring anything that shows up on the crtdbg radar is squashed now before some poor soul has to wade through a lot of false-positive cruft to get to a real, hard-to-track bug. The part in question I've isolated in a unit test that uses TBB code. It reports a leak thusly: void RunTest(...){ #ifdef _DEBUG _CrtMemState before, after, difference; #endif _CrtMemCheckpoint(&before); RESULT result = _pTest->Run(_StatusMessage, _SizeofStatusMessage); // uses TBB at some point _CrtMemCheckpoint(&after); if (_CrtMemDifference(&difference, &before, &after)) { if (result != FAILURE) { result = LEAKS; #ifdef _DEBUG sprintf_s(_StatusMessage, _SizeofStatusMessage, "%u bytes in %u normal blocks, %u bytes in %u CRT blocks", difference.lSizes[_NORMAL_BLOCK], difference.lCounts[_NORMAL_BLOCK], difference.lSizes[_CRT_BLOCK], difference.lCounts[_CRT_BLOCK]); #endif } _CrtMemDumpStatistics(&difference); } return result;} To address these leaks, I first tracked and freed TBB allocated threads by doing this:1. Create an object that handles the new/delete of task_scheduler_init.2. In ctor, new task_scheduler_init3. walk through all process threads and take a snap4. run a noop parallel_for so tbb threads get created5. walk through all process threads and take another snap6. diff the two snaps and record the threads that have changed On destruction of the overall object I delete the task_scheduler_init and wait on these recorded threads to shut down. Then I created "RecycleScheduler" which destroys and recreates this object. Calling this explicitly in my unit test solved my leaks most of the time, but occasionally now I get a mem diff that has negative values as the number of bytes leaking - so it seems corrupt. This 100% happens in only the unit test that run TBB code and commenting out the one line to parallel_for always protects against any leak or corruption, so I feel I've weeded out any bugs or race conditions I have insight into. Running through the list of tests is single-threaded, and I ensure the app comes to all but a halt between tests. There are no tasks being issued directly, currently only calls through parallel_for which I expect is completely done with all tasks by the time it returns. Is there a rock-solid way to "reset" or turn off TBB and its memory usage at a known point for such test/mem debugging purposes and wait for its completion other than to never have instantiated it in the first place?Please advise, thanks! TBB usage reports false-positive memory leaks when using crtdbg.h For more complete information about compiler optimizations, see our Optimization Notice.
https://software.intel.com/en-us/forums/intel-threading-building-blocks/topic/286530
CC-MAIN-2017-17
refinedweb
448
60.75
23 February 2009 17:13 [Source: ICIS news] LONDON (ICIS news)--Dow Chemical has announced a €120/tonne ($185/tonne) hike on all its polyethylene (PE) resins in Europe, the Middle East, Africa and India from 1 March, the company said in a statement on Monday. The statement gave no further details but the move came hot on the heels of the new March ethylene contract settlement, which was agreed at an increase of €85/tonne on Friday 20 February. The contract was not yet fully established. PE production was cut back in line with lower cracker operating rates in ?xml:namespace> It was too early to gauge buyers’ reactions but several had already complained of the poor state of their own markets. Food packaging was still buoyant but industrial applications were poor. “My activity is down by 40% compared to last year,” said one large drum manufacturer. “If they carry on like this they will kill us.” PE producers in (
http://www.icis.com/Articles/2009/02/23/9194498/dow-targets-120tonne-hike-for-march-pe.html
CC-MAIN-2015-11
refinedweb
161
62.48
You can subscribe to this list here. Showing 6 results of 6 Christian Hammond wrote: > > I think the SourceForge staff will remove it if you send a support Yeah, they will. Whenever I had issues with the CVS repository for Netdude (netdude.sourceforge.net -- my other little toy :) they were *very* quick and responsive. -- Christian. ________________________________________________________________________ On 27-Sep 02:46, Till Adam wrote: > # Thus spake Christian Kreibich (kreibich@...): >=20 > > In the worst case, we could just use .tdb for a theme db, .cdb for a > > cursor db, .bdb for a background db. >=20 >. Why not just use the full namespace, like: mytheme.db.theme, mycursor.db.cursors, etc. or mytheme.theme.db, mycursors.cursor.db. There shouldn't be any problems keeping track of files that e17 knows about and uses. I think this would even help in keeping track of what came from where (with many themes). Although, I thought the idea of having a db back-end for config data was trying to "fix" the problem of bundles of files arriving for each theme. Shouldn't all of these databases be folded back into one db for each theme? (and have the theme designer turf the favorite parts from other themes.) Anyway, just another $0.02. Thomas # Thus spake Christian Kreibich (kreibich@...): > In the worst case, we could just use .tdb for a theme db, .cdb for a > cursor db, .bdb for a background db.. I'd say add a magic value to edb files (different ones for different file types) and generally use the file extension as an unreliable additional guess. It only needs to reliably detect that a file is of a certain type that a certain set of programs can handle (e.g. gzip), not necessarily what the file will look like *after* having been open/processed/loaded. I think that would be taking it a bit far. Apps such as e17 can chose to use the guesswork for finding the right icon and such. Just my cents, Till -- mailto: till@... Hi, Hendryx wrote: > > I guess this is really a posting to cK but I thought I would post it to > the list since others mind have some info on how to do it and where to > get the info from. > > First of all if I want to read up on the file magic.txt that is used by > EFSD and similar files used by other file-magic programs, does anyone > have any good URLS? Well .... * that file is not actually used anymore. That information is stored in an XML file now, which is much more standardized, plus it's only a fraction of the size of the old db file, plus parsing is heaps faster. * the syntax of magic.txt is a royal pain in the ass. It's the same as in the file that the file(1) command uses, if you still want to read up on that, man 7 magic helps somewhat. There will be GUI tools for editing those entries in the near future, but I'm busy with other things right now. > Next is this: while I know that the .db file format is a binary one I > was wondering if file magic could be added to spot when a *.db file was > an E17 background/icon/cursor.etc In theory we could add a magic value somewhere by hacking edb. Another way that's more expensive to look up but simpler to add would be to just define a policy that our dbs need to have keys like "/metainfo/contenttype" or something that lets us query the type of db. In the worst case, we could just use .tdb for a theme db, .cdb for a cursor db, .bdb for a background db. > All of this files first off all match the file magic of the db stuff, > but while they are binary files they also have strings that can be > matched. 
Ie; > > a background .db file always has the string "/type/bg" > cursors seem to always have the string "/cursor/image" > and icons always have the string "/icon/normal" > > each of these string values are atcally the names of an important key > names that seem to always be present in files of that type, and that I > don't think would be present in any files not of that type (well not in > other db files anyhow) > > So usig that infomation shorly some file magic can be added into efsd's > magic data for them, can't it? Yes, basically. Unless we modify the way Edb stores the db, those tests would be more work (and thus slower) than ordinary ones (which just jump to some offset in the file and look for byte patterns). In the above case with /metainfo/contenttype, Efsd would need to open the file and do an actual btree query, which is really slow.. Cheers, -- Christian. ________________________________________________________________________ raster@... wrote: > > > On Wed, 26 Sep 2001 14:50:37 -0500 Kevin Brosius <Cobra@...> babbled > profusely: > > > That sucks! (Pardon my English...) Sorry to hear it. Want some > > recruiting contacts for the US East Coast area? Do you have _any_ > > desire to move back again? > > no way. i'm keeping my arse here in sydney. :) yes it limits me.. but i've coem > to realise my heart is where home is.. and thats back here in sydney. :) > Sydney is great. I was there about three years ago on business for a week, the weekend before Australia Day, and loved it. We had the weekend off, stayed in The Rocks. It was incredible. Can I come work for you? ;-) (Raster Inc...) -- Kevin On Thu, Sep 27, 2001 at 03:53:08AM +0800, Andrew Shugg wrote: > Quoth Hendryx: > > Someone please correct me if I'm wrong but I think I remember that > > imlib2_loader_db has now been incorparated into imlib2_loaders. If this > > is the case then should someone nuke the imlib2_loader_db tree as it has > > no reason to really be there anymore? or does it? > > Unless someone with shell access to the CVS server (ie a VA Linux > person) goes and deletes the directory in the repository, no it can't be > removed. I've been trying to prune it locally with a .cvsignore but > thus far have failed miserably. > > Andrew. I think the SourceForge staff will remove it if you send a support request. They've been talking about giving users the ability to SSH into the CVS server for awhile, but it hasn't happened yet. Christian -- Christian Hammond <> The GNUpdate Project chipx86@... <> DOS 6: Because there aren't enough problems in the world already.
http://sourceforge.net/p/enlightenment/mailman/enlightenment-devel/?viewmonth=200109&viewday=27
CC-MAIN-2015-27
refinedweb
1,099
82.65
Note:I have written a whole series of Visual Studio 2012 features and this post will also be part of same series. You can get the whole list of blogs/articles from the Visual studio 2012 feature series posts there. Following is a link for that.Visual Studio 2012 feature seriesIn earlier version of the asp.net we have to bind controls with data source control like SQL Data source, Entity Data Source, Linq Data Source if we want to bind our server controls declaratively. Some developers prefer to write whole data access logic and then bind the data source with databind method. Model binding is something similar to asp.net mvc binding. Model Binding in ASP.NET 4.5 and Visual Studio 2012: So what we are waiting for ? let’s take one example. First we need to create a model. For the this I have created a model class called ‘Customer’. Following is code for that. using System; using System.Collections.Generic; using System.Linq; using System.Web; namespace WebApplication2 { public class Customer { public int CustomerId { get; set; } public string CustomerName { get; set; } } } Now our model class is ready so it’s time to create the asp.net grid view in html with ItemType property which will directly bind the customer model class to grid view. Following is a HTML code for that. <asp:GridView <Columns> <asp:TemplateField <ItemTemplate> <asp:Label</asp:Label> </ItemTemplate> </asp:TemplateField> <asp:TemplateField <ItemTemplate> <asp:Label</asp:Label> </ItemTemplate> </asp:TemplateField> </Columns> </asp:GridView> Here in the above code we can see that I have two template field column, one for customer id and another for customer name and I have putted the two label control separately for customer Id and Customer Name. Also I have written a select method name which will return a IQueryable of customer model class. Following is a server side code for that. public IQueryable<WebApplication2.Customer> grdCustomer_GetData() { List<Customer> customerList = new List<Customer>(); customerList.Add(new Customer {CustomerId=1,CustomerName="Jalpesh" }); customerList.Add(new Customer { CustomerId = 2, CustomerName = "Vishal" }); return customerList.AsQueryable<Customer>(); } Now that’s it. It’s time to run application. Its working fine like following. Hope you like it. Stay tuned for more. Till then happy programming. .NetGrid v2.8.4 has been released. Improvements when CPU is heavily loaded.net grid performance is very good you can read and download from dapfor. com .NetGrid v2.8.4 has been released. Improvements when CPU is heavily loaded.net grid performance is very good you can read and download from dapfor. com Isn't it mixing of Data source controls? Yes, its similar to that. But have more benefits then this. You can dynamically do all things that are not possible with data source controls
http://www.dotnetjalps.com/2012/07/Model-binding-with-ASP-NET-45-and-Visual-Studio-2012.html
CC-MAIN-2014-42
refinedweb
460
61.53
Printable View I was thinking of the for loop at the end to show the final doors that remain open, and that would be sufficient. Do you want to show the status of the doors, as they change? Little text based display to represent the doors? This is a little tricky, because you have to use the index of the doors array, in an entirely new manner (for you). Post up your code and let's see where you are at. Don't make fun of me because of how off I am please :/Don't make fun of me because of how off I am please :/Code: #include <stdio.h> int main(void) { int i,s,d[100],n; for (i=0;i<100;i++) d[i]=i; i=0; for(s=1;s<100;s++) { for(n=1;i<100;n++) { i=s*n; } i=0; } return (0); } It still looks like you're throwing code at the problem and hoping something will stick. The first thing you need to do, even before opening and shutting various doors, is decide how you are going to represent the doors. How do you tell the difference between an open door and a closed one? Actually you're not that far off at all... based on what you've posted, you mostly need to devise a way to make the door change, which you aren't doing yet... the loops are pretty close though. I don't mean to lecture... but what is happening now is that analysis phase I discussed earlier, except you're doing it with live code... yes it can be done that way but it's not always the smartest way to do it since you can easily "code yourself into a corner". The bigger the job the more likely that becomes. (And yes, I've done it to myself moe than once.) As far as figuring out which doors are doing what I can't seem to get anywhere close to making it work. I was think that you could have some sort of system that says 0 for closed doors and 1 for open doors, at the end print all the doors with a 1. I can't figure out how to implement that though. Ok, here's a hint... the door starts out closed... a student goes through and it's open... a student goes through and it closes... student open... student closed... see a pattern? What happens every second time? What is every second number? #define Open 1 I agree, I thought it added to the clarity of the whole thing. You need to swap your for loops. Put the studentNumbers on the outer for loop, and the door[] logic, inside the nested for loop. You don't need the third for loop, until you're ready to print the final status. I would change your doors array size to 101, and set the value of each element of that array, to 0, instead of i. Start your for loops at 1, and stop at <= 100, instead of just < 100. That will initialize all the doors starting to closed. (what I would call closed, anyway). While it's undoubtedly true, a simple if() statement will handle that door open or door closed action. you want s*n, but you need it as the index for the doors[]. That's the hardest thing about this problem, imo. Ignore the comment about the third for loop. Your code threw me off a bit. doors[index] Your index for the doors array needs to be a multiple of studentNumber * a multiplier, in the inner for loop. You can use just doors[studentNumber * n] = Open or 1 or Closed or 0. Student number is 3: 3 * 1 first door he visits -- n equals 1 3 * 2 second door he visits-n equals 2 3 * 3 third door he visits, etc. 3 * 4 fourth door he visits etc.
http://cboard.cprogramming.com/c-programming/142159-c-programming-assignment-arrays-2-print.html
CC-MAIN-2015-06
refinedweb
660
91.21
Navigating your Kubernetes logs with Aiven Logs are extremely important for understanding the health of your application, and in the event of problems, they help in diagnosing the issue. Methods and tools for capturing, aggregating and searching logs make the diagnosis process simpler. They are even more important with the adoption of microservices and container orchestrators, like Kubernetes, because logs come from many more places and in more formats. Aiven’s solutions architect Aaron Khan takes a closer look. With hundreds or even thousands of Pods creating logs on dozens of Nodes, it’s tedious, if not impossible, to install a log capturing agent on each Pod for each different type of service. One way to solve this problem is to coordinate a Kubernetes deployment of a log agent onto each Node, capture the logs for all the Pods, and export them somewhere. We can achieve this with a Kubernetes abstraction that does not require knowing what is running on each Pod: DaemonSet. Briefly, a DaemonSet allows the scheduling of Pods on some or all Nodes based upon a user defined criteria. Here’s the overall process: - Set up a Kubernetes Cluster - Create Pods to generate logs - Push the logs from each pod in the cluster to an external Elasticsearch cluster. I will utilize the Aiven for Elasticsearch service, because it is intuitive, secure out of the box, and provides a basis for extension (e.g. pushing logs to Kafka initially and then onto Elasticsearch). For more information about how to do that with Aiven for Kafka Connect please check out the Aiven Help article on creating an Elasticsearch sink connector for Aiven for Kafka. Install the dependencies All the code for this tutorial can be found at. The code can be used as described in this tutorial, but if you really get into it, there are also instructions for building and deploying an API into our cluster and setting up a Kafka integration. Let’s start by cloning the repository: git clone cd k8s-logging-demo Make sure you have the following local dependencies installed: Create the Kubernetes cluster To create a Kubernetes cluster with Minikube, enter the following: minikube start You can verify that your cluster is up and running by listing all the Pods in the cluster, like this: kubectl get pods --all-namespaces And you should see something like this: NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-74ff55c5b-mf9dj 1/1 Running 0 14d kube-system etcd-minikube 1/1 Running 0 14d kube-system kube-apiserver-minikube 1/1 Running 0 14d kube-system kube-controller-manager-minikube 1/1 Running 0 14d kube-system kube-proxy-bx4gl 1/1 Running 0 14d kube-system kube-scheduler-minikube 1/1 Running 0 14d kube-system storage-provisioner 1/1 Running 2 14d We will be using a non default namespace so let’s create that now: kubectl create namespace logging Add Pods to the cluster Now that we have a nice little Kubernetes cluster, let’s go ahead and do something with it. We are going to deploy a Pod that generates random logs as well as FluentD to our cluster. FluentD is a data sourcing, aggregating and forwarding client that has hundreds of plugins. It supports lots of sources, transformations and outputs. For example, you could capture Apache logs, pass them to a Grok parser, create a Slack message for any log originating in Canada, and output every log to Kafka. 
To generate logs in our cluster, let’s create a Pod that generates random logs every so often: kubectl create deployment -n logging --image=chentex/random-logger:latest logger We’re going to install FluentD using a pre-built Helm Chart, so before doing that, we have to add the repo and update the dependency. This repo contains the Kubernetes templates that describe all the FluentD components and then tells our chart to update its cache (if there is one) with these new components. helm repo add bitnami helm repo update helm dependency update chart The last part of the equation gets an external store for our logs. To do this, let’s use Aiven for Elasticsearch. Go ahead and create a free account; you’ll get some free credits to play around with. Then, create a new project in which to run Elasticsearch: Click Create a new service and select Elasticsearch. Then select the cloud provider and region of your choice. In the final step we choose the service plan — in this case we will use a Hobbyist plan. It’s a good idea to change the default name to something identifiable at this point, as it cannot be renamed later. After a minute or so your Elasticsearch service will be ready to use. You can view all the connection information in the console by clicking on the service that you created. Take note of the Host, Port, User and Password. You’ll need these to configure the Helm Chart. We are now ready to deploy our Helm Chart: helm install -n logging log-demo chart \ --set elasticsearch.hosts=<ES Host> \ --set elasticsearch.user=<ES User> \ --set elasticsearch.pw=<ES Password> <ES Host> should be the concatenation of the host and port we captured from the Aiven Console. In my case it looks like this:. (Note that the values set here are only a subset of the configurations for FluentD; for the full set, see the chart definition) You can check that things are building correctly by investigating the Pods in the logging namespace kubectl -n logging get pods You should see something like NAME READY STATUS RESTARTS AGE log-demo-fluentd-0 1/1 Running 0 7m32s log-demo-fluentd-wj56t 1/1 Running 0 7m32s logger-56db6f88d9-h8r8d 1/1 Running 0 7m8s If all the Pods aren’t ready yet, just give it a few seconds and check again, they should become ready shortly. View and search the log entries The configuration that has been deployed captures all logs from every Node (it can be configured not to do this) and so if we head over to Kibana we should see that happening. Aiven automatically deploys Kibana alongside Elasticsearch and the connection info can also be found in the console. Once logged into Kibana, go to the dev tools. 
Issue the following query, which looks for any log that originated from the kube-system namespace: GET /_search { "query": { "term": { "kubernetes.namespace_name.keyword": { "value": "kube-system" } } } } The results should look something like: { "_index" : "minikube-2021.05.27", "_type" : "_doc", "_id" : "6dXGrnkBn7BUvoFdGaNm", "_score" : 0.0020360171, "_source" : { "log" : """I0527 17:01:23.041467 1 client.go:360] parsed scheme: "passthrough" """, "stream" : "stderr", "docker" : { "container_id" : "b8a38739fc4a2694995837f2dfe773e011432b73f641b02eb54a7622ba3baffc" }, "kubernetes" : { "container_name" : "kube-apiserver", "namespace_name" : "kube-system", "Pod_name" : "kube-apiserver-minikube", "container_image" : "k8s.gcr.io/kube-apiserver:v1.20.2", "container_image_id" : "docker-pullable://k8s.gcr.io/kube-apiserver@sha256:465ba895d578fbc1c6e299e45689381fd01c54400beba9e8f1d7456077411411", "Pod_id" : "d825fef1-15d4-4202-818a-5deef0a30666", "host" : "minikube", "labels" : { "component" : "kube-apiserver", "tier" : "control-plane" }, "master_url" : "", "namespace_id" : "9925f802-c2c1-44a0-9a71-534d16a609af" }, "@timestamp" : "2021-05-27T17:01:23.041773900+00:00", "tag" : "kubernetes.var.log.containers.kube-apiserver-minikube_kube-system_kube-apiserver-b8a38739fc4a2694995837f2dfe773e011432b73f641b02eb54a7622ba3baffc.log" } } The log documents provide the log message, the namespace from which the log originated, the timestamp when the log originated, as well as several other identifying pieces of information. Going back to Kibana, let’s issue another request and see if we can find the logs from our logging Pod: GET /_search { "query": { "match": { "log": { "query": "exception" } } } } Most likely there will be several results, but the first one should be the log related to the logger Pod’s random error logs. Updates and clean up If at any point you want to make changes to any of the deployments, e.g. change the FluentD configuration, add an endpoint to the existing service or add a whole new service: helm upgrade -n logging log-demo <other parameters set during install> You may need to redeploy Pods for changes to take effect. To tear down the installation: helm delete -n logging log-demo kubectl delete -n logging deployment/logger Wrapping up This guide started from nothing and created a Kubernetes application with a logging layer. The code and steps here could easily be expanded upon to use a different Kubernetes provider such as Google Kubernetes Engine (GKE) or Elastic Kubernetes Service (EKS) and the Helm configuration could easily be expanded to include other use cases such as sending data to Kafka or capturing metrics as well. Regardless of where the data comes form or where it goes or what kind it is, the Aiven platform has the tools and services to assist you on your journey. Further reading External Elasticsearch Logging Kubernetes Logging Architecture Kubernetes Logging with ELK Stack.
https://aiven-io.medium.com/navigating-your-kubernetes-logs-with-aiven-e84e3feae449
CC-MAIN-2021-43
refinedweb
1,455
56.79
31993/correctly-return-dictionary-output-zappier-code-using-python I followed Zapier code documentation for Python but I'm still having this issue: Objective: I'm trying reformat my input(Feets) from Acuity Scheduling and update it on Salesforce. Code: if " ' " in input_data['Feets']: output = {'Feets':Feets.split("'")[0],'Inches':Feets.split("'")[1]} else output = {'Feets':Feets,'Inches':Inches} David here, from the Zapier Platform team. You've got two issues: The following code works as expected: if "'" in input_data['Feets']: output = {'Feets': input_data['Feets'].split("'")[0], 'Inches': input_data['Feets'].split("'")[1]} else: output = {'Feets': input_data['Feets'],'Inches': input_data['Inches']} This works fine for me: while True: ...READ MORE Here is an easy solution: def numberToBase(n, b): ...READ MORE HDF5 works fine for concurrent read only ...READ MORE You don't need to change your existing .. In Logic 1, try if i<int(length/2): instead of if i<int((length/2+1)): In ...READ MORE down voteacceptTheeThe problem is that you're iterating ...READ MORE OR
https://www.edureka.co/community/31993/correctly-return-dictionary-output-zappier-code-using-python
CC-MAIN-2019-22
refinedweb
168
52.46
It looks like your system's sys/wait.h defines macros to access the wait argument as a structure, but that is not what Emacs expects on a POSIX system. I don't have a copy of the POSIX spec here; does anyone know what it says about this? Meanwhile, I think some people have build Emacs on HPUX 11 and do not have this problem. Why is that? Could others who have HPUX 11 see how these macros are defined in sys/wait.h? Do they expect to operate on a struct or on an int? If you add #undef HAVE_SYS_WAIT_H to s/hpux11.h, does that make it work? To: address@hidden Cc: address@hidden Subject: Re: build under HP-UX-B.10.20 or 11.11 fails in process.c References: <address@hidden> <address@hidden> X-Uboat-Death-Message: ATTACKED BY ATOMIC BOMB. CAPTAIN INTOXICATED. SINKING. U-144. From: Klaus Zeitler <address@hidden> Date: 19 Jul 2002 17:42:46 +0200 In-Reply-To: <address@hidden> Message-ID: <address@hidden> Lines: 60 User-Agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.2 MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Length: 1510 >>>>> "Richard" == Richard Stallman <address@hidden> writes: Richard> Richard> Can you figure out what caused the problem? WAITTYPE is defined as int in syswait.h, but macros defined in /usr/include/sys/wait.h that deal with WAITTYPE, e.g. WIFSTOPPED, try to access variables of type WAITTYPE as a struct. e.g. status_convert from process.c: Lisp_Object status_convert (w) WAITTYPE w; { if (WIFSTOPPED (w)) return Fcons (Qstop, Fcons (make_number (WSTOPSIG (w)), Qnil)); looks in preprocessor file like: int status_convert (w) int w; { if (((w).w_S.w_Stopval == 0177)) return Fcons (Qstop, Fcons (((((int) (((w).w_S.w_Stopsig))) & ((((int) 1)<<(32 - 4)) - 1)) | ((int) Lisp_Int) << (32 - 4)), Qnil)); So either the definition or the macros need to change. I've done it the bold way, by undefining HAVE_SYS_WAIT_H (first I changed config.h but hpux8.h explicitly defines HAVE_SYS_WAIT_H so I went ahead and inserted in syswait.h: #if defined(HPUX10) || defined(HPUX11) #undef HAVE_SYS_WAIT_H #endif right after the line: #include <sys/types.h> that works, but may not be the developers intention, but with this change emacs cvs compiles on HP-UX 10.20 and 11.11 Klaus -- ------------------------------------------ | Klaus Zeitler Lucent Technologies | | Email: address@hidden | ------------------------------------------ --- Never let your sense of morals prevent you from doing what is right. -- Salvor Hardin, "Foundation"
http://lists.gnu.org/archive/html/emacs-devel/2002-07/msg00779.html
CC-MAIN-2015-48
refinedweb
410
77.94
Ents and Entwives in Middle Earth Mythos A paper for HUML199 -- Lord of the Rings Seminar at the University of Toronto By Ben Spigel (me) copywrite 2003 Note: Refrences to the Lord of the Rings series are done (book.chapter) meaning that (1.1) would mean a refernce to the first chapter of the first book (of 6 total) Ents are one of the most interesting creatures in Tolkien's Middle Earth mythos. Ents by themselves are nothing more then mobile trees. Only when provoked, such as by Saruman, or motivated, by Gandalf among others, do they act. There is only one exception to this rule, the Ent's ceaseless search for their feminine counterparts, the Entwives. Ents, therefore, are a balancing force in Middle Earth, providing strength when strength is needed for a beleaguered ally, thus turning the tide of battles, and the fate of Middle Earth. However, while Ents help balance Middle Earth, their lack of internal balance leads to their eventual extinction. According to Tolkien in letter 157, the name Ent comes from the Anglo-Saxon word for giant. In Anglo-Saxon, ent can be used as a term for a hero of old and it was a person to whom all ancient Anglo-Saxon works were dedicated. Tolkien states that he has always been interested by the implications of the word, and it is easy to see how the creatures known as Ents emerged from the Anglo-Saxon word (Tolkien 1981: 208).>? Ents are "tree-herds," meaning that they are the shepherds and protectors of the trees (3.4). In this capacity, Ents work to protect the forest of Fangorn from outside threats, such as the wood gathering of Dwarves, or attacks from Orcs. However, far from being visible defenders of the trees, Ents have descended into the realm of myth. Prior to Marry and Pippin's meeting with the Ent Treebeard, only two people living in the Third Age had met Ents, the wizards Gandalf and Saruman. Ents are tree-like in appearance, but far from trees in nature. They have the capacity for movement, thought, and speech. Described as "large Man-like, almost Troll-like, figure, at least fourteen foot high, very sturdy, with a tall head, and hardly any neck," they have seeming taken the attributes of those that they guard (3.4). Their large stature gives them great power in combat, as seen in the Ents destruction of Isengard. The supremacy of Ents in battle is unmatched among those who fight the Darkness of Sauron. Their strength is almost beyond imagination, within five minutes of attacking the fortress of Isengard, they had reduced the gates to ruble and they imprisoned Saruman, a Maia and leader of the White Council, in the tower of Orthanc. Counter-balancing their amazing power when enraged, Ents are mainly passive creatures. Millennia of watching over trees have given them a plant-like outlook on life. Above all, they cherish deliberate and well thought out actions. At the end of the Third Age, many Ents have started to become "tree-ish," laying down roots and no longer patrolling the forest that they protect. (3.4) At the same time, more trees are becoming Entish, gaining the power of movement. These Ents become Hurons, Ents that no longer actively move or talk, but when needed they can display the same strength as Ents. Hurons, however, still guard their forest from outsiders, and are quite dangerous if there is no true Ent to control their rage (3.6) Treebeard, the Ent that plays the largest role in the Lord of the Rings series, is the oldest of the Ents. Of the three Ents that existed before the Darkness, he alone remains a true Ent (3.4). 
Treebeard is an important source for the history of the Ents; it is through though that we learn of the tragedy of the Entwives. Treebeard is also important in that he is the connection between Gandalf and the Ents. Through him, Gandalf is able to acquire Entish reinforcement at Helms Deep, and though Treebeard the Ents are tasked with imprisoning Saurian in Orthanc. Unlike other creatures in Tolkien's mythos, they were not sung of in Illuvitar's song that created the Middle Earth. They were created instead at the behest of the Valar Yavanna, to protect Nature, which she is the guardian of. She perceived that the secret creation of Dwarves by Aule would be a threat to her wards as "the kelvar can flee or defend themselves, whereas the olvar that grow cannot." (Tolkien 1999: 40) In order to protect she pleaded with Iluvitar to make "Shepherds of the Trees" who will protect the forests in times of need (Tolkien 1999: 41). Hence, even in their creation, Ents were used as a balancing force. Yavanna believed that Dwarves would have no love or respect for her trees and would cut them down at will. Therefore, she reasoned, a power, equal to that of the Dwarves, must exist for the sake of the forest. Ents, then represent the polar opposite of Dwarves. While Dwarves are industrious and quick to anger, Ents display almost a Zen-like calmness until something endangers the forest. While Dwarves attempt to shape nature by carving and grinding stone, Ents, as the shepherd of the trees, shape and move the forest by a more innate and hidden power. Constant with their roles as protectors of the forest, Ents have a special relationship with Elves, who are forest dwellers. The Elves first taught the Ents new languages, and it with them that the Ents first aliened themselves with (3.4). After Dwarves from Nogord passed through Sarn Athrad, an army of Elves attacked them. Those who escaped the Elven sneak attack were cut off and killed by the Ents (Tolkien 1999: 282). Of the four Entish military actions Tolkien describes: the attack on the Dwarves, the Last Alliance, their actions in Helm's Deep, and their invasion of Orthanc, the first three of them involve the Ents acting as reinforcements to the main battle. In this capacity, Ents act as a balancing force, allowing smaller forces, such as the defensive force of Helms Deep, to triumph over a numerically superior army. The only time that the Ents acted alone was when they were directly attacked. All of their other actions are at the behest of those who are friendly to the forest. Ents are passive creatures, so passive that they are beginning to become completely sedentary. This is caused by their isolationist view, as Treebeard says, "I am not altogether on anybody's side, because nobody is altogether on my side." (3.4) Ents are willing to endure massive suffering before resorting to violence. Treebeard states that Ents will accept the necessity of using trees for firewood, and only the Orc's wanton violence and Saruman's deception that drives the Ents to war. The only time that the Ents act when not immediately threatened is the disappearance of their female counterparts, the Entwives. The Entwives are an interesting example of the dual nature of a species. While the Ents, the males, roamed around the forest alone, learning to talk to Elves and Men, and enjoying natural beauty, Entwives instead enjoyed ordering and controlling natural things. 
Unlike the Ents who were content to live in the forest and protect the trees, Entwives "ordered them to grow according to their wishes, and bear leaf and fruit according to their liking; for the Entwives desired order, and plenty, and peace." (3.4) Due to their differences on the definition of beauty, the Ents and the Entwives drifted further and further apart, as the Ents explored around the great forests of Middle Earth, and the Entwives went South, fleeing the Darkness of Sauron, and passed over the Anduin into South Gondor (3.4). This land was made into a desert by war, and the Entwives passed out from existence into the realm of myth. Their disappearance is one of the great tragedies of Middle Earth. By the end of the Return of the King, most of the damage caused by Sauron and Saruman has been repaired, Aragorn has taken his rightful place on the Throne, the once scoured Shire has been restored, and all the major themes of Middle Earth have resolved themselves in preparation for the end of the Third Age. However, the Entwives remain missing, and according to Tolkien in letter 338, they will never return (1981: 419). The reasons for the Ents' endless search for the Entwives are twofold. First, the Entwives are necessary to the survival of the species. It is only through mating with Entwives that Entlings can be made who will replace the older Ents who slowly become completely sedentary and non-communicative. Without Entwives, the Ents will slowly die off. Secondly, the Ents feel that they have driven the Entwives into exile. In Treebeard's song to the Entwives, we can see a continuing conflict over which aesthetic philosophy, natural beauty or intelligent ordering, was better. "Come back to me! Come back to me, and say that my land is best!" an Ent sings to an Entwife, who then replies: "I'll linger here beneath the Sun, because my land is best!" (3.4) The song ends with a lingering hope that the Ents and Entwives could somehow find a land in which both ideals of beauty could coexist, but in letter 338, Tolkien writes that he does not think that this will happen (Tolkien 1981: 419). The loss of the Entwives filled the Ents with an overwhelming sense of sorrow that colors all their actions. Even as the Ents triumphantly march to Isengard, Treebeard is filled with pessimism about the Ents' future. He confided in Merry and Pippin that this may be the last march of the Ents, and that they are only going because "doom would find us anyways, sooner or later." (3.4) This is the only military action that the Ents take on their own behalf. All others -- their attack on the Dwarves, their part in the Last Alliance, and their support of Gandalf at Helm's Deep -- were at the behest of others for the benefit of others. Militarily, the Ents are a balancing force. They balance out power between a smaller army and a larger army. Tolkien explains in letter 249 that because of the Ents, the elf Dior could defeat the Dwarves who had stolen Thingol's necklace, which held the Silmaril, even though he had no army (Tolkien 1981: 334). The Ent reinforcement routed the Orcs and prevented any chance of them reforming and attacking the tired defenders of Helm's Deep. However, Ents are not harmonious within their own species. Their obsession with natural beauty led to the loss of the Entwives, who sought to create beauty through intelligence and intervention. Lacking the Entwives, many Ents began to shy away from the duty that they were given since their creation, to protect Yavanna's nature, and have become sedentary trees. 
Thus, while the Ents' effect on the balance of power is a major factor in the history of Middle Earth, their lack of internal balance doomed them to extinction.

References
Rosebury, Brian. Tolkien: A Critical Assessment. London: St. Martin's Press, 1992.
Tolkien, J.R.R. The Two Towers. London: HarperCollins, 1999.
Tolkien, J.R.R. The Silmarillion. London: HarperCollins, 1999.
Tolkien, J.R.R. The Letters of J.R.R. Tolkien. Ed. Humphrey Carpenter. Boston: Houghton Mifflin, 1981.
Shippey, Tom. J.R.R. Tolkien: Author of the Century. London: HarperCollins, 2000.
http://everything2.com/title/The+Ent+and+the+Entwife
CC-MAIN-2016-30
refinedweb
1,950
68.6
Marcos Duarte Laboratory of Biomechanics and Motor Control () Federal University of ABC, Brazil This will be a very brief tutorial on Python. For a complete (and much better) tutorial about Python see A Whirlwind Tour of Python and Python Data Science Handbook for a specific tutorial about Python for scientific computing. To use Python for scientific computing we need the Python program itself with its main modules and specific packages for scientific computing. See this notebook on how to install Python for scientific computing. Once you get Python and the necessary packages for scientific computing ready to work, there are different ways to run Python, the main ones are: pythonor ipythonthat the Python interpreter will start Jupyter notebookand start working with Python in a browser Spyder, an interactive development environment (IDE) Jupyter qtconsole, a more featured terminal We will use the Jupyter Notebook for this tutorial but you can run almost all the things we will see here using the other forms listed above. 1 + 2 - 30 -27 4/5 0.8 Using the print('1+2 = ', 1+2, '\n', '4*5 = ', 4*5, '\n', '6/7 = ', 6/7, '\n', '8**2 = ', 8**2, sep='') 1+2 = 3 4*5 = 20 6/7 = 0.8571428571428571 8**2 = 64 And if we want the square-root of a number: sqrt(9) --------------------------------------------------------------------------- NameError Traceback (most recent call last) <ipython-input-4-c836dfef5db4> in <module>() ----> 1 sqrt(9) NameError: name 'sqrt' is not defined We get an error message saying that the sqrt function if not defined. This is because sqrt and other mathematical functions are available with the math module: import math math.sqrt(9) 3.0 from math import sqrt sqrt(9) 3.0 We used the command ' import' to be able to call certain functions. In Python functions are organized in modules and packages and they have to be imported in order to be used. A module is a file containing Python definitions (e.g., functions) and statements. Packages are a way of structuring Python’s module namespace by using “dotted module names”. For example, the module name A.B designates a submodule named B in a package named A. To be used, modules and packages have to be imported in Python with the import function. Namespace is a container for a set of identifiers (names), and allows the disambiguation of homonym identifiers residing in different namespaces. For example, with the command import math, we will have all the functions and statements defined in this module in the namespace ' math.', for example, ' math.pi' is the π constant and ' math.cos()', the cosine function. By the way, to know which Python version you are running, we can use one of the following modules: import sys sys.version '3.6.2 |Continuum Analytics, Inc.| (default, Jul 20 2017, 12:30:02) [MSC v.1900 64 bit (AMD64)]' And if you are in an IPython session: from IPython import sys_info print(sys_info()) {'commit_hash': 'd86648c5d', 'commit_source': 'installation', 'default_encoding': 'cp1252', 'ipython_path': 'C:\\Miniconda3\\lib\\site-packages\\IPython', 'ipython_version': '6.1.0', 'os_name': 'nt', 'platform': 'Windows-10-10.0.15063-SP0', 'sys_executable': 'C:\\Miniconda3\\python.exe', 'sys_platform': 'win32', 'sys_version': '3.6.2 |Continuum Analytics, Inc.| (default, Jul 20 2017, ' '12:30:02) [MSC v.1900 64 bit (AMD64)]'} The first option gives information about the Python version; the latter also includes the IPython version, operating system, etc. Python is designed as an object-oriented programming (OOP) language. 
OOP is a paradigm that represents concepts as "objects" that have data fields (attributes that describe the object) and associated procedures known as methods. This means that all elements in Python are objects and they have attributes which can be acessed with the dot (.) operator after the name of the object. We already experimented with that when we imported the module sys, it became an object, and we acessed one of its attribute: sys.version. OOP as a paradigm is much more than defining objects, attributes, and methods, but for now this is enough to get going with Python. help(math.degrees) Help on built-in function degrees in module math: degrees(...) degrees(x) Convert angle x from radians to degrees. Or if you are in the IPython environment, simply add '?' to the function that a window will open at the bottom of your browser with the same help content: math.degrees? And if you add a second '?' to the statement you get access to the original script file of the function (an advantage of an open source language), unless that function is a built-in function that does not have a script file, which is the case of the standard modules in Python (but you can access the Python source code if you want; it just does not come with the standard program for installation). So, let's see this feature with another function: import scipy.fftpack scipy.fftpack.fft?? To know all the attributes of an object, for example all the functions available in math, we can use the function dir: print'] IPython has tab completion: start typing the name of the command (object) and press tab to see the names of objects available with these initials letters. When the name of the object is typed followed by a dot ( math.), pressing tab will show all available attribites, scroll down to the desired attribute and press Enter to select it. These are the most helpful commands in IPython (from IPython tutorial): ?: Introduction and overview of IPython’s features. %quickref: Quick reference. help: Python’s own help system. object?: Details about ‘object’, use ‘object??’ for extra details. # Import the math library to access more math stuff import math math.pi # this is the pi constant; a useless comment since this is obvious 3.141592653589793 To insert comments spanning more than one line, use a multi-line string with a pair of matching triple-quotes: """ or ''' (we will see the string data type later). A typical use of a multi-line comment is as documentation strings and are meant for anyone reading the code: """Documentation strings are typically written like that. A docstring is a string literal that occurs as the first statement in a module, function, class, or method definition. """ 'Documentation strings are typically written like that.\n\nA docstring is a string literal that occurs as the first statement\nin a module, function, class, or method definition.\n\n' A docstring like above is useless and its output as a standalone statement looks uggly in IPython Notebook, but you will see its real importance when reading and writting codes. Commenting a programming code is an important step to make the code more readable, which Python cares a lot. There is a style guide for writting Python code (PEP 8) with a session about how to write comments.. x = 1 Spaces between the statements are optional but it helps for readability. 
To see the value of the variable, call it again or use the print function: x 1 print(x) 1 Of course, the last assignment is that holds: x = 2 x = 3 x 3 In mathematics '=' is the symbol for identity, but in computer programming '=' is used for assignment, it means that the right part of the expresssion is assigned to its left part. For example, 'x=x+1' does not make sense in mathematics but it does in computer programming: x = 1 print(x) x = x + 1 print(x) 1 2 A value can be assigned to several variables simultaneously: x = y = 4 print(x) print(y) 4 4 Several values can be assigned to several variables at once: x, y = 5, 6 print(x) print(y) 5 6 And with that, you can do (!): x, y = y, x print(x) print(y) 6 5 Variables must be “defined” (assigned a value) before they can be used, or an error will occur: x = z --------------------------------------------------------------------------- NameError Traceback (most recent call last) <ipython-input-25-cfba4031bce1> in <module>() ----> 1 x = z NameError: name 'z' is not defined import types print(dir(types)) ['AsyncGeneratorType', 'BuiltinFunctionType', 'BuiltinMethodType', 'CodeType', 'CoroutineType', 'DynamicClassAttribute', 'FrameType', 'FunctionType', 'GeneratorType', 'GetSetDescriptorType', 'LambdaType', 'MappingProxyType', 'MemberDescriptorType', 'MethodType', 'ModuleType', 'SimpleNamespace', 'TracebackType', '_GeneratorWrapper', '__all__', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', '_ag', '_calculate_meta', '_collections_abc', '_functools', 'coroutine', 'new_class', 'prepare_class'] Let's see some of them now. type(6) int A float is a non-integer number: math.pi 3.141592653589793 type(math.pi) float Python (IPython) is showing math.pi with only 15 decimal cases, but internally a float is represented with higher precision. Floating point numbers in Python are implemented using a double (eight bytes) word; the precison and internal representation of floating point numbers are machine specific and are available in:) Be aware that floating-point numbers can be trick in computers: 0.1 + 0.2 0.30000000000000004 0.1 + 0.2 - 0.3 5.551115123125783e-17 These results are not correct (and the problem is not due to Python). The error arises from the fact that floating-point numbers are represented in computer hardware as base 2 (binary) fractions and most decimal fractions cannot be represented exactly as binary fractions. As consequence, decimal floating-point numbers are only approximated by the binary floating-point numbers actually stored in the machine. See here for more on this issue. A complex number has real and imaginary parts: 1+2j (1+2j) print(type(1+2j)) <class 'complex'> Each part of a complex number is represented as a floating-point number. 
We can see them using the attributes .real and .imag: print((1+2j).real) print((1+2j).imag) 1.0 2.0 s = 'string (str) is a built-in type in Python' s 'string (str) is a built-in type in Python' type(s) str String enclosed with single and double quotes are equal, but it may be easier to use one instead of the other: 'string (str) is a Python's built-in type' File "<ipython-input-38-ca70e9285fe4>", line 1 'string (str) is a Python's built-in type' ^ SyntaxError: invalid syntax "string (str) is a Python's built-in type" "string (str) is a Python's built-in type" But you could have done that using the Python escape character '\': 'string (str) is a Python\'s built-in type' "string (str) is a Python's built-in type" Strings can be concatenated (glued together) with the + operator, and repeated with *: s = 'P' + 'y' + 't' + 'h' + 'o' + 'n' print(s) print(s*5) Python PythonPythonPythonPythonPython Strings can be subscripted (indexed); like in C, the first character of a string has subscript (index) 0: print('s[0] = ', s[0], ' (s[index], start at 0)') print('s[5] = ', s[5]) print('s[-1] = ', s[-1], ' (last element)') print('s[:] = ', s[:], ' (all elements)') print('s[1:] = ', s[1:], ' (from this index (inclusive) till the last (inclusive))') print('s[2:4] = ', s[2:4], ' (from first index (inclusive) till second index (exclusive))') print('s[:2] = ', s[:2], ' (till this index, exclusive)') print('s[:10] = ', s[:10], ' (Python handles the index if it is larger than the string length)') print('s[-10:] = ', s[-10:]) print('s[0:5:2] = ', s[0:5:2], ' (s[ini:end:step])') print('s[::2] = ', s[::2], ' (s[::step], initial and final indexes can be omitted)') print('s[0:5:-1] = ', s[::-1], ' (s[::-step] reverses the string)') print('s[:2] + s[2:] = ', s[:2] + s[2:], ' (because of Python indexing, this sounds natural)') s[0] = P (s[index], start at 0) s[5] = n s[-1] = n (last element) s[:] = Python (all elements) s[1:] = ython (from this index (inclusive) till the last (inclusive)) s[2:4] = th (from first index (inclusive) till second index (exclusive)) s[:2] = Py (till this index, exclusive) s[:10] = Python (Python handles the index if it is larger than the string length) s[-10:] = Python s[0:5:2] = Pto (s[ini:end:step]) s[::2] = Pto (s[::step], initial and final indexes can be omitted) s[0:5:-1] = nohtyP (s[::-step] reverses the string) s[:2] + s[2:] = Python (because of Python indexing, this sounds natural) help(len) Help on built-in function len in module builtins: len(obj, /) Return the number of items in a container. s = 'Python' len(s) 6 The function len() helps to understand how the backward indexing works in Python. The index s[-i] should be understood as s[len(s) - i] rather than accessing directly the i-th element from back to front. This is why the last element of a string is s[-1]: print('s = ', s) print('len(s) = ', len(s)) print('len(s)-1 = ',len(s) - 1) print('s[-1] = ', s[-1]) print('s[len(s) - 1] = ', s[len(s) - 1]) s = Python len(s) = 6 len(s)-1 = 5 s[-1] = n s[len(s) - 1] = n Or, strings can be surrounded in a pair of matching triple-quotes: """ or '''. End of lines do not need to be escaped when using triple-quotes, but they will be included in the string. This is how we created a multi-line comment earlier: """Strings can be surrounded in a pair of matching triple-quotes: \""" or '''. End of lines do not need to be escaped when using triple-quotes, but they will be included in the string. 
""" 'Strings can be surrounded in a pair of matching triple-quotes: """ or \'\'\'.\n\nEnd of lines do not need to be escaped when using triple-quotes,\nbut they will be included in the string.\n\n' x = ['spam', 'eggs', 100, 1234] x ['spam', 'eggs', 100, 1234] Lists can be indexed and the same indexing rules we saw for strings are applied: x[0] 'spam' The function len() works for lists: len(x) 4 t = ('spam', 'eggs', 100, 1234) t ('spam', 'eggs', 100, 1234) The type tuple is why multiple assignments in a single line works; elements separated by commas (with or without surrounding parentheses) are a tuple and in an expression with an '=', the right-side tuple is attributed to the left-side tuple: a, b = 1, 2 print('a = ', a, '\nb = ', b) a = 1 b = 2 Is the same as: (a, b) = (1, 2) print('a = ', a, '\nb = ', b) a = 1 b = 2 basket = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana'] fruit = set(basket) # create a set without duplicates fruit {'apple', 'banana', 'orange', 'pear'} As set is an unordered collection, it can not be indexed as lists and tuples. set(['orange', 'pear', 'apple', 'banana']) 'orange' in fruit # fast membership testing True tel = {'jack': 4098, 'sape': 4139} tel {'jack': 4098, 'sape': 4139} tel['guido'] = 4127 tel {'guido': 4127, 'jack': 4098, 'sape': 4139} tel['jack'] 4098 del tel['sape'] tel['irv'] = 4127 tel {'guido': 4127, 'irv': 4127, 'jack': 4098} tel.keys() dict_keys(['jack', 'guido', 'irv']) 'guido' in tel True The dict() constructor builds dictionaries directly from sequences of key-value pairs: tel = dict([('sape', 4139), ('guido', 4127), ('jack', 4098)]) tel {'guido': 4127, 'jack': 4098, 'sape': 4139} In computer science, the Boolean or logical data type is composed by two values, true and false, intended to represent the values of logic and Boolean algebra. In Python, 1 and 0 can also be used in most situations as equivalent to the Boolean values. True == False False not True == False True 1 < 2 > 1 True True != (False or True) False True != False or True True In Python, statement grouping is done by indentation (this is mandatory), which are done by inserting whitespaces, not tabs. Indentation is also recommended for alignment of function calling that span more than one line for better clarity. We will see examples of indentation in the next session. if True: pass Which does nothing useful. Let's use the if... elif... else statements to categorize the body mass index of a person: # body mass index weight = 100 # kg height = 1.70 # m' print('For a weight of {0:.1f} kg and a height of {1:.2f} m,\n\ the body mass index (bmi) is {2:.1f} kg/m2,\nwhich is considered {3:s}.'\ .format(weight, height, bmi, c)) For a weight of 100.0 kg and a height of 1.70 m, the body mass index (bmi) is 34.6 kg/m2, which is considered moderately obese. for i in [3, 2, 1, 'go!']: print(i, end=', ') 3, 2, 1, go!, for letter in 'Python': print(letter), P y t h o n help(range) Help on class range in module builtins: class range(object) | range(stop) -> range object | range(start, stop[, step]) -> range object | | step is given, it specifies the increment (or decrement). | | Methods defined here: | | __bool__(self, /) | self != 0 | | __contains__(self, key, /) | Return key in self. | | __eq__(self, value, /) | Return self==value. | | __ge__(self, value, /) | Return self>=value. | | __getattribute__(self, name, /) | Return getattr(self, name). | | __getitem__(self, key, /) | Return self[key]. | | __gt__(self, value, /) | Return self>value. | | __hash__(self, /) | Return hash(self). 
| | __iter__(self, /) | Implement iter(self). | | __le__(self, value, /) | Return self<=value. | | __len__(self, /) | Return len(self). | | __lt__(self, value, /) | Return self<value. | | __ne__(self, value, /) | Return self!=value. | | __new__(*args, **kwargs) from builtins.type | Create and return a new object. See help(type) for accurate signature. | | __reduce__(...) | helper for pickle | | __repr__(self, /) | Return repr(self). | | __reversed__(...) | Return a reverse iterator. | | count(...) | rangeobject.count(value) -> integer -- return number of occurrences of value | | index(...) | rangeobject.index(value, [start, [stop]]) -> integer -- return index of value. | Raise ValueError if the value is not present. | | ---------------------------------------------------------------------- | Data descriptors defined here: | | start | | step | | stop range(10) range(0, 10) range(1, 10, 2) range(1, 10, 2) for i in range(10): n2 = i**2 print(n2), 0 1 4 9 16 25 36 49 64 81 # Fibonacci series: the sum of two elements defines the next a, b = 0, 1 while b < 1000: print(b, end=' ') a, b = b, a+b 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 A function in a programming language is a piece of code that performs a specific task. Functions are used to reduce duplication of code making easier to reuse it and to decompose complex problems into simpler parts. The use of functions contribute to the clarity of the code. A function is created with the def keyword and the statements in the block of the function must be indented: def function(): pass As per construction, this function does nothing when called: function() The general syntax of a function definition is: def function_name( parameters ): """Function docstring. The help for the function """ function body return variables A more useful function: def fibo(N): """Fibonacci series: the sum of two elements defines the next. The series is calculated till the input parameter N and returned as an ouput variable. """ a, b, c = 0, 1, [] while b < N: c.append(b) a, b = b, a + b return c fibo(100) [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89] if 3 > 2: print('teste') teste Let's implemment the body mass index calculus and categorization as a function: def bmi(weight, height): """Body mass index calculus and categorization. Enter the weight in kg and the height in m. See """' s = 'For a weight of {0:.1f} kg and a height of {1:.2f} m,\ the body mass index (bmi) is {2:.1f} kg/m2,\ which is considered {3:s}.'\ .format(weight, height, bmi, c) print(s) bmi(73, 1.70) For a weight of 73.0 kg and a height of 1.70 m, the body mass index (bmi) is 25.3 kg/m2, which is considered overweight. Numpy is the fundamental package for scientific computing in Python and has a N-dimensional array package convenient to work with numerical data. With Numpy it's much easier and faster to work with numbers grouped as 1-D arrays (a vector), 2-D arrays (like a table or matrix), or higher dimensions. 
Let's create 1-D and 2-D arrays in Numpy: import numpy as np x1d = np.array([1, 2, 3, 4, 5, 6]) print(type(x1d)) x1d <class 'numpy.ndarray'> array([1, 2, 3, 4, 5, 6]) x2d = np.array([[1, 2, 3], [4, 5, 6]]) x2d array([[1, 2, 3], [4, 5, 6]]) len() and the Numpy functions size() and shape() give information aboout the number of elements and the structure of the Numpy array: print('1-d array:') print(x1d) print('len(x1d) = ', len(x1d)) print('np.size(x1d) = ', np.size(x1d)) print('np.shape(x1d) = ', np.shape(x1d)) print('np.ndim(x1d) = ', np.ndim(x1d)) print('\n2-d array:') print(x2d) print('len(x2d) = ', len(x2d)) print('np.size(x2d) = ', np.size(x2d)) print('np.shape(x2d) = ', np.shape(x2d)) print('np.ndim(x2d) = ', np.ndim(x2d)) 1-d array: [1 2 3 4 5 6] len(x1d) = 6 np.size(x1d) = 6 np.shape(x1d) = (6,) np.ndim(x1d) = 1 2-d array: [[1 2 3] [4 5 6]] len(x2d) = 2 np.size(x2d) = 6 np.shape(x2d) = (2, 3) np.ndim(x2d) = 2 Create random data x = np.random.randn(4,3) x array([[-0.36123769, 0.18896133, -0.53809885], [-0.7332364 , 0.47109317, 1.06194556], [ 0.07331805, 0.72426922, -1.74606307], [-0.48601252, -0.72308218, -0.98513516]]) Joining (stacking together) arrays x = np.random.randint(0, 5, size=(2, 3)) print(x) y = np.random.randint(5, 10, size=(2, 3)) print(y) [[1 2 3] [3 2 2]] [[5 5 5] [7 5 9]] np.vstack((x,y)) array([[1, 2, 3], [3, 2, 2], [5, 5, 5], [7, 5, 9]]) np.hstack((x,y)) array([[1, 2, 3, 5, 5, 5], [3, 2, 2, 7, 5, 9]]) Create equally spaced data np.arange(start = 1, stop = 10, step = 2) array([1, 3, 5, 7, 9]) np.linspace(start = 0, stop = 1, num = 11) array([ 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. ]) y = [5, 4, 10, 8, 1, 10, 2, 7, 1, 3] Suppose we want to create data in between the given data points (interpolation); for instance, let's try to double the resolution of the data by generating twice as many data: t = np.linspace(0, len(y), len(y)) # time vector for the original data tn = np.linspace(0, len(y), 2 * len(y)) # new time vector for the new time-normalized data yn = np.interp(tn, t, y) # new time-normalized data yn array([ 5. , 4.52631579, 4.05263158, 6.52631579, 9.36842105, 9.26315789, 8.31578947, 5.78947368, 2.47368421, 3.36842105, 7.63157895, 8.31578947, 4.52631579, 2.78947368, 5.15789474, 6.36842105, 3.52631579, 1.10526316, 2.05263158, 3. ]) The key is the Numpy interp function, from its help: interp(x, xp, fp, left=None, right=None) One-dimensional linear interpolation. Returns the one-dimensional piecewise linear interpolant to a function with given values at discrete data-points. A plot of the data will show what we have done: %matplotlib inline import matplotlib.pyplot as plt plt.figure(figsize=(10,5)) plt.plot(t, y, 'bo-', lw=2, label='original data') plt.plot(tn, yn, '.-', color=[1, 0, 0, .5], lw=2, label='interpolated') plt.legend(loc='best', framealpha=.5) plt.show() There are two kinds of computer files: text files and binary files: Text file: computer file where the content is structured as a sequence of lines of electronic text. Text files can contain plain text (letters, numbers, and symbols) but they are not limited to such. The type of content in the text file is defined by the Unicode encoding (a computing industry standard for the consistent encoding, representation and handling of text expressed in most of the world's writing systems). Binary file: computer file where the content is encoded in binary form, a sequence of integers representing byte values. 
Let's see how to save and read numeric data stored in a text file: Using plain Python f = open("newfile.txt", "w") # open file for writing f.write("This is a test\n") # save to file f.write("And here is another line\n") # save to file f.close() f = open('newfile.txt', 'r') # open file for reading f = f.read() # read from file print(f) This is a test And here is another line help(open) Help on built-in function open in module io: open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True, opener=None) Open file and return a stream. Raise IOError upon failure. file is either a text or byte string creating and writing to a new file,: ========= =============================================================== Character Meaning --------- --------------------------------------------------------------- 'r' open for reading (default) 'w' open for writing, truncating the file first 'x' create a new file and open it for writing 'a' open for writing, appending to the end of the file if it exists 'b' binary mode 't' text mode (default) '+' open a disk file for updating (reading and writing) 'U' universal newline mode (deprecated) ========= =============================================================== The default mode is 'rt' (open for reading text). For binary random access, the mode 'w+b' opens and truncates the file to 0 bytes, while 'r+b' opens the file without truncation. The 'x' mode implies 'w' and raises an `FileExistsError` if the file already exists.. 'U' mode is deprecated and will raise an exception in future versions of Python. It has no effect in Python 3. Use newline to control universal newlines mode., or run 'help(codecs.Codec)' for a list of the permitted encoding error strings., the underlying file descriptor will be kept open when the file is closed. This does not work when a file name is given and must be True in that case. A custom opener can be used by passing a callable as *opener*. The underlying file descriptor for the file object is then obtained by calling *opener* with (*file*, *flags*). *opener* must return an open file descriptor (passing os.open as *opener* results in functionality similar to passing None). open() returns a file object whose type depends on the mode, and through which the standard file operations such as reading and writing are performed.. It is also possible to use a string or bytearray as a file for both reading and writing. For strings StringIO can be used like a file opened in a text mode, and for bytes a BytesIO can be used like a file opened in a binary mode. 
Using Numpy import numpy as np data = np.random.randn(3,3) np.savetxt('myfile.txt', data, fmt="%12.6G") # save to file data = np.genfromtxt('myfile.txt', unpack=True) # read from file data array([[-0.141613 , 0.0158789, 1.09553 ], [-0.538208 , 0.370273 , -0.0108841], [-0.327891 , 0.117738 , 1.63674 ]]) import matplotlib.pyplot as plt Use the IPython magic %matplotlib inline to plot a figure inline in the notebook with the rest of the text: %matplotlib inline import numpy as np t = np.linspace(0, 0.99, 100) x = np.sin(2 * np.pi * 2 * t) n = np.random.randn(100) / 5 plt.Figure(figsize=(12,8)) plt.plot(t, x, label='sine', linewidth=2) plt.plot(t, x + n, label='noisy sine', linewidth=2) plt.annotate(s='$sin(4 \pi t)$', xy=(.2, 1), fontsize=20, color=[0, 0, 1]) plt.legend(loc='best', framealpha=.5) plt.xlabel('Time [s]') plt.ylabel('Amplitude') plt.title('Data plotting using matplotlib') plt.show() Use the IPython magic %matplotlib qt to plot a figure in a separate window (from where you will be able to change some of the figure proprerties): %matplotlib qt Warning: Cannot change to a different GUI toolkit: qt. Using notebook instead. mu, sigma = 10, 2 x = mu + sigma * np.random.randn(1000) fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4)) ax1.plot(x, 'ro') ax1.set_title('Data') ax1.grid() n, bins, patches = ax2.hist(x, 25, normed=True, facecolor='r') # histogram ax2.set_xlabel('Bins') ax2.set_ylabel('Probability') ax2.set_title('Histogram') fig.suptitle('Another example using matplotlib', fontsize=18, y=1) ax2.grid() plt.tight_layout() plt.show() And a window with the following figure should appear: from IPython.display import Image Image(url="./../images/plot.png") You can switch back and forth between inline and separate figure using the %matplotlib magic commands used above. There are plenty more examples with the source code in the matplotlib gallery. # get back the inline plot %matplotlib inline The Scipy package has a lot of functions for signal processing, among them: Integration (scipy.integrate), Optimization (scipy.optimize), Interpolation (scipy.interpolate), Fourier Transforms (scipy.fftpack), Signal Processing (scipy.signal), Linear Algebra (scipy.linalg), and Statistics (scipy.stats). As an example, let's see how to use a low-pass Butterworth filter to attenuate high-frequency noise and how the differentiation process of a signal affects the signal-to-noise content. We will also calculate the Fourier transform of these data to look at their frequencies content. from scipy.signal import butter, filtfilt import scipy.fftpack freq = 100. 
t = np.arange(0,1,.01); w = 2*np.pi*1 # 1 Hz y = np.sin(w*t)+0.1*np.sin(10*w*t) # Butterworth filter b, a = butter(4, (5/(freq/2)), btype = 'low') y2 = filtfilt(b, a, y) # 2nd derivative of the data ydd = np.diff(y,2)*freq*freq # raw data y2dd = np.diff(y2,2)*freq*freq # filtered data # frequency content yfft = np.abs(scipy.fftpack.fft(y))/(y.size/2); # raw data y2fft = np.abs(scipy.fftpack.fft(y2))/(y.size/2); # filtered data freqs = scipy.fftpack.fftfreq(y.size, 1./freq) yddfft = np.abs(scipy.fftpack.fft(ydd))/(ydd.size/2); y2ddfft = np.abs(scipy.fftpack.fft(y2dd))/(ydd.size/2); freqs2 = scipy.fftpack.fftfreq(ydd.size, 1./freq) And the plots: fig, ((ax1,ax2),(ax3,ax4)) = plt.subplots(2, 2, figsize=(12, 6)) ax1.set_title('Temporal domain', fontsize=14) ax1.plot(t, y, 'r', linewidth=2, label = 'raw data') ax1.plot(t, y2, 'b', linewidth=2, label = 'filtered @ 5 Hz') ax1.set_ylabel('f') ax1.legend(frameon=False, fontsize=12) ax2.set_title('Frequency domain', fontsize=14) ax2.plot(freqs[:int(yfft.size/4)], yfft[:int(yfft.size/4)],'r', lw=2,label='raw data') ax2.plot(freqs[:int(yfft.size/4)],y2fft[:int(yfft.size/4)],'b--',lw=2,label='filtered @ 5 Hz') ax2.set_ylabel('FFT(f)') ax2.legend(frameon=False, fontsize=12) ax3.plot(t[:-2], ydd, 'r', linewidth=2, label = 'raw') ax3.plot(t[:-2], y2dd, 'b', linewidth=2, label = 'filtered @ 5 Hz') ax3.set_xlabel('Time [s]'); ax3.set_ylabel("f ''") ax4.plot(freqs[:int(yddfft.size/4)], yddfft[:int(yddfft.size/4)], 'r', lw=2, label = 'raw') ax4.plot(freqs[:int(yddfft.size/4)],y2ddfft[:int(yddfft.size/4)],'b--',lw=2, label='filtered @ 5 Hz') ax4.set_xlabel('Frequency [Hz]'); ax4.set_ylabel("FFT(f '')") plt.show() from IPython.display import display import sympy as sym from sympy.interactive import printing printing.init_printing() Define some symbols and the create a second-order polynomial function (a.k.a., parabola): x, y = sym.symbols('x y') y = x**2 - 2*x - 3 y Plot the parabola at some given range: from sympy.plotting import plot %matplotlib inline plot(y, (x, -3, 5)); And the roots of the parabola are given by: sym.solve(y, x) We can also do symbolic differentiation and integration: dy = sym.diff(y, x) dy sym.integrate(dy, x) For example, let's use Sympy to represent three-dimensional rotations. Consider the problem of a coordinate system xyz rotated in relation to other coordinate system XYZ. 
The single rotations around each axis are illustrated by: from IPython.display import Image Image(url="./../images/rotations.png") The single 3D rotation matrices around Z, Y, and X axes can be expressed in Sympy: from IPython.core.display import Math from sympy import symbols, cos, sin, Matrix, latex a, b, g = symbols('alpha beta gamma') RX = Matrix([[1, 0, 0], [0, cos(a), -sin(a)], [0, sin(a), cos(a)]]) display(Math(latex('\\mathbf{R_{X}}=') + latex(RX, mat_str = 'matrix'))) RY = Matrix([[cos(b), 0, sin(b)], [0, 1, 0], [-sin(b), 0, cos(b)]]) display(Math(latex('\\mathbf{R_{Y}}=') + latex(RY, mat_str = 'matrix'))) RZ = Matrix([[cos(g), -sin(g), 0], [sin(g), cos(g), 0], [0, 0, 1]]) display(Math(latex('\\mathbf{R_{Z}}=') + latex(RZ, mat_str = 'matrix'))) And using Sympy, a sequence of elementary rotations around X, Y, Z axes is given by: RXYZ = RZ*RY*RX display(Math(latex('\\mathbf{R_{XYZ}}=') + latex(RXYZ, mat_str = 'matrix'))) Suppose there is a rotation only around X ($\alpha$) by $\pi/2$; we can get the numerical value of the rotation matrix by substituing the angle values: r = RXYZ.subs({a: np.pi/2, b: 0, g: 0}) r And we can prettify this result: display(Math(latex(r'\mathbf{R_{(\alpha=\pi/2)}}=') + latex(r.n(chop=True, prec=3), mat_str = 'matrix'))) "pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with “relational” or “labeled” data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real world data analysis in Python." To work with labellled data, pandas has a type called DataFrame (basically, a matrix where columns and rows have may names and may be of different types) and it is also the main type of the software R. Fo ezample: import pandas as pd x = 5*['A'] + 5*['B'] x ['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B'] df = pd.DataFrame(np.random.rand(10,2), columns=['Level 1', 'Level 2'] ) df['Group'] = pd.Series(['A']*5 + ['B']*5) plot = df.boxplot(by='Group') from pandas.plotting import scatter_matrix df = pd.DataFrame(np.random.randn(100, 3), columns=['A', 'B', 'C']) plot = scatter_matrix(df, alpha=0.5, figsize=(8, 6), diagonal='kde') pandas is aware the data is structured and give you basic statistics considerint that and nicely formatted: df.describe() There is a lot of good material in the internet about Python for scientific computing, here is a small list of interesting stuff:
http://nbviewer.jupyter.org/github/demotu/BMC/blob/master/notebooks/PythonTutorial.ipynb
CC-MAIN-2018-39
refinedweb
5,666
64.3
- File Structure helps you overview the structure of the current file that is open and active in the editor. Double-click a node in the File Structure window to navigate to the declaration of the selected member. Moreover, you can use the context menu to apply navigation and search features, as well as refactorings, right from the current window.
- Type Hierarchy helps you overview the inheritance hierarchy of a type. Double-click a node in the Hierarchy window to navigate to the declaration of the selected type. Moreover, you can use the context menu to apply search features and refactorings right from the current window.
- Various "Go To ..." features, such as Go to Everything/Type, Go to Symbol, and Go to File Member, work for VB.NET as well.
- The Navigate To drop-down list is also available and helps you navigate to various destinations.
- Navigating to External Sources from VB.NET works just as well as from C#.
- ReSharper adds special marks to the left gutter that help navigate to overriding, implementing or hiding members.

Search
- Find Usages and Find Usages Advanced features help you locate all usages of namespaces, types, methods, etc. in your source code. Applying the Find Usages feature is the quickest way to find all code usages in the solution. If you need more flexible search, use the Find Usages Advanced feature. It gives you an opportunity to find textual occurrences and extend the search scope, for example.
- The Highlight Usages in File feature helps focus your attention on a particular member or local variable and its occurrences. Note that write accesses are highlighted in red, and read accesses are highlighted in blue. Moreover, you can highlight usages of namespaces: place the caret at an Imports directive.
https://www.jetbrains.com/help/resharper/2016.3/ReSharper_by_Language__Visual_Basic__Navigation_and_Search.html
CC-MAIN-2017-13
refinedweb
290
58.99
You thought we were going to make a game without collectibles, didn't you? Oh no, there will be collectibles, and in this article, we're going to implement the groundwork so that as soon as ideas arise, we can quickly implement them to playtest. We'll create a simple collectible system where the player can cause a trigger collision and identify what type of collectible caused the trigger. For now, we'll simply log the type to the console. Collectible Prefab Let's create the prefab first. I'll create a new Sphere and create a new Material to add some color. I'll also create a new tag called Collectible and assign it to the GameObject, then make sure the collider is set to be a trigger, finally drag and drop it to the Prefabs folder to create a new prefab. Next, I'll create a new script called "Collectible" and add it to the scripts folder. Inside the script, I'll add movement just like we did in the enemy script. I'll add a variable speed and destroy the collectible when it goes off-screen. using UnityEngine; public class Collectible : MonoBehaviour { [ ] float speed = 5.0f; void Update() { transform.Translate(speed * Time.deltaTime * Vector3.down); } } Collectible Type On to the more interesting bit, let's create a way to make each collectible distinguishable. Again, I'm going to do this in a way that's fast for us to prototype ideas. For now, I'll identify collectibles by assigning an enum value. I'll create a separate file in the Scripts folder named CollectibleType and add some placeholder types we can change later. public enum CollectibleType { TypeA, TypeB, TypeC } Back in our collectible script, I'll add a private field for CollectibleType named type with the SerializeField attribute. Switching back to Unity, we can see that now we can easily select what type of collectible we want an instance to represent. We will need to access this value from the Player script, so let's make a get accessor. public class collectible : MonoBehaviour { [ ] float speed = 5.0f; [ ] CollectibleType type; public CollectibleType Type { get { return type; } private set; } void Update() { transform.Translate(speed * Time.deltaTime * Vector3.down); } } To make it easy to apply collectables to the player, I'll add a Rigidbody component (no gravity), then inside the Player script add an OnTriggerEnter method. Now, when the Player collides with a Collectable, we will log the type of Collectable to the console. Here's the updated Player script: public class Player : MonoBehaviour { ... void OnTriggerEnter(Collider other) { if (other.CompareTag("collectible")) { var collectible = other.GetComponent<collectible>(); Destroy(other.gameObject); Debug.Log(collectible.Type); } } } Summary Now we have a way to quickly set up new types of collectibles during the prototype phase. This can certainly be improved later, but it will serve us well for quick iteration during playtesting. Take care. Stay awesome.
https://blog.justinhhorner.com/preparing-for-collectible-items
CC-MAIN-2022-27
refinedweb
483
63.9
TiffCapture 0.1.3 Brings the power of OpenCV to TIFF videos; provides interface to multi-part TIFFs compatible with OpenCV's VideoCapture. Provides a PIL based capture interface to multi-part tiffs, allowing them to be used more easily with OpenCV. This allows you to use OpenCV’s image and video processing capabilities with tiff stacks, a video form frequently encountered in scientific video as it is lossless and supports custom metadata. Examples A minimal example looks like this: import tiffcapture as tc import matplotlib.pyplot at plt tiff = tc.opentiff(filename) plt.imshow(tiff.read()[1]) plt.show() tiff.release() More real world usage looks like this: import tiffcapture as tc import cv2 tiff = tc.opentiff(filename) #open img _, first_img = tiff.retrieve() cv2.namedWindow('video') for f,img in tiff: tempimg = cv2.absdiff(first_img, img) # bkgnd sub _, tempimg = cv2.threshold(tempimg, 5, 255, cv2.cv.CV_THRESH_BINARY) # convert to binary cv2.imshow('video', tempimg) cv2.waitKey(80) cv2.destroyWindow('video') - Downloads (All Versions): - 17 downloads in the last day - 77 downloads in the last week - 308 downloads in the last month - Author: Dave Williams - Keywords: tiff,PIL,OpenCV - License: LICENSE.txt - Categories - Package Index Owner: cdw - DOAP record: TiffCapture-0.1.3.xml
https://pypi.python.org/pypi/TiffCapture
CC-MAIN-2015-22
refinedweb
206
50.94
Last Updated on January 5, 2021 Misclassification errors on the minority class are more important than other types of prediction errors for some imbalanced classification tasks. One example is the problem of classifying bank customers as to whether they should receive a loan or not. Giving a loan to a bad customer marked as a good customer results in a greater cost to the bank than denying a loan to a good customer marked as a bad customer. This requires careful selection of a performance metric that both promotes minimizing misclassification errors in general, and favors minimizing one type of misclassification error over another. The German credit dataset is a standard imbalanced classification dataset that has this property of differing costs to misclassification errors. Models evaluated on this dataset can be evaluated using the Fbeta-Measure that provides a way of both quantifying model performance generally, and captures the requirement that one type of misclassification error is more costly than another. In this tutorial, you will discover how to develop and evaluate a model for the imbalanced German credit classification dataset. After completing this tutorial, you will know: -. - Update Feb/2020: Added section on further model improvements. - Update Jan/2021: Updated links for API documentation. Develop an Imbalanced Classification Model to Predict Good and Bad Credit Photo by AL Nieves, some rights reserved. Tutorial Overview This tutorial is divided into five parts; they are: - German Credit Dataset - Explore the Dataset - Model Test and Baseline Result - Evaluate Models - Evaluate Machine Learning Algorithms - Evaluate Undersampling - Further Model Improvements - Make Prediction on New Data German Credit Dataset In this project, we will use a standard imbalanced machine learning dataset referred to as the “German Credit” dataset or simply “German.” The dataset was used as part of the Statlog project, a European-based initiative in the 1990s to evaluate and compare a large number (at the time) of machine learning algorithms on a range of different classification tasks. The dataset is credited to Hans Hofmann. The fragmentation amongst different disciplines has almost certainly hindered communication and progress. The StatLog project was designed to break down these divisions by selecting classification procedures regardless of historical pedigree, testing them on large-scale and commercially important problems, and hence to determine to what extent the various techniques met the needs of industry. — Page 4, Machine Learning, Neural and Statistical Classification, 1994. The german credit dataset describes financial and banking details for customers and the task is to determine whether the customer is good or bad. The assumption is that the task involves predicting whether a customer will pay back a loan or credit. The dataset includes 1,000 examples and 20 input variables, 7 of which are numerical (integer) and 13 are categorical. - Status of existing checking account - Duration in month - Credit history - Purpose - Credit amount - Savings account - Present employment since - Installment rate in percentage of disposable income - Personal status and sex - Other debtors - Present residence since - Property - Age in years - Other installment plans - Housing - Number of existing credits at this bank - Job - Number of dependents - Telephone - Foreign worker Some of the categorical variables have an ordinal relationship, such as “Savings account,” although most do not. 
There are two classes, 1 for good customers and 2 for bad customers. Good customers are the default or negative class, whereas bad customers are the exception or positive class. A total of 70 percent of the examples are good customers, whereas the remaining 30 percent of examples are bad customers. - Good Customers: Negative or majority class (70%). - Bad Customers: Positive or minority class (30%). A cost matrix is provided with the dataset that gives a different penalty to each misclassification error for the positive class. Specifically, a cost of five is applied to a false negative (marking a bad customer as good) and a cost of one is assigned for a false positive (marking a good customer as bad). - Cost for False Negative: 5 - Cost for False Positive: 1 This suggests that the positive class is the focus of the prediction task and that it is more costly to the bank or financial institution to give money to a bad customer than to not give money to a good customer. This must be taken into account when selecting a performance metric. First, download the dataset and save it in your current working directory with the name “german.csv“. Review the contents of the file. The first few lines of the file should look as follows: We can see that the categorical columns are encoded with an Axxx format, where “x” are integers for different labels. A one-hot encoding of the categorical variables will be required. We can also see that the numerical variables have different scales, e.g. 6, 48, and 12 in column 2, and 1169, 5951, etc. in column 5. This suggests that scaling of the integer columns will be needed for those algorithms that are sensitive to scale. The target variable or class is the last column and contains values of 1 and 2. These will need to be label encoded to 0 and 1, respectively, to meet the general expectation for imbalanced binary classification tasks where 0 represents the negative case and 1 represents the positive case. 1,000 rows and 20 input variables and 1 target variable. The class distribution is then summarized, confirming the number of good and bad customers and the percentage of cases in the minority and majority classes. We can also take a look at the distribution of the seven numerical input variables by creating a histogram for each. First, we can select the columns with numeric variables by calling the select_dtypes() function on the DataFrame. We can then select just those columns from the DataFrame. We would expect there to be seven, plus the numerical class labels. We can then create histograms of each numeric input variable. The complete example is listed below. Running the example creates the figure with one histogram subplot for each of the seven input variables and one class label in the dataset. The title of each subplot indicates the column number in the DataFrame (e.g. zero-offset from 0 to 20). We can see many different distributions, some with Gaussian-like distributions, others with seemingly exponential or discrete distributions. Depending on the choice of modeling algorithms, we would expect scaling the distributions to the same range to be useful, and perhaps the use of some power transforms. Histogram of Numeric Variables in the German Credit 1000/10 or 100 examples. Stratified means that each fold will contain the same mixture of examples by class, that is about 70 percent to 30 percent good to bad customers. class labels of whether a customer is good or not. Therefore, we need a measure that is appropriate for evaluating the predicted class labels. 
The focus of the task is on the positive class (bad customers). Precision and recall are a good place to start. Maximizing precision will minimize the false positives and maximizing recall will minimize the false negatives in the predictions made by a model. - Precision = TruePositives / (TruePositives + FalsePositives) - Recall = TruePositives / (TruePositives + FalseNegatives) Using the F-Measure will calculate the harmonic mean between precision and recall. This is a good single number that can be used to compare and select a model on this problem. The issue is that false negatives are more damaging than false positives. - F-Measure = (2 * Precision * Recall) / (Precision + Recall) Remember that false negatives on this dataset are cases of a bad customer being marked as a good customer and being given a loan. False positives are cases of a good customer being marked as a bad customer and not being given a loan. - False Negative: Bad Customer (class 1) predicted as a Good Customer (class 0). - False Positive: Good Customer (class 0) predicted as a Bad Customer (class 1). False negatives are more costly to the bank than false positives. - Cost(False Negatives) > Cost(False Positives) Put another way, we are interested in the F-measure that will summarize a model’s ability to minimize misclassification errors for the positive class, but we want to favor models that are better are minimizing false negatives over false positives. This can be achieved by using a version of the F-measure that calculates a weighted harmonic mean of precision and recall but favors higher recall scores over precision scores. This is called the Fbeta-measure, a generalization of F-measure, where “beta” is a parameter that defines the weighting of the two scores. - Fbeta-Measure = ((1 + beta^2) * Precision * Recall) / (beta^2 * Precision + Recall) A beta value of 2 will weight more attention on recall than precision and is referred to as the F2-measure. - F2-Measure = ((1 + 2^2) * Precision * Recall) / (2^2 * Precision + Recall) We will use this measure to evaluate models on the German credit dataset. This can be achieved using the fbeta_score() scikit-learn function. We can define a function to load the dataset and split the columns into input and output variables. We will one-hot encode the categorical variables and label encode the target variable. You might recall that a one-hot encoding replaces the categorical variable with one new column for each value of the variable and marks values with a 1 in the column for that value. First, we must split the DataFrame into input and output variables. Next, we need to select all input variables that are categorical, then apply a one-hot encoding and leave the numerical variables untouched. This can be achieved using a ColumnTransformer and defining the transform as a OneHotEncoder applied only to the column indices for categorical variables. We can then label encode the target variable. The load_dataset() function below ties all of this together and loads and prepares the dataset for modeling. Next, we need a function that will evaluate a set of predictions using the fbeta_score() function with beta set to 2. We can then define a function that will evaluate a given model on the dataset and return a list of F2-Measure scores for each fold and repeat. The evaluate_model() function below implements this, taking the dataset and model as arguments and returning the list of scores. Finally, we can evaluate a baseline model on the dataset using this test harness. 
A model that predicts the minority class for examples will achieve a maximum recall score and a baseline precision score. This provides a baseline in model performance on this problem by which all other models can be compared. This can be achieved using the DummyClassifier class from the scikit-learn library and setting the “strategy” argument to “constant” and the “constant” argument to “1” for the minority class. Once the model is evaluated, we can report the mean and standard deviation of the F2-Measure scores directly. Tying this together, the complete example of loading the German Credit dataset, evaluating a baseline model, and reporting the performance is listed below. Running the example first loads and summarizes the dataset. We can see that we have the correct number of rows loaded, and through the one-hot encoding of the categorical input variables, we have increased the number of input variables from 20 to 61. That suggests that the 13 categorical variables were encoded into a total of 54 columns. Importantly, we can see that the class labels have the correct mapping to integers with 0 for the majority class and 1 for the minority class, customary for imbalanced binary classification dataset. Next, the average of the F2-Measure scores is reported. In this case, we can see that the baseline algorithm achieves an F2-Measure of about 0.682. This score provides a lower limit on model skill; any model that achieves an average F2-Measure above about 0.682 F2-Measure performance using the same test harness, I’d love to hear about it. Let me know in the comments below. Evaluate Machine Learning Algorithms Let’s start by evaluating a mixture of probabilistic German credit dataset: - Logistic Regression (LR) - Linear Discriminant Analysis (LDA) - Naive Bayes (NB) - Gaussian Process Classifier (GPC) - Support Vector Machine (SVM) We will use mostly default model hyperparameters. as we did in the previous section, and in this case, we will normalize the numerical input variables. This is best performed using the MinMaxScaler within each fold of the cross-validation evaluation process.. We can update the load_dataset() to return the column indexes as well as the input and output elements of the dataset. The updated version of this function is listed below. We can then call this function to get the data and the list of categorical and numerical variables. F2-Measure evaluating a suite of machine learning algorithms on the German credit dataset is listed below. Running the example evaluates each algorithm in turn and reports the mean and standard deviation F2-Measure. Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome. In this case, we can see that none of the tested models have an F2-measure above the default of predicting the majority class in all cases (0.682). None of the models are skillful. This is surprising, although suggests that perhaps the decision boundary between the two classes is noisy.. Box and Whisker Plot of Machine Learning Models on the Imbalanced German Credit Dataset Now that we have some results, let’s see if we can improve them with some undersampling. Evaluate Undersampling Undersampling is perhaps the least widely used technique when addressing an imbalanced classification task as most of the focus is put on oversampling the majority class with SMOTE. 
Evaluate Undersampling

Undersampling is perhaps the least widely used technique when addressing an imbalanced classification task, as most of the focus is put on oversampling the minority class with SMOTE. Undersampling can help to remove examples from the majority class along the decision boundary that make the problem challenging for classification algorithms.

In this experiment we will test the following undersampling algorithms:

- Tomek Links (TL)
- Edited Nearest Neighbors (ENN)
- Repeated Edited Nearest Neighbors (RENN)
- One Sided Selection (OSS)
- Neighborhood Cleaning Rule (NCR)

The Tomek Links and ENN methods select examples from the majority class to delete, whereas OSS and NCR both select examples to keep and examples to delete. We will use the balanced version of the logistic regression algorithm to test each undersampling method, to keep things simple.

The get_models() function from the previous section can be updated to return a list of undersampling techniques to test with the logistic regression algorithm. We use the implementations of these algorithms from the imbalanced-learn library. The updated version of the get_models() function defining the undersampling methods is listed below.

The Pipeline provided by scikit-learn does not know about undersampling algorithms. Therefore, we must use the Pipeline implementation provided by the imbalanced-learn library. As in the previous section, the first step of the pipeline will be one-hot encoding of categorical variables and normalization of numerical variables, and the final step will be fitting the model. Here, the middle step will be the undersampling technique, correctly applied within the cross-validation evaluation on the training dataset only.

We would expect the undersampling to result in a lift in skill for logistic regression, ideally above the baseline performance of predicting the minority class in all cases. Tying this together, the complete example of evaluating logistic regression with different undersampling methods on the German credit dataset is listed below.

Running the example evaluates the logistic regression algorithm with five different undersampling techniques.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

In this case, we can see that three of the five undersampling techniques resulted in an F2-measure that provides an improvement over the baseline of 0.682. Specifically, ENN, RENN and NCR, with repeated edited nearest neighbors resulting in the best performance with an F2-measure of about 0.716.

Box and whisker plots are created for each evaluated undersampling technique, showing that they generally have the same spread. It is encouraging to see that for the well-performing methods the boxes spread up around 0.8, and the mean and median for all three methods are around 0.7. This highlights that the distributions are skewing high and are let down on occasion by a few bad evaluations.

Box and Whisker Plot of Logistic Regression With Undersampling on the Imbalanced German Credit Dataset
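A hypothetical sketch of this undersampling comparison is shown below. The sampler class names come from the imbalanced-learn library; the file name and pipeline layout are assumptions carried over from the earlier sketches.

```python
# Hypothetical sketch: balanced logistic regression with five undersamplers,
# using the imbalanced-learn Pipeline so sampling happens only on training folds.
from pandas import read_csv
from sklearn.preprocessing import LabelEncoder, OneHotEncoder, MinMaxScaler
from sklearn.compose import ColumnTransformer
from sklearn.metrics import fbeta_score, make_scorer
from sklearn.model_selection import cross_val_score, RepeatedStratifiedKFold
from sklearn.linear_model import LogisticRegression
from imblearn.pipeline import Pipeline
from imblearn.under_sampling import (TomekLinks, EditedNearestNeighbours,
    RepeatedEditedNearestNeighbours, OneSidedSelection, NeighbourhoodCleaningRule)

df = read_csv('german.csv', header=None)               # assumed local copy
last_ix = len(df.columns) - 1
X, y = df.drop(last_ix, axis=1), LabelEncoder().fit_transform(df[last_ix])
cat_ix = X.select_dtypes(include=['object', 'bool']).columns
num_ix = X.select_dtypes(include=['int64', 'float64']).columns

samplers = {'TL': TomekLinks(),
            'ENN': EditedNearestNeighbours(),
            'RENN': RepeatedEditedNearestNeighbours(),
            'OSS': OneSidedSelection(),
            'NCR': NeighbourhoodCleaningRule()}

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
f2 = make_scorer(fbeta_score, beta=2)
for name, sampler in samplers.items():
    ct = ColumnTransformer([('c', OneHotEncoder(handle_unknown='ignore'), cat_ix),
                            ('n', MinMaxScaler(), num_ix)], sparse_threshold=0)
    model = LogisticRegression(solver='liblinear', class_weight='balanced')
    # encode/scale first, undersample next, then fit the classifier
    pipeline = Pipeline(steps=[('t', ct), ('s', sampler), ('m', model)])
    scores = cross_val_score(pipeline, X, y, scoring=f2, cv=cv, n_jobs=-1)
    print('%s %.3f (%.3f)' % (name, scores.mean(), scores.std()))
```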
Next, let's see how we might use a final model to make predictions on new data.

Further Model Improvements

This is a new section that provides a minor departure from the section above. Here, we will test specific models that result in a further lift in F2-measure performance, and I will update this section as new models are reported or discovered.

Improvement #1: InstanceHardnessThreshold

An F2-measure of about 0.727 can be achieved using balanced Logistic Regression with InstanceHardnessThreshold undersampling. The complete example is listed below.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

Running the example gives the following results.

Improvement #2: SMOTEENN

An F2-measure of about 0.730 can be achieved using LDA with SMOTEENN, where the ENN parameter is set to an ENN instance with sampling_strategy set to majority. The complete example is listed below.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

Running the example gives the following results.

Improvement #3: SMOTEENN with StandardScaler and RidgeClassifier

An F2-measure of about 0.741 can be achieved with further improvements to the SMOTEENN approach, using a RidgeClassifier instead of LDA and a StandardScaler for the numeric inputs instead of a MinMaxScaler. The complete example is listed below.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

Running the example gives the following results.

Can you do even better? Let me know in the comments below.

Make Prediction on New Data

Given the variance in results, a selection of any of the undersampling methods is probably sufficient. In this case, we will select logistic regression with Repeated ENN. This model had an F2-measure of about 0.716 on our test harness. We will use this as our final model and use it to make predictions on new data.

First, we can define the model as a pipeline. Once defined, we can fit it on the entire training dataset. Once fit, we can use it to make predictions for new data by calling the predict() function. This will return the class label of 0 for "good customer", or 1 for "bad customer".

Importantly, we must use the ColumnTransformer that was fit on the training dataset in the Pipeline to correctly prepare new data using the same transforms. For example:

To demonstrate this, we can use the fit model to make some predictions of labels for a few cases where we know whether the case is a good customer or bad. The complete example is listed below.

Running the example first fits the model on the entire training dataset. Then the fit model is used to predict the label of a good customer for cases chosen from the dataset file. We can see that most cases are correctly predicted. This highlights that although we chose a good model, it is not perfect.

Then some cases of actual bad customers are used as input to the model and the label is predicted. As we might have hoped, the correct labels are predicted for all cases.
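Below is a hypothetical sketch of fitting that final pipeline (Repeated ENN plus balanced logistic regression) and predicting a single new case. The file name and the placeholder input row are invented for illustration; the tutorial's own listing uses real rows taken from the dataset file.

```python
# Hypothetical sketch of the final model: fit on all data, predict a new row.
from pandas import read_csv, DataFrame
from sklearn.preprocessing import LabelEncoder, OneHotEncoder, MinMaxScaler
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from imblearn.pipeline import Pipeline
from imblearn.under_sampling import RepeatedEditedNearestNeighbours

df = read_csv('german.csv', header=None)               # assumed local copy
last_ix = len(df.columns) - 1
X, y = df.drop(last_ix, axis=1), LabelEncoder().fit_transform(df[last_ix])
cat_ix = X.select_dtypes(include=['object', 'bool']).columns
num_ix = X.select_dtypes(include=['int64', 'float64']).columns

ct = ColumnTransformer([('c', OneHotEncoder(handle_unknown='ignore'), cat_ix),
                        ('n', MinMaxScaler(), num_ix)], sparse_threshold=0)
model = LogisticRegression(solver='liblinear', class_weight='balanced')
pipeline = Pipeline(steps=[('t', ct),
                           ('s', RepeatedEditedNearestNeighbours()),
                           ('m', model)])

# fit on the entire training dataset; the ColumnTransformer inside the pipeline
# is fit here too, so new rows are prepared with exactly the same transforms
pipeline.fit(X, y)

# one new observation in the raw (un-encoded) column order of the dataset;
# these values are an invented placeholder, not a row from the real file
row = ['A11', 6, 'A34', 'A43', 1169, 'A65', 'A75', 4, 'A93', 'A101',
       4, 'A121', 67, 'A143', 'A152', 2, 'A173', 1, 'A192', 'A201']
yhat = pipeline.predict(DataFrame([row]))
print('Predicted: %d (0 = good customer, 1 = bad customer)' % yhat[0])
```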
- German Credit Dataset Description

Summary

In this tutorial, you discovered how to develop and evaluate a model for the imbalanced German credit classification dataset.

Specifically, you learned:

Do you have any questions? Ask your questions in the comments below and I will do my best to answer.

Very Nice Tutorial with code. Thanks!

fantastic notebook

Thanks.

Jason: Employing a 60/40 train-test approach, fitting the various model/sampling combinations to the training data set, and then predicting on the test data set, the best f2 scores appear to be 0.738 for RC RENN (min-max normalization) and 0.737 for LDA RENN (min-max normalization). Using RC SMOTEENN returns f2 scores of 0.732 (min-max normalization) and 0.707 (mean standardization). Of course, these scores will vary as a result of the randomness in the split. After having employed cross-val to identify promising models, I like to then apply them in train-test to produce confusion matrices. Confusion matrix heat maps help visualization of the trade-off between overall accuracy (in this case, generally in the 0.6 to 0.7 range) and the reduction in false negatives (and the shifting of misclassification errors to false positives) when optimizing based on the f2 measure. For example, in the case of RC SMOTEENN (min-max normalization), out of the 400 samples in the test set, 252 were classified correctly (overall accuracy = 0.63), but of the 148 misclassified samples, only 13 were false negatives and 135 were false positives.

Nice work Ron!

Fantastic work Jason… Thanks!

Hi Jason, Thank you for your tutorial! I understood that SMOTE techniques (for minority oversampling) are not applicable to image data because they only apply to tabular data, and we have similar techniques such as data augmentation for image data… OK. But what about undersampling techniques such as TL, ENN, RENN, OSS, NCR — can they not be used for image data either? And is there any similar counterpart for image data, something like the opposite of data augmentation? Regards

Most methods are based on kNN which would be a mess on pixel data. Maybe people are exploring similarity metrics in this context, I have no idea. You can achieve similar rebalancing effects with a carefully crafted data augmentation implementation – controlling the balance of samples yielded in each batch. This is where I would focus.

Hi Jason, Thanks for the wonderful articles. These help me a lot. I have a query. I had defined the below instructions:

num_pipeline = make_pipeline(SimpleImputer(strategy='median'), MinMaxScaler())
cat_pipeline = make_pipeline(OneHotEncoder())
column_transformer = make_column_transformer((num_pipeline, num_cols), (cat_pipeline, cat_cols))
from imblearn.combine import SMOTEENN
# define the data sampling
sampling = SMOTEENN(enn=EditedNearestNeighbours(sampling_strategy='majority', random_state=42))

When I transform the data step by step, i.e. first transform the data using the column transformer and then resample using SMOTEENN.
c_train_feature_df = column_transformer.fit_transform(train_feature_df) # Resampled data in the below line s_train_feature_df, s_train_target_df= sampling.fit_resample(c_train_feature_df, train_target_df) In this case score of dummy model is (‘dummy = DummyClassifier(strategy=’constant’, constant=1)’) Score of dummy model :- Mean F2:- 0.876 , standard deviation:- 0.000 Now, when I used all above pipelines & column transformer inside a new pipeline like below, model_pipeline = Pipeline(steps=[(‘ct’, column_transformer), \ (‘s’, sampling), (‘m’, dummy)]) I got below score of dummy model. Score of dummy model :- Mean F2:- 0.58 , standard deviation:- 0.000 Usually, I got this score, when I had not resampled the training data. I am confused, why combining all pipelines & column transformer are not giving the same results. Can you please help me in this? Nice work, sorry I cannot debug your code example: Hi Jason, Thank you for the notebook. I combine a cost-sensitive algorithm with a under sampling technique, then use a heuristic to improve the score. Specifically i use ridge classifier with weights and Edited Nearest Neighbors, the number of neighbors also was a parameter in the heuristic. The best result was 0.7503 in the F2-measure. Well done! Hi Jason Thanks a lot for sharing your knowledge. I found the tutorial quite enlightening, and I was able to get clarity on many issues I had on binary classification using imbalanced training datasets. To try and improve on your result and to achieve a better F2-Measure, I first used the CalibratedClassifierCV class to wrap the models. I set the method parameter to ‘isotonic’ and class_weight to ‘balanced’ for SVM and LR. From this, I obtained the following values: >LR 0.460 (0.069) >LDA 0.476 (0.083) >NB 0.374 (0.087) >GPC 0.354 (0.063) >SVM 0.485 (0.072 The only marginal improvement I noticed was for SVM and GPC. LR, LDA, and NB values were poor. I retained ‘isotonic’ as method and dropped class_weight argument for SVM and LR the results were as follows >LR 0.463 (0.072) >LDA 0.476 (0.083) >NB 0.374 (0.087) >GPC 0.354 (0.063) >SVM 0.442 (0.081) Clearly an inferior outcome for SVM as well as the first 3. I managed to record the most profound improvement in the skill of my balanced calibrated SVM using undersampling. The results of the evaluation of the SVM algorithm with five different undersampling techniques were as follows. >TL 0.516 (0.079) >ENN 0.683 (0.044) >RENN 0.714 (0.035) >OSS 0.517 (0.078) >NCR 0.674 (0.062) RENN and ENN achieved a massive improvement of the SVM over the baseline of 0.682. I found this remarkable given that without the probability calibration and undersampling of the SVM, the result was 0.436 (completely no skill) for the SVM. I am working on a binary classification project (credit scoring) using users’ mobile phone metadata. Any tips on what to watch out for or consider? Once more thanks a lot for sharing. Gideon Aswani Nice work! Notice we achieved an F2 of up to 0.741 (0.034) in the tutorial. This framework will help ensure you try a suite of different techniques and get the best from your dataset: Hi Jason Thanks again. Yes, I noticed in the tutorial you achieved 0.741(0.034). That was great. I will use the framework to improve on my work Thanks a lot for the tutorial, it is very helpful. I want to ask only one thing. Why don’t we use “DummyClassifier” model with undersampling methods and take the corresponding f2-score as a base model to outperform? 
If we undersample and use “DummyClassifier” model, we should get a F-2 score of around 0.83. So, no skill model (predicting everything 1) should have a lot higher f-score. We use the dummy classifier to establish a baseline. Any model that does better than it has “skill” on the problem. It is important that the dummy classifier is as simple as possible. Hi, Why do we need the “evaluate_model” function, specifically cross_val_score inside? Can’t we just use fbeta_score(y_true, y_pred, beta=2) and calculate f2 score directly? Thanks. Ulas You can calculate the score directly if you make predictions on a hold out dataset. I need to write the code that fulfills these 3 points: 1) Develop a prediction model to classify the customers as good or bad; 2) Cluster the customers into various groups; 3) Provide some ideas on how frequent pattern mining could be utilized to uncover some patterns in the data and/or to enhance the classification; a help would be appreciated. Perhaps this process will help: Hi Jason, Thanks a lot for this article. I am fairly new to this machine learning journey and your articles are great at explaining complex topic simplistically. Thanks again. With regards to this problem, I have managed to get a score of between 91-92% on train and test data, with below Logistic Regression parameters. This is without any undersampling. Below is the approach. It will be great if you could spot anything I am doing wrong, as I am suspicious of such high results. The only difference is that in preprocessing, I removed all quotes around categorical features, that is present in the dataset. Thanks again. Well done! That is a high score, let me schedule some time to check. I could not reproduce your result with that model, the best I could get was: I believe you accidentally reported accuracy instead of F2-Measure via a call to grid.score(). Perhaps that is the cause of your “91-92%” result You can see my complete code for the grid search listed below
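The grid-search code referred to above is not reproduced in this copy. Purely to illustrate the point about grid.score() reporting accuracy rather than the tutorial's metric, the sketch below shows one way to make a GridSearchCV optimize and report the F2-measure instead; the estimator and parameter grid are arbitrary placeholders, not the code mentioned in the comment.

```python
# Illustrative only: score a grid search with F2 instead of accuracy, so that
# best_score_ / grid.score() are comparable to the 0.682 baseline in the post.
from sklearn.metrics import fbeta_score, make_scorer
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold
from sklearn.linear_model import LogisticRegression

f2_scorer = make_scorer(fbeta_score, beta=2)
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
grid = GridSearchCV(
    estimator=LogisticRegression(solver='liblinear', class_weight='balanced'),
    param_grid={'C': [0.01, 0.1, 1, 10]},   # hypothetical grid
    scoring=f2_scorer,                       # F2, not the default accuracy
    cv=cv, n_jobs=-1)
# grid.fit(X_encoded, y)    # assumes already-encoded inputs as in earlier sketches
# print(grid.best_score_)   # now an F2 value rather than an accuracy
```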
https://machinelearningmastery.com/imbalanced-classification-of-good-and-bad-credit/
CC-MAIN-2021-31
refinedweb
4,815
55.24
Message 1 of 1, Nov 19, 2003

Has anyone got any experience writing a SOAP::Lite client that talks to a gSOAP server? I've used stubmaker.pl to create stub functions from my gSOAP-generated WSDL file, and this works fine... for methods that have simple input and output parameters (such as integers and strings). However, it doesn't seem to work very well for the methods that take and return complex data structures.

I've got myself rather dizzy reading through the SOAP::Lite man pages and the half-completed documentation on the web site... but managed to make a small amount of progress when I abandoned the stubmaker-generated library and started to roll my own commands using call() and SOAP::Data. The trouble is I'm getting really confused, and I'm getting some very odd behaviour. For example: I've switched trace on, and I've noticed that none of the generated XML elements corresponding to method names have got the right namespace... and me playing around with attr() only seems to make things worse.

Is anyone on this list prepared to give a complete SOAP::Lite newbie a helping hand?
https://groups.yahoo.com/neo/groups/soaplite/conversations/topics/3160?o=1&d=-1
CC-MAIN-2014-10
refinedweb
237
62.88
I am trying to add a dropdown list where users can filter results in the dataset on my page. In the dropdown list properties, I have checked "enabled by default". However, when I preview, the dropdown is shaded and doesn't function. Anyone have any idea what I'm missing? Thanks!

Hello Shannon, please share your page code below.

Hi, I'm having the same problem for the button under the "Antibuddy" tab. Here's the URL: And this is the code:

```javascript
import wixData from 'wix-data';

$w.onReady(function () {
    uniqueDropDown1();
});

function uniqueDropDown1() {
    wixData.query("Flow_Monkey_Antibodies")
        .limit(1000)
        .find()
        .then(results => {
            const uniqueTitles = getUniqueTitles(results.items);
            $w("#antigen").options = buildOptions(uniqueTitles);
        });

    function getUniqueTitles(items) {
        const titlesOnly = items.map(item => item.antigen);
        return [...new Set(titlesOnly)];
    }

    function buildOptions(uniqueList) {
        return uniqueList.map(curr => {
            return {label: curr, value: curr};
        });
    }
}
```

Same problem here! Has anyone found a workaround? Thanks and cheers!
https://www.wix.com/corvid/forum/community-discussion/dropdown-list-won-t-enable
CC-MAIN-2019-47
refinedweb
151
53.27
Since Pyarmor is written in the Python language, you need to install Python (the required version is at least 2.6). Pyarmor packages are available on the Python Package Index.

Pyarmor is a command line tool; the main script is "pyarmor.py". There is a GUI wizard, "wizard.py", which includes many examples and is useful for understanding all of the features of Pyarmor. In order to run the Pyarmor wizard, be sure you have installed the Python package Tkinter.

See the next chapter for the basic usage of pyarmor.py.

Extract the Pyarmor source package:

```
$ tar xzf pyarmor-2.5.4.tar.gz
```

Run Pyarmor:

```
$ python pyarmor.py
```

Start the Pyarmor wizard:

```
$ python wizard.py
```

On Windows, unzip the Pyarmor source package:

```
C:\> unzip pyarmor-2.5.3.zip
```

Run Pyarmor:

```
C:\pyarmor\> python pyarmor.py
```

Start the Pyarmor wizard:

```
C:\pyarmor\> python wizard.py
```

If you do not have Python, you can download an all-in-one installer. After installation, there is a desktop icon; double-click it and the Pyarmor Wizard will run.

The normal scenario for a developer using Pyarmor is covered by the next example, which shows how to use Pyarmor to encrypt scripts and how to distribute the encrypted scripts. To run an encrypted script, create a startup script startup.py like this:

```
import pyimcore
import pytransform
pytransform.exec_file('main.pye')
```

Then run startup.py as a normal Python script. You can read the source file pyarmor.py to learn the basic usage of the pytransform extension.

Generate a license.lic which will expire on Jan. 31, 2015:

```
$ python pyarmor.py license --with-capsule=project.zip \
    --expired-date 2015-01-31
```

This command will generate a new "license.lic" that will expire on Jan. 31, 2015.

Sometimes you want to run/import encrypted scripts bound to a fixed file, for example your ssh private key. You can generate a "license.lic" bound to a fixed file with the following steps.

Generate license.lic with your ssh private key file "id_rsa":

```
$ python pyarmor.py license --with-capsule=project.zip \
    --bind-file ~/.ssh/id_rsa /home/jondy/my_id_rsa
```

This command will generate a new "license.lic" bound to the fixed file "~/.ssh/id_rsa". After that:

- Copy "license.lic" together with your encrypted Python scripts to the target machine.
- Copy "~/.ssh/id_rsa" from your development machine to the target machine, and save it as /home/jondy/my_id_rsa.

On the target machine, Pyarmor will check "license.lic" by reading data from "/home/jondy/my_id_rsa" when running or importing encrypted Python scripts. You can specify any path or any name on the target machine, for example "/var/pyarmor/my_any_file", as long as it is the same as your bind file.

You can even mix an expired date and a fixed file or fixed key in the same license file. Generate license.lic with your ssh private key file "id_rsa", expiring at 2015-01-31:

```
$ python pyarmor.py license --with-capsule=project.zip \
    --expired-date 2015-01-31 --bind-file ~/.ssh/id_rsa /home/jondy/my_id_rsa
```

This command will generate a new "license.lic" bound to the fixed file "~/.ssh/id_rsa" that will expire on Jan. 31, 2015.
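As a small automation example, the license command documented above could be driven from Python with subprocess. This is a hypothetical convenience wrapper, not part of Pyarmor itself; the default capsule path and script names are assumptions, and only the exact CLI flags shown above are used.

```python
# Hypothetical helper: generate a fresh time-limited license.lic by calling the
# documented "pyarmor.py license" command via subprocess.
import subprocess
from datetime import date, timedelta

def make_expiring_license(capsule="project.zip", days=365,
                          pyarmor="pyarmor.py", python="python"):
    # compute an ISO-formatted expiry date relative to today
    expired = (date.today() + timedelta(days=days)).isoformat()
    cmd = [python, pyarmor, "license",
           "--with-capsule=%s" % capsule,
           "--expired-date", expired]
    # raises CalledProcessError if pyarmor reports a failure
    subprocess.check_call(cmd)
    return expired

if __name__ == "__main__":
    print("license.lic generated, expires", make_expiring_license(days=30))
```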
http://dashingsoft.com/pyarmor/pyarmor.html
CC-MAIN-2017-34
refinedweb
491
71.41
I have an ActionScript 2.0 script for Flash to record a webcam stream and copy it locally to my Flash Media Server. No big deal there. It is exactly what I need. Now here is the problem.

recordBTN is a movie clip in my library that controls the start and the stop of the recording. It's just one movie clip. On frame 1 there is a label marked startRec (hence the code in the ActionScript below). When you click startRec it begins recording the stream and the movie clip goes to frame 10 marked stopRec (again, hence the code in the ActionScript). The movie doesn't stop recording automatically, however. The button just changes the text from Start Recording to Stop Recording.

Here's the question. What EXACTLY do I do to add a countdown timer so people can see how much time they have left, and have this recording stop automatically once the countdown clock reaches 0 in case they don't hit stop recording? Below is the ActionScript. The correct winner will get an iPad. Not really. But you will get a big thanks from me because I am stumped.

```
import flash.external.*;

/* ================================= Audio ================================= */
var mic:Microphone = Microphone.get();
var sound:Sound = new Sound();
var isRecording:Boolean = false;

recordBtn.onRelease = function():Void {
    if( isRecording ) {
        // stop recording
        sound.setVolume( 100 );
        this.gotoAndStop( "startRec" );
        publish_ns.close();
        publish_ns.attachAudio( null );
        publish_ns = null;
        publish_ns = new NetStream( nc );
        publish_ns.attachVideo( Camera.get() );
    } else {
        // start recording
        this.gotoAndStop( "stopRec" );
        publish_ns.publish( getUniqueStreamName(), "append" );
        publish_ns.attachAudio( mic );
        sound.setVolume( 0 );
    }
    isRecording = !isRecording;
}
```
https://forums.adobe.com/thread/609371
CC-MAIN-2018-34
refinedweb
258
71.1
After. The example here is all hardcoded, and a true plugin would have to derive the details about the GsApp, for example reading in an XML file or YAML config file somewhere. So, code: #include <gnome-software.h> void gs_plugin_initialize (GsPlugin *plugin) { gs_plugin_add_rule (plugin, GS_PLUGIN_RULE_RUN_BEFORE, "icons"); } gboolean gs_plugin_add_installed (GsPlugin *plugin, GsAppList *list, GCancellable *cancellable, GError **error) { g_autofree gchar *fn = NULL; g_autoptr(GsApp) app = NULL; g_autoptr(AsIcon) icon = NULL; /* check if the app exists */ fn = g_build_filename (g_get_home_dir (), "chiron", NULL); if (!g_file_test (fn, G_FILE_TEST_EXISTS)) return TRUE; /* the trigger exists, so create a fake app */ app = gs_app_new ("example:chiron.desktop"); gs_app_set_management_plugin (app, "example"); gs_app_set_kind (app, AS_APP_KIND_DESKTOP); gs_app_set_state (app, AS_APP_STATE_INSTALLED); gs_app_set_name (app, GS_APP_QUALITY_NORMAL, "Chiron"); gs_app_set_summary (app, GS_APP_QUALITY_NORMAL, "A teaching application"); gs_app_set_description (app, GS_APP_QUALITY_NORMAL, "Chiron is the name of an application.\n\n" "It can be used to demo some of our features"); /* these are all optional */ gs_app_set_version (app, "1.2.3"); gs_app_set_size_installed (app, 2 * 1024 * 1024); gs_app_set_size_download (app, 3 * 1024 * 1024); gs_app_set_origin_ui (app, "The example plugin"); gs_app_add_category (app, "Game"); gs_app_add_category (app, "ActionGame"); gs_app_add_kudo (app, GS_APP_KUDO_INSTALLS_USER_DOCS); gs_app_set_license (app, GS_APP_QUALITY_NORMAL, "GPL-2.0+ and LGPL-2.1+"); /* create a stock icon (loaded by the 'icons' plugin) */ icon = as_icon_new (); as_icon_set_kind (icon, AS_ICON_KIND_STOCK); as_icon_set_name (icon, "input-gaming"); gs_app_set_icon (app, icon); /* return new app */ gs_app_list_add (list, app); return TRUE; } This shows a lot of the plugin architecture in action. Some notable points: - The application ID ( example:chiron.desktop) has a prefix of examplewhich means we can co-exist with any package or flatpak version of the Chiron application, not setting the prefix would make the UI confused if more than one chiron.desktopgot added. - Setting the management plugin means we can check for this string when working out if we can handle the install or remove action. - Most applications want a kind of AS_APP_KIND_DESKTOPto be visible as an application. - The origin is where the application originated from — usually this will be something like Fedora Updates. - The GS_APP_KUDO_INSTALLS_USER_DOCSmeans we get the blue “Documentation” award in the details page; there are many kudos to award to deserving apps. - Setting the license means we don’t get the non-free warning — removing the 3rd party warning can be done using AS_APP_QUIRK_PROVENANCE - The iconsplugin will take the stock icon and convert it to a pixbuf of the correct size. To show this fake application just compile and install the plugin, touch ~/chiron and then restart gnome-software. By filling in the optional details (which can also be filled in using gs_plugin_refine_app() (to be covered in a future blog post) you can also make the details page a much more exciting place. Adding a set of screenshots is left as an exercise to the reader. For anyone interested, I’m also slowly writing up these blog posts into proper docbook and uploading them with the gtk-doc files here. 
I think this documentation would have been really useful for the Endless and Ubuntu people a few weeks ago, so if anyone sees any typos or missing details please let me know. 4 thoughts on “External Plugins in GNOME Software (2)” const gchar ** gs_plugin_order_before (GsPlugin *plugin) { static const gchar *deps[] = { “icons”, NULL }; return deps; } Is that really the return type you want? The characters are constant, but the pointers to them are not. Suggestion: const gchar * const* Hey, good catch! I started to migrate the code to “const gchar * const*” and then figured, “what if the plugin wants to change the list at startup” — and apart from leaking memory there was no good fix. I’ll think more about this over the weekend. The type of deps can stay unchanged even if you change the function return type. I think I’ve fixed this in a much nicer way: it’s tons simpler now:
https://blogs.gnome.org/hughsie/2016/05/20/external-plugins-in-gnome-software-2/
CC-MAIN-2019-30
refinedweb
619
50.06
At Thu, 25 Aug 2005 17:57:00 -0400, Daniel Jacobowitz wrote: > On Thu, Aug 25, 2005 at 05:53:05PM +0200, Zlatko Calusic wrote: > > GOTO Masanori <gotom@debian.or.jp> writes: > > > > > At Thu, 25 Aug 2005 12:56:04 +0200, > > > Zlatko Calusic wrote: > > >> rc 1119 root mem REG 8,9 217016 228931 /var/db/nscd/passwd > > Note, this is a long-running bash. Not many people use file-rc (that's > what this is, right?) It explains why most people don't see this issue. Why does file-rc cause problems? > Does glibc open the nscd cache files directly rather than communicating > with it via a socket? Or does it communicate via shared memory? Quick look at the source, mmap is used to share database file with mmap MAP_SHARED, so the main communication should be done via a socket. > > rc 827 root mem REG 8,9 217016 228931 /var/db/nscd/passwd > > [Is /var/db even FHS?] FHS states as follows. 5.5.2 /var/lib/misc : Miscellaneous variable data This directory contains variable data not placed in a subdirectory in /var/lib. An attempt should be made to use relatively unique names in this directory to avoid namespace conflicts. Note that this hierarchy should contain files stored in /var/db in current BSD releases. These include locate.database and mountdtab, and the kernel symbol database(s). LDAP or DB data sometimes puts their db files on /var/lib/misc (I think "misc" is vague term, though). Actually debian/patches/fhs-linux-paths.dpatch contains the following changes: --- glibc-2.1.1/sysdeps/unix/sysv/linux/paths.h~ Thu May 27 13:16:33 1999 +++ glibc-2.1.1/sysdeps/unix/sysv/linux/paths.h Thu May 27 13:17:55 1999 @@ -71,7 +71,7 @@ /* Provide trailing slash, since mostly used for building pathnames. */ #define _PATH_DEV "/dev/" #define _PATH_TMP "/tmp/" -#define _PATH_VARDB "/var/db/" +#define _PATH_VARDB "/var/lib/misc/" Another chioce is to use /var/cache - it's for application specific caching data. But currently I use /var/db instead of /var/lib/misc - I have wondered this change is widely accepted. I would like to hear from all you guys about this placement. Regards, -- gotom
https://lists.debian.org/debian-glibc/2005/08/msg00645.html
CC-MAIN-2016-40
refinedweb
368
66.23
Definitions related to 6Lo node (6LN) functionality of the NIB.

Definition in file _nib-6ln.h.

#include <stdint.h>
#include "net/gnrc/ipv6/nib/conf.h"
#include "net/sixlowpan/nd.h"
#include "timex.h"
#include "xtimer.h"
#include "_nib-arsm.h"
#include "_nib-internal.h"

Additional (local) status to ARO status values to signify that ARO is not available in NA. Can be assigned to the variable that receives the return value of _handle_aro(), so that the case that an NA does not contain an ARO (e.g. because the upstream router does not support it) can be dealt with. Definition at line 63 of file _nib-6ln.h.

Calculates exponential backoff for RS retransmissions. Definition at line 87 of file _nib-6ln.h.

Handles ARO.

Handler for the GNRC_IPV6_NIB_REREG_ADDRESS event.

Resolves address statically from destination address using reverse translation of the IID. Returns true when the NCE was set, false when not.
http://riot-os.org/api/__nib-6ln_8h.html
CC-MAIN-2018-43
refinedweb
166
55.3
Archived: Delete Spam SMS using PySymbian

Have you received an SMS at midnight while you are in deep sleep? Have you received an SMS while you are in an important meeting and lost your concentration? Are you receiving SMSes which are of no use at all to you? Are you receiving SMSes with irritating content? If the answer to any of the above questions is YES, then you are suffering from SPAM SMSes.

Please see the content below about SMS spam from the popular site "Wikipedia":

"Fighting SMS spam is complicated by several factors, including the lower rate of SMS spam (compared to more abused services such as Internet email) has allowed many users and service providers to ignore the issue, and the limited availability of mobile phone spam-filtering software"

The SMS spam rate is lower, but it may increase as the number of mobile users increases throughout the world. It is useless to wait for the network service provider to take the necessary action on this issue when we have lots of programming weapons in our mobile to fight against SMS SPAM. One can come up with more dedicated work and continuous effort on this issue. I have made a small effort here to avoid SMS spam using a simple Python program. Let's see what's inside.

How the program works

The program is very simple. I have given two options for SMS spam removal. One is to add a name/number to the SPAM list, and the other is to start the SPAM SMS blocker. As soon as you add a name/number to the SPAM list, it is saved into a text file on the mobile storage device. The text file will be created automatically if it does not exist. Now run the second option to start the SPAM SMS blocker. It will monitor every incoming SMS, check the address of the sender, and compare it with each value in the text file. If it doesn't match any value in the text file, the SMS will be welcomed into the inbox. But if the sender is in our spammer list, it will be punished by deleting the SMS. Before deleting the SMS, a log entry is created showing which spam SMSes were deleted, so that you can review it whenever you wish.

Requirement

You need to install PySymbian on your mobile in order to run the program. There are several articles available about how to download and install it. Once you do this, make sure the spam list file ("spammer.txt") can be created on the path shown in the program; it is created automatically the first time you add a number from the menu.

Tested on device

This Python program is tested on a Nokia 5230.

Various options available

I have used a simple text file for reading and writing the spam SMS list. However, one can use e32db or e32dbm to make it more structured.
Python Code

```python
# import necessary libraries (PyS60 / Python 2 syntax)
import appuifw, inbox, e32, time

spammer = ""

# setting the application title
appuifw.app.title = u"SPAM SMS Blocker"

mylist = [u"Add a SPAM Number/Name", u"Start SPAM Blocker", u"Exit"]

app_lock = e32.Ao_lock()

def quit():
    # release the active-object lock when the user exits the application
    app_lock.signal()

def message_received(msg_id):
    # called for every incoming SMS; delete it if the sender is in our spam list
    box = inbox.Inbox()
    time.sleep(0.5)
    spam_address = box.address(msg_id)
    f = open('e:\\data\\python\\spammer.txt', 'r')
    while True:
        data = f.readline()
        if not data:
            # end of file reached, sender is not in the spam list
            break
        if data.strip() == spam_address.strip():
            # log the deletion so it can be reviewed later
            fs = open(u"e:\\data\\python\\spam_log.txt", "a+")
            print >> fs, "Spam Message deleted from : ", spam_address
            fs.close()
            box.delete(msg_id)
            break
    f.close()

while True:
    index = appuifw.popup_menu(mylist, u"Select")
    if index == 2:
        break
    if index == 0:
        spammer = appuifw.query(u"Type SPAM Number/Name:", "text")
        # opened in append mode, so the file is created if it does not exist
        f = open('e:\\data\\python\\spammer.txt', 'a+')
        print >> f, spammer
        f.close()
    if index == 1:
        box = inbox.Inbox()
        box.bind(message_received)
        appuifw.app.exit_key_handler = quit
        print "SPAM SMS Filter is ON"
        app_lock.wait()
```

Post Condition

Adding a spam number from the menu

SPAM filter is ON

After the spam filter is on, the script automatically deletes any SMS received from a spammer saved in our spam list.

Sample log file of deleted SPAM SMS
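As mentioned above, e32db or e32dbm could make the storage more structured than a flat text file. Below is a hypothetical sketch (PyS60 / Python 2 syntax) of keeping the spam list in an e32dbm database instead; e32dbm ships with PyS60 and exposes a dbm-style interface, but the file name and the exact open flags used here are assumptions.

```python
# Hypothetical sketch: keep the spam list in an e32dbm database (PyS60).
import e32dbm

DB_PATH = u"e:\\data\\python\\spammers.db"

def add_spammer(number):
    # "c" = create the database if missing, "f" = fast (unsynchronised) writes
    db = e32dbm.open(DB_PATH, "cf")
    try:
        db[number.strip()] = u"1"   # value unused; the keys act as a set
    finally:
        db.close()

def is_spammer(number):
    db = e32dbm.open(DB_PATH, "r")
    try:
        return db.has_key(number.strip())
    finally:
        db.close()
```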
http://developer.nokia.com/community/wiki/Archived:Delete_Spam_SMS_using_PySymbian
CC-MAIN-2014-35
refinedweb
699
65.93
Note: The project home for this project is:

The input for a command-line batch ingest of materials to DSpace is well documented and is called "Simple Archive Format"; however, there needs to be a tool that easily facilitates creating a Simple Archive Format package. The use case satisfied by the Simple Archive Format Packager is that someone has a spreadsheet filled with metadata as well as content files that are eventually destined for repository ingest. Thus the input to the Simple Archive Format Packager is a spreadsheet (.csv) that has the following columns:

- filename of the content file(s)
- namespace.element.qualifier metadata for the item. Examples would be: dc.description or dc.contributor.author

Further, dates need to be in ISO-8601 format in order to be properly recognized. And for any column that has multiple values, you can separate each entry with a double pipe "||". For example, for multiple files just set "filename" to "file1.pdf||file2.pdf||file3.pdf". Similarly, multiple "dc.subject" values can be separated by "||" as shown in the example below. While you are preparing the batch load, you have a directory containing a spreadsheet filled with metadata and content files.

Obtaining, Compiling, and Running SAFBuilder

The SAFBuilder project resides on GitHub. Please refer to the project instructions for how to install and run it. It requires Java JDK 7+ and runs from the terminal / command prompt. The ./safbuilder.sh command with no arguments will show the help screen:

```
Recompiling SAFBuilder, just a moment...
usage: SAFBuilder
 -c,--csv <arg>    Filename with path of the CSV spreadsheet. This must be
                   in the same directory as the content files
 -h,--help         Display the Help
 -m,--manifest     Initialize a spreadsheet, a manifest listing all of the
                   files in the directory, you must specify a CSV for -c
 -z,--zip          (optional) ZIP the output
```

There is sample data included with the tool to give an idea of how to use this. To run the tool over the sample data:

```
./safbuilder.sh -c /home/dspace/SAFBuilder/src/sample_data/AAA_batch-metadata.csv
```

This creates the SimpleArchiveFormat directory inside the directory specified, along with subdirectories, content files, and metadata files, immediately ready to be batch imported into DSpace. If you created a ZIP file of this, it can be imported into DSpace using the Batch Import UI. An example of a DSpace command-line import is:

```
sudo /dspace/bin/dspace import -a -e peter@longsight.com -c 1811/49710 -s /home/dspace/SAFBuilder/src/sample_data/SimpleArchiveFormat/ -m /home/dspace/SAFBuilder/src/sample_data/batch1.map
```

Further Work

This packager works as a stand-alone tool and requires some knowledge of Java to run. It satisfies the initial need to package many items to be batch loaded into DSpace using DSpace's item-import launcher. So the remaining goal of this project is to streamline the process of batch loading materials into DSpace. Possibilities include:

- refactoring so that it can become a Packager Plugin. Packager plugins allow you to implement a way for DSpace to accept an input package (containing content files, manifest, and metadata) that then creates DSpace items.
- creating a client GUI for the desktop.
- a dedicated web service

2 Comments

Pascal-Nicolas Becker: Another tool with the same purpose, written in Go, can be found here: (See for further information)

Adam Doan: We've been using a similar tool for this, implemented in Python:
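To make the expected CSV shape concrete, here is a small, hypothetical Python sketch that builds a minimal manifest for a folder of PDFs. The column names follow the filename / namespace.element.qualifier convention and the "||" multi-value separator described above; the metadata values themselves are invented placeholders, not part of SAFBuilder.

```python
# Hypothetical helper: write a minimal SAFBuilder-style CSV manifest for every
# PDF in a folder. Metadata values are placeholders to be edited afterwards.
import csv
import os

def write_manifest(folder, out_name="batch-metadata.csv"):
    rows = []
    for name in sorted(os.listdir(folder)):
        if not name.lower().endswith(".pdf"):
            continue
        rows.append({
            "filename": name,                        # multiple files: "a.pdf||b.pdf"
            "dc.title": os.path.splitext(name)[0],
            "dc.contributor.author": "Unknown",
            "dc.date.issued": "2018-01-01",          # ISO-8601 as required
            "dc.subject": "Example||Batch ingest",   # "||" separates multiple values
        })
    fieldnames = ["filename", "dc.title", "dc.contributor.author",
                  "dc.date.issued", "dc.subject"]
    out_path = os.path.join(folder, out_name)
    with open(out_path, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
    return out_path
```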
https://wiki.duraspace.org/display/DSPACE/Simple+Archive+Format+Packager?focusedCommentId=81954578
CC-MAIN-2018-39
refinedweb
577
56.15
# How to develop and publish a smart-contract in the Telegram Open Network (TON) What is this article about? --------------------------- In this article, I will tell about my participation in the first (out of two so far) Telegram blockchain contest. I didn't win any prize. However, decided to combine and share the unique experience I have had from the start to finish line, so my observations could help anyone who is interested. Since I didn't want to write some abstract code, instead make something useful. I created instant lottery smart-contract and website which shows smart-contract data directly from Telegram Open Network (TON) avoiding any middle storage layers. The article will be particularly useful for those, who want to write their first smart-contract for TON but don't know where to start. Using the lottery as an example, I will go from setting up the environment to publishing a smart contract, interacting with it. Moreover, I will create a website that will show smart-contract data. About participation in the contest ---------------------------------- In October 2019 Telegram organized a blockchain contest with new programming languages `Fift` and `FunC`. The developers were asked to write any smart-contract out of five suggested. I thought that it would be interesting to do something out of the ordinary: learn a new language and create smart-contract even if it can be the last time using this it. Besides this topic is in trend nowadays. It has to be mentioned that I have never had any sort of experience in writing the smart-contracts for blockchain networks. My plan was to participate until the very end. Then I was going to write a summary article, but I was not selected further than the first stage of the competition, where I have submitted workable version of [multi-signature wallet](https://contest.com/blockchain/entry380) written in `FunC`. Smart-contract was based on [Solidity](https://github.com/gnosis/MultiSigWallet/blob/master/contracts/MultiSigWallet.sol) example for Ethereum Network. I imagined that should make it, at this level of competition my work should be enough to gain a prize-winning spot. However, about 40 out of 60 participants happened to win and that excluded me. In general that is okay and could happen, but one thing bothered me: the review along with the test for my contract was not done, upon the announcement of the results. Thus, I asked other participants in the chat, if anyone else faced the same situation, there were none. I believe, my messages draw some attention and after two days judges published several comments. I still did not get it: did they skipped my smart-contract by accident during the evaluation period? Or was it such a bad work that they decided it does not worth any comment at all? I asked these questions on the contest page, unfortunately questions were ignored as well. Although, it is not a secret who the judge was, nevertheless writing a private messages would have been too much to ask. That being said it was decided to write this article that explains the subject in detail, since I have already spent plenty of time to understand how things work. In general, there is lack of information provided, so this article will save time for everyone who is interested. The concept of smart-contract in TON ------------------------------------ Before writing the first smart-contract we need to understand how to approach this thing in general. 
Therefore, now I will describe the complete set of the system, or more precisely which parts we should know in order to write at least some kind of functioning smart-contract. We will focus on writing smart-contract using `FunC` and `Fift`, which will be compiled into `Fift-assembler` and then will be executed in `TON Virtual Machine (TVM)`. Therefore, the article is more like a description of the development of a regular program. We will not dwell on how the blockchain platform works. How `Fift` and `TVM` work is well described in the official documentation. During the contest and now while writing current smart-contract I made a use of these docs. The main language for writing smart-contract is `FunC`. Currently there is no documentation available, therefore in order to develop something, we need to study the existing examples in the official repository, you may also find language implementation itself there and other participants' submissions from previous two contests. References are at the end of the article. Let's say we have written the smart-contract using `FunC` after that we compile `FunC` code into `Fift-assembler`. The compiled code should be published in TON. In order to be able to do that we should write `Fift` code, that takes as arguments smart-contract code and several other parameters and generates .boc file (which stands for "bag of cells") and depending on the way this code is written private key and address can be generated as well, based on the code of smart-contract. We can send grams to the generated address even if the smart-contract is not published yet. Since TON charges a fee for storage and transactions, before publishing a smart contract, you need to transfer grams to the generated address, otherwise, it will not be published. As means to publish smart-contract, we should send generated `.boc` file in TON using `lite-client` (details are below). After that, we can interact with the smart-contract by sending external messages (e.g.: using `lite-client`) or internal messages (e.g.: when one smart-contract sends a message to another). Now, when we understand the process of publishing smart-contract code, it becomes easier. We already have an idea of what exactly we want to create and how it should work. While we code we can use existing smart-contracts as a reference or check the implementation of `Fift` and `FunC` in the official repository or in the documentation. Frequently, I used Telegram-chat as a sourse searching by keywords, where during the contest all participants gathered along with Telegram employees and chatted about `Fift` and `FunC`. Link to chat in the end. Let's move to implementation. Preparing the environment to work with TON ------------------------------------------ Everything described below I have done in MacOS and re-tested in Ubuntu 18.04 LTS with Docker. The first thing we need to do is to install `lite-client` that will allow us to send requests to TON. Instruction on the website explains the process of the installation pretty clear. Here we just follow the instruction and install missing dependencies. I didn't compile each library separately instead installed them from the official Ubuntu repository (used `brew` on MacOS). ``` apt -y install git apt -y install wget apt -y install cmake apt -y install g++ apt -y install zlib1g-dev apt -y install libssl-dev ``` Once all dependencies installed we install `lite-client`, `Fift` and `FunC`. Let's clone the TON repository together with its dependencies. 
For convenience, we will run everything in `~/TON` folder. ``` cd ~/TON git clone https://github.com/newton-blockchain/ton.git cd ./ton git submodule update --init --recursive ``` The repository also contains `Fift` and `FunC` implementations. Now we are ready to build the project. We cloned the repository into `~/TON/ton` folder. In `~/TON` we should create `build` folder and execute the following. ``` mkdir ~/TON/build cd ~/TON/build cmake ../ton ``` Since we are going to write smart-contract we need more than just `lite-client` meaning: `Fift` and `FunC` too, therefore, we compile them as well. ``` cmake --build . --target lite-client cmake --build . --target fift cmake --build . --target func ``` Next, we should download a configuration file that contains information about TON node to which `lite-client` will connect. ``` wget https://newton-blockchain.github.io/global.config.json ``` First requests in TON --------------------- Now let's run installed `lite-client`. ``` cd ~/TON/build ./lite-client/lite-client -C global.config.json ``` If the installation was successful we will see `lite-client` connection logs. ``` [ 1][t 2][1582054822.963129282][lite-client.h:201][!testnode] conn ready [ 2][t 2][1582054823.085654020][lite-client.cpp:277][!testnode] server version is 1.1, capabilities 7 [ 3][t 2][1582054823.085725069][lite-client.cpp:286][!testnode] server time is 1582054823 (delta 0) ... ``` We can execute `help` and see available commands. ``` help ``` Let's list the commands that will be used in this article. ``` list of available commands: `last` Get last block and state info from server `sendfile` Load a serialized message from and send it to server `getaccount` [] Loads the most recent state of specified account; is in [:] format `runmethod` [] ... Runs GET method of account with specified parameters ``` Now we are ready to start writing the smart-contract. Implementation -------------- ### Idea As I described above the smart-contract that we are going to write is a lottery. Moreover, it is not just an ordinary lottery, where there is a necessity to wait an hour, a day or even more, it is an instant lottery, in which the player transfers `N` Grams to the given address and immediately receives back `2 * N` Grams or loses. Probability of winning we will set around 40%. If smart-contract do not have enough Grams to pay, we consider it as a top-up transaction. It is extremely important to see the bets in a real-time and in a convenient way so that the players could easily understand if they won or lost. To solve this we will create a website that will show bets' history directly from TON. ### Writing the smart-contract For convenience, I created a syntax highlighter for FunC language, the plugin could be found in the Visual Studio Code plugin search and code available on [GitHub](https://github.com/raiym/func-visual-studio-plugin). There is also a plugin for Fift language available and can be installed in the VSC. Right away we can create a git repository and commit interim results. To simplify our lives, we will write and test locally until the smart-contract is ready. Then once it is all set, we will publish it in TON. Smart-contract has two external methods that can be triggered. First is `recv_external()` this method is used when someone sends a request from external network (not within TON), for example when we send a message with `lite-client`. Second is `recv_internal()` this method is invoked when our contract receives message from some other contract within TON. 
In both cases, we can provide arbitrary parameters for these methods. Let's start with a simple example that will still work upon publishing, however will not carry actual payload. ``` () recv_internal(slice in_msg) impure { ;; TODO: implementation } () recv_external(slice in_msg) impure { ;; TODO: implementation } ``` Here we should explain what is the `slice`. All data stored in TON Blockchain is the collection of `TVM cells` or `cell` for short, in such cell you can store up to 1023 bits of information and up to 4 references to other cells. `TVM cell slice` or `slice` is part of the `cell` that is used for parsing the data, it will be more understandable later. Most important for us is that we can pass data in `slice` into smart-contract and depending on the message itself we can use `recv_internal()` and `recv_external()` methods to process it. `impure` — keyword indicates that a method changes data in the smart-contract persistance storage. Now let's save contract code in `lottery-code.fc` file and compile. ``` ~/TON/build/crypto/func -APSR -o lottery-compiled.fif ~/TON/ton/crypto/smartcont/stdlib.fc ./lottery-code.fc ``` The meaning of flags can be checked with `help` command. ``` ~/TON/build/crypto/func -help ``` Now we have compiled Fift-assembler code in a file named `lottery-compiled.fif`. ``` // lottery-compiled.fif "Asm.fif" include // automatically generated from `/Users/rajymbekkapisev/TON/ton/crypto/smartcont/stdlib.fc` `./lottery-code.fc` PROGRAM{ DECLPROC recv_internal DECLPROC recv_external recv_internal PROC:<{ // in_msg DROP // }> recv_external PROC:<{ // in_msg DROP // }> }END>c ``` We can run it locally. Let's configure the environment first. Note, that first line includes `Asm.fif` file, which contains an implementation of Fift-assembler using Fift language. Since we want to execute and test smart-contract locally we will create `lottery-test-suite.fif`, copy compiled code and change the last line, write smart-contract code into constant `code` than only we will be able to pass it to `TON Virtual Machine`. ``` "TonUtil.fif" include "Asm.fif" include PROGRAM{ DECLPROC recv_internal DECLPROC recv_external recv_internal PROC:<{ // in_msg DROP // }> recv_external PROC:<{ // in_msg DROP // }> }END>s constant code ``` So far it seems to be clear, now let's add the code to the same file that we will use to start TVM. ``` 0 tuple 0x076ef1ea , // magic 0 , 0 , // actions msg_sents 1570998536 , // unix_time 1 , 1 , 3 , // block_lt, trans_lt, rand_seed 0 tuple 100000000000000 , dictnew , , // remaining balance 0 , dictnew , // contract_address, global_config 1 tuple // wrap to another tuple constant c7 0 constant recv_internal // to run recv_internal() -1 constant recv_external // to invoke recv_external() ``` In `c7` constant we should write context with which TVM will be launched (or network status). During the contest, one developer showed how `c7` being formed and I just copied. In this article, we also might need to change `rand_seed` because it is used to generate random number and if the number will remain same during the tests, it will always return the same number. `recv_internal` and `recv_external` are just constants `0` and `-1` that will be responsible for executing the corresponding functions of the smart-contract. Now we are ready to write the first test for our empty smart-contract. For clarity, we will add all tests into the same file `lottery-test-suite.fif`. 
Let's create variable `storage` and write empty `cell` in it, `storage` will be the permanent storage of the smart-contract. `message` is the variable that we will pass to the smart-contract from the external environment (or as per documentation from "nowhere"). Let's make `message` empty for now. ``` variable storage **storage ! variable message **message !**** ``` Now, after we have prepared constants and variables we can run TVM with `runvmctx` and pass created parameters. ``` message @ recv_external code storage @ c7 runvmctx ``` As a result, we have [this](https://github.com/raiym/astonished/blob/4701eac89d535950c7abba85b6bbc99cf128d6f1/smartcontract/lottery-test-suite.fif) interim `Fift` code. And now let's run following code. ``` export FIFTPATH=~/TON/ton/crypto/fift/lib // export once ~/TON/build/crypto/fift -s lottery-test-suite.fif ``` The code should be executde without errors and we should be able to see the following logs. ``` execute SETCP 0 execute DICTPUSHCONST 19 (xC_,1) execute DICTIGETJMPZ execute DROP execute implicit RET [ 3][t 0][1582281699.325381279][vm.cpp:479] steps: 5 gas: used=304, max=9223372036854775807, limit=9223372036854775807, credit=0 ``` Great, we have just wrote the first workable version of the smart-contract along with the test. ### Processing external messages in a smart-contract Now let's start adding functionality. Let's start with processing messages from "nowhere" in `recv_external()`. How to structure the message is up to the developer. But usually, * At first, we want to protect smart-contract from the outside world and process messages only sent by the owner. * Secondly, when we send a valid message we want the smart-contract to process it only once even if we will send the same message more than once. Also known as "replay attack". Therefore, almost every smart-contract solves these two problems. Since our smart-contract will receive external messages we should take care of this. We will do it in the reverse order, we will solve the second issue and then move to the first. There are different ways to solve a replay attack problem. This is one of the options: in smart-contract, we will initialize the counter with `0` that will count the number of received messages. In each message, among other parameters we will send to smart-contract current counter value. If the counter value in the message doesn't match the counter value in the smart-contract we will reject such message. When it will match, we will process the message and increase the smart-contract counter by one. Let's go back to `lottery-test-suite.fif` and add the second test. We should send the wrong counter number and code must throw an exception. For example, smart-contract counter will be 166 and we will send 165. ``` **storage ! **message ! message @ recv\_external code storage @ c7 runvmctx drop exit\_code ! ."Exit code " exit\_code @ . cr exit\_code @ 33 - abort"Test #2 Not passed"**** ``` Let's execute. ``` ~/TON/build/crypto/fift -s lottery-test-suite.fif ``` And observe that test fails. 
``` [ 1][t 0][1582283084.210902214][words.cpp:3046] lottery-test-suite.fif:67: abort": Test #2 Not passed [ 1][t 0][1582283084.210941076][fift-main.cpp:196] Error interpreting file `lottery-test-suite.fif`: error interpreting included file `lottery-test-suite.fif` : lottery-test-suite.fif:67: abort": Test #2 Not passed ``` At this point `lottery-test-suite.fif` should be like [this](https://github.com/raiym/astonished/blob/4b66cbd7927d7b6de22a8872175b6b26f6fe62c6/smartcontract/lottery-test-suite.fif). Now let's add counter into the smart-contract in `lottery-code.fc`. ``` () recv_internal(slice in_msg) impure { ;; TODO: implementation } () recv_external(slice in_msg) impure { if (slice_empty?(in_msg)) { return (); } int msg_seqno = in_msg~load_uint(32); var ds = begin_parse(get_data()); int stored_seqno = ds~load_uint(32); throw_unless(33, msg_seqno == stored_seqno); } ``` Message sent to the smart-contract will be stored in `slice in_msg`. First, we should check if a message is empty, if so, we just exit execution. Next, we should start parsing the message. `in_msg~load_uint(32)` that loads 32 bits from the message unsigned integer number 165. Next, we should load 32 bits from the smart-contract storage. Then check if these two numbers are the same, if not exception should be thrown. In our case, since we have sent not matching counter, the exception will be thrown. Let's compile. ``` ~/TON/build/crypto/func -APSR -o lottery-compiled.fif ~/TON/ton/crypto/smartcont/stdlib.fc ./lottery-code.fc ``` Let's copy outcome into `lottery-test-suite.fif` code, considering to change the last line. Monitor, if the test is passed successfully. ``` ~/TON/build/crypto/fift -s lottery-test-suite.fif ``` [Here](https://github.com/raiym/astonished/commit/acb37602226898947ff024e2aa2892092c9f9d0a) is the commit with the current results. Note that every time we change the smart-contract code we need to copy compiled code into `lottery-test-suite.fif` which is inconvenient. Thus, we will create a small script, which will write compiled code into a constant. We will just have to `include` this file in `lottery-test-suite.fif`. In the project folder create `build.sh` file with the following code. ``` #!/bin/bash ~/TON/build/crypto/func -SPA -R -o lottery-compiled.fif ~/TON/ton/crypto/smartcont/stdlib.fc ./lottery-code.fc ``` Make it executable. ``` chmod +x ./build.sh ``` Now in order to compile the smart-contract we just need to run `build.sh` and it will generate `lottery-compiled.fif`. Besides that we need to write that into constant `code`. Let's add a script which will copy compiled `lottery-compiled.fif` file then change the last line, like we did manually before. write the content into `lottery-compiled-for-test.fif`. ``` # copy and change for test cp lottery-compiled.fif lottery-compiled-for-test.fif sed '$d' lottery-compiled-for-test.fif > test.fif rm lottery-compiled-for-test.fif mv test.fif lottery-compiled-for-test.fif echo -n "}END>s constant code" >> lottery-compiled-for-test.fif ``` Now let's execute resulting script and will get `lottery-compiled-for-test.fif`, which we will include in `lottery-test-suite.fif`. In `lottery-test-suite.fif` remove contract code and add `"lottery-compiled-for-test.fif" include` line. Run the tests to confirm that they are passing. ``` ~/TON/build/crypto/fift -s lottery-test-suite.fif ``` Great, now lets automate test execution by creating `test.sh`, which will firstly execute `build.sh` and then will run tests. 
```
touch test.sh
chmod +x test.sh
```

Write the following inside `test.sh`.

```
./build.sh

echo "\nCompilation completed\n"

export FIFTPATH=~/TON/ton/crypto/fift/lib
~/TON/build/crypto/fift -s lottery-test-suite.fif
```

Run it to confirm that the code compiles and the tests are still passing.

```
./test.sh
```

Excellent, now every time we run `test.sh`, the smart-contract is compiled and all the tests are run. Here is [the link to the commit](https://github.com/raiym/astonished/commit/7cbdd65de60292148421c89221f9f8f05bedbc23).

Before we continue, let's do one more thing. Create a folder named `build`, in which we will keep `lottery-compiled.fif` and `lottery-compiled-for-test.fif`. In addition, create a `test` folder, where we will keep `lottery-test-suite.fif` and other supporting files. Link to [the corresponding commit](https://github.com/raiym/astonished/commit/7f7630b760ee29719f25770dddd1f1ac8980fa24).

Let's continue with the smart-contract development. The next thing to do is another test, which sends the correct counter to the smart-contract; the contract should check it and then save the updated counter. We will get back to this a bit later.

Now let's think about the data structure: what data should be kept in the smart-contract's storage. Here is what we store.

```
`seqno` 32-bit unsigned integer used as the counter.
`pubkey` 256-bit unsigned integer storing a public key; with this key we will check the signature of the sent message (explanation follows).
`order_seqno` 32-bit unsigned integer storing the number of bets.
`number_of_wins` 32-bit unsigned integer storing the number of wins.
`incoming_amount` Gram-type value (the first 4 bits describe the length of the number, the rest is the number of Grams itself), storing the number of received Grams.
`outgoing_amount` Gram-type value, storing the number of Grams sent to winners.
`owner_wc` workchain id, a 32-bit (in some places said to be 8-bit) integer. Currently only two workchains are available, 0 and -1.
`owner_account_id` 256-bit unsigned integer, the smart-contract address within the current workchain.
`orders` dictionary storing the 20 most recent bets.
```

Then we write two convenience functions. The first one, `pack_state()`, packs the data for saving it into the smart-contract storage. The second one, `unpack_state()`, reads and parses the data from the storage.

```
_ pack_state(int seqno, int pubkey, int order_seqno, int number_of_wins, int incoming_amount, int outgoing_amount, int owner_wc, int owner_account_id, cell orders) inline_ref {
  return begin_cell()
    .store_uint(seqno, 32)
    .store_uint(pubkey, 256)
    .store_uint(order_seqno, 32)
    .store_uint(number_of_wins, 32)
    .store_grams(incoming_amount)
    .store_grams(outgoing_amount)
    .store_int(owner_wc, 32)
    .store_uint(owner_account_id, 256)
    .store_dict(orders)
    .end_cell();
}

_ unpack_state() inline_ref {
  var ds = begin_parse(get_data());
  var unpacked = (ds~load_uint(32), ds~load_uint(256), ds~load_uint(32), ds~load_uint(32), ds~load_grams(), ds~load_grams(), ds~load_int(32), ds~load_uint(256), ds~load_dict());
  ds.end_parse();
  return unpacked;
}
```

Add these at the beginning of the smart-contract. At this stage our `FunC` code should look like [this](https://github.com/raiym/astonished/blob/ea80450b55b7bd28d9e1f8e2856fda1da826f72a/smartcontract/lottery-code.fc).

To save the data we call the built-in `set_data()` function, which writes the data prepared by `pack_state()` into the smart-contract storage.
```
cell packed_state = pack_state(arg_1, .., arg_n);
set_data(packed_state);
```

Now that we have convenient functions to read and write the data, let's move on. We need to verify that an external message received by our smart-contract is signed by the holder of the private key. When we publish the smart-contract we can initialize its storage with pre-populated data, and we will pre-populate it with the public key, so that the contract can verify that a received message was signed by the corresponding private key.

Before we continue, let's create a private/public key pair. We will save the private key in `test/keys/owner.pk`. To do this, let's run Fift in interactive mode and execute the following commands.

```
`newkeypair` generates a private/public key pair and places them on the stack.
`drop` removes the top element from the stack.
`.s` just shows all stack elements.
`"owner.pk" B>file` writes the top element of the stack (in our case the private key) into the "owner.pk" file.
`bye` closes interactive mode.
```

In the `test` folder we should create a folder named `keys` and put the generated private key there.

```
mkdir test/keys
cd test/keys
~/TON/build/crypto/fift -i
newkeypair
 ok
.s
BYTES:128DB222CEB6CF5722021C3F21D4DF391CE6D5F70C874097E28D06FCE9FD6917 BYTES:DD0A81AAF5C07AAAA0C7772BB274E494E93BB0123AA1B29ECE7D42AE45184128
drop
 ok
"owner.pk" B>file
 ok
bye
```

Confirm that the created `owner.pk` file exists. Note: we have just dropped the public key from the stack, because whenever we need it, the public key can be derived from the private key.

Now we need to write the signature verification functionality. Let's begin with the test. First, we read the private key from the file with `file>B` and assign it to the `owner_private_key` variable. Second, we convert the private key into the public key using the `priv>pub` function and assign it to the `owner_public_key` variable.

```
variable owner_private_key
variable owner_public_key

"./keys/owner.pk" file>B owner_private_key !
owner_private_key @ priv>pub owner_public_key !
```

We will need both keys. Let's initialize the smart-contract storage: we fill the `storage` variable with arbitrary data, in the same order as in `pack_state()`.

```
variable owner_private_key
variable owner_public_key
variable orders
variable owner_wc
variable owner_account_id

"./keys/owner.pk" file>B owner_private_key !
owner_private_key @ priv>pub owner_public_key !
dictnew orders !
0 owner_wc !
0 owner_account_id !

// seqno, pubkey, order_seqno, number_of_wins, incoming, outgoing, owner_wc, owner_account_id, empty orders dict
<b 0 32 u, owner_public_key @ B, 0 32 u, 0 32 u, 0 4 u, 0 4 u, owner_wc @ 32 i, owner_account_id @ 256 u, 0 1 u, b> storage !
```

Next, let's form a signed message consisting of the signature and the counter value. First we create the data that we want to send, then we sign it with the private key, and finally we assemble the signed message.

```
variable message_to_sign
variable message_to_send
variable signature

<b 0 32 u, b> message_to_sign !
message_to_sign @ hashu owner_private_key @ ed25519_sign_uint signature !
<b signature @ B, 0 32 u, b> <s message_to_send !
```

The message which we are sending to the smart-contract is assigned to the `message_to_send` variable; the `hashu` and `ed25519_sign_uint` functions are described in the [Fift documentation](https://test.ton.org/fiftbase.pdf).

To run the test:

```
message_to_send @ recv_external code storage @ c7 runvmctx
```

The interim result is committed [here](https://github.com/raiym/astonished/commit/5a60cc0eaf52e1a5986f9838e52dd09f5a4c12d3). We can run the test and it will fail, so we will now change the smart-contract so that it can receive this type of message and verify the signature.
First we load the 512-bit signature from the message and write it into a variable; then we load the 32-bit counter. To parse the storage data we use the previously written `unpack_state()`. Next, we compare the received counter with the stored one and verify the signature; if something does not match, we throw the corresponding exception.

```
var signature = in_msg~load_bits(512);
var message = in_msg;
int msg_seqno = message~load_uint(32);

(int stored_seqno, int pubkey, int order_seqno, int number_of_wins, int incoming_amount, int outgoing_amount, int owner_wc, int owner_account_id, cell orders) = unpack_state();

throw_unless(33, msg_seqno == stored_seqno);
throw_unless(34, check_signature(slice_hash(in_msg), signature, pubkey));
```

The commit with the current changes is [here](https://github.com/raiym/astonished/commit/bf54ee0b7a487337004e92b0cb975e79885e9d04). If we run the tests now, we see that the second one fails. It fails during parsing for two reasons: the message we pass does not contain enough bits, and neither does the storage. So we need to copy the storage structure from the third test into the second one, and in the second test change the signature and the storage accordingly. The current state of the test file can be found [here](https://github.com/raiym/astonished/blob/3b44919c2fcd0a89f3bc63ada4d08f07f165c962/smartcontract/test/lottery-test-suite.fif).

Let's write the fourth test, in which we send a message signed by someone else's private key. We generate another private key, save it in `not-owner.pk`, and sign the message with this key. Run the tests and make sure they pass. Here is the commit with the [corresponding changes](https://github.com/raiym/astonished/commit/ce65dcf6239ef66844293d09855553920e1036f7).

Finally, we can start implementing the business logic. In `recv_external()` we will process two types of messages. Since our smart-contract accumulates the participants' Grams, these Grams need to be sent to the lottery owner. The lottery owner's address is saved in the storage when the smart-contract is initialized, but we also need the option to change this address in case it is ever required.

Let's start with changing the owner's address. We will write a test that checks that, when such a message is received, the new address is saved in the smart-contract storage. Note that in addition to the counter and the new address value, the message should include an `action`, a 7-bit unsigned integer; depending on its value we will choose how to process the message.

```
// seqno (32 bits), action 1 = change the owner's address (7 bits), new owner workchain (32 bits) and account id (256 bits); illustrative test values
<b 0 32 u, 1 7 u, 0 32 i, 777 256 u, b> message_to_sign !
```

In the test implementation we can see how deserialization of the smart-contract storage happens; the implemented test is in [this commit](https://github.com/raiym/astonished/commit/071d860967346ff9328dd5c4148a369293288067). Run the test and confirm that, for now, it fails.

Now let's implement the logic of changing the owner's address so that the test passes. In the smart-contract we continue parsing `message` and read the `action` number. As a reminder, we will have two `action`s: changing the owner's address and sending Grams to the owner. Then we read the new owner's address and save it in the storage.

Run the tests and observe that the third one now fails. This happens because the smart-contract code now parses an additional 7 bits from the message, which are not included in the message we send in the third test. Let's add the missing `action` to that message.
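For reference, here is a minimal sketch of what the `action` dispatch inside `recv_external()` might look like at this point. The action codes (1 for changing the owner's address, 2 for sending Grams) and the exact field order are assumptions made for illustration; the actual implementation is in the commits linked above.

```
;; sketch: inside recv_external(), after the seqno and signature checks
int action = message~load_uint(7);

if (action == 1) {
    ;; assumed action 1: change the owner's address
    int new_owner_wc = message~load_int(32);
    int new_owner_account_id = message~load_uint(256);
    accept_message();
    set_data(pack_state(stored_seqno + 1, pubkey, order_seqno, number_of_wins,
        incoming_amount, outgoing_amount, new_owner_wc, new_owner_account_id, orders));
    return ();
}
;; assumed action 2 (sending Grams to the owner) is handled in the next section of the article
```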
After that we can run the tests and confirm that all of them pass. [Here](https://github.com/raiym/astonished/commit/2a388bed854bbb944953689bf736fb47b6f02c24) is the commit with the described changes.

Great. Now let's write the logic for sending a requested number of Grams to the owner's address. We will write two tests: the first for when the lottery balance is not enough to make the transfer, and the second for the opposite case, when the amount is sufficient. [Here](https://github.com/raiym/astonished/commit/82e19c9428440301d167626e70e807f3e456fe64) is the commit with the updated tests.

Let's add the code. We start by adding two convenience methods. The first one is the get-method that returns the remaining balance of the smart-contract.

```
int balance() inline_ref method_id {
  return get_balance().pair_first();
}
```

The second method sends Grams to an arbitrary address. I copied this method from another smart-contract.

```
() send_grams(int wc, int addr, int grams) impure {
  ;; int_msg_info$0 ihr_disabled:Bool bounce:Bool bounced:Bool src:MsgAddress -> 011000
  cell msg = begin_cell()
    ;; .store_uint(0, 1) ;; 0 <= format indicator int_msg_info$0
    ;; .store_uint(1, 1) ;; 1 <= ihr disabled
    ;; .store_uint(1, 1) ;; 1 <= bounce = true
    ;; .store_uint(0, 1) ;; 0 <= bounced = false
    ;; .store_uint(4, 5) ;; 00100 <= address flags, anycast = false, 8-bit workchain
    .store_uint (196, 9)
    .store_int(wc, 8)
    .store_uint(addr, 256)
    .store_grams(grams)
    .store_uint(0, 107) ;; 106 zeroes + 0 as an indicator that there is no cell with the data.
    .end_cell();
  send_raw_message(msg, 3); ;; mode, 2 for ignoring errors, 1 for sender pays fees, 64 for returning inbound message value
}
```

We add these two methods to the smart-contract and write the business logic. First we parse the number of Grams from the message. Then we check the remaining balance and, if it is not enough, throw an exception. If everything is in order, we send the Grams to the saved address and update the counter.

```
int amount_to_send = message~load_grams();
throw_if(36, amount_to_send + 500000000 > balance());

accept_message();
send_grams(owner_wc, owner_account_id, amount_to_send);
set_data(pack_state(stored_seqno + 1, pubkey, order_seqno, number_of_wins, incoming_amount, outgoing_amount, owner_wc, owner_account_id, orders));
```

[Here](https://github.com/raiym/astonished/commit/f2b18144b0b8e4ee4f8caab85ce40e8fee9e1b69) is the commit with the described changes. Let's run the tests and confirm that they pass.

By the way, the smart-contract pays a fee in Grams every time it processes an accepted message (for storage and computing power). In order for a valid external message to be fully processed, `accept_message()` should be called after the basic checks.

### Processing internal messages to the smart-contract

Now let's work on internal messages. In essence, we will receive Grams and send back twice the amount if the player wins, or send 1/3 of the amount to the owner if the player loses.

Let's write a simple test. For that, we need a test smart-contract address from which we will send Grams to the lottery smart-contract. Any smart-contract address consists of two parts: a 32-bit integer identifying the workchain, and a 256-bit unsigned integer that is the unique account number in that workchain, for example -1 and 12345. We will save this address in a file.
I have copied the function for saving an address to a file from [`TonUtil.fif`](https://github.com/ton-blockchain/ton/blob/efd47af432bb347118b80a2efc10ad595bcf2b6a/crypto/fift/lib/TonUtil.fif).

```
// ( wc addr fname -- )  Save address to file in 36-byte format
{ -rot 256 u>B swap 32 i>B B+ swap B>file } : save-address
```

Let's figure out how this function works; it will help us understand Fift. Run Fift in interactive mode.

```
~/TON/build/crypto/fift -i
```

First, we put the number -1, the number 12345 and the string "sender.addr" on the stack.

```
-1 12345 "sender.addr"
```

Next, we run `-rot`, which rotates the three top elements of the stack so that the unique account number of the smart-contract ends up on top.

```
"sender.addr" -1 12345
```

`256 u>B` converts the 256-bit unsigned integer into bytes.

```
"sender.addr" -1 BYTES:0000000000000000000000000000000000000000000000000000000000003039
```

`swap` swaps the two top elements of the stack.

```
"sender.addr" BYTES:0000000000000000000000000000000000000000000000000000000000003039 -1
```

`32 i>B` converts the 32-bit integer into bytes.

```
"sender.addr" BYTES:0000000000000000000000000000000000000000000000000000000000003039 BYTES:FFFFFFFF
```

`B+` concatenates the two byte sequences into one.

```
"sender.addr" BYTES:0000000000000000000000000000000000000000000000000000000000003039FFFFFFFF
```

Again `swap`.

```
BYTES:0000000000000000000000000000000000000000000000000000000000003039FFFFFFFF "sender.addr"
```

Finally, `B>file` takes two parameters, the bytes and a string that will be the file name, and writes the bytes into a file named `sender.addr`. The file is created in the current folder; we should move it into `test/addresses/`.

Let's write a test which emulates sending Grams from the address that we have just created. [Here](https://github.com/raiym/astonished/commit/9fb91dd051c483395ddb36c777407776ebf70171) is the commit.

Now let's work on the business logic of the lottery's internal messages. First, we check whether the received message is `bounced` or not; if it is `bounced`, we ignore it (`bounced` means the smart-contract would return the Grams if an error happened during execution; we will not return Grams in that case). Next, we check the amount of Grams sent: if it is less than half a Gram, we simply accept the message without doing anything else. Then we parse the address of the smart-contract the message came from, read the data from the storage, and remove old orders from the history if there are already more than twenty of them. For convenience, I wrote three additional functions: `pack_order()`, `unpack_order()`, `remove_old_orders()`.

Next, we check the remaining smart-contract balance: if there are not enough Grams to pay out a potential win, we consider the transfer not a bet but a top-up, and save it in `orders`. Finally, the heart of the smart-contract. If the player loses, we save the order in the bet history and, if the order amount is higher than 3 Grams, send 1/3 of it to the owner of the lottery. If the player wins, we send double the amount back to the player and save the order in the history.
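The article does not list these three helpers, so below is a rough sketch of how `pack_order()` and `remove_old_orders()` could be written, matching the way they are called in `recv_internal()` further down. The field widths, the status codes (1 = top-up, 2 = win, 3 = loss) and the 20-order limit are my assumptions for illustration; the real implementations live in the linked commit, and `unpack_order()` would simply mirror `pack_order()` with the corresponding `load_*` calls.

```
builder pack_order(int seqno, int status, int time, int amount, int src_wc, int src_addr) inline_ref {
    ;; status: 1 = top-up, 2 = win, 3 = loss (as used in recv_internal below)
    return begin_cell()
        .store_uint(seqno, 32)
        .store_uint(status, 4)
        .store_uint(time, 32)
        .store_grams(amount)
        .store_int(src_wc, 32)
        .store_uint(src_addr, 256);
}

cell remove_old_orders(cell orders, int order_seqno) inline_ref {
    ;; keep only the 20 most recent orders in the dictionary
    int limit = order_seqno - 20;
    if (limit <= 0) {
        return orders;
    }
    int key = -1;
    int go_on = -1;  ;; -1 is "true" in FunC
    while (go_on) {
        (int next_key, slice val, int found) = orders.udict_get_next?(32, key);
        if (found) {
            if (next_key < limit) {
                orders~udict_delete?(32, next_key);
                key = next_key;
            } else {
                go_on = 0;  ;; remaining orders are recent enough
            }
        } else {
            go_on = 0;  ;; dictionary exhausted
        }
    }
    return orders;
}
```

With these helpers in place, the full `recv_internal()` looks as follows.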
```
() recv_internal(int order_amount, cell in_msg_cell, slice in_msg) impure {
  var cs = in_msg_cell.begin_parse();
  int flags = cs~load_uint(4);  ;; int_msg_info$0 ihr_disabled:Bool bounce:Bool bounced:Bool
  if (flags & 1) { ;; ignore bounced
    return ();
  }
  if (order_amount < 500000000) { ;; just receive grams without changing state
    return ();
  }
  slice src_addr_slice = cs~load_msg_addr();
  (int src_wc, int src_addr) = parse_std_addr(src_addr_slice);

  (int stored_seqno, int pubkey, int order_seqno, int number_of_wins, int incoming_amount, int outgoing_amount, int owner_wc, int owner_account_id, cell orders) = unpack_state();

  orders = remove_old_orders(orders, order_seqno);

  if (balance() < 2 * order_amount + 500000000) { ;; not enough grams to pay the bet back, so this is re-fill
    builder order = pack_order(order_seqno, 1, now(), order_amount, src_wc, src_addr);
    orders~udict_set_builder(32, order_seqno, order);
    set_data(pack_state(stored_seqno, pubkey, order_seqno + 1, number_of_wins, incoming_amount + order_amount, outgoing_amount, owner_wc, owner_account_id, orders));
    return ();
  }

  if (rand(10) >= 4) {
    builder order = pack_order(order_seqno, 3, now(), order_amount, src_wc, src_addr);
    orders~udict_set_builder(32, order_seqno, order);
    set_data(pack_state(stored_seqno, pubkey, order_seqno + 1, number_of_wins, incoming_amount + order_amount, outgoing_amount, owner_wc, owner_account_id, orders));
    if (order_amount > 3000000000) {
      send_grams(owner_wc, owner_account_id, order_amount / 3);
    }
    return ();
  }

  send_grams(src_wc, src_addr, 2 * order_amount);

  builder order = pack_order(order_seqno, 2, now(), order_amount, src_wc, src_addr);
  orders~udict_set_builder(32, order_seqno, order);
  set_data(pack_state(stored_seqno, pubkey, order_seqno + 1, number_of_wins + 1, incoming_amount, outgoing_amount + 2 * order_amount, owner_wc, owner_account_id, orders));
}
```

That is all. Here is the [corresponding commit](https://github.com/raiym/astonished/commit/a47ae9fb7292f2f2a940a8ca83501b2c04a1a57d).

### Writing get-methods

Let's create the get-methods that will allow us to request information about the smart-contract's state from the outside world (in fact, these methods simply parse and return the storage data). [Here](https://github.com/raiym/astonished/commit/a71bccfe554a5dae03c7c11f44ab2185707be155) is the commit with the added get-methods; later on we will see how they are used.

I had forgotten to add the code that processes the very first request, made when we publish the smart-contract; here is the [corresponding commit](https://github.com/raiym/astonished/commit/f42b3f71f01579007fe2d9643328e3b6a1588fa4). I also [fixed](https://github.com/raiym/astonished/commit/7eab0eed3db2fec245a07e716b8bfa5a45c36363) the bug with sending 1/3 of the Grams to the owner.

### Publishing smart-contract to the TON

Now we need to publish the created smart-contract. Let's create a folder named `requests` in the project root. As the base for the publishing code I took the simple wallet [publishing code](https://github.com/ton-blockchain/ton/blob/master/crypto/smartcont/new-wallet.fif) from the official repository and altered it a bit.

Here is what we should pay attention to: we form the smart-contract's initial storage and its first message. After that, the address of the smart-contract is generated, so it is known even before deploying to TON. Next, we need to transfer several Grams to the generated address, and only after that can we deploy the generated `.boc` file with the smart-contract code.
Because as we already mentioned network charges a fee for the storage and processing time. Here is [deploy code](https://github.com/raiym/astonished/blob/master/smartcontract/requests/new-lottery.fif). Then we can run `new-lottery.fif` and generate `lottery-query.boc` file and `lottery.addr`. ``` ~/TON/build/crypto/fift -s requests/new-lottery.fif 0 ``` Let's not forget to save `lottery.pk` and `lottery.addr`. Also, among other things, we will see the smart-contract address. ``` new wallet address = 0:044910149dbeaf8eadbb2b28722e7d6a2dc6e264ec2f1d9bebd6fb209079bc2a (Saving address to file lottery.addr) Non-bounceable address (for init): 0QAESRAUnb6vjq27KyhyLn1qLcbiZOwvHZvr1vsgkHm8Ksyd Bounceable address (for later access): kQAESRAUnb6vjq27KyhyLn1qLcbiZOwvHZvr1vsgkHm8KpFY ``` For the sake of interest let's make a request to TON. ``` $ ./lite-client/lite-client -C global.config.json getaccount 0QAESRAUnb6vjq27KyhyLn1qLcbiZOwvHZvr1vsgkHm8Ksyd ``` And observe that account with this address is empty. ``` account state is empty ``` Now let's transfer 2 Grams to our smart-contract address `0QAESRAUnb6vjq27KyhyLn1qLcbiZOwvHZvr1vsgkHm8Ksyd` and after a few seconds make the same request. I have used [official TON wallet](https://wallet.ton.org) and test Grams could be requested in the group chat, a link will be shared at the end of this article. ``` > last > getaccount 0QAESRAUnb6vjq27KyhyLn1qLcbiZOwvHZvr1vsgkHm8Ksyd ``` Observe that the smart-contract has changed its status from empty to uninitialized (`state:account_uninit`) with a balance of 2 000 000 000 nanograms. ``` account state is (account addr:(addr_std anycast:nothing workchain_id:0 address:x044910149DBEAF8EADBB2B28722E7D6A2DC6E264EC2F1D9BEBD6FB209079BC2A) storage_stat:(storage_info used:(storage_used cells:(var_uint len:1 value:1) bits:(var_uint len:1 value:103) public_cells:(var_uint len:0 value:0)) last_paid:1583257959 due_payment:nothing) storage:(account_storage last_trans_lt:3825478000002 balance:(currencies grams:(nanograms amount:(var_uint len:4 value:2000000000)) other:(extra_currencies dict:hme_empty)) state:account_uninit)) x{C00044910149DBEAF8EADBB2B28722E7D6A2DC6E264EC2F1D9BEBD6FB209079BC2A20259C2F2F4CB3800000DEAC10776091DCD650004_} last transaction lt = 3825478000001 hash = B043616AE016682699477FFF01E6E903878CDFD6846042BA1BFC64775E7AC6C4 account balance is 2000000000ng ``` Now we can deploy the smart-contract. Let's run `lite-client` and execute the following. ``` > sendfile lottery-query.boc [ 1][t 2][1583008371.631410122][lite-client.cpp:966][!testnode] sending query from file lottery-query.boc [ 3][t 1][1583008371.828550100][lite-client.cpp:976][!query] external message status is 1 ``` Confirm that the smart-contract has been published. ``` > last > getaccount 0QAESRAUnb6vjq27KyhyLn1qLcbiZOwvHZvr1vsgkHm8Ksyd ``` We can observe other things in the log. ``` storage:(account_storage last_trans_lt:3825499000002 balance:(currencies grams:(nanograms amount:(var_uint len:4 value:1987150999)) other:(extra_currencies dict:hme_empty)) state:(account_active ``` Finally, we can see `account_active`. The corresponding commit is [here](https://github.com/raiym/astonished/commit/aa99cd11b48f88b2957d87bb6a54e10505ead92f). ### Sending external messages Now let's create requests to interact with the smart-contract. We support two: sending grams to the owner and changing owner's smart-contract address. We need to make the same request as in the test #6. 
This is the message that we will send to the smart-contract: `msg_seqno` 165, `action` 2, and 9.5 Grams to be sent to the owner. We must remember to sign the message with the private key `lottery.pk`, which was generated earlier. Here is [the corresponding commit](https://github.com/raiym/astonished/commit/3ed9f1811c3a01be6560d03b6faf70d6744d222f).

### Getting the information from a smart-contract using the get-methods

Now let's see how to run the get-methods of a smart-contract. Run `lite-client` and invoke `runmethod` with the smart-contract address and the desired get-method.

Getting the current balance with the `balance` get-method:

```
$ ./lite-client/lite-client -C ton-lite-client-test1.config.json
> runmethod 0QAESRAUnb6vjq27KyhyLn1qLcbiZOwvHZvr1vsgkHm8Ksyd balance
arguments:  [ 104128 ]
result:  [ 64633878952 ]
...
```

And the orders' history with `get_orders`:

```
> runmethod 0QAESRAUnb6vjq27KyhyLn1qLcbiZOwvHZvr1vsgkHm8Ksyd get_orders
...
arguments:  [ 67442 ]
result:  [ ([0 1 1583258284 10000000000 0 74649920601963823558742197308127565167945016780694342660493511643532213172308] [1 3 1583258347 4000000000 0 74649920601963823558742197308127565167945016780694342660493511643532213172308] [2 1 1583259901 50000000000 0 74649920601963823558742197308127565167945016780694342660493511643532213172308]) ]
```

We will use `lite-client` and these get-methods to show the data from the smart-contract.

### Showing the data of the smart-contract on the website

I have written a website in Python that shows the information from the smart-contract in a convenient layout. I will not dwell on it in detail here; all the changes are in [one commit](https://github.com/raiym/astonished/commit/79561ad90d2ee532079db182ac89bd1d3995eca3). Requests to TON are made by calling `lite-client` from the Python code. For convenience, everything is packed in Docker and deployed on Digital Ocean. [Website link](https://astonished-d472d.ondigitalocean.app).

### Making a bet

Now let's transfer 64 Grams to our lottery to top it up, using the official [wallet](https://ton.org/wallets), and make some bets for clarity. We can see that the information on [the website](https://ton-lottery.appspot.com) updates, and we can observe the orders' history, the current winning rate, and other useful information taken directly from the smart-contract.

Afterword
---------

The article turned out way longer than I expected. Maybe it could be shorter, or maybe this length is just right for a person who knows nothing about TON and wants to write and publish a not-so-trivial smart-contract with the ability to interact with it. Maybe some things could be explained more simply.

Some parts could be implemented more efficiently; for example, we could read the orders' history from the blockchain itself and avoid storing it inside the smart-contract, but then we would not have been able to show how a FunC dictionary works. Since I could have made a mistake somewhere or misunderstood something, you should also rely on the official documentation and the official TON code repository.

It should be mentioned that TON is still in active development, and incompatible changes could be made that would break some of the steps in this article (which already happened and has been fixed). I will not speculate about TON's future here. Maybe it will become something really big and we should spend time learning it and start creating TON-based products, or maybe not.

There is also Libra by Facebook, whose potential audience of users is even greater than TON's.
I don't know anything about Libra, but judging by [the official community](http://community.libra.org) it is more active than the TON community. TON developers and the community feel more like an underground movement, which is also cool in its own way.

Links
-----

1. Official TON website: <https://ton.org>
2. Official TON repository: <https://github.com/newton-blockchain/ton>
3. Official TON wallet for different platforms: <https://ton.org/wallets>
4. Lottery smart-contract discussed in this article: <https://github.com/raiym/astonished>
5. Website of the lottery: <https://astonished-d472d.ondigitalocean.app>
6. Visual Studio Code syntax highlighter for FunC: <https://github.com/raiym/func-visual-studio-plugin>
7. Link to the Telegram chat devoted to TON: <https://t.me/tondev>
8. The first stage of the contest: <https://contest.com/blockchain>
9. The second stage of the contest: <https://contest.com/blockchain-2>

**July 7, 2020:** Sadly, in light of the latest Telegram statement about the test TON network servers being discontinued, I decided to stop supporting the website with the smart-contract data. The smart-contract and the server code are still accessible on GitHub.

**January 28, 2022:** Article updated with current information and checked for relevance.
https://habr.com/ru/post/494528/
null
null
7,525
57.77
render_template(name, pad=None, this=None, values=None, alt=None)

Whenever Lektor needs to render a template, it will use this exact method. Here are the parameters and what they mean:

- name: the name of the template that should be rendered. It's the local filename relative to the `templates` folder and uses slashes for paths.
- pad: when a Pad is available, it should be provided so that the `site` variable can be populated. If a context is available then the pad will also be pulled from the context if needed.
- this: the value of the `this` variable in templates. This should always be the closest renderable thing. Typically this is a Record or flow block or something similar.
- values: optional additional variables can be provided as a dictionary here.
- alt: this can override the default selected alt. If not provided it's discovered from `this` and it will default to `_primary` if no other information can be found.

```
from lektor.project import Project

project = Project.discover()
env = project.make_env(load_plugins=False)
pad = env.new_pad()
rv = env.render_template('hello.html', pad=pad, this={
    'title': 'Demo Object'
})
```
https://www.getlektor.com/docs/api/environment/render-template/
CC-MAIN-2018-17
refinedweb
183
60.92
Compiz Project Releases C++ Based v0.9.0 237." Wow! (Score:5, Funny) I'm excited to learn about more software using this new programming language of the future! Re: (Score:2) Jealous of first poster or what? Re:Wow! (Score:4, Funny) troll or idiot? People with preternatural foresight will often look like the idiot or a fool. I think the grand parent sees the potential of C++ and a bright future for this new and advanced language! Re: (Score:2) Truer words haven't been spoken! I am filled with jubilant delight to hear that the Compiz team could exploit the wildly successful merge of the object-oriented and functional programming paradigms of C++! Great - Time to hold off upgrading Compiz (Score:2) The language and dependency changes aside, how much do you want to bet there will be problems in every package dist. Re: (Score. Nobody should be putting Compiz 0.9 into a shipping distribution. Hopefully by the time 0.10 comes out they'll have it unfucked again. Fedora might do it, of course. But I don't see it until some point releases have gone by.: (Score:2) Sure would be nice if they would go 1.0 instead of .10 if it's going to be a stable release... 1.0 = 100%. When they reach 1.0, there can never ever be any more releases. ;) Speed (Score:2) Would the coding switch gain any speed increase? Since having the enforced change from the ultra fast, ultra stable Beryl to the not very fast Compiz, I have not been very impressed with Compiz. The developers told me they didn't change anything to get the Beryl fork back into Compiz, but the fact on _MY_ system is simple. With Beryl I could run whatever effect I wanted and even multiple effects at the same time, and the CPU was barely used, about 98% of the work was offloaded to the graphics card. Now with C Re: (Score:2) You're not going to see any speed gain from *just* switching to C++ from C. A direct translation of code from C to some other language invariably never accomplishes this. The compilation of Compiz will also be slower if it was just a language change, anyway.*** *** Unless the authors also did a major refactor and performance enhancement job while they were sifting through the code, which is what I always strive to do when I have to refactor an entire project from scratch, but in a time crunch or to get new Re: (Score:2) First release of merged branches (Score:5, Informative) So.. what is it? (Score:5, Insightful) Re:So.. what is it? (Score:5, Funny) It sucks the paint off your house and gives you and your family a permanent orange afro. Re: (Score:2) What, a Valderama [inthestands.co.uk]? Re: (Score:2) Re:So.. what is it? (Score:4, Informative) I use the cube desktop switcher and that's it. For some reason I find the idea of a cube easier to map out my mind when I have several windows open than a chain of 4 desktops. Re:So.. what is it? (Score:4, Insightful) Nothing useful. It's eye candy, like a turbo-charged Aero Glass with 3D effects. I use the cube desktop switcher and that's it. For some reason I find the idea of a cube easier to map out my mind when I have several windows open than a chain of 4 desktops. So in other words, you find at least one aspect of it to be very useful. While some window effects are just pure eye-candy (e.g., wobbly windows), many of the added desktop effects provide various degrees of enhanced functionality. 
This includes: Don't dismiss the suite as just eye-candy; if the main perception of Compiz is that it exists only to make things more fun and prettier, then its overall value to the desktop is understated. Re: (Score:3, Insightful) Don't forget window grouping and tab groups. I use that a lot. Expose is nice for managing multiple desktops as well.) Re: (Score:2) oops.. I meant to do mouse+keyboard activity with either mouse or keyboard, but not both at the same time. Re: (Score:2) Re: Re: (Score:2) Re: (Score:3, Interesting) I think it's because if you want all the shiny bits of objects and encapsulation then you use Java. If you want raw speed & dirty tricks then you use C. I favor C++ myself, but I'm a huge fan of breaking encapsulation. Re: (Score:3, Funny) Fun fact: I knew somebody who added a preprocessor step to his compile process to make every class as a friend of every other class, because he was tired with "not being able to use the pesky private stuff in coworker's cold". Re: ++ Re: (Score:3, Insightful) I suspect the efficiency gap between C and C++ is smaller than you think. Even if you are very strict about encapsulation of objects, you'd be very unlikely to add more than 10% to the run time. And as others have pointed out, making use of features such as templating can actually help the compiler generate more efficient code. C++ was designed so that it adds no overheads to imperative code, while the OOP constructs such as member functions have only one extra parameter (and one level of indirection for Re: (Score:2) C++ can be faster than C... This [stanford.edu] is an old one, which proves the point... Remember, C++ is not just OO - that's one of the paradigms it supports, but not the only one. Re: (Score:3, Insightful) And Java can be faster than C++, if you write sufficiently good Java code and sufficiently bad C++ code. That you manage to find a single instance of this is true doesn't prove anything. Re:Objects... (Score:5, Insightful) I understand, but for speed I expect that C++ still outperforms Java, and while C should outperform both of them, C doesn't feature encapsulation, polymorphism and all the other goodies that OOP provides. No, C is exactly as fast as C++. C++ only becomes slower if you use certain features that have a performance impact. Example: if you use exceptions, there is a performance penalty. If you don't, you don't get the performance penalty. That is one of the design principles of C++: nothing can be included into the language that slows down code that does not use/need it. The main slow downs you will see in your average C++ program, over the corresponding C, is the use of the string class as opposed to the nasty but fast strcpy and friends, and the extra indirect function calls due to virtual functions (which causes a branch misprediction and hence a pipeline flush on modern cpus, costing you a bunch of clock cycles). Still, you only pay for virtual if you choose to use it, and manually implemented virtual function calls are used all over the place in good old C, with the same effect. Furthermore, C++ templates allow code re-use with exactly 0 performance loss and while the error messages are ugly, they're still a whole load prettier than doing the same thing the C way with recursive includes and lots of preprocessor madness. And you can link to existing C code/libraries without any problems. Frankly, there is no valid reason for starting a new program in C in this day and age. 
Re: (Score:2) C++ only becomes slower if you use certain features that have a performance impact. And virtually every useful feature of C++ that is not in its common subset with C is one of those. Example: if you use exceptions, there is a performance penalty. And if you use operator new, you use exceptions. The main slow downs you will see in your average C++ program, over the corresponding C, is the use of the string class That and <iostream> [yosefk.com]. Once, I tried programming in GNU C++ for a system with an ARM7 CPU and 288 KiB of RAM. Even after applying all the link-time space optimizations I could find, Hello World statically linked against GNU libstdc++'s <iostream> still took 180 KiB [pineight.com]. (Dynamic linking wouldn't even have worked because libstdc++.so itself is bigger than RAM Re: (Score:2) "As I understand it, C++ compilers implement templates by making a copy of the object code for each type for which the template code is instantiated. Once you instantiate a template numerous times, your binary gets bigger, and it slows down because it has to keep loading data from storage instead of caching it in RAM." Not really. GCC reuses the same code from different instantiations. And of course, if you follow ODR then you'll have at most 1 template instantiation for each combination of type parameters. A What Newlib++? (Score:2) if you follow ODR then you'll have at most 1 template instantiation for each combination of type parameters. The problems come when A. programmers become unaware of how many combinations of type parameters they're actually using, or B. programmers can't decipher template type names in compiler diagnostic messages. Also, libstdc++ is a beast. But so is glibc. If you compile for embedded devices - don't use it. Newlib is better than glibc for embedded devices. What C++ standard library implementation do you recommend for these? It's certainly possible to make 'Hello world' to be about 1kb in C++. I've done so with std::fputs of <cstdio>, but there are still a lot of self-proclaimed C++ purists who apply the no-true-Scotsman fallacy on C++ code using <cstdio>, claiming t Re: (Score:2) C++ only becomes slower if you use certain features that have a performance impact. And virtually every useful feature of C++ that is not in its common subset with C is one of those. What is the performance overhead of namespaces, typesafe object creation, references, function and operator overloading, use of const ints for array sizes (more efficient than C), non-virtual methods, STL (the word "virtual" does not appear anywhere in the STL sources), support for wide characters, protected/private modifiers, etc.? While features like templates and metaprogramming hav C++ as better C vs. no-true-Scotsman C++ (Score:2) you can always use nothrow new As I understand it, the standard library uses throw new, not nothrow new. So if you use the standard library, you get the exception handlers linked in. What is the performance overhead of namespaces, [...] references, [...] use of const ints for array sizes (more efficient than C), non-virtual methods, protected/private modifiers True, these features allow one to use C++ as "a better C". But a lot of C++ fanboys will claim that if a program doesn't use virtual, throw, and <iostream>, it's not in the spirit [wikipedia.org] of C++. typesafe object creation, STL (the word "virtual" does not appear anywhere in the STL sources) Exception overhead. Or is the entire C++ standard library also available in a nothrow version? 
function and operator overloading No runtime overhead, but especially operator Re: (Score:2) As I understand it, the standard library uses throw new, not nothrow new. So if you use the standard library, you get the exception handlers linked in. The standard library allows you to specify allocators for everything in it that requires memory allocation, precisely so that you can use your own allocation mechanisms. Writing one that does new(std::nothrow) is trivial. Of course, this assumes that you want to ignore any OOM errors (which, given the existence of things such as Linux "OOM killer", is a reasonable default), since there's no way for, say, std::string to report a memory allocation error other than just propagating the exception. If you really Re: (Score:2) this assumes that you want to ignore any OOM errors (which, given the existence of things such as Linux "OOM killer", is a reasonable default) I was referring to embedded systems and handheld devices, not PCs. I specifically had Nintendo DS (4 MB RAM, single-tasking) in mind. If you really want to check for OOM without exceptions, then, yes, you'll have to stay clear from STL and other bits of C++ standard library. Would it be safe to say that common STL implementations operate under the assumption that allocate() throws std::bad_alloc rather than returning 0? What does addAll have to do with operator+=? I mean, sure, you could overload the latter that way std::string does exactly this. I was under the impression that std::vector did the same, calling std::vector::insert() at the end, but now I guess not. C++ or C+Template? (Score:2) Classes (not virtual) Sugar for functions that take this as their first argument. But as Micropolis showed, these are useful for taking legacy code that uses global or module-scope variables and allowing it to be instantiated multiple times. I'll grant you this one. References Sugar for pointers. You have other problems when your code runs out of memory that often Only if you consider running on a microcontroller or a handheld device a "problem". In such a case, running out of memory means the allocator has to purge items from the cache. Then you run into other classes that use new as their factory, for which Sugar? (Score:2) References Sugar for pointers. And C is sugar for assembler, which is sugar for writing machine code directly using a hex editor. The whole point of any language feature is to make it easier to use machine features. Calling them "sugar" doesn't negate that. Re: (Score:2) References Sugar for pointers. And C is sugar for assembler In what situations would one use C++ references where pointers do not suffice? The whole point of any language feature is to make it easier to use machine features. Calling them "sugar" doesn't negate that. I didn't necessarily mean "sugar" in a negative way. I do remember writing that classes with no virtual methods are a useful sugar. Re: (Score:2) Not if you use nothrow. Eg.: obj *p = new(std::nothrow) obj; Does the standard library use new(std::nothrow), or does it use regular new? pr Structures (Score:2) For example, our standard apps maintain state persistence by simply writing out one or more C structures to a temp file on disk. Of course, the C standard explicitly states that the layout in memory of structures is implementation-dependent, so doing things like that sets yourself up for serious pain when you do things like change compiler versions, optimization options, or run on different platforms. 
In my experience, a lot of programs run without crashing only through sheer luck. Re: (Score:2) C++ coders could continue to do this, of course, but they've assumed they needed to use objects for this purpose, leading to complex schemes for streaming those objects out to disk for persistence. My PoV on C v C++ coding comes down to this kind of stuff. In C, you'll have a function that takes a struct parameter and writes it to file. In C++ you put that function inside the struct and remove the parameter. so Persist(struct Data d); becomes d.Persist(); simples! In effect, no difference - except to handily Re: (Score:3, Insightful) Which would be every feature that isn't C with added syntactic sugar. Yes, there is: it's a simple language with very predictable behaviour, compiles fast, and the resulting binary can be trivially interfaced with pretty much every other language. There's no good reason to use C++: you don't get the benefits Re: (Score:2) As GP rightly noted, unless you use specific C++ features (exception, virtual), you get opcode-for-opcode identical code from C++ compared to C. Unless your microcontroller uses Tarot cards to determine the original language in which that MOV was written, I don't see how it's possible. As a side note, I've seen drivers written in C++. Worked great. Re: (Score:2) Why would you want to break encapsulation? Because perhaps one is trying to work around poor design of a class where useful functionality has not been exposed to the public:. Using the class as intended would result in an abstraction inversion [wikipedia.org]. Re: : (Score:2) templates [...] don't carry any runtime speed penalty Unless they cause the code to spill out of the instruction cache. Or unless they cause the entire working set to spill out of a handheld device's 4 MB of RAM. Re: (Score:2) That's not the fault of the feature itself, but of people using it incorrectly (at least in a particular environment). It is still quite possible to retain full control over template instantiation by splitting template into header & implementation files (with header only containing function declarations and not definitions, and implementation containing their definitions), using extern template [open-std.org] in the header for all specializations that you need, and using explicit instantiations in the implementation fi Template misuse (Score:2) if you remove the templates by hand-instantiating them, you'd still have the same issue of code duplication. The difference is that algorithms and containers in C or Java encourage the use of erasure to a higher type (e.g. void * or java.lang.Object). C++ templates can be used this way, but they can also be instantiated once for each T* (by pointer) or even once for each T (by value). I can think of a few things to watch out for when using templates: Re: (Score:2, Insightful) -1, linux zealotry bordering on FUD Re: (Score:2, Insightful) -1, linux zealotry bordering on FUD Nah. He's karma whoring. Re:favorite way (Score:5, Informative) No, karma whoring is to post something completely obvious you know will be modded up and not add anything to the discussion. Like this comment. Re:favorite way (Score:5, Funny) No, karma whoring is to post something completely obvious you know will be modded up and not add anything to the discussion solely because you want to boost your karma. Unlike this comment. 
Re: (Score:2) Fewer Viruses - check Lower TCO - check CLI is not working on windows - wrong Most FLOSS runs on it - check Drivers for more hardware - check No kernels panics (BSOD) - wrong Not nearly as resource hungry - wrong because tests indicate that Windows 7 is less hungry than Ubuntu Penguins - what a BS The easyest way of making a Windows user envious = getting the hottest chick on the planet Re: (Score:2) Re: (Score:2) Ha... ha... ha... ha... ha.... Okey.... 1. Windows 7 has better OpenGL performance no matter what hardware and what drivers you throw at it. 2. 1.5GB? Sorry but I thought Windows 7 didn't use more than 200-300MB RAM and cached out wasted RAM space? 3. What are you running next to GNU+Linux? Re: (Score:2) Oh I forgot to mention less kilowatts... Re: (Score:2) Well, I know OpenGL performance on Linux is sucky, but Windows 7 is definitely using more RAM. That's what task manager shows. And yes, I know how to read the various dials on there. I don't know what your 3rd question is supposed to mean. Re:favorite way (Score:5, Interesting) Re:favorite way (Score:5, Informative) In fact, on old systems with a graphics card it is significantly faster than the traditional way of redrawing windows. Why? Because: 1. the gfx card can do part of the work 2. all windows are already drawn and kept in the graphic card's memory Re:favorite way (Score:4, Informative) Compiz doesn't actually use that much system resources, nor strain your hardware either. I have a 3.2GHz tri-core Phenom II system with a GTS 240 (~400MHz, 96 stream processors) and Compiz will easily consume 5% or more if you have a window with continual graphics updates, like a game or a video player. That's a lot of CPU! You can manually disable transforms on that window but that requires a visit to the settings manager that would leave the average user dumbfounded. Re: . Re: (Score:2) I just used the middle click / cube shrinks and becomes semi-transparent and can be rotated... effect in Compiz, which immediately shot up the CPU usage for both cores of my processor from 20% to around 60% per core. Under Beryl the CPU usage changed about 2% over what the system was already running at. I would say that Compiz does not use the graphics card like Beryl did, and the Compiz devs deny there is a problem. Re: (Score:2, Insightful). Re: (Score:2) Until of course you try and run a script written for fooshell on barshell, i.e. when a distro changes its shell. If you were using #!/bin/sh and expecting bash specific code to work, you're doing it wrong. If you want bash, call it by its proper name and it will always work. Re: (Score:2) Well, sure, if your definition of "actually works" depends on "if you use it right", which is a perfectly reasonable condition. But then that means the Windows "CLI/scripting system" also "actu Re: (Score:2) Re: (Score:3, Insightful) If you were using #!/bin/sh and expecting bash specific code to work, you're doing it wrong. If you want bash, call it by its proper name and it will always work. A more likely scenario is that a script written by someone else improperly references /bin/sh despite being chock full with bashisms. The real problem is that many people these days just assume Unix = Linux and can't even think of /bin/sh possibly not being bash (or something "compatible enough"). This is especially true of "Linux on the desktop" crowd, as server admins typically know better Re: (Score:2) I'm happy with POSIX OSen. 
But I would not recommend them to a Joe Windows user, ever, since I don't want to be their Support Guy from now until there's a distro that actually Just Works. Seriously? My POSIX compliant OS X is something I do recommend as it does Just Work. Re:favorite way (Score:5, Insightful) If you don't value your time. Linux is only free if your time is worth nothing. Windows is only $119.99 if your time is worth nothing. Re: (Score:3, Informative) And drunken cheerleaders get date raped more than shut-in nerd chicks. Personally, I prefer nerd chicks, and you likely do too, but most people don't. Really, they don't, and there's no use telling them that their opinion is wrong. Do people prefer Windows? After actually trying Linux? Not in my experience. If you don't value your time. Most stuff works out of the box. Some stuff does not work out of the box on Windows or Mac either. Until of course you try and run a script written for fooshell on barshell, i.e. when a distro changes its shell [ubuntu.com]. Dash is supposed to compatible with Bash if you stuck to Debian policy of affected scripts (those than use #!/bin/sh - if you useed bash specific features you should have used #!/bin/bash . Any examples of stuff that breaks? BTW Bash is still the login shell. Can be made to run on it, given enough time. Most stuff non-geeks use is in the major distros repos and is easier to install Re: (Score:3, Insightful) A) have you actually tried to figure out how to secure a network, or even your Dad's computer, when doing so requires he have the ABSOLUTE LATEST version of flash, adobe reader, and java? Not to mention those realplayer and QT plugins that are sure to get exploited one of these days? Linux gets it right with centralized software updates; Windows is an absolute nightmare in this regard. Theres WSUS, but oh Re:favorite way (Score:4, Insightful) Lower cost of Ownership - Last time I went shoppping for a computer, I didn't see any discounts for not having Windows installed from the get go. Either you go with Dell/HP/Lenovo, and they only offer windows, or when the offer Linux, it's the same price, or only a little cheaper, but you get a lot less selection of machines you can get. The other option is to build your own machine from off the shelf components. This is my favourite option, as you can get exactly what you want, but you will end up spending more. CLI/Scripting system - Almost nobody except tech geeks cares about this. Also, Powershell on Windows isn't all that bad. It has its pluses and its minuses. Most open source software runs on it - Most all of open source that is worth running will run on Windows. Maybe not all of it, but most of the more important stuff. Conversely, almost no closed source software runs on Linux. Which might not matter to you, but if you're trying to get work done, having things like Photoshop, Outlook (hate it but necessary for business), and many other closed source programs, makes a big difference. Drivers - Sure you get drivers for all the old stuff. But are you sure that shiny new piece of hardware that just came out last week will run to its full potential. Probably not. And there's also plenty of older hardware that I had that I couldn't run on Linux. No Blue Screen - I haven't seen a blue screen on a Windows machine in many years. And when I do, it's usually because of bad RAM, causing something to get corrupted. Blue screens still exist, but they don't happen quite as often as they used to. 
I imagine most Linux systems would also crash pretty badly when they have bad memory. I'm not some Windows Zealot. I use Windows when it makes sense, and I use Linux where it makes sense. But I don't really think that that any of the reasons you mentioned are valid. Especially if you're talking about home desktop use. Which in the case of Compiz, is exactly the kind of people we are talking about. Re: (Score:2) But the easiest way of making a windows user envious is to use a mac Something that's more closed than Windows and Linux? No, we're not envious. You might think that we should be envious, just like the guy who brags about his expensive designer clothes or Iphone, but the rest of us don't actually care. Re: (Score:2) Oh come on... how, exactly, is the Mac platform (no, not the iPad, not the iPhone, the Mac, ie Mac OS X) "more closed than Windows"? At best it's exactly as closed, though I'd argue somewhat less so (thanks to the existence of Darwin, their work on the ObjC gcc backend, Webkit, etc). Re: (Score:2) Something that's more closed than Windows and Linux? 1. OS X is not any more closed than Windows. 2. A Windows user will likely not care anyway. Re: (Score:2) CLI/scripting system that actually works Very, very true. Although PowerShell is quite powerful... but quite different from most shell scripting in the UNIX world. You really expect any CLI, no matter how awesome, will make Windows users jealous? I definitely think Compiz is one of the few ways to make your average Windows user jealous of Linux, with perhaps your favourite package manager coming next. I remember reading that MS are building an app store for Windows though, so it won't be something to be jealous of for long! Of course, trying to make other people jealous of you is pretty pathetic. Re:BS (Score:5, Insightful) * Lower cost of ownership - BS, too much time is spent hacking up config files to make crap work or work right On Windows, too much time is spent hacking up the registry to make crap work or work right. Just this last Thursday, I had to manually scan the registry to delete every reference to a printer driver that kept killing someone's spooler service... because the spooler service needed to be running to delete the printer normally. If it had been a unix system, I could have just edited a line in a file and been done. * CLI/scripting system that actually works - BS, anything you can write and make work in Linux, I can in Windows Using cygwin, bash compiled for Windows or DOS, or other scripting applications that are not guaranteed to be on every Windows system. * Most open source software runs on it - Show me anything worthwhile that doesn't run in Windows or have a better alternative there Well, Linux runs in Windows, so I'd say you've won this argument. * Drivers for just about any piece of hardware ever built - BS, that's the primary thing most users have issues with, half baked drivers Half-baked drivers in Windows XP, Vista, and 7. That printer driver mentioned above? It was an HP driver written for and installed in Win7 64bit. * No blue screen of death - Agreed, but I haven't seen one yet in Win7 I haven't either, but I have seen a Win7 machine reboot constantly (the equiv of BSOD since Win7 is set to reboot on fail). * Not nearly as resource hungry (unless of course you use Compiz :-) - Agreed, but neither was Win98 which is typically how Linux feels I still have Win98se running on an old machine for old games. 
Win98se is actually snappier than modern Linux, which is in turn snappier than WinXP/7. How much window compositing did Win98se do? Firewalling? Multi-user? Even the 1998 version of Linux had multi-user support and ipchains. Mod me down if you want to, but I've yet to have Windows drop me to a command prompt after an video card driver update I've had it boot up to a BSOD, which looks worse than a command prompt, or a blank screen where I had to remote in or boot up in safe graphics mode. [I've yet to have Windows drop me to a command prompt after an] OS update (Ubuntu anyone?) I've had it boot up to a BSOD, which looks worse than a command prompt. or had to recompile sound drivers after every OS update (Ubuntu on that one too). I wish I could. Sometimes vendors take years to get their sound drivers working. Google realtek, imac, and Windows 64 bit. My file manager will display in a column what date pictures were taken so I can categorize them accordingly, can yours do that? It couldn't the last time I checked. This is the first time that I ever checked. No, it does not, but it could with a little quick editing. Right clicking and selecting properties shows that the Gnome file manager (didn't check KDE) can see the image properties, including "Date Taken", so the information is there. Linux users are probably just better mentally organized, and name their photo directories YYYY_MM_DD Re: (Score:2) No need to try to make Linux users smarter than they think they are though, Windows users and possibly even Mac users can be fairly mentally organized as well. Re: (Score:2) in Windows I don't have to check every photo individually for the date taken, it's a column in the file manager. ls -lt *.jpg If you want to automatically file them into directories based on date you can use --time-style=iso and pipe it into awk or perl and write a quick script you can use every time you do this. You definitely do not have to sort them by date, create a folder for each date, and drag and drop each group of files into its directory. You can do the same sort of thing in Powershell I'm sure. Th Re: (Score:2) Re: (Score:2) Linux defaults to the command line because the command line is better. There's a reason we moved beyond pointing and grunting into symbolic language. Writing a few lines of code is in fact easier than manually copying, renaming, converting, etc dozens of files. And when you're done you get a script you can use the next time such a task comes up. If you really really want to use the GUI though, there's no shortage of file managers that will display the date in a column. Konqueror does it by default. So d Re: (Score:3, Insightful) command line is better. There's a reason we moved beyond pointing and grunting into symbolic language. Best description of why to use CLI; it allows for an explosion of thought. Re: (Score:2) Re: (Score:2) in Windows I don't have to check every photo individually for the date taken, it's a column in the file manager. ls -lt *.jpg This isn't what GP was talking about. That's file modification time, not the date the photo was taken (which is data inside the image file, not in the filesystem about the file). The closest you could get with ls would be to re-touch all the timestamps to match the image date data first, then use ls. /image/directory/ -name \*.jpg -exec touch -d `exiftime -tg {} |sed -e 's/Image Generated: //' |sed -e 's/:/-/' |sed -e 's/:/-/'` {} \; find or something similar. I can't remember if backticks work in -exec. 
Yes, but when you edit the file (in Photoshop, say), the date taken stays the same and the filesystem timestamp changes. It actually annoys me that Windows defaults to showing the exif date taken when it detects a directory of images - I'd much rather see the filesystem datestamp and sort by that, so I can see which I've already edited. I already organise the directory structure by date taken anyway. Re: (Score:2) That's a fair point. But at this stage it seems like we've moved beyond the field where a general purpose file manager is appropriate. There's no point in having a "date taken" column in a utility that many people will never use for photos. If you really need to sort your photos based on this photo specific metadata, there are photo managers for that. Re: (Score:2) Re: (Score:2) I've not had to edit any config files on Ubuntu since version 8, apart from Apache - which needs exactly the same setup on Windows. Evolution doesn't have a decent Windows port (there is a port available, but it crashed on installation and I couldn't be assed trying to diagnose it, just left the user with Outlook). Windows "feels" worse than any OS I've ever used, with maybe the exception of Amiga Workbench 1.3. I always file my pictures in folders with the date that they were taken in YYYY-MM-DD format, so ye Re: (Score:3, Insightful) But then I guess you have never tried to use cmake; else you would not have made the ignorant statement about its incomprehensibility. If you have never used autoconf, automake, make, libtool, m4 and friends it would be just as incomprehensible. Re: (Score:2) I have used autotools, and they're still incomprehensible. Re: (Score:2) You actually go through the trouble to reimplement build systems in autotools? That's a lot of work, dude. I'm calling foul here. Re: (Score:2) So you are saying if you were to compile kde-4.4.x (they use cmake) you would convert it all to autotools? ... I don't believe you at all, not for a second. Re: (Score:2) Good lord, I did not expect so many to come the defense of that grand old dame, X. Rather than get into an argument on the Internet about Computers, I'll just say that the Linux desktop remains a beloved canard to me. I do not doubt that others will disagree with me.
http://tech.slashdot.org/story/10/07/05/0441234/compiz-project-releases-c-based-v090?sdsrc=nextbtmnext
CC-MAIN-2015-32
refinedweb
6,091
71.04
Linked lists use dynamic memory allocation, i.e. they grow and shrink as needed. A linked list is defined as a collection of nodes, where each node has two parts: the data and a link to the next node.

Operations on linked lists

There are three types of operations on linked lists in C language; the example below demonstrates deletion. (The original figures for the three deletion cases are omitted; their captions were: Delete node 2, Delete node 1, Delete node 3.)

Following is the C program for deletion of elements in a linked list:

#include <stdio.h>
#include <stdlib.h>

struct Node{
   int data;
   struct Node *next;
};

/* Insert a new node at the head of the list */
void push(struct Node** head_ref, int new_data){
   struct Node* new_node = (struct Node*) malloc(sizeof(struct Node));
   new_node->data = new_data;
   new_node->next = (*head_ref);
   (*head_ref) = new_node;
}

/* Delete the node at the given (zero-based) position */
void deleteNode(struct Node **head_ref, int position){
   // if list is empty
   if (*head_ref == NULL)
      return;
   struct Node* temp = *head_ref;
   if (position == 0){
      *head_ref = temp->next;
      free(temp);
      return;
   }
   // walk to the node just before the one being removed
   for (int i=0; temp!=NULL && i<position-1; i++)
      temp = temp->next;
   if (temp == NULL || temp->next == NULL)
      return;
   struct Node *next = temp->next->next;
   free(temp->next); // Free memory
   temp->next = next;
}

void printList(struct Node *node){
   while (node != NULL){
      printf(" %d ", node->data);
      node = node->next;
   }
}

int main(){
   struct Node* head = NULL;
   push(&head, 7);
   push(&head, 1);
   push(&head, 3);
   push(&head, 2);
   push(&head, 8);
   puts("Created List: ");
   printList(head);
   deleteNode(&head, 3);
   puts("\n List after Deletion at position 3: ");
   printList(head);
   return 0;
}

When the above program is executed, it produces the following result:

Created List: 8 2 3 1 7
List after Deletion at position 3: 8 2 3 7
https://www.tutorialspoint.com/explain-the-deletion-of-element-in-linked-list
CC-MAIN-2021-31
refinedweb
272
53.24
Lukas Johmann
911 Points

I need help with this Creating Functions Challenge Task

Hello, I have been stuck with this task for a while now and can't seem to figure out what code to write next.

Challenge Task 2 of 2: Great, now that you have created your new square method, let's put it to use. Under the function definition, call your new function and pass it the argument 3. Since your square function returns a value, create a new variable named result to store the value from the function call.

def square(number):
    value = number * number
    return value

1 Answer

Grigorij Schleifer
10,352 Points

Hi Lukas, this will do the trick:

result = square(3)

The square function will square the number 3 and return it, so it can be stored in the variable result.

Makes sense?

Grigorij Schleifer
10,352 Points

You are very welcome!
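For completeness, here is a minimal, self-contained sketch of what the finished challenge code could look like (the final print call is only a sanity check added here; it is not something the task asks for):

# Task 1: define the square function
def square(number):
    value = number * number
    return value

# Task 2: call the function with the argument 3 and store the returned value
result = square(3)

# Optional sanity check (not part of the challenge): prints 9
print(result)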
https://teamtreehouse.com/community/i-need-help-with-this-creating-functions-challenge-task
CC-MAIN-2020-10
refinedweb
149
72.8
ultoa(), ulltoa()

Convert an unsigned long integer into a string, using a given base

Synopsis:

#include <stdlib.h>

char* ultoa( unsigned long int value,
             char* buffer,
             int radix );

char* ulltoa( unsigned long long value,
              char* buffer,
              int radix );

Since: BlackBerry 10.0.0

Arguments:

- value - The value to convert into a string.
- buffer - A buffer in which the function stores the string. The size of the buffer must be at least 33 bytes when converting values in base 2 (binary); ulltoa() handles 64-bit values, which need up to 65 bytes in base 2.
- radix - The base to use when converting the number. This value must be in the range: 2 ≤ radix ≤ 36

Library: libc

Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:

The ultoa() and ulltoa() functions convert the unsigned binary integer value into the equivalent string in base radix notation, storing the result in the character array pointed to by buffer. A NUL character is appended to the result.

Returns: A pointer to the result.

Examples:

#include <stdio.h>
#include <stdlib.h>

void print_value( unsigned long int value )
{
    int base;
    char buffer[33];

    for( base = 2; base <= 16; base = base + 2 )
        printf( "%2d %s\n", base, ultoa( value, buffer, base ) );
}

int main( void )
{
    print_value( (unsigned) 12765L );

    return EXIT_SUCCESS;
}
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/u/ultoa.html
CC-MAIN-2017-13
refinedweb
205
65.62
Borislav Hadzhiev
Last updated: Sep 7, 2022

The string.capwords method uses:

- the str.split() method to split the string into words,
- the str.capitalize() method to capitalize each word,
- the str.join() method to join the capitalized words into a string.

from string import capwords

my_str = 'bobby hadz com'

result = capwords(my_str)
print(result)  # 👉️ 'Bobby Hadz Com'

The string.capwords() method converts the first character in each word to uppercase and the rest to lowercase.

from string import capwords

my_str = 'BOBBY HADZ COM'

result = capwords(my_str)
print(result)  # 👉️ 'Bobby Hadz Com'

You can also use the capwords() method if you need to capitalize the first letter of each word in a list.

from string import capwords

my_list = ['bobby hadz', 'dot com']

result = [capwords(item) for item in my_list]
print(result)  # 👉️ ['Bobby Hadz', 'Dot Com']

We used a list comprehension to iterate over the list and passed each item to the capwords() method.

The string.capwords() method uses the str.capitalize() method on each word in the string.

The str.capitalize function returns a copy of the string with the first character capitalized and the rest lowercased.

print('bobby'.capitalize())  # 👉️ 'Bobby'
print('HADZ'.capitalize())   # 👉️ 'Hadz'

Note that there is also a str.title() method.

The str.title method returns a titlecased version of the string where the words start with an uppercase character and the remaining characters are lowercase.

my_str = 'bobby hadz'

result = my_str.title()
print(result)  # 👉️ "Bobby Hadz"

However, the algorithm also converts characters after apostrophes to uppercase.

my_str = "it's him"

result = my_str.title()
print(result)  # 👉️ "It'S Him"

The string.capwords method doesn't have this problem, as it only splits the string on spaces.

from string import capwords

my_str = "it's him"

result = capwords(my_str)
print(result)  # 👉️ "It's Him"
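As a small follow-up (this example is an addition based on the documented behavior of the standard library, not part of the original post): capwords() also accepts an optional sep argument. When sep is omitted, the string is split on runs of whitespace and re-joined with single spaces, so repeated spaces are collapsed; when sep is given, the string is split and re-joined on that separator instead.

from string import capwords

# Default separator: runs of whitespace collapse to single spaces
print(capwords('bobby   hadz\tcom'))  # 👉️ 'Bobby Hadz Com'

# Explicit separator: split and re-join on '-'
print(capwords('bobby-hadz-com', sep='-'))  # 👉️ 'Bobby-Hadz-Com'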
https://bobbyhadz.com/blog/python-string-capwords
CC-MAIN-2022-40
refinedweb
289
59.3
On 08/27/2010 02:32 AM, Matthias Bolte wrote:

> Yes, that sounds better. I guess I'll whip up the patch, then. I tried to figure out how to detect GCC version in configure.ac, but my autotools-fu is weak today.

No need to worry about configure.ac. It is doable all with preprocessor statements:

/* Some versions of libcurl trigger a gcc bug with -Wlogical-op
 * that was fixed for gcc 4.5.0; disable the problematic libcurl
 * code if we don't detect a good enough compiler. */
#if __GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 5)
# define CURL_DISABLE_TYPECHECK
#endif

--
Eric Blake eblake redhat com +1-801-349-2682
Libvirt virtualization library
https://www.redhat.com/archives/libvir-list/2010-August/msg00687.html
CC-MAIN-2015-11
refinedweb
126
77.23
package Identity; use Moose; subtype 'first_name_type', as 'Str'; where {/^\S+$}; subtype 'last_name_type', as 'Str'; where {/^\S+$}; subtype 'full_name_type', as 'Str', where {/^\S+ \S+$/}; coerce 'first_name_type', from 'full_name_type', via {(split(' '))[0]}; coerce 'last_name_type', from 'full_name_type', via {(split(' '))[1]}; has 'first_name' => ( isa => 'first_name_type', coerce => 1, lazy => 1, default => sub {$_[0]->full_name}, ); has 'last_name' => ( isa => 'last_name_type', coerce => 1, lazy => 1 default => sub {$_[0]->full_name}, ); has 'full_name' => ( isa => 'full_name_type', coerce => 1, lazy => 1, default => sub {$_[0]->first_name.' '.$_[0]->last +_name}, ); [download] The idea is that if you've set last_name and first_name already and then you call full_name it will build full_name from first_name and last_name and vise versa. Doing it this way isn't too bad but if keep adding new ways of making things the default methods get a lot of if statements in them and get annoying. has 'full_name' => ( isa => 'full_name_type', build_by => { A => 'first_name', B => 'last_name', with => 'A B', }, [download] and that would do all the coercions and default methods behind the scenes. When calling full_name for the first time (assuming first and last have been initialized) it would build it by replacing A and B in the string with the corresponding attributes first_name and last_name. When calling last_name or first_name for the first time (assuming full_name had already been set) it would build them by looking at where they are in the string, and applying an appropriate regex (or whatever) to full_name to produce them. You wouldn't have to put a build_by in first/last name it would just recognize that full_name can use both of them to create itself and therefore it can be used to create them by reversing whatever you do to create it. package Identity; sub init_first_last { my ($c, $first, $last) = @_; die unless $first =~ /^\S+$/ && $last =~ /^\S+$/; bless { first => $first, last => $last }, $c; } sub init_fullname { my $c = shift; my ($first, $last) = split ' ', shift; init_first_last $c, $first, $last; } sub fullname { join ' ', @{$_[0]}{qw(first last)}; } sub first { shift->{first}; } sub last { shift->{last}; } [download] Well, I was going to convert your code into a moose BUILDARGS method, but decided that I think the best approach along this line would be simply lazy-building. Of course, the advantage of the OP's approach is re-usability. The re-usability can be mostly recovered by moving the attributes to a role: package HumanName; use Moose::Role; use re 'taint'; # Can use isa => first_name_type ... if you prefer for (qw/ first last full /) { has $_."_name => isa => "Str", lazy_build => 1, predicate => "has_ +${_}_name"; } sub _build_full_name { my $self = shift; if ($self->has_last_name and $self->has_first_name) { return join " ", $self->first_name, $self->last_name; } if ($self->does("Gender")) { return "John Doe" if $self->has_gender and "M" eq $self->gende +r; return "Jane Doe" if $self->has_gender and "F" eq $self->gende +r; } # ... whatever complex constructions we like die "Can not build full name"; } sub _build_first_name { my $self = shift; return (split /\s+/, $self->full_name)[0];# or more complex chain. +.. } sub _build_last_name { my $self = shift; return (split /\s+/, $self->full_name)[1];# or more complex chain. +.. 
} [download] Though, hopefully you intend to use these methods only to provide (semi-sane) defaults when first/last/full are needed but not known - names are too complex for the above to be reliable generally. Good Day, Dean Why do you so hate Bobbie Joe van Hilton? (I'm not sure who to feel more sorry for, the people with spaces in their first name and/or last name, the people who will try to use your software but have some of the first set of people as their customers, or the people who inherit your code and try to maintain it.) - tye Indeed. Dividing names up into "first name", "last name", etc fields is an internationalisation disaster waiting to happen. Just have a single "name" field for storing the person's full name. If you need to be able to address people in a variety of ways ("Joe Bloggs" on an envelope, "Dear Joe" at the start of the letter, and "Mr J Bloggs" on billing) then expand that into "formal_name", "informal_name", "legal_name", etc fields. Splitting the name up and then recombining it simply won't work once you step out of the cosy little world of $ENV{LC_ALL}. ---- I Go Back to Sleep, Now. OGB. has also allows an array ... has ['foo', 'bar', 'bletch', 'y2', 'plugh'] => ... Now arrange to get the property-name that is being requested and set up a hashref, indexed by property name, where each element is a sub (closure...) that returns it. Which I leave as an exercise to the
http://www.perlmonks.org/index.pl?node_id=953687
CC-MAIN-2015-14
refinedweb
762
61.5
Implementing Dynamic Binding in a Different Way

A few weeks back I blogged about "Getting your basics right?" I don't know how many of you agreed with it, however the discussion that took place for filing an RFE, which eventually got filed as Issue #142112, was quite long. If people had understood the issue from the start, it might not have gone on for that long. Anyway, if you would like to comment on it, please read my blog for the details, then go ahead and comment on the issue. Here's the use case, which was sufficient to justify the filing of the RFE:

public class UserImpl implements User {

    public void callCheck() {
        callImpl();
        new UserImpl().callImpl();
    }

    public void callImpl() {
        System.out.println("Implementation Invoked...");
    }

    public static void main(String[] args) {
        new UserImpl().callCheck();

        // Dynamic Binding...
        User anonUser = new UserImpl();
        anonUser.callImpl();
    }
}

interface User {
    public void callImpl();
};

Clicking on callImpl() in line #17 of the original listing (the anonUser.callImpl() call) doesn't navigate to the implemented version of callImpl(), i.e. line #08 (UserImpl.callImpl()). Instead the user is navigated to line #23 (the declaration in the User interface). So, this is not how things should be, hence someone filed the RFE.

What's happening? Actually, this code was written for the NetBeans IDE Java Editor, and the people involved in this discussion were taking the IntelliJ IDEA Java Editor as their reference for filing the RFE in NetBeans Issuezilla. Interesting... So now one needs dynamic binding while code is being written, which is interesting. As you know, NetBeans IDE provides many features to help the user code: features such as Mark Occurrences, Inline Refactoring, Goto Implementation, and many more. But do we really need dynamic binding to take place? If somehow implemented, what impact would it have on the performance of NetBeans IDE?
https://dzone.com/articles/implementing-dynamic-binding-a
CC-MAIN-2020-24
refinedweb
331
55.64
I with Visual Studio, read on… What are the Python Tools for Visual Studio? The Python Tools for Visual Studio (PTVS) plugin brings Python support to Visual Studio. When Microsoft released Update 3 for VS 2015 a few weeks ago, I noticed an option for “Python Tools for Visual Studio”. I thought it might be a new feature, but it turns out the PTVS plugin has been around for at least 5 years, first on CodePlex and now on GitHub. (adding it directly to the “update” dialog, however, may be new…) Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. It lets you bring your own Python interpreter, including CPython, IronPython, PyPy, and more, and supports a broad range of features from editing with IntelliSense to interactive debugging, profiling, interactive REPLs with support for IPython, cross-platform and cross-language debugging support, and deployment to Microsoft Azure. Installing the PTVS Plugin According to the release notes for 2.2.4, you need VS2015 for the latest version. Or you can install older versions of PTVS, if all you’ve got is an older version of VS, but VS2015 Community is free so you might as well grab a copy. PTVS 2.2.4 and later will not support Visual Studio 2013 or earlier. If you are unable to obtain any of the editions of Visual Studio 2015, the last release of PTVS for Visual Studio 2013 was PTVS 2.2.2 and for Visual Studio 2010 and 2012 was PTVS 2.1.1. There are a few ways to get the plugin installed. Option 1: Select During a Major VS Update If you’re currently installing an update to VS anyway, just select the option for PTVS. Option 2: Modify VS in Add/Remove Programs You can bring up the same “update” dialog any time you want, by finding VS in the Add/Remove Programs panel and double-clicking it, then choosing “Modify” in the VS window. Option 3: Download from GitHub You can go directly to the latest release on GitHub, and download an executable from there. You might want to check that page out anyway. They’ve got a sample pack you can download (more on that later). Option 4: Install from Within VS One more way. Open the “New Project” dialog inside Visual Studio and try to create a new Python project. Oops, no projects available… if the plugin isn’t installed, you’ll be prompted to do it now. Whatever way you choose, once it successfully completes, you should have a lot more options available the next time you try to create a new Python project. Install a Python Interpreter So now Visual Studio supports Python. (woot!) But you still need to install Python… or more accurately, an *interpreter *that knows what to do with the code you’re writing. There are quite a few interpreters, and they each have their pros and cons. For example, the CPython interpreter provides maximum compatibility with the official language specs, while the PyPy interpreter executes your code faster, and the IronPython interpreter integrates your code with the .NET Framework. Microsoft describes a few (there’s other helpful info on that page too), and I found The Hitchhiker’s Guide to Python: Picking an Interpreter to be a nice, straight-forward guide as well. If you try to run some Python code without at least one interpreter installed, VS will tell you exactly where to go. To find an interpreter, that is. CPython (the default interpreter) CPython is the “official” interpreter, and will support every piece of Python code you throw at it. 
Unless you’ve got a specific reason to do otherwise, embrace the future and download the Python 3 version. Note: One of the more annoying decisions to have to make is between Python 2 or 3. Usually, when a language spec is updated, it maintains backward-compatibility. With Python 3, they opted to dump some of the baggage from Python 2, and that means breaking changes. Python 3 is the way of the future, but many old and useful packages are still written in Python 2 and won’t just “work” with Python 3. IronPython (the .NET Compatibility interpreter) Since we can install multiple interpreters, I installed IronPython too, so I could experiment with its .NET capabilities. It targets Python 2, but has at least partial support for 3, because both print 'hi' and print('hi') work, even though the former is Python 2 syntax and the latter is Python 3. Although you can download IronPython separately, it’s also offered as a NuGet package, which makes it really easy for you to share your code with someone. Their VS will download IronPython (and any other dependencies you’ve specified) via NuGet. IronPython is also available as a NuGet package. The standard library is a separate package. This is the recommended way to get IronPython if you are embedding it in another program. Unfortunately for me, everything I tried with NuGet failed. It was disabled in my Python project, and when I tried to install from the console, it complained very loudly. Something to figure out later I guess. Running a “Hello World!” Script Installing the interpreter separately as described above worked just fine, and I was able to run my first “hello world” scripts from VS. … from within a project Create a new Python Application and try running something simple. If you installed IronPython too, close the previous app and create an IronPython Application now. Try something simple again. Notice that creating an IronPython App loads the IronPython interpreter for us (in Solution Explorer) instead of CPython. Convenient! … from within the REPL There are REPL (read-eval-print-loop) windows for each interpreter too, for when you want to test some Python code without the overhead of creating a project. Select “View » Other Windows” from the menu (or “Tools » Python Tools”) and look for items ending in “Interactive”. You can even open up multiple REPL windows, and try out your Python 2.x and 3.x code side-by-side. PTVS Samples I briefly mentioned the samples pack at the beginning. If you’d like to check them out, go to the release page on GitHub and download the VSIX files, then double-click each one to install them. Restart VS and you should see some new projects available under Python. Taking IronPython for a Spin Finally, what does it mean to say IronPython integrates with the .NET Framework? Create a new IronPython Application and paste the following Python/.NET amalgamation into “IronPythonApplication1.py”. Some lines are passing the result of .NET functions to Python and vice-versa… it’s more seamless than I imagined at first. (The documentation provides more samples to try out, if you’re interested; I’m barely touching the tip of the iceberg here.) 
import clr
clr.AddReference("System.Core")

# Import some namespaces
from System import Console, Environment, Math, String

# Get absolute value and write to the console with .NET Framework
Console.WriteLine(Math.Abs(-4.23))

# Use Python's abs function, then write it out using .NET Framework
Console.WriteLine(abs(-66.7))

# Import an individual method
from System.Console import WriteLine
WriteLine(Environment.CurrentDirectory)

from System.Collections.Generic import List
from System.Linq import Enumerable

names = List[String](["jimmy", "tom", "bill"])

# Count matching names using .NET Framework, then print results using Python
print(Enumerable.Count(names, lambda name : name.Contains("m")))

raw_input("\nPress any key to exit.\n")

That's it for now. For someone familiar with the .NET Framework, this may save a lot of time and even prevent you from reinventing some wheels (there's a lot of functionality built into the .NET Framework… almost anything you need is in there).

What do you think? Did you find this useful or interesting? Run into any weird issues? Share below!
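When you are juggling several interpreters like this, it can help to confirm which one a project or REPL window is actually running. A small snippet along these lines should work under both CPython and IronPython (this is my addition, not part of the original walkthrough):

import sys
import platform

# Report the implementation name (e.g. 'CPython' or 'IronPython') and the version string
print(platform.python_implementation())
print(sys.version)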
https://grantwinney.com/how-to-develop-python-in-visual-studio-and-mix-it-up-with-the-dotnet-framework/
CC-MAIN-2019-18
refinedweb
1,325
65.01
I can't find anything more recent than (github.com/dacap/sublime-text-2-git/pull/1),: But it would REALLY be great to be able to just have context support in mousemap files. At this point, it looks like users are only allowed to install a single plugin, other than the Default plugin, that handles double-click events. = The workaround idea is quite interesting. I didn't really think about it but you could actually create a wrapper like sublime_plugin does for EventListeners but for mouse events.Of course, you'd have to have this extension installed but when it works like I think it could then you just need to define a "on_mouse_event(self, view, button, modifiers, ...)" method and do some handling there. This needs some thoughts but having some experience with the sublime API and sublime_plugin handling I think this might work. Hey FichteFoll! My initial idea was to do basically something as simple as this: class DoubleClickHandler(sublime_plugin.TextCommand): def __init__(self,view): self.view = view def run_(self, args): # Iterate through all of the user-provided key/value pairs for "context" or "view name", etc., and if the current view matches any of them, # call whatever command the user provides that matches. If we can't find any matches, pass the event on through to the default plugin: else: self.view.run_command("drag_select", {'event': args'event']}) self.view.run_command("drag_select", args) I was thinking the user-supplied config file could look something like this: { { // configuration for a view context "configType": "context", "identifier": "text.find-in-files", "command": "cscope_visiter" }, { // configuration for a view name "configType": "name", "identifier": "Git Diff", "command": "git_goto_diff" }, { // configuration for a view syntax "configType": "syntax", "identifier": "Packages/CscopeSublime/Lookup Results.hidden-tmLanguage", "command": "cscope_visiter" } } Here's what these two plugins do currently to make sure they're in the right view before doing anything with the command: class GitGotoDiff(sublime_plugin.TextCommand): def run(self, edit): v = self.view if v.name() != "Git Diff": return class CscopeVisiter(sublime_plugin.TextCommand): def __init__(self,view): self.view = view def run_(self, args): if self.view.settings().get('syntax') == CSCOPE_SYNTAX_FILE: Is this a similar approach to what you were talking about? Yes, that't what I understood from your post but my idea was kinda different. I'd really like to write some code for that but apparently I have 0 time to spare atm. Bump. I'm struggling with this at the moment, with two plugins with double-click support, but one of them gets the double-click.
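To make the dispatcher idea discussed in this thread a bit more concrete, here is a rough, untested sketch. The command name, the mouse_dispatch_rules setting, and the rule format are all invented for illustration; the drag_select fallback mirrors the snippets posted earlier. It would still need to be wired up to a double-click entry in a mousemap file, which is exactly the limitation the thread is about.

import sublime_plugin

class MouseDispatchCommand(sublime_plugin.TextCommand):
    """Hypothetical double-click dispatcher: routes the event to a command
    chosen from user configuration, or falls back to the stock drag_select."""

    def run_(self, args):
        # "mouse_dispatch_rules" is an invented setting name for this sketch:
        # a list of {"configType": ..., "identifier": ..., "command": ...} dicts.
        rules = self.view.settings().get("mouse_dispatch_rules", [])
        for rule in rules:
            if self._matches(rule):
                self.view.run_command(rule["command"])
                return
        # No rule matched: behave like the default double-click handler.
        self.view.run_command("drag_select", args)

    def _matches(self, rule):
        kind = rule.get("configType")
        ident = rule.get("identifier", "")
        if kind == "name":
            return self.view.name() == ident
        if kind == "syntax":
            return self.view.settings().get("syntax") == ident
        if kind == "context":
            # Score the scope at the first cursor position against the selector.
            pt = self.view.sel()[0].begin() if self.view.sel() else 0
            return self.view.score_selector(pt, ident) > 0
        return False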
https://forum.sublimetext.com/t/can-we-please-get-context-support-for-mousemap-files/9434/3
CC-MAIN-2016-36
refinedweb
422
57.37
I can't really think of a good way to phrase this, but I have no idea why, even though it will compile, it crashes upon execution. I'm trying to invoke a class function, nextTrain() to my class pointer, apt, and it works before I reassign the pointer, however, after running the line apt = apt->nextTrain() #include <iostream> #include <string> using namespace std; class train { public: string cars[100]; int index; int total; train(string n) { cars[0] = n; index=0; total=0;} train(string c[100], int ix, int t) { for (int i = 0; i < 100; i++) { cars[i] = c[i]; } index = ix; total = t; } train* nextTrain() { train t(cars, index, total); train* ret = &t; ret->atoix(1); return ret; } train* prevTrain() { train t(cars, index, total); train* ret = &t; ret->atoix(-1); return ret; } void atoix(int val) { index += val; } void add(string name) { cars[total+1] = name; total++; } string getName() { return cars[index]; } }; int main() { train a("Engine"); train* apt = &a; apt->add("Train2"); apt->add("Train3"); cout << apt->nextTrain()->getName() << endl; apt = apt->nextTrain(); cout << apt->getName() << endl; cout << apt->nextTrain()->getName() << endl; cout << apt->prevTrain()->getName() << endl; cout << apt->getName() << endl; } If we take a closer look at the code in the nextTrain function: train t(cars, index, total); train* ret = &t; ... return ret; The variable t is a local variable inside the function. When the function returns t will go out of scope and the object will be destructed. However, you return a pointer to this local variable. Once the variable has gone out of scope it doesn't exist anymore, and using this (now invalid) pointer will lead to undefined behavior. What you should do to solve this problem depends on how you will use it. My suggestion is to not return a pointer from the function, but return an object instance. I.e. train nextTrain() { train t(cars, index, total); t.atoix(1); return t; } Or even train nextTrain() { return train(cars, index+1, total); }
https://codedump.io/share/SLRZBZ0zMZvZ/1/using-class-functions-with-pointers-after-reassigning-pointer
CC-MAIN-2017-17
refinedweb
334
59.06
. Step 1: Create a struct type inside Monster named MonsterData. This struct should have a member for each attribute (name, symbol, health, damage, and gold). Step 2: Declare an array of that struct as a static member of the class named monsterData (declare this like a normal array member, and then add the word static before it). Step 3: Add the following code outside of the class. This is the definition for our lookup table: Now we can index this array to lookup any values we need! For example, to get a Dragon's gold, we can access monsterData[DRAGON].gold. Use this lookup table to implement your constructor: The following program should compile: and print: A orc (o) was created. 3e) Finally, add a static function to Monster named getRandomMonster(). This function should pick a random number between 0 and MAX_TYPES-1 and return a monster (by value) with that Type (you'll need to static_cast the int to a Type to pass it to the Monster constructor). You can use the following code. The orc hit you for 2 damage. (R)un or (F)ight: f You hit the orc for 2 damage.: // If the player is dead, we can't attack the monster -isDead function return true if m_health<=0 the player always starts with 10 health. so it won't go in the "return".It will go on the next lines. ok But when health is 0 , then is true , goes into the condition of the line 2 and what returns ? (line 2, does it return 0 ?) It doesn't return any value. Control / execution just returns to the caller and the program keeps going. ok, thank you. Typo: always inherit publically --> publicly. Hi Alex, in the game quiz, when writing the get random monster function, why i get a compiler error when i put the get random number function inside the monster class? I can't diagnose this from the information you've provided. What error are you getting? Quiz time 2b: Why can't GrannySmith call the Fruit constructor? Hi Papaplatte! If @GrannySmith were to call the @Fruit constructor directly, the construction of @GrannySmith's @Apple part would be skipped, which doesn't work. I don't understand line 19 in solution 3a. const std::string& getName() { return m_name; } Why use "const" and "std::string&"? Thank you so much! (Sorry! I speak English very bad. 🙁 ) Hi there! @m_name is returned by reference, because returning by value would create a copy of the string, which is slow. It's returned by const reference, because the caller of @getName should not be allowed to modify @m_name. References Lesson 7.4a - Returning values by value, reference, and address Because we are using the getRandomNumber() function in two .cpp files I extracted it into a separate utils.h files. This files is #included in main.cpp and monster.cpp. The program wont compile if the function is not marked static, because at linking stage there would be two functions with the same name. Is it right to fix it this way? To me marking the function as static feels like a quick hack. utils.h should only contain the function declaration, not the definition. The definition goes into utils.cpp. Well thats true, youre right. Is making a function static for this reason bad? I also saw code that marked some functions as inline and the others were implemented in a separate .cpp file. What about variables? Like pi etc in a math header. They need to be static in this case. > Is making a function static for this reason bad? Yes > What about variables? If they're constants (like pi) you can define them in the header as static constexpr. 
If they're variable, declare them extern and define them in a source file. Final solution for the game is missing a at the end For the lookup table, I decided to use std::array rather than the C Style array. I used: Inside the class, and: outside of the class. This resulted in an error saying 'too many initialisers for 'std::array''. A bit of Googling showed that I needed: But I cannot figure out why. Presumably it's something to do with the fact that (similar to how, in the IntArray container (lesson 10.6)) std::array is built on-top of the the traditional C-style array, but why does it require extra {} at the start and the end? Hi Jack! @std::array doesn't have a constructor for list initialization, so you have to initialize it's members directly. That member is a C-Style array, which requires another set of curly braces to be initialized. In you code, you have (and need) 3 sets of curly braces: 1. Uniform initializer braces for @std::array. 2. List initializer braces for the C-style array inside @std::array. 3. Uniform initializer braces for your monsters. I don't know why @std::array doesn't have a constructor for list initialization, it would make it's use a lot more intuitive. Question 2b) I tried to give default values to the protected constructor in the Apple class. However, then the compiler complained about the call as being ambiguous. Given that main() cannot access that constructor, why would that be the desired behaviour for the compiler? Hi DecSco! The compiler tries to find a matching function _before_ resolving access modifiers. I'd like it better the other way around, but that's the way it is. Do you think its better practice to encapsulate the random number generation for the monster class? Here's what I did: Personally, I don't think so, because the act of picking which monsters should be generated is separate from how monsters are represented. Putting them together in one class adds complexity. Damn, this game is addictive! But with initial player HP set to 10 it's just too difficult. 20 works much better. This is a bit irrelevant doubt, I was just curious how the default copy constructor would work. If we were to do something like this in main() And our default copy constructor would be something like this Now in our getRandomMonster() function copy constructor would get called while returning by value, So how does the copy constructor know which type of Creature we are dealing with ? The parameter for Creature(...) in your example should be m. This will cause the Monster copy constructor to call the Creature copy constructor on the Creature portion of the m object. I wrote this as Slightly more simplistic, but made me wonder if there is a specific reason you didn't do it like that; like for clarity's sake, for convention or because the random function doesn't play nice with randoming 0? Just throwing out possibilities. I had a lot of fun with this! It really made me appreciate classes and how they allow you to put the important but intricate stuff 'behind the scenes', improving clarity tenfold! I did it this way just because most of us are familiar with the concept of rolling dice to resolve "random events" from childhood board games. In this case, I roll a 2-sided dice, and if it comes up as a 1, the user is able to flee. Your approach of picking a random number 0 or 1 is just as valid, and may be more efficient. This is my version of this game. Welcome to play and suggestions. Enjoy. I was struck by using std::array to declare monsterData. 
I thought it was better to use std::array instead of using the c-style array. My compiler(default MS2017 on Win7) just gave this: =============================================================== Severity Code Description Project File Line Suppression State Error (active) E0147 declaration is incompatible with "std::array<Monster::MonsterData, 3U> Monster::monsterData" (declared at line 99) =============================================================== This is where I was confused. Why std::array not work? To me, there is no difference between the c-style array: Hi Ran! Looks like you have a forward declaration of @monsterData that's different from it's definition. If that's not the case make sure @MonsterData and @Monster are defined before the definition of the array and that you If you're still having trouble please share your full code or use a C-style array for now and I'll replace it with an std::array in your submission. Could someone explain why Alex defined member variables of the class in the Creature class using protected attribute in the Question 11.x.3.b? I have tried the way (using protected), but I was not very comfortable using this kind of declaration. In fact, I tend to use "private" in the creature class. To access the private member from the derived class, I just added a public function within the Creature class: To access the m_attackDamage, in the Player class I used: What are the weaknesses of doing so? What are the advantages to use protect member variables in the creature class? > To access the private member from the derived class, I just added a public function within the Creature class Well then that function should be protected, because the outside isn't supposed to be accessing this variable. Is some cases this might work, but what if a child wants to increase @m_attackDamage by 2? Declaring the variables protected allows every child to perform completely unrestricted access to those variables so they can be modified in any imaginable way. If one uses your solution this is not possible (Unless you have a function that grants direct access to private variables, in that case you might as well not declare the variable private). I have some troubles with my code: There are somethings wrong about it but i can't find it out, help!- This is a link to the error list. Hi Tin! > error: ‘Monster’ has not been declared Add a forward declaration for @Monster, pass the @Monster and @Player to the attack functions by reference. Define @attackMonster after the declaration of @Monster. > undefined reference to `Monster::Monster()' Define Monster::Monster > error: no matching function for call to ‘Creature::Creature()’ Define Creature::Creature Don't place semicolons after function definitions. The code compiles. It should really be split into multiple files. Thanks a lot! The solution for 3a has this for the get functions: const std::string& getName() { return m_name; } char getSymbol() { return m_symbol; } int getHealth() { return m_health; } int getDamage() { return m_damage; } int getGold() { return m_gold; } Why is the only const reference the string/getName()? Also why aren't the rest of the get functions const i.e: int getGold() const { return m_gold; } ? -Thanks Hi Jeffery! getName returns a const reference to a string, if it wasn't const we'd be able to modify the value of m_name directly using the return value of getName. The other functions don't need to return const types, because they aren't references. 
getName is a reference, because passing a string by value is slow. Hey Jeffery, The functions really should be const functions as they are used in a read-only capacity. getName() returns a const reference because std::string is a class and we generally don't want to pass classes by value (because making a copy is expensive). For fundamental tpes (int, char), this isn't an issue, so we can just return by value. Hello, Alex, i've got here a problem, it is not from your website, that I can not solve. It sounds like this: Define a Class named Components that contain a pointer to a string called name. From this class derive two classes, Processor, with frequency as int, and Screen with diagonal also as int. Each of the two derived classes contains a name and frequency display function for Processor and a name and diagonal for Screen. Then define a Computer class that contains as member data a Processor object and a Screen object and a display function of the contained objects. Define required constructors and display functions and built into the main function () a Computer object with Dual Core processor and 17-inch Samsung monitor. Display the data for this object using the display function defined in the Computer class. This is my code. The major problem is at Computer class: Well, your Computer constructor is defined as needing two parameters (freq and diag) but you're not passing in values for those in main(). I also note that your Computer constructor calls Processor and Screen but don't pass in a name (presumably these can just be string literals). So fix those problems (and the various spelling errors) and it should at least compile. I changed the Computer class, because it was(is) probably a strong source of my problems. Does it look better now? I also replaced the old "void print()" functions from the whole program, with overloaded operator<<. Also I do not know how to call "name" corectly in the other classes, like Screen and Processor. Thank you very much for the prompt response you gave me earlyer! And for this really complex and detailed tutorial. You've moved from an inheritance model to a composition model, which I think is good. A computer _has_ a processor and a screen, so composition is a better fit than inheritance. If your intention is to allow the user to pass in the Processor and Screen, then this seems fine. For your overloaded operator <<, you should be able to std::cout << c.m_name (because Computer is publicly inherited from Component, so you can access protected member m_name directly). Thank very much! 🙂 Hi Alex, First, thanks for this tutorial. My friends ask me why I'm learning C++ instead of whatever coding boot camp language they're learning, and I tell them it's because I found a really good tutorial. On 2(b), the solution won't compile if the Apple constructors use default parameters, like this: Apparently, the default parameters make the function call ambiguous. Why is that? Hi Mike! In your case, when you call the compiler cannot know if you want (a) to call constructor 1 or 2. It also doesn't know if you want (b) to create a Fruit("green", "red") or a Fruit("Apple", "green"). Default parameters are useful to an extent. If you use them without care you'll get overlapping functions signatures. Here's what happens to the functions at compile time. (Note: Some of these will be optimized away when they aren't used). Nascardriver gave a nice, long answer. Here's a short one: A class can only have one default constructor. The above Apple class has two! 
(as Nascardriver points out, this causes ambiguities when trying to call Apple() or Apple(std::string), as the compiler won't know which one you mean). Ohhhh, that makes a lot of sense. Thank you! Alex, I think the following condition in "attackMonster" is redundant (at least in this version), because "fightMonster" "while" loop has the same condition and it will be executed first. Awesome exercise. Thank you sooooo much. Why we need const reference return value? Is it for just for memory optimization or?? Second, Monster class is pretty confusing to me. We create an enum ... OK! We create a struct ... OK! Why we defined name as a const char pointer? (maybe for dynamic allocation? Because names has different lengths.) Third, We defined a static array type of MonsterData and size it to MAX_TYPES, which is 3. The lookup table. We defined a static array but we didn't initialize it. We initialized it out of Monster class scope with a confusing syntax. Thoose are elements of monsterData array, Shouldn't we use paranthesis"(-)" over curly braces"{-}" for seperating elements of this array? I don't feel right with this syntax. By the way, this is my first comment on here. All I can say this guide, tutorial, book or whatever you name it, It's a treasure. Sorry for terrible English. 1) We return a const reference so we don't make a copy of the std::string every time the function is called. 2) We define the name as a const char* because the names themselves are defined elsewhere -- we just need to point to them. 3) Each element of the monsterData array is a struct object. We initialize struct objects using curly braces. Monster::MonsterData Monster::monsterData[Monster::MAX_TYPES] Could you explain why we need to have MAX_TYPES inside [] and what is its purpose? Thanks. It tells the compiler how many elements are in the array. Technically, it isn't required, as the compiler can infer how many elements should be in the array from the initialization values, but it's useful to have it so the compiler can warn you if you try to provide too many initializers. I understand this formula, but I am still struggling recalling this formula and also there seems no way I could even come up with such formula on my own although the formula seems rather simple on surface. Should I be worried? Nothing to worry about. I wouldn't have come up with this either without a lot of research and user input. In these cases, best thing to do is save the code in a file somewhere so you can reuse it whenever you need it. alright. thank you as always Alex hello Alex... when i try to solve 3d question, i compile my code like this... 
#include<iostream> using namespace std ; class Creature { protected: string m_name ; char m_symbol ; int amount_of_health; int damage_per_attack ; int amount_of_gold_carrying ; public: Creature(const string &name = "", const char &symbol = ' ',const int &health = 0 , const int &attack = 0 ,const int &gold = 0 ) :m_name(name) , m_symbol(symbol) , amount_of_health(health) , damage_per_attack(attack) , amount_of_gold_carrying(gold){} string getName(){return m_name ;} char getSymbol(){return m_symbol;} int getHealth(){return amount_of_health ;} int Get_damage_per_attsck(){return damage_per_attack;} int getGold(){return amount_of_gold_carrying ;} void reduceHealth(const int &reduce){amount_of_health -= reduce ; } bool isDead() { if(amount_of_health <= 0){ return true ; } } void addGold(const int &gold){amount_of_gold_carrying += gold ;} }; class Player:public Creature { private: int m_level = 1 ; public: Player(const string &name = "", const char &symbol = '@',const int &health = 10 , const int &attack = 1 ,const int &gold = 0 ) :Creature(name , symbol , health , attack , gold ) {} void levelUp(){m_level += 1 ;damage_per_attack ++ ;} int getLevel(){return m_level ;} bool hasWon(){return m_level >= 20 ;} }; class Monster:public Creature {public: enum Type { DRAGON, ORC, SLIME, MAX_TYPES }; struct MonsterData { string name ; char symbol ; int health ; int attack ; int gold ; }; MonsterData monsterData[MAX_TYPES] { { "dragon", 'D', 20, 4, 100 }, { "orc", 'o', 4, 2, 25 }, { "slime", 's', 1, 1, 10 } }; Monster(const Type &type) :Creature(monsterData[type].name , monsterData[type].symbol , monsterData[type].health , monsterData[type].attack , monsterData[type].gold){} }; int main() { Monster m(Monster::ORC) ; //Monster m(Monster::ORC); std::cout << "A " << m.getName() << " (" << m.getSymbol() << ") was created.\n"; return 0; } i declare monsterData as non static variable inside class,it's succes to compile but the program crash when running...can you explain me why ?Thnks By declaring monsterData inside Monster, you're saying that every Monster should have its own copy of monsterData. Even though that doesn't make sense, lets set that aside for the moment. The issue you're running into here is that the Creature portion of Monster is getting created BEFORE the Monster portion of Monster. So when you call the Creature constructor, monsterData hasn't actually been initialized yet. It doesn't get initialized until the Creature potion of Monster completes. This can be avoided by making monsterData static, since static members get initialized when the program starts. thnks for your explaination...i got it 🙂 And i have solved final question (3f),but i used "if" and "else" a lot ,is it bad ?and please tell me if any semantic error... this is my code in function main() : int main() { srand(static_cast<unsigned int>(time(0))); // set initial seed value to system clock rand(); // get rid of first result string name ; cout << "What's your name : "; cin >> name ; Player Dani(name) ; cout << "Welcome, " << Dani.getName() << endl ; Monster monster(Monster::ORC) ; bool chance = true ; char choice ; while(true) { if(chance) { monster = Monster::getRandomMonster() ; cout << "You meet " << monster.getName() << endl ; } cout << "(r)un or (f)ight ? 
: " ; cin >> choice ; if(choice == 'r') { chance = getRandomNumber( 0 , 1) ; if(!chance) { Dani.reduceHealth(monster.getAttack()) ; cout << "you faild to flee " << endl ; } else {cout << "you succes to run " << endl ;} } else { monster.reduceHealth(Dani.getAttack()) ; cout << "you hit " << monster.getName() << " for " << Dani.getAttack() << " damage " << endl ; chance = false ; } if((!monster.isDead()) && chance == false) { Dani.reduceHealth(monster.getAttack()) ; cout << monster.getName() << " hit you for " << monster.getAttack() << " damage " << endl ; } else if (monster.isDead()) { Dani.levelUp(); cout << monster.getName() << " is dead you are now level " << Dani.getLevel() << endl ; Dani.addGold(monster.getGold()) ; cout << "you found " << monster.getGold() << " gold " << endl ; chance = true ; } if(Dani.getLevel() >= 20 ) { cout << "you win and have " << Dani.getGold() << " gold " << endl ; } else if (Dani.isDead()) { cout << "you died at level " << Dani.getLevel() << ", and you have " << Dani.getGold() << " gold " << endl ; cout << "too bad " << endl ; } } The use of if and else isn't bad in and of itself, but your function is quite long and would be well served by being broken up into smaller subfunctions. Many of those if/else blocks could be separate functions. Ok Alex ...Thnk you so much for lesson and advice 🙂 Hi Alex, In the last quiz how do i ignore when ppl try enter more like "ff ff" or " r r" i did try add std::cin.ignore(1, '\' ); doesnt seem to work. how i make so it just read the first one character? Add this line immediately after reading in the input value from the user: Name (required) Website
http://www.learncpp.com/cpp-tutorial/11-x-chapter-11-comprehensive-quiz/
CC-MAIN-2018-43
refinedweb
3,613
65.42
Spring MVC, JstlView and exposeContextBeansAsAttributes

Did you know that Spring MVC's JstlView has an exposeContextBeansAsAttributes property you can use to expose all your Spring beans to JSTL? I didn't. To configure it, you configure your viewResolver as follows:

<bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver">
    <property name="viewClass" value="org.springframework.web.servlet.view.JstlView"/>
    <property name="exposeContextBeansAsAttributes" value="true"/>
    <property name="prefix" value="/"/>
    <property name="suffix" value=".jsp"/>
</bean>

After doing this, any Spring bean can get referenced in JSTL with:

${beanId.getterMethodWithoutTheGetPrefix}

If you're using Spring 2.5's annotations and <context:component-scan>, you'll need to specify a "value" attribute on your annotations in order to reference them in JSTL. For example:

@Controller(value = "beanId")
@RequestMapping("/foo.html")
public class MyController extends SimpleFormController ...

@Component(value="testClass")
public class TestClass {

Pretty cool stuff. It'd be a lot more useful if you could call methods with parameters. Hopefully JUEL will solve that problem. JSTL's functions work, but I'd rather write ${foo.method('arg')} rather than ${taglib:callMethod(foo, 'method', 'arg')}.

Posted in Java at Dec 05 2007, 06:34:41

Posted by Jacob Hookom on December 05, 2007 at 09:23 PM MST #

Jacob - I'd be more than happy to, especially if I can configure it as the default EL in Tomcat. To do this with JUEL, changes need to be made to Tomcat. BTW, does JBoss EL allow HTML escaping by default?

Posted by Matt Raible on December 05, 2007 at 10:05 PM MST #
http://raibledesigns.com/rd/entry/spring_mvc_jstlview_and_exposecontextbeansasattributes
crawl-002
refinedweb
293
57.57
Controller Mockup This design equates Controller=Module/dispatch action modeled after Beehive. It features: - No configuration - everything done through annotations - XDoclet annotations to allow for Java 1.4 as well as, IMO, easier to read annotations - Option to use Form object or not - POJO controller - Validation annotations available, even without form object - Result forwards not required to be defined, when Action.SUCCESS returned, defaults to action name. The latter two points require some explaination. First, following the design of beehive, validation annotations should be able to be defined in multiple places: the controller class, the action method, or the property getter. Since Beehive supports shared flows, perhaps those would be used to share validation form definitions among other things. In this mock, I like the ability to define a quick validation properties without bothering with the overhead of a full form. While I'm sure in actual use the annotations would have to support more complex definitions, I perfer them to be as minimal as humanly possible. Second, since most Actions have only one forward, I think it makes sense to default a success outcome (encouraging the use of the static field) to a page containing the name of the action. Beehive gives each Controller its own prefix path, mapping to a physical path on the hd used when resolving jsp's. If the success result is returned and no forward defined, I think the name of the action should be provided to the default result type (jsp, velocity, etc) to guess the name of the file. So, for the login action in the "" or default Controller, the jsp result type would guess {appRoot}/login.jsp. This is similar to how Ruby on Rails operates. Finally, I'm warming to how Beehive separates Controller.java files from the main source and includes them right along side the jsp's. I think it would be interesting to go farther and, in development mode, leave them there for deployment and include a compiling classloader that compiles on file changes. Cocoon has already done the legwork for this feature. This would go a long way to encourage rapid development. Terminology page flow controller: the controller class for a particular module path page flow: an entire "package" of a controller and its associated view elements Why XDoclet? I like XDoclet for the default config/annotation engine for several reasons: - The annotations are cleaner - All annotations will be used at build time to generate an XML file, not needed at runtime. While Java 5 does provide APT, it is a sun-specific tool and, at least currently, cannot be invoked by Ant, not to mention it requires Java 5 annotations. - Since the annotations create an XML file, that is one less XWork Configuration implementation to create so we get two config styles for free. I wonder if we could use a properties file to generate XML as well making our job that much easier. - Able to include in the Struts Ti distro I believe and currently used by many Struts developers That said, I still think we would need to create Java 5 annotations as they have several key benefits, not the least being compile-type error checking, but I think supporting Java 1.4 is more important personally. Controller.java Mockup Following Beehive, this code is located in {approot}/Controller.java. Since it is the default package, it's actions will be called from the root, for example, the "index" action will be called by {contextPath}/index. 
If it was in the 'foo' package, it would be located at {approot}/foo/Controller.java and called by {contextPath}/foo/index.

    import com.opensymphony.xwork.Action;
    import com.opensymphony.xwork.ActionContext;
    import java.util.Map;
    import com.mycompany.app.UserManager;

    public class Controller {

        /** @ti.action */
        public String index() {
            return Action.SUCCESS;
        }

        /** @ti.action */
        public String login() {
            return Action.SUCCESS;
        }

        /**
         * @ti.action
         * @ti.validateRequired userName "User name is required"
         * @ti.validateRequired password "Password is required"
         *
         * @ti.forward name="success" type="redirect" value="index"
         * @ti.forward name="error" type="action" value="login"
         */
        public String processLogin() {
            ActionContext ctx = ActionContext.getContext();
            Map params = ctx.getParameters();
            String userName = (String)params.get("userName");
            String password = (String)params.get("password");
            if (ctx.getMessages().size() == 0 && UserManager.isValid(userName, password)) {
                return Action.SUCCESS;
            } else {
                ActionContext.getInstance().put("error", "Invalid login");
                return Action.ERROR;
            }
        }

        /**
         * Demonstrates login action with POJO form
         * @ti.action
         */
        public String processLoginWithForm(LoginForm form) {
            // do something
            return Action.SUCCESS;
        }

        /**
         * POJO form with validation annotations on fields.
         */
        public static final class LoginForm {
            private String userName;
            private String password;

            public void setUserName(String name) { this.userName = name; }
            public void setPassword(String val) { this.password = val; }

            /**
             * @Ti.validateRequired "User name is required"
             */
            public String getUserName() { return this.userName; }

            /**
             * @Ti.validateRequired "Password is required"
             */
            public String getPassword() { return this.password; }
        }
    }

Comment by rich on Tue Jul 5 15:08:28 2005

I think that JSR175-style annotations should be prime, rather than the reverse. XDoclet-style annotations are definitely cleaner, but tool support will always end up getting built around the standard ones. Aside from the fact that editors will become friendly to raw annotations (which I believe to be true, even beyond the current support for statement completion), annotation support is already being built into the Eclipse JDT, so higher-level tools (design surfaces etc.) will have access to them more easily than they would to XDoclet tags. I'd be happy to have these two goals (XDoclet/JSR175-style annotation support) be peers, and in fact, there's a typesystem in Beehive that can run on top of both. I just think it would be a mistake to make tool-friendly annotations a secondary goal.

Comment by rich on Tue Jul 5 15:19:25 2005

I really like the defaults behavior -- it eliminates a lot of rote code. One comment on this is that returning String is pretty limiting. It's the approach JSF took, and it's a roadblock if you ever want to attach something programmatically to the result. We can wait to see if we have a use for it, but we might end up wanting something like Forward in Beehive. My ideal would be to have String and a complex object both be valid return types.

Comment by rich on Tue Jul 5 15:20:41 2005

Shouldn't there be an annotation to denote an action? I think it would be bad to have any public String getter turn into a user-addressable action. Conversely, I think it would be bad to say that no action can ever start with 'get'. Thoughts?

Comment by mrdon on Tue Jul 5 16:29:34 2005

I agree both xdoclet and jsr 175 style annotations should be peers.
The one feature of jsr 175 annotations that bothered me is that you weren't allowed to repeat an annotation (i.e. multiple forwards), which forced you to shove everything into a giant annotation. Any way to minimize that?

Regarding return types, I'm not sure how that would work with xwork, but we can look into that. Is there a particular use case you are thinking of?

Regarding the action annotation, good point. How is this solved in JSF? I think a simple @action marker annotation would do the trick nicely, as much as I hate to require a default annotation :/

Comment by rich on Tue Jul 5 23:26:39 2005

1) I agree -- the JSR175 restriction on repeating annotations is terrible. If not for that, the annotations actually wouldn't be so bad... just an extra set of parentheses. I don't know of any way to minimize that pain (except through a nice hierarchical editor). I think that people who use editors will stick with JSR175, and people who compose and edit by hand will consider XDoclet. But, if there are good editors... I bet the former group will dwarf the latter.

2) The main use case I was thinking of is the mechanism for passing initialization data to the view. Separating this kind of thing out from more general means (like request attributes in Servlet land) helps if you want to preserve non-long-lived state for 'go-back' situations -- returning to a page that had validation errors, coming back out of a nested flow, etc. It also helps from the tool angle when there are ways to declare types to go along with the actual data that's being passed. In Beehive there are constructors and setters on Forward for passing initializer form beans and "action outputs":

    return new Forward("success", new LoginForm(...));

or

    Forward fwd = new Forward("success");
    fwd.addActionOutput("initData", ...);
    return fwd;

etc. There are also optional annotations for declaring the types and required/optional flags to go with the actual data. Assuming this sort of thing is useful in Ti (I think it is, but we'll see), we could either accept both String and some complex type, or we could decide that return new Forward("foo") is simple enough. In the latter case I think Action.SUCCESS|ERROR would still always be a valid return value.

3) In JSF, the method "actions" aren't user-addressable, so they didn't run into the same issue. Components bind to methods through the EL, e.g., <h:commandLink action="#{someScope.myBean.login}" .../>, which resolves to method login() (not getLogin()... funny muddling of property- and method-binding).

Comment by mrdon on Wed Jul 6 09:16:01 2005

- Well, if we stick to using them for generating xml configuration at build-time, then it won't take much extra work to support both. For instance, the code in svn now maps the xdoclet tags into xwork xml, re-using their configuration system. Hmmm... it wouldn't be hard to accept both String and Forward returns, and we could stick code between our action invocator and the controller to properly process the Forward. I think the question is whether two techniques are more confusing to the user. On one hand, returning String keeps in line with JSF and WebWork2, but as you point out, the other adds additional functionality. Hmm...
- Oh right, requests are for pages and, at least it used to be, everything is a POST. I do prefer requests being for actions, but yes, we will probably have to add that marker annotation then.

Comment by rich on Wed Jul 6 20:28:42 2005

1) OK, sounds good. I'd suggest then that we focus first on the runtime, with handcoded xwork configs.
We can assume that annotation/tag processing is a (large) implementation detail. It's definitely the part I have the fewest questions about. What do you think?

2) One other thought: if there's always a context available, some of the stuff that's done through Forward in Beehive could be done on the context instead. Maybe we should start with String and operate under the assumption that the context would be used for everything else?

Comment by mrdon on Thu Jul 7 09:07:05 2005

- Well, actually, I have already written and tested a tag processor and ant task that uses xdoclet's xjavadoc and velocity to easily generate the xwork config, but you are right we shouldn't focus on it yet. Again, already implemented regarding use of Spring. I'm not exactly following how that relates to the context, and by context I think you mean chain WebContext?

Comment by rich on Thu Jul 7 16:23:31 2005

1) Yeah, I saw that. You've been busy. I just figured that we'd end up stuffing a lot more into the config files than exists now. The processing layer in Beehive is large, because there's so much checking that can be done in the annotations and between annotations and types/methods/fields (which is a real advantage of annotation processing over XML configuration).

2) I'm confused. Are we having a String vs. Spring mismatch here? I just meant that we could use whatever context we provide (extension of WebContext?) to store what Beehive stores in the Forward object. So the action methods could return Strings. Instead of Springs.

Comment by mrdon on Thu Jul 7 16:34:40 2005

- Yep, good point; however, I'm hoping the velocity template will be easy enough to edit, but if it starts to absorb too much time, I agree it can wait.
- Doh, I read 'Spring'. I agree returning Strings is a better design than Springs. Also agree we could move that into a context. I'm thinking we'll need to create a ControllerContext, much like Struts 1.x's ActionContext, which will wrap xwork's ActionContext, which will have chain's WebContext. Quite the Context party...

Comment by rich on Thu Jul 7 17:22:56 2005

2) Yeah... I guess Context parties are the wave of the future.

Comment by rich on Thu Jul 7 17:37:11 2005

4) Hey, how would people feel about making the XDoclet-style annotations ordered according to hierarchy? In the current mockup, the @Ti.action annotations would come before all the others on a method. This would allow there to be a better correspondence between XDoclet-style and JSR175-style annotations (and I don't think it's a harsh requirement).

5) Minor, but it's nice to settle on things like this early: I'd be in favor of lowercasing all elements of the annotation names, or uppercasing them all: @ti.action or @Ti.Action. Are XDoclet tags usually lowercased? On the JSR175 side, action is a type (an @interface), so it would seem strange to lowercase it if the wrapper interface (Ti) was uppercased.

Comment by mrdon on Thu Jul 7 18:05:37 2005

Both suggestions sound good; pick one - upper-cased tags or lower-cased.

Comment by rich on Thu Jul 7 22:08:03 2005

Cool. I like @ti.action because it's easier to type...

Comment by rich on Wed Jul 13 16:27:26 2005

I wasn't reflecting hard enough on the Forward-vs-String question. I think we'll need to at least support something like Forward if we want to be able to accept dynamically-generated URIs. Of course we could support String and URI, but this seems like a low-flexibility option.
Comment by mrdon on Wed Jul 13 16:48:18 2005

Not necessarily - WebWork allows the location attribute (Struts' ActionForward 'path') to be an OGNL expression. If we supported pluggable expression languages, we could allow the language evaluation engine to process 'location', providing dynamic paths.

Comment by rich on Sat Jul 16 15:55:16 2005

Say I'm in an action method and I have:

    String url = getSomeURL();

In that case, what do I return in order to forward or redirect to that URL? And how do I specify that it's a forward or a redirect?

Comment by mrdon on Sat Jul 16 16:45:35 2005

I don't know what WebWork2 would suggest, but I'd imagine you'd define two forwards, one a dispatch and the other a redirect, which pulls the location out of the context/request attribute. While obviously a page like that isn't toolable, you can at least tell there will be a redirect and a normal dispatch as results.

Comment by rich on Mon Jul 18 20:40:03 2005

Would the method need to stick the url into a context/request attribute directly? So I understand, could you show what the action method would look like? If we can stick with String, that's good as long as there's not too much arcane knowledge required to fit everything in...

Comment by mrdon on Tue Jul 19 10:53:16 2005

Yes, you'd have:

    ActionContext.getContext().put("url", url);

then as your forward:

    @ti.forward name="dynamicUrl" location="/public/#{url}"

Of course we would use the standard JSP 2.0 EL, but the principle is the same.

Comment by rich on Tue Jul 19 13:20:43 2005

Hmm... OK. I do like that there's an identifiable @ti.forward, although the mechanism is difficult to discover. Much harder than recognizing that there's a Forward constructor which takes a URI. I agree with trying this out, though...
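Putting the last few comments together, a minimal sketch of the dynamic-URL pattern mrdon describes might look like the method below. The @ti.* annotation names and the ActionContext calls simply mirror the mockup above; none of this is a final API, and getSomeURL() is just a stand-in for whatever computes the path.

    /**
     * @ti.action
     * @ti.forward name="success" location="/public/#{url}"
     */
    public String viewDocument() {
        String url = getSomeURL();                  // dynamically computed path
        ActionContext.getContext().put("url", url); // expose it to the EL used in the forward
        return Action.SUCCESS;                      // the action itself still returns a plain String
    }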
http://wiki.apache.org/struts/StrutsTi/ControllerMock?highlight=WebWork
CC-MAIN-2013-48
refinedweb
2,709
65.01
Hi

I was trying to write a code to read n numbers and find total, min, max, average for those n numbers. There is clearly something wrong with the code which I have failed to understand. Please help me. I'm new to these loops.

Best regards
Jackson

Code:
    #include <iostream>
    #include <cstdlib>
    using namespace std;

    int main()
    {
        int i, n;
        float number, total=0, min=500, max=0, average;
        cout << "How many numbers are there?: ";
        cin >> n;
        for (i=0; i<n; i+=1)
        {
            cout << "Enter the number: ";
            cin >> n;
            total = total + n;
            if (n < min)
            {
                min = n;
            }
            if (n > max)
            {
                max = n;
            }
        }
        cout << "Total is: " << total << endl;
        cout << "Minimum is: " << min << endl;
        cout << "Maximum is: " << max << endl;
        cout << "Average is: " << total/n << endl;
        system("pause");
    }

Output:
    How many numbers are there?: 9
    Enter the number: 1
    Total is: 1
    Minimum is: 1
    Maximum is: 1
    Average is: 1
    Press any key to continue . . .
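For reference, the likely culprit in the snippet above: inside the loop the program reads each value into n (the count of numbers) instead of number, so the count is overwritten by the first value entered, the loop exits early, and the final average is divided by the wrong n. A corrected sketch of the loop, keeping the original variable names:

    for (i = 0; i < n; i++)
    {
        cout << "Enter the number: ";
        cin >> number;              // read into number, not n
        total = total + number;
        if (number < min)
            min = number;
        if (number > max)
            max = number;
    }
    average = total / n;            // n still holds the original count
    cout << "Average is: " << average << endl;

(Initializing min to 500 is also fragile: it fails if every input is larger than 500. Initializing min and max from the first value read is safer.)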
http://cboard.cprogramming.com/cplusplus-programming/137266-read-n-numbers-find-their-total-min-max-average.html
CC-MAIN-2014-35
refinedweb
157
67.49
inspect --- 检查对象¶ 源代码: Lib/inspect.py inspect 模块提供了一些有用的函数帮助获取对象的信息,例如模块、类、方法、函数、回溯、帧对象以及代码对象。例如它可以帮助你检查类的内容,获取某个方法的源代码,取得并格式化某个函数的参数列表,或者获取你需要显示的回溯的详细信息。 该模块提供了4种主要的功能:类型检查、获取源代码、检查类与函数、检查解释器的调用堆栈。 类型和成员¶ getmembers() 函数获取对象的成员,例如类或模块。函数名以"is"开始的函数主要作为 getmembers() 的第2个参数使用。它们也可用于判定某对象是否有如下的特殊属性: 在 3.5 版更改: Add __qualname__ and gi_yieldfrom attributes to generators. The __name__ attribute of generators is now set from the function name, instead of the code name, and it can now be modified. 在 3.7 版更改: Add cr_origin attribute to coroutines. inspect. getmembers(object[, predicate])¶ Return all the members of an object in a list of (name, value) pairs sorted by name. If the optional predicate argument is supplied, only members for which the predicate returns a true value are included. 注解if the object is a class, whether built-in or created in Python code. inspect. isfunction(object)¶ Return Trueif the object is a Python function, which includes functions created by a lambda expression. inspect. iscoroutinefunction(object)¶ Return Trueif the object is a coroutine function (a function defined with an async defsyntax). 3.5 新版功能. inspect. iscoroutine(object)¶ Return Trueif the object is a coroutine created by an async deffunction. 3.5 新版功能. inspect. isawaitable(object)¶ Return Trueif()) 3.5 新版功能. inspect. isasyncgenfunction(object)¶ Return Trueif the object is an asynchronous generator function, for example: >>> async def agen(): ... yield 1 ... >>> inspect.isasyncgenfunction(agen) True 3.6 新版功能. inspect. isasyncgen(object)¶ Return Trueif the object is an asynchronous generator iterator created by an asynchronous generator function. 3.6 新版功能. inspect. isbuiltin(object)¶ Return Trueif the object is a built-in function or a bound built-in method. inspect. isroutine(object)¶ Return Trueif the object is a user-defined or built-in function or method. inspect. ismethoddescriptor(object)¶ Return Trueiffrom the ismethoddescriptor()test, simply because the other tests promise more -- you can, e.g., count on having the __func__attribute (etc) when an object passes ismethod(). inspect. isdatadescriptor(object)¶ Return Trueifif the object is a getset descriptor. CPython implementation detail: getsets are attributes defined in extension modules via PyGetSetDefstructures. For Python implementations without such types, this method will always return False. inspect. ismemberdescriptor(object)¶ Return Trueif the object is a member descriptor. CPython implementation detail: Member descriptors are attributes defined in extension modules via PyMemberDefstructures. For Python implementations without such types, this method will always return False.. 在 3.5 版更改:. Introspecting callables with the Signature object¶ 3.3 新版功能.. A slash(/) in the signature of a function denotes that the parameters prior to it are positional-only. For more info, see the FAQ entry on positional-only parameters. 3.5 新版功能: follow_wrappedparameter. Pass Falseto get a signature of callablespecifically ( callable.__wrapped__will not be used to unwrap decorated callables.) 注解. 在 3.5 版更改: Signature objects are picklable and hashable. parameters¶ An ordered mapping of parameters' names to the corresponding Parameterobjects. Parameters appear in strict definition order, including keyword-only parameters. 在 3.7 版更改: Python only explicitly guaranteed that it preserved the declaration order of keyword-only parameters as of version 3.7, although in practice this order had always been preserved in Python) 3.5 新版功能. - class inspect. 
Parameter(name, kind, *, default=Parameter.empty, annotation=Parameter.empty)¶ Parameter objects are immutable. Instead of modifying a Parameter object, you can use Parameter.replace()to create a modified copy. 在 3.5 版更改:. 在 3.6 版更改:'" 在 3.4 版更改:. 注解', ())]) 3.5 新版功能. The argsand kwargsproperties can be used to invoke functions: def test(a, *, b): ... sig = signature(test) ba = sig.bind(10, b=20) test(*ba.args, **ba.kwargs) 类与函数 None. defaults is a tuple of default argument values or Noneif there are no default arguments; if this tuple has n elements, they correspond to the last n elements listed in args. 3.0 版后已移除: Noneif arbitrary positional arguments are not accepted. varkw is the name of the Noneif arbitrary keyword arguments are not accepted. defaults is an n-tuple of default argument values corresponding to the last n positional parameters, or Noneif there are no such defaults defined. kwonlyargs is a list of keyword-only parameter names in declaration order.. 在 3.4 版更改: This function is now based on signature(), but still ignores __wrapped__attributes and includes the already bound first parameter in the signature output for bound methods. 在 3.6 版更改: This method was previously documented as deprecated in favour of signature()in Python 3.5, but that decision has been reversed in order to restore a clearly supported standard interface for single-source Python 2/3 code migrating away from the legacy getargspec()API. 在 3.7 版更改: Python only explicitly guaranteed that it preserved the declaration order of keyword-only parameters as of version 3.7, although in practice this order had always been preserved in Python 3. inspect. getargvalues(frame)¶ Get information about arguments passed into a particular frame. A named tuple ArgInfo(args, varargs, keywords, locals)is returned. args is a list of the argument names. varargs and keywords are the names of the None. locals is the locals dictionary of the given frame. 注解, 例如: >>> from inspect import formatargspec, getfullargspec >>> def f(a: int, b: float): ... pass ... >>> formatargspec(*getfullargspec(f)) '(a: int, b: float)' 3.5 版后已移除:. 注解' 3.2 新版功能. 3.5 版后已移除:. 3.3 新版功能.. 3.4 新版功能.. 在 3.5 版更改: Return a named tuple instead of a tuple. 注解. 在 3.5 版更改:. 在 3.5 版更改: A list of named tuples FrameInfo(frame, filename, lineno, function, code_context, index)is returned. inspect. currentframe()¶. 在 3.5 版更改:. 在 3.5 版更改: A list of named tuples FrameInfo(frame, filename, lineno, function, code_context, index)is returned.. 3.2 新版功能.. 3.2 新版功能.. 3.5 新版功能.. 3.3 新版功能. inspect. getcoroutinelocals(coroutine)¶ This function is analogous to getgeneratorlocals(), but works for coroutine objects created by async deffunctions. 3.5 新版功能.. 3.5 新版功能. inspect. CO_ITERABLE_COROUTINE¶ The flag is used to transform generators into generator-based coroutines. Generator objects with this flag can be used in awaitexpression, and can yield fromcoroutine objects. See PEP 492 for more details. 3.5 新版功能. 命令行界.
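As a quick illustration of the Signature / Parameter / BoundArguments machinery described above, a minimal example follows; the behavior shown is the standard-library behavior, and the function being inspected is just a stand-in.

    import inspect

    def greet(name, punctuation="!", *, shout=False):
        text = name + punctuation
        return text.upper() if shout else text

    sig = inspect.signature(greet)
    print(sig)                              # (name, punctuation='!', *, shout=False)
    for param in sig.parameters.values():   # parameters in strict definition order
        print(param.name, param.kind, param.default)

    bound = sig.bind("reader", shout=True)  # raises TypeError if the call would be invalid
    bound.apply_defaults()
    print(bound.arguments)                  # {'name': 'reader', 'punctuation': '!', 'shout': True}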
https://docs.python.org/zh-cn/3.7/library/inspect.html
CC-MAIN-2022-21
refinedweb
1,006
50.02
SEM_DESTROY(3)            Linux Programmer's Manual            SEM_DESTROY(3)

NAME
       sem_destroy - destroy an unnamed semaphore

SYNOPSIS
       #include <semaphore.h>

       int sem_destroy(sem_t *sem);

       Link with -pthread.

RETURN VALUE
       sem_destroy() returns 0 on success; on error, -1 is returned, and
       errno is set to indicate the error.

ERRORS
       EINVAL sem is not a valid semaphore.

ATTRIBUTES
       ┌──────────────┬───────────────┬─────────┐
       │Interface     │ Attribute     │ Value   │
       ├──────────────┼───────────────┼─────────┤
       │sem_destroy() │ Thread safety │ MT-Safe │
       └──────────────┴───────────────┴─────────┘

CONFORMING TO
       POSIX.1-2001, POSIX.1-2008.

NOTES
       An unnamed semaphore should be destroyed with sem_destroy() before
       the memory in which it is located is deallocated. Failure to do this
       can result in resource leaks on some implementations.

SEE ALSO
       sem_init(3), sem_post(3), sem_wait(3), sem_overview(7)

COLOPHON
       This page is part of release 4.16 of the Linux man-pages project. A
       description of the project, information about reporting bugs, and
       the latest version of this page, can be found at.

Linux                           2017-09-15                     SEM_DESTROY(3)

Pages that refer to this page: sem_init(3), sem_overview(7)
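A small usage sketch of the lifecycle the NOTES section refers to — initialize, use, then destroy only once no thread can still be blocked on the semaphore. Error handling is kept minimal for brevity.

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    static sem_t sem;

    static void *worker(void *arg)
    {
        sem_wait(&sem);                      /* blocks until main posts */
        puts("worker: semaphore acquired");
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        if (sem_init(&sem, 0, 0) == -1)      /* unnamed, shared between threads, initial value 0 */
            return 1;

        pthread_create(&t, NULL, worker, NULL);
        sem_post(&sem);                      /* wake the worker */
        pthread_join(t, NULL);               /* ensure no thread is still blocked on sem */

        sem_destroy(&sem);                   /* safe now; the memory may be reused after this */
        return 0;
    }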
http://man7.org/linux/man-pages/man3/sem_destroy.3.html
CC-MAIN-2018-34
refinedweb
147
68.36
Bugs item #3582112, was opened at 2012-10-30 22:01 Message generated for change (Tracker Item Submitted) made by You can respond by visiting: Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: gcc-4.7.0 Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Alex () Assigned to: Nobody/Anonymous (nobody) Summary: Simple compiled program crashes Initial Comment: Operating System: Windows 7 64-bit GCC Version: 4.7.0 Binutils Version: 2.22 MinGW Installer Version: mingw-get-inst-20120426.exe Build environment: cmd.exe Minimal Self-Contained Test Case: Compiling the following segment of code with the command line g++ main.cpp -o main.exe will produce an executable file (main.exe). However, upon running the executable file, it will result in the program crashing. #include <string> int main () { std::string str; return 0; } I obtained the latest version of GCC by running the installer listed earlier. During the installation process I chose the option "Download latest repository catalogues". Then, under the "Select Components" section, I checked the checkbox next to "C++ Compiler" (everything else left as default). It then downloaded and placed all of the files in C:\MinGW\. However, when I attempt to run a simple compiled program, the program will crash. I do think this issue has something to do with the version 4.7.0, because when I do not chose the "Download latest repository catalogues" option, it will download and install version 4.6.2 (along with Binutils Version 2.22). I've never had any problems with compiling/running programs with version 4.6.2 on the exact same system setup mentioned earlier. Running the program compiled with 4.7.0, the following is the crash report that Windows 7 gives: Problem signature: Problem Event Name: APPCRASH Application Name: main.exe Application Version: 0.0.0.0 Application Timestamp: 5090ad7a Fault Module Name: libstdc++-6.dll Fault Module Version: 0.0.0.0 Fault Module Timestamp: 4ed82a4d Exception Code: c0000005 Exception Offset: 00049ed can respond by visiting:
http://sourceforge.net/p/mingw/mailman/mingw-notify/thread/From_noreply@sourceforge.net_Wed_Oct_31_05:01:22_2012/
CC-MAIN-2014-23
refinedweb
358
58.69
#include <Rect.h>

Rect can be associated with an SDF/Cos rectangle array using the Rect(Obj*) constructor, or later using the Rect::Attach(Obj*) or Rect::Update(Obj*) methods. Rect keeps a local cache of the rectangle's points, so it is necessary to call the Rect::Update() method if changes to the Rect should be saved in the attached Cos/SDF array.

Member function briefs:
- Get the coordinates of the rectangle.
- Set the coordinates of the rectangle.
- Determines if the specified point is contained within the rectangular region defined by this Rectangle.
- Normalizes the rectangle to the one with lower-left and upper-right corners.
- Set the horizontal value of the lower-left point.
- Set the vertical value of the lower-left point.
- Set the horizontal value of the upper-right point.
- Set the vertical value of the upper-right point.
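A rough usage sketch of the workflow implied above: associate a Rect with a Cos/SDF rectangle array, adjust the cached points, then push the changes back with Update(). Only Rect(Obj*), Attach(Obj*) and Update() are named in the text; the Set/Normalize/Contains calls and the Obj namespace below are assumed from the member briefs and should be checked against the actual header.

    #include <Rect.h>

    void AdjustBox(pdftron::SDF::Obj* rect_array)
    {
        pdftron::PDF::Rect rect(rect_array);   // associate with the SDF/Cos rectangle array

        rect.Set(10, 10, 210, 110);            // set the coordinates (assumed signature)
        rect.Normalize();                      // ensure lower-left / upper-right ordering

        if (rect.Contains(50, 50)) {           // point-in-rectangle test (assumed signature)
            // ... hit-testing logic ...
        }

        rect.Update();                         // write the cached points back to the attached array
    }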
http://www.pdftron.com/net/html/classpdftron_1_1PDF_1_1Rect.html
crawl-001
refinedweb
136
51.85
Interesting ... sounds like a legit bug, then (although it bears noting that byte[] primary keys aren't actually allowed by the JPA spec, as per section 2.1.4 ... support for them is an OpenJPA extension). My guess is that this only affects Oracle, due to our special handling of blobs. It'd be interesting to see if any other databases that support byte[] primary keys exhibit this problem. On Jan 2, 2007, at 7:23 PM, Igor Fedorenko wrote: > You can use use RAW(16) to store GUIDs in Oracle. This datatype is > allowed in primary keys. > > -- > Regards, > Igor > > Dain Sundstrom wrote: >> Can you have java field of type byte[] that maps to a NUMERIC (or >> heck a varchar) in he db? I'm guessing that Kevin's guid is a >> fixed 128 bit number. If it is and he can map it to a non-blob >> type, it should be possible to join with any database system. >> -dain >> On Jan 2, 2007, at 3:09 PM, Marc Prud'hommeaux wrote: >>> Kevin- >>> >>>> Also, this exception is supposedly only being produced with >>>> Oracle, not >>>> DB2. (I have not been able to verify that yet.) This would >>>> seem to >>>> indicate that it's dictionary-specific, but I'm not seeing >>>> anything there >>>> yet... >>> >>> Does Oracle even support blob primary keys? My recollection is >>> that it didn't... >>> >>> I suspect that the problem might be that since Oracle has a >>> number of problems with in-line blobs in statements, we >>> frequently issue a separate statement to load and store blobs >>> from and to rows, but if it is the primary key, then we might be >>> conflicting with that. Can you post the complete stack trace? >>> >>> >>> >>> >>> On Jan 2, 2007, at 6:03 PM, Kevin Sutter wrote: >>> >>>> Hi, >>>> Some experimenting with the @IdClass support is producing a strange >>>> exception message when attempting to map an id field of type byte >>>> []. >>>> According to the OpenJPA documentation, we need to use an >>>> Identity Class to >>>> use byte[] as the id field type. Something like this: >>>> >>>> @Entity >>>> @IdClass (jpa.classes.Guid.class) >>>> @Table(name="AGENT", schema="CDB") >>>> public class Agent { >>>> >>>> @Id >>>> @Column(name="ME_GUID") >>>> private byte[] guid; >>>> ... >>>> >>>> The Guid class has also been created with a single instance >>>> variable of type >>>> byte[]: >>>> >>>> public class Guid implements Serializable { >>>> private byte[] guid; >>>> ... >>>> >>>> But, during the loading of the database, I am getting the >>>> following error... >>>> >>>> org.apache.openjpa.util.MetaDataException: You cannot join on >>>> column " >>>> AGENT.ME_GUID". It is not managed by a mapping that supports joins >>>> >>>> First off, the exception is confusing since I don't believe I am >>>> attempting >>>> to do a join. The guid column is in the same table as the Agent. >>>> >>>> Also, this exception is supposedly only being produced with >>>> Oracle, not >>>> DB2. (I have not been able to verify that yet.) This would >>>> seem to >>>> indicate that it's dictionary-specific, but I'm not seeing >>>> anything there >>>> yet... >>>> >>>> I am in the process of validating the problem, but I thought I >>>> would drop a >>>> line to the team to see if it rings any bells... >>>> >>>> Thanks, >>>> Kevin >
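Worth adding for anyone reproducing the setup quoted above: JPA requires an @IdClass to be serializable and to override equals() and hashCode() consistently with the entity's id fields, and for a byte[] key that comparison needs to be content-based rather than reference-based. A sketch of what the Guid class from the thread might add — illustrative only; the field name simply follows the quoted code:

    import java.io.Serializable;
    import java.util.Arrays;

    public class Guid implements Serializable {
        private byte[] guid;

        public Guid() {}
        public Guid(byte[] guid) { this.guid = guid; }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Guid)) return false;
            return Arrays.equals(guid, ((Guid) o).guid);   // compare contents, not references
        }

        @Override
        public int hashCode() {
            return Arrays.hashCode(guid);
        }
    }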
http://mail-archives.apache.org/mod_mbox/openjpa-dev/200701.mbox/%3C2D57D84F-225D-4B8F-8CF3-753099827249@apache.org%3E
CC-MAIN-2015-48
refinedweb
516
71.55
We can either change the web containers to put in such cut points. Or alternately, if your thread local contains a factory, then we could put a webcontainer-classloader-aware factory there to select the actual initial context. That would allow different parts of Geronimo to use different mechanisms for finding an initial context.

cheers

Jeremy Boynes wrote:
> I have checked in a simple impl of JNDI for the java: namespace
> (o.a.g.naming.java)
>
> This is based on a couple of assumptions:
> * that the comp Context for a component is immutable so can be heavily
>   indexed
> * that the majority of lookups are for simple strings (e.g. "env/jdbc/MyDS")
> * that we do not need to bind anything to the java: namespace except
>   java:comp/...
>
> The component Context is taken from a ThreadLocal rather than the context
> ClassLoader because for EJBs several components may share the same
> classloader. For an EJB this is easy to set up in an Interceptor during
> invocation. I am not sure how a Web Container can set this up yet (maybe in
> a Valve or equivalent).
>
> --
> Jeremy
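For illustration, the ThreadLocal approach discussed in this thread usually boils down to a small holder that an interceptor (for EJBs) or a web-container valve/filter sets around each invocation. The class and method names below are hypothetical, not Geronimo's actual API:

    import javax.naming.Context;

    public final class ComponentContextHolder {
        private static final ThreadLocal<Context> CURRENT = new ThreadLocal<Context>();

        public static void set(Context compContext) { CURRENT.set(compContext); }
        public static Context get()                 { return CURRENT.get(); }
        public static void clear()                  { CURRENT.remove(); }
    }

    // Around each EJB invocation (interceptor) or web request (valve/filter):
    //     ComponentContextHolder.set(componentCompContext);
    //     try { /* proceed with the invocation; java:comp lookups read the holder */ }
    //     finally { ComponentContextHolder.clear(); }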
http://mail-archives.apache.org/mod_mbox/geronimo-dev/200308.mbox/%3C3F455309.3080205@mortbay.com%3E
CC-MAIN-2015-22
refinedweb
184
60.04
How to: Return a Query from a Method (C# Programming Guide)

This example shows how to return a query from a method as the return value and as an out parameter.

Any query must have a type of IEnumerable or IEnumerable<T>, or a derived type such as IQueryable<T>. Therefore any return value or out parameter of a method that returns a query must also have that type. If a method materializes a query into a concrete List<T> or Array type, it is considered to be returning the query results instead of the query itself. A query variable that is returned from a method can still be composed or modified.

In the following example, the first method returns a query as a return value, and the second method returns a query as an out parameter. Note that in both cases it is a query that is returned, not query results.

    class MQ
    {
        // QueryMethod1 returns a query as its value.
        IEnumerable<string> QueryMethod1(ref int[] ints)
        {
            var intsToStrings = from i in ints
                                where i > 4
                                select i.ToString();
            return intsToStrings;
        }

        // QueryMethod2 returns a query as the value of parameter returnQ.
        void QueryMethod2(ref int[] ints, out IEnumerable<string> returnQ)
        {
            var intsToStrings = from i in ints
                                where i < 4
                                select i.ToString();
            returnQ = intsToStrings;
        }

        static void Main()
        {
            MQ app = new MQ();
            int[] nums = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };

            // QueryMethod1 returns a query as the value of the method.
            var myQuery1 = app.QueryMethod1(ref nums);

            // Query myQuery1 is executed in the following foreach loop.
            Console.WriteLine("Results of executing myQuery1:");
            // Rest the mouse pointer over myQuery1 to see its type.
            foreach (string s in myQuery1)
            {
                Console.WriteLine(s);
            }

            // You also can execute the query returned from QueryMethod1
            // directly, without using myQuery1.
            Console.WriteLine("\nResults of executing myQuery1 directly:");
            // Rest the mouse pointer over the call to QueryMethod1 to see its
            // return type.
            foreach (string s in app.QueryMethod1(ref nums))
            {
                Console.WriteLine(s);
            }

            IEnumerable<string> myQuery2;
            // QueryMethod2 returns a query as the value of its out parameter.
            app.QueryMethod2(ref nums, out myQuery2);

            // Execute the returned query.
            Console.WriteLine("\nResults of executing myQuery2:");
            foreach (string s in myQuery2)
            {
                Console.WriteLine(s);
            }

            // You can modify a query by using query composition. A saved query
            // is nested inside a new query definition that revises the results
            // of the first query.
            myQuery1 = from item in myQuery1
                       orderby item descending
                       select item;

            // Execute the modified query.
            Console.WriteLine("\nResults of executing modified myQuery1:");
            foreach (string s in myQuery1)
            {
                Console.WriteLine(s);
            }

            // Keep console window open in debug mode.
            Console.WriteLine("Press any key to exit.");
            Console.ReadKey();
        }
    }

To compile and run the code:

1. Create a Visual Studio project that targets the .NET Framework version 3.5 or a later version. By default, the project has a reference to System.Core.dll and a using directive for the System.Linq namespace.
2. Replace the class with the code in the example.
3. Press F5 to compile and run the program.
4. Press any key to exit the console window.
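To make the distinction drawn in the second paragraph concrete, here is a short sketch (not part of the original walkthrough) contrasting a method that returns the query itself with one that materializes the results:

    // Returns the query itself: execution is deferred until the caller enumerates it,
    // and it re-runs against the current data each time it is enumerated.
    static IEnumerable<string> GetQuery(int[] ints)
    {
        return from i in ints where i > 4 select i.ToString();
    }

    // Returns query results: ToList() executes the query here, so the caller
    // receives a fixed snapshot rather than a composable query.
    static List<string> GetResults(int[] ints)
    {
        return (from i in ints where i > 4 select i.ToString()).ToList();
    }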
https://msdn.microsoft.com/en-us/library/bb882532.aspx
CC-MAIN-2015-32
refinedweb
499
67.35
OpenLayers 2 is outdated. Go to the latest 3.x version: - - Frequently Asked Questions about the OpenLayers project - General - Project Organization - TRAC - github - Map - Popups - Controls - OverviewMap - Layers - WMS - WFS - GeoRSS - VE - Yahoo - Multimap - ProxyHost - Markers - Vector Related Questions - Why do I get "Your browser doesn't support Vectors" when building my … - Why don't vectors change while I'm dragging? - Why won't my vector layer work in IE before my page is done loading? - Why don't vectors work in $browser? - Why don't my vector features work over Google, Yahoo, Virtual Earth, etc.? - What is the maximum number of Coordinates / Features I can draw with a … - Misc Frequently Asked Questions about the OpenLayers project General How can I add a question to this FAQ? Please feel free to add OpenLayers questions to this page--we'll try to answer them for you. Where do I find more info? Here on this wiki. Click on the TitleIndex to see a list of available pages. Project Organization How many developers are involved in the deployment of OpenLayers? For a list of OpenLayers developers with commit access, see Committers. Besides these developers, we currently (01/2007) have about a dozen signed Contributer Licence Agreements (See the "Upload Your Patch" section of HowToContribute). These are people who follow the development of the project and locate bugs and contribute patches via tickets and the email list. How is the development of OpenLayers ensured in the future? The OpenLayers project is overseen by a Project Steering Committee which attempts to reflect the varied interests of the OpenLayers community. By having a number of users with commit access to the project, the project is not limited to the whims of any particular corporate decisions, and any questions where consensus is not straightforward/obvious require a vote by the Steering Committee. OpenLayers has graduated from the incubation process and is now a full fledged Open Source Geospatial Foundation (OSGEO) project. Similar to the Apache or Mozilla Foundations, OSGeo seeks to offer a home for projects to ensure that the project is maintained now and into the future. As part of that process, we are striving to create a project which is both useful and long-lived. We hope to create a self-maintaining project with a wide variety of disparate contributors, so that the community is not tied to any specific corporate interests. [ We have put a lot of time into making a project anyone can contribute to. Our commit process is designed to lower the bar as much as possible, and in general, we seek to create an environment where all users are free to assist in the project. By doing so, we hope to achieve utility and longevity into the forseeable future. What is MetaCarta's relationship to the OpenLayers project? OpenLayers is an independent project sponsored by MetaCarta. MetaCarta uses the OpenLayers library in some of its products. Can I pay someone to help me with OpenLayers? Of course! Generally speaking, the core OpenLayers dev team is tied up with other projects and only has the smallest amount of time to spend on OpenLayers development. So unless you have a big project to propose, it's better to get in touch with some other contractors. A good place to find them is via the OSGEO website TRAC How do I edit the wiki? You will need a wiki account. To get a wiki account, simply go to and create an OSGeo User account ,which you will be able to use to login to Trac/Wiki. 
To create a new page on the wiki: - Choose a name for your new page (e.g. PageName). - Log in and edit an existing page. - Place a link to your new page where appropriate (e.g. [wiki:PageName Page Title]). - Enter a comment about your change and save the modified page. - Follow the link to your page and click the "create new page" button. Remember to enter comments about the changes you make. And clear your edits with someone on the Project Steering Committee. For more info on making tickets, see FilingTickets. github How do can I get commit access to the github repository? You are invited to fork us on github, and create pull requests. This will normally provide you with enough freedom to tell OpenLayers developers about bugs in the source code, or features you want to add. HowToContribute has more information on contributing to the OpenLayers trunk or becoming an OpenLayers Committer. How do update my fork from master? Make sure you add a remote: git remote add ol git@github.com:openlayers/openlayers.git and then just pull from the upstream master e.g. by: git pull ol master Map How do I get a specific bounds to load just right into my Map Div? You have to specify a maxExtent and set the maxResolution to "auto": layer = new OpenLayers.Layer.WMS( "OpenLayers WMS", "?", {'layers': 'basic'}, {'maxExtent': new OpenLayers.Bounds(-180,-90,180,90), 'maxResolution': "auto"}); map.addLayer(layer); map.zoomToMaxExtent(); Related Links: SettingZoomLevels How do I get the LonLat from a pixel or from the current position of the mouse? Use the getLonLatFromPixel() function of the map object to translate from an xy pixel value into longitude and latitude. Ex: var pixel = new OpenLayers.Pixel(110,24); var lonlat = map.getLonLatFromPixel(pixel); alert("Lat: " + lonlat.lat + " (Pixel.x:" + pixel.x + ")" + "\n" + "Lon: " + lonlat.lon + " (Pixel.y:" + pixel.y + ")" ); For a live html example of getting the lon/lat of the current mouse position, see: To add a tool that automatically displays the current mouse position's lonlat on the map, see Control/MousePosition Why do I see repeated, chopped off layers in my map? This is a known behaviour in MapServer when serving tiled images. You can work around on this by adding "PARTIALS FALSE" to your MapServer Label definition. Projections OpenLayers supports any projection. A projection is a way of converting geographic coordinates -- latitude and longitude -- into a plane. There are three parameters in OpenLayers which are important to set if you wish to change projections: - maxExtent - maxResolution - projection These parameters are set, respectively, by default to: - -180,-90,180,90 - 1.40625 - EPSG:4326 maxExtent is the maximum bounds, in the units of your map, of the plane in which you want to display information. maxResolution is the number of mapunits per pixel at the highest zoom level, and the projection is used when issuing WMS or WFS requests to inform the server of the projection desired. You should also change the 'units' property on your map: this property is what allows OpenLayers to know what scale things are being rendered at, which is important for scale-based methods of zooming and the Scale display control. A map constructor which uses a different projection might look like: new OpenLayers.Map("map", {maxExtent: new OpenLayers.Bounds(-20037508.34, -20037508.34, 20037508.34, 20037508.34), maxResolution: 156543, units: 'meters', projection: "EPSG:41001"}); How can i trace path on openlayers map using latitude and longitude ? 
Popups Why don't borders show up on my popups? If you are using the OpenLayers.Popup.AnchoredBubble class, you will notice that no matter what you set for borders, they will not show up. This is because of the use of the RICO Corners, which do not allow Borders (as far as we know). If you want to have border's, for now you will have to use the OpenLayers.Popup.Anchored class. Controls How do I make an OpenLayers map without any controls? Pass an empty array as the 'controls' property when initializing the map. map = new OpenLayers.Map( $('map'), {controls: [] } ); Related Links: DefaultControls OverviewMap Why does the OverviewMap display on the top/left of the map and not in the default position? The position of the Overview Map is controlled by CSS, loaded to the page automatically when the map is created. This CSS is stored in the 'theme' directory: if you do not have a theme directory alongside the 'img' directory that OpenLayers is using, this CSS can not be loaded, and the map will be displayed in the upper left, instead of the properly CSS positioned lower right. Related Links: Control/OverviewMap Layers How do I configure Zoom Levels/Resolutions/Scales? Can I see ArcGIS layers in my OpenLayers Map? Yes you can, search the examples page for the keyword ArcGIS. What is the maximum amount of layers I can have in my OpenLayers Map? The limit is about 75. After that, layers can appear above popups. This has to do with the z-index in CSS (determines what is 'above' what). Layers (overlay) start at a z-index of 325. Popups start at 750. Controls start at 1000. Every layer 'takes up' about 5 indexes, so it will reach it's limit at around 75 layers. You cannot have more than 250 popups for the same reason. If you need more than 75 layers, consider destroying the ones you don't show instead of hiding them and recreate them when needed. WMS How can I see the URL string that OpenLayers is sending to a WMS server? Thanks to Mike Q for summarizing the responses to this one - Yves J recommended using the Firefox extension Firebug (). It does a fantastic job of obtaining the URL strings, and many other useful debug tasks. - Tim L recommended checking the Apache access_log files. This works great for natively hosted WMS. Very easy to check. - Jon B suggested using the getURL function, as in the following: alert(my_wms.getURL(new OpenLayers.Bounds(...))); - Christopher S mentioned that - Grid layers have a grid property that contains an array of an array of tiles, with each tile having a URL object. - One can simply right click on an image tile to obtain the URL property information. My Tiles are all pink! What can I do? See [TroubleshootingTips] Where can I get some free WMS layers on the web to use with OpenLayers? See Available WMS Services WFS Why isn't WFS working on my local checkout of OpenLayers? This is probably because you do not have a proxy host set up. See FrequentlyAskedQuestions I'm trying to use WFS, why won't my vector data show up? Try not setting the featureClass property when declaring your WFS layer. GeoRSS Why isn't GeoRSS working on my local checkout of OpenLayers? This is probably because you do not have a proxy host set up. See FrequentlyAskedQuestions If I have multiple points at the same location, how do I make them all visible on click? How do I load tiles I have that I generated for Google Maps in OpenLayers? See this wiki page for more information: UsingCustomTiles. Is it possible to see Google's StreetView in an OpenLayers Map? Believe it or not, yes... 
somebody has actually done it! The Tutorial popped up on the Fuzzy Tolerance Blog. You can see it live and working in the GeoSpatial Portal for Mecklenberg County GIS (Click the bottom + sign on the left). Not quite the same level of UI as the folks at Google give it, but still... an interesting mashup. VE Yahoo Multimap ProxyHost Why do I need a ProxyHost? Due to security restrictions in Javascript, it is not possible to retrieve information from remote domains via an XMLHttpRequest. Classes like WFS and GeoRSS use XMLHTTPRequest to get their data. If they are querying a remote server (anything other than the machine hosting your page), you must install a proxy script somewhere web accessible on that machine. See below for how to set up your own ProxyHost. If the OpenLayers.ProxyHost variable is not set to a valid proxy host, requests are sent directly to the remote servers. In most cases, the result will be a security exception, although this exception often occurs silently. How do I set up a ProxyHost? An example proxy host script is available here: trunk/openlayers/examples/proxy.cgi For the standard Apache configuration, you would place proxy.cgi into your /usr/lib/cgi-bin/ directory. Once a proxy host script has been installed, you must then edit the OpenLayers.ProxyHost variable to match that URL. Given the above standard Apache configuration: OpenLayers.ProxyHost = "/cgi-bin/proxy.cgi?url="; If you have done something like this, you should be able to visit: The resulting content at that page should be the openlayers.org website. If you get a 404 error instead, either the proxy script is not in the right location, or your webserver is not configured correctly. Markers Why is My Map Sluggish when I Add 500 Markers? Browsers can't handle moving around a DOM with more than a few hundred elements at once. I highly recommend figuring out a way to limit yourself to under 500 markers (Firefox) or 50 markers (IE6). Why don't my markers appear at certain zoom levels? The problem is that your markers layer is considered 'out of range' for some reason. The source of the problem is probably to do with the "resolutions" setting on the base layer. One fix is to override the calculateInRange function to always return true, eg: //Original Markers = new OpenLayers.Layer.Markers("Markers"); //override the calculateInRange function to always return true Markers = new OpenLayers.Layer.Markers("Markers", {'calculateInRange': function() { return true; }}); Also see this mailing list thread, particularly this post Vector Related Questions Why do I get "Your browser doesn't support Vectors" when building my own copy of OpenLayers? You must build with ./build.py full -- the default OpenLayers build does not include Vector Support. Why don't vectors change while I'm dragging? As a performance enhancement, vectors do not update their visual representation while the map is being dragged. Instead, they update when the mouseup event fires. For this reason, while dragging, you may occasionally see 'cut off' vector features. Why won't my vector layer work in IE before my page is done loading? and the rest of the thread. Adding the following statement before the call to create the vector layer can potentially solve the issue: document.namespaces; Why don't vectors work in $browser? The Vector Layer and its subclasses currently support a set of renderers which do not cover all browsers. The renderers which are currently implemented are: - SVG: Supported by Opera, Firefox. W3C Standard. 
(SVG Will Not Work with plugins: support for Compound Document Format is required.) - VML: Supported by Internet Explorer 6 and 7. This means that currently, Safari and Konquerer are not supported. The latest Safari Beta, available from Apple's website, does support SVG. If you are interested in changing this, the solution is to write additional renderers for other libraries. The Renderer() main class defines an API stub that you can use to implement your own renderer for other rendering tools. Why don't my vector features work over Google, Yahoo, Virtual Earth, etc.? The vector layers assume a square pixel size. The Commercial APIs, by default, use a non-square geographic pixel size -- one that changes as the map moves north and south. The only way to use vector layers over commercial basemaps is to use the SphericalMercator support in 2.5. This causes the map to be projected to mercator, and once the map is projected, pixels are geographically square, which means you can use a vector layer over them. What is the maximum number of Coordinates / Features I can draw with a Vector layer? Technically speaking, there are no limits. Performance-wise, however, you will want to keep things reasonable. Our observations so far* have shown the following as rough upper bounds on what is reasonable to expect a browser to be able to handle: - ~2500 Coordinates - ~100-200 Features (Since each geometry is rendered as a separate DOM object, dragging and the like get seriously slowed down the more features you have on the map.) - If you have different or more complete (browser-specific, precise figures, etc.) data, please insert it here! Misc How do I set up OpenLayers to run with TileCache? The TileCache distribution includes an HTML example, index.html, which shows how to use it with OpenLayers. Assuming that your bbox and resolutions array are the defaults, it should be really simple -- when they're not anymore, things get a bit more difficult. How Do I Build a Single-File Version of OpenLayers? How Do I display 2 Image layers? What is the relationship between EditingToolbar and DrawFeature/ModifyFeature controls? You would think that they MUST be able to work together but I could not get any info on that anywhere. A "realistic" entry in the Development Examples page of using EditingToolbar + DrawFeature + ModifyFeature would be greatly appreciated. Thanks in advance, A.R.
http://trac.osgeo.org/openlayers/wiki/FrequentlyAskedQuestions
CC-MAIN-2016-44
refinedweb
2,790
65.73
3 Oct 07:07 2005 Re: [Pcihpd-discuss] Re: ACPI problem with PCI Express Native Hot-plug driver Rajat Jain <rajat.noida.india <at> gmail.com> 2005-10-03 05:07:26 GMT 2005-10-03 05:07:26 GMT On 10/1/05, Rajesh Shah <rajesh.shah <at> intel.com> wrote: > On Fri, Sep 30, 2005 at 02:57:07PM +0900, Rajat Jain wrote: > > > > pciehp: pfar:cannot locate acpi bridge of PCI 0xb. > > ...... > > pciehp: pfar:cannot locate acpi bridge of PCI 0xe. > > This is saying that the driver's probe function was called for > these pciehp capable bridges, but it didn't find them in the > ACPI namespace. > > > Hi Rajesh, Thanks for the insight. But my doubt is that the PCI Express devices down the hot-pluggable slots are working fine. i.e. if we forget about the hot-plugging / unplugging, the bridges and devices are working fine, even with ACPI enabled. So is the presence of bridges in ACPI namespace required only for hot-plugging / unplugging and not for normal operation? Thanks, Rajat
http://blog.gmane.org/gmane.linux.newbie/month=20051001
CC-MAIN-2014-10
refinedweb
175
74.79
Opened 3 years ago
Closed 3 years ago

#20594 closed Bug (fixed)

models.SlugField doesn't validate against slug_re

Description

This appears to be a bug to me:

    from django.db import models

    class MyModel(models.Model):
        slug = models.SlugField()

    mymodel = MyModel(slug='this is an invalid % $ ## slug')
    mymodel.full_clean()  # I'd expect this to raise a validation error... it does not.

Change History (4)

comment:1 Changed 3 years ago by claudep - Easy pickings set
Changed 3 years ago by bmispelon - Has patch set - Owner bmispelon deleted - Status changed from assigned to new
comment:4 Changed 3 years ago by Tim Graham <timograham@…> - Owner set to Tim Graham <timograham@…> - Resolution set to fixed - Status changed from new to closed

Note: See TracTickets for help on using tickets.

PR here: I also included some cleanup of the URLField validators. Since 9ed6e08ff99c18710c0e4875f827235f04c89d76, the URLField (both model and form field) validator doesn't depend on a parameter passed in __init__ so it can be added directly to URLField.default_validators.
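Until a fix along the lines described in the change history is in your Django version, a common workaround is to attach the slug validator explicitly so that full_clean() rejects bad values. The sketch below reuses the model from the description; validate_slug is the stock validator Django already uses for the SlugField form field:

    from django.core.validators import validate_slug
    from django.db import models

    class MyModel(models.Model):
        slug = models.SlugField(validators=[validate_slug])

    MyModel(slug='this is an invalid % $ ## slug').full_clean()  # now raises ValidationError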
https://code.djangoproject.com/ticket/20594
CC-MAIN-2016-36
refinedweb
162
56.45
Implement ES6 computed property names RESOLVED FIXED in mozilla34 Status () People (Reporter: bbenvie, Assigned: gupta.rajagopal) Tracking (Blocks 1 bug, {dev-doc-complete, feature}) Firefox Tracking Flags (Not tracked) Details (Whiteboard: [js:p2][DocArea=JS]) Attachments (1 attachment, 2 obsolete attachments) ES6 introduces computed property names in ObjectLiterals (and ClassDefinitions). The ToPropertyKey is called on the AssignmentExpression inside the computed property name, which means any non-Symbol will be coerced to a string. Examples: > var i = 0; > var obj = { > ["foo" + ++i]: i, > ["foo" + ++i]: i, > ["foo" + ++i]: 1 > }; Would result in obj being: > ({ foo1: 1, foo2: 2, foo3: 3 }) The grammar for PropertyName is updated to be > PropertyName : > LiteralPropertyName > ComputedPropertyName > > LiteralPropertyName : > IdentifierName > StringLiteral > NumericLiteral > > ComputedPropertyName : > [ AssignmentExpression ] See ES6 draft spec (September 2013 edition) section 12.1.5. This also applies to destructuring. Example: > let key = "z"; > let { [key]: foo } = { z: "bar" }; > foo; // "bar" When using a computed name in destructuring it has to be given an alias, so as to not introduce eval-like dynamic bindings. 1. This probably needs more tests. Assignee: nobody → gupta.rajagopal Status: NEW → ASSIGNED Attachment #8462937 - Flags: review?(jorendorff) Comment on attachment 8462937 [details] [diff] [review] Patch to implement computed property names v0 Review of attachment 8462937 [details] [diff] [review]: ----------------------------------------------------------------- Great patch! r=me with these comments addressed, including the new tests. ::: js/src/frontend/BytecodeEmitter.cpp @@ +3276,5 @@ > doElemOp = false; > } > + } else { > + // Has to be a computed property name. > + JS_ASSERT(key->isKind(PNK_COMPUTED_NAME)); Remove the redundant comment, please. @@ +6072,5 @@ > isIndex = true; > } > + } else { > + // Has to be a computed property name. > + JS_ASSERT(pn3->isKind(PNK_COMPUTED_NAME)); And here. ::: js/src/frontend/ParseNode.h @@ +393,5 @@ > * destructuring lhs > * pn_left: property id, pn_right: value > * PNK_SHORTHAND binary Same fields as PNK_COLON. This is used for object > * literal properties using shorthand ({x}). > + * PNK_COMPUTED_NAME unary pn_kid: assignment expr I'd like these to be clearer. How about: PNK_COMPUTED_NAME unary ES6 ComputedPropertyName. pn_kid: the AssignmentExpression inside the square brackets ::: js/src/frontend/Parser.cpp @@ +7042,5 @@ > typename ParseHandler::Node > +Parser<ParseHandler>::newComputedName(Node name) > +{ > + return handler.newComputedName(name, pos().begin); > +} Please remove this method. @@ +7279,5 @@ > break; > > + case TOK_LB: { > + // Computed property name. > + propname = newComputedName(assignExpr()); A few bugs here: 1. The result of assignExpr() must be checked for errors. 2. propname must be checked for errors.). 5. If getToken() produces a character that isn't TOK_RB but also isn't TOK_ERROR, then we would fail without producing an error message. Use MUST_MATCH_TOKEN instead. In fact, please file a bug about MUST_MATCH_TOKEN (a) being a macro rather than a method; (b) calling report() unconditionally (it should not report if tokenStream.getToken() returns TOK_ERROR); (c) the comment saying things about "cx" and "ts" which don't exist in the code anymore. Add tests to detect bugs 1, 3, 4, and 5. You can detect bugs 3 and 4 using Reflect.parse. 
Each node in the output has a .loc property containing location information. @@ +7282,5 @@ > + // Computed property name. > + propname = newComputedName(assignExpr()); > + if (tokenStream.getToken() != TOK_RB) > + return null(); > + handler.setListFlag(literal, PNX_NONCONST); Good catch adding PNX_NONCONST here! I would have missed that. Please add a test that would detect the bug if you hadn't. ::: js/src/frontend/Parser.h @@ +441,5 @@ > bool appendToCallSiteObj(Node callSiteObj); > bool addExprAndGetNextTemplStrToken(Node nodeList, TokenKind &tt); > #endif > inline Node newName(PropertyName *name); > + inline Node newComputedName(Node expr); reminder to remove this declaration along with the definition ::: js/src/jsreflect.cpp @@ +2920,5 @@ > return builder.objectExpression(elts, &pn->pn_pos, dst); > } > > + case PNK_COMPUTED_NAME: > + return expression(pn->pn_kid, dst); Please delete this... @@ +2995,5 @@ > bool > ASTSerializer::propertyName(ParseNode *pn, MutableHandleValue dst) > { > + if (pn->isKind(PNK_COMPUTED_NAME)) > + return expression(pn, dst); ...and instead pass pn->pn_kid to expression here. ::: js/src/tests/ecma_6/Class/compPropNames.js @@ +38,5 @@ > + > + > +// Destructuring > +var key = "z"; > +var { [key]: foo } = { z: "bar" }; Great tests! Please move destructuring tests to a separate file. More things to test: * All these should be syntax errors: ({[ ({[expr ({[expr] ({[expr]}) ({[expr] 0}) ({[expr], 0}) [[expr]: 0] ({[expr]: name: 0}) ({[1, 2]: 3}) // because '1,2' is an Expression but not an AssignmentExpression ({[1;]: 1}) // and not an ExpressionStatement ({[if (0) 0;]}) // much less a Statement function f() { {[x]: 1} } // that's not even an ObjectLiteral function f() { [x]: 1 } // or that * Test that JSON.parse() rejects computed property names. (I'm sure it does, so just a one-line test will do.) * Test that the properties defined this way are ordinary enumerable, writable, configurable data properties (using Object.getOwnPropertyDescriptor to check). * Test that if the computed property name happens to be the name of a property on Object.prototype that has a setter: Object.defineProperty(Object.prototype, "x", {set: function (x) { throw "FAIL"; }}); var a = {["x"]: 0}; the setter is *not* called, and a.x is 0. * Using the same property name more than once is *not* an error. In ObjectLiterals like this one: a = {[x]: 1, [x]: 2}; the second property can overwrite the first. * The same thing happens if the either property was defined using a non-computed property name, and even if it's an accessor property: a = {x: 1, ["x"]: 2}; a = {["x"]: 1, x: 2}; a = {get x() { return 1; }, ["x"]: 2}; // test that this makes a data property * In fact, I believe ES6 changes the rules so that even this is not an error, even in strict mode: var a = {x: 1, x: 2}; I think the same thing happens. If you want to implement this in a separate patch, feel free. * Test that it works with symbols. Stuff like a = { data: [1, 2, 3], [Symbol.iterator]: function () { return this.data[Symbol.iterator](); } }; will probably be common; but here are two other ways to create symbols: var unique_sym = Symbol("1"), registered_sym = Symbol.for("2"); * Test that it works if you run the same expression several times to build objects with different property names: a = []; for (var i = 0; ...) { a[i] = {["foo" + i]: ...}; } * Add jit-tests. 
Test that it works if an expression inside a loop or function is first used to build many objects with the *same* property name or names: function f(tag) { return {[tag]: 0}; } for (...) a = f("first"); and then the same loop or function is used again to build an object with a different property name or names: for (...) a = f("second"); * Test that it can be used to define several elements (that is, properties with names that are nonnegative integers), and if possible, test that the resulting properties are stored in the object's elements rather than slots (see the comment on class ObjectElements in vm/ObjectImpl.h). * Test using computed property names to define several elements, but then also defining a single large index (greater than MIN_SPARSE_INDEX) or a single string property name. * Test using this syntax to define lots of properties: var code = "({"; for (i = 0; i < 1000; i++) code += "['foo' + " + i + "]: 'ok', " code += "['bar']: 'ok'});"; var obj = eval(code); // then add some assertions involving obj * Test that in a generator, it's possible to yield in the middle of a ComputedPropertyName. * Test that the behavior when combined with getter/setter syntax works as desired: a = {get [expr]() { ... }, set[expr](v) { ... }} If this syntax doesn't work yet, that's OK - just test that it's a SyntaxError, not a crash! And file a second bug to implement it. * Test getter/setter syntax with Reflect.parse too. ::: js/src/tests/js1_8_5/extensions/reflect-parse.js @@ +388,5 @@ > + > +assertExpr('a= {["field1"]: "a", field2 : "b"}', > + aExpr("=", ident("a"), > + objExpr([{ key: lit("field1"), value: lit("a"), computed: true }, > + { key: ident("field2"), value: lit("b"), computed: false }]))); Great! Along the same lines, test that in {[0]: 0, 1: 1}, the first field is `computed: true` and the second is `computed: false. Attachment #8462937 - Flags: review?(jorendorff) → review+ Thanks for the review! (In reply to Jason Orendorff [:jorendorff] from comment #3) >). > How would I actually test 3 and 4? We just pass pn->pn_kid to expression in jsreflect? We don't actually create a node for COMPUTED_NAME in Reflect.parse. > Good catch adding PNX_NONCONST here! I would have missed that. Please add a > test that would detect the bug if you hadn't. How do I test this? If I don't add that, control will flow to getConstantValue. The fact that something like var b = 2; a = { [b] : 2, 3 : 3 } works is proof that the flag's been set, and patterns like that are tested in other places. Is that enough or did you have something particular in mind? >? > * Test that it works with symbols. Stuff like > a = { > data: [1, 2, 3], > [Symbol.iterator]: function () { return > this.data[Symbol.iterator](); } > }; Um, I'm not sure I understand what this code fragment does. Can you please explain? Flags: needinfo?(jorendorff) (In reply to guptha from comment #4) > How would I actually test 3 and 4? We just pass pn->pn_kid to expression in > jsreflect? We don't actually create a node for COMPUTED_NAME in > Reflect.parse. Oh no! This is my fault. Can you easily change it to the other approach? Add a ComputedPropertyName node, get rid of the .computed boolean? I'm sorry for the noise. Reflect.parse has to be good enough to use for static analysis and rewriting; some simple rewrites really benefit from precise location information on things like this. > > Good catch adding PNX_NONCONST here! I would have missed that. Please add a > > test that would detect the bug if you hadn't. > > How do I test this? 
Existing tests are good enough, thanks. > >? Yes. > > * Test that it works with symbols. Stuff like > > a = { > > data: [1, 2, 3], > > [Symbol.iterator]: function () { return > > this.data[Symbol.iterator](); } > > }; > > Um, I'm not sure I understand what this code fragment does. Can you please > explain? A symbol is a kind of value that can be used as a property key. It's not a string, it's a different kind of value. So for example: var key1 = "moon"; var key2 = Symbol("moon"); // create a unique symbol var obj = {}; obj[key1] = 1; // create a data property with a string key print(obj[key1]); // 1 obj[key2] = 2; // create a data property with a symbol key print(obj[key2]); // 2 You should be able to do the same thing with computed property names: var obj = {[key1]: 1, [key2]: 2}; It should just work, we just need tests for it. Symbol.iterator is just a standard built-in symbol; you don't have to use that for the tests. Flags: needinfo?(jorendorff) Whiteboard: [js:p2] → [js:p2][DocArea=JS] Attachment #8462937 - Attachment is obsolete: true Attachment #8467186 - Flags: review+ Jason, Did you want to skim through the patch? There were quite a few changes. Also, is this ready for commit? The getter and setter syntax will be added in bug 1048384. Bug 1041128 was created by someone else for the duplicate property names issue. Flags: needinfo?(jorendorff) This looks good! Flags: needinfo?(jorendorff) Updated commit message. Attachment #8467186 - Attachment is obsolete: true Attachment #8470087 - Flags: review+ Flags: in-testsuite+ Status: ASSIGNED → RESOLVED Last Resolved: 5 years ago Resolution: --- → FIXED Target Milestone: --- → mozilla34 Flags: qe-verify- Wrote a new page collecting info on object literal syntax: Also mentioned on related pages: Developer release notes Any reviews to the wiki pages are very much appreciated. Keywords: dev-doc-needed → dev-doc-complete
https://bugzilla.mozilla.org/show_bug.cgi?id=924688
CC-MAIN-2019-22
refinedweb
1,872
57.87
Hi, I tried Tutorial 5, but there were a couple of problems. Firstly, when I ran it there was an exception saying that it could not find the type: Microsoft.Robotics.Services.Simulation.Drive.FourWheelSimulatedDifferentialDrive There was no Four Wheel Drive service running to connect to with the Dashboard. When I looked in the code, you had used the namespace: Microsoft.Robotics.Services.Simulation.Drive.Proxy I'm not sure why you did this. In any case, I removed the "Proxy" in several places in the code and it compiled and ran OK. I can now connect to the robot and drive it around. Secondly, there was a reference to table_01.x that generated an error because the file could not be found. I copied it from a previous CTP, but then the error changed to "mesh type is not supported". Doh! Then I remembered that Microsoft dropped support for DirectX meshes in the October CTP. Luckily there is a table_01.obj in the October CTP so I changed the code to use this instead. So everything seems to be working now. On a different topic, how did you generate your meshes? They look great (except for the colour as discussed in a prior post). Trevor
http://channel9.msdn.com/Forums/Sandbox/244821-Robotic-Studio--two-new-simulation-tutorials-by-Robosoft/3591ffccbb794275b1a79dea0107451f
crawl-003
refinedweb
214
67.96
A comment is a programmer-readable note that is inserted directly into the source code of the program. Comments are ignored by the compiler and are for the programmer’s use only. In C++ there are two different styles of comments, both of which serve the same purpose: to help programmers document the code in some way. Single-line comments The // symbol begins a C++ single-line comment, which tells the compiler to ignore everything from the // symbol. If the lines are fairly short, the comments can simply be aligned (usually to a tab stop), like so: However, if the lines are long, placing comments to the right can make your lines really long. In that case, single-line comments are often placed above the line it is commenting: Author's note The statements above represent one of our first encounters with snippets of code. Because snippets aren’t full programs, they aren’t able to be compiled by themselves. Rather, they exist to demonstrate specific concepts in a concise manner. If you would like to compile a snippet, you’ll need to turn it into a full program in order for it to compile. Typically, that program will look something like this: Multi-line comments The /* and */ pair of symbols denotes a C-style multi-line comment. Everything in between the symbols is ignored. /* */ Since everything between the symbols is ignored, you will sometimes see programmers “beautify” their multi-line comments: Multi-line style comments can not be nested. Consequently, the following will have unexpected results: When the compiler tries to compile this, it will ignore everything from the first /* to the first */. Since “this is not inside the comment */” is not considered part of the comment, the compiler will try to compile it. That will inevitably result in a compile error. This is one place where using a syntax highlighter can be really useful, as the different coloring for comment should make clear what’s considered part of the comment vs not. Warning Don’t use multi-line comments inside other multi-line comments. Wrapping single-line comments inside a multi-line comment is okay. Proper use of comments Typically, comments should be used for three things. First, for a given library, program, or function, comments are best used to describe what the library, program, or function, does. These are typically placed at the top of the file or library, or immediately preceding the function. For example: All of these comments give the reader a good idea of what the library, program, or function is trying to accomplish without having to look at the actual code. The user (possibly someone else, or you if you’re trying to reuse code you its goal without having to understand what each individual line of code does. Third,: Reason: We already can see that sight is being set to 0 by looking at the statement Good comment: Reason: Now we know why the player’s sight is being set to 0 Reason: We can see that this is a cost calculation, but why is quantity multiplied by 2? Reason: Now we know why this formula makes sense.. :) You (or someone else) will thank you later for writing down the what, how, and why of your code in human language. Reading individual lines of code is easy. Understanding what goal they are meant to accomplish is not. Best practice Comment your code liberally, and write your comments as if speaking to someone who has no idea what the code does. Don’t assume you’ll remember why you made specific choices. 
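Here is a small illustrative sketch of such descriptive comments. The first line is the program-level comment quoted by a reader in the discussion below; the function, its weighting, and the values are placeholder assumptions, not the lesson's original code. Note that each // comment runs from the // to the end of its line.

// This program calculates the student's final grade based on their test and homework scores.
#include <iostream>

// This function combines the test and homework scores into a final grade.
// The 60/40 weighting here is only an illustrative assumption.
double calculateFinalGrade(double testScore, double homeworkScore)
{
    return (testScore * 0.6) + (homeworkScore * 0.4);
}

int main()
{
    // Why: the (assumed) grading policy weights tests more heavily than homework.
    std::cout << "Final grade: " << calculateFinalGrade(80.0, 90.0) << '\n';
    return 0;
}

A reader skimming only the comment above the function can decide whether it is relevant to them without reading its body.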
Throughout the rest of this tutorial series, we’ll use comments inside code blocks to draw your attention to specific things, or help illustrate how things work (while ensuring the programs still compile). Astute readers will note that by the above standards, most of these comments are horrible. :) As you read through the rest of the tutorials, keep in mind that the comments are serving an intentional educational purpose, not trying to demonstrate what good comments look like.. or There are quite a few reasons you might want to do this: 1) You’re working on a new piece of code that won’t compile yet, and you need to run the program. The compiler won’t let you compile the code if there are compiler errors. Commenting out the code that won’t compile will allow the program to compile so you can run it. When you’re ready, you can uncomment the code, and continue working on it. 2) You’ve written new to what you had before. Commenting out code is a common thing to do while developing, so many IDEs provide support for commenting out a highlighted section of code. How you access this functionality varies by IDE. For Visual Studio users You can comment or uncomment a selection via Edit menu > Advanced > Comment Selection (or Uncomment Selection). For Code::Blocks users You can comment or uncomment a selection via Edit menu > Comment (or Uncomment, or Toggle comment, or any of the other comment tools). Tip If you always use single line comments for your normal comments, then you can always use multi-line comments to comment out your code without conflict. If you use multi-line comments to document your code, then commenting-out code using comments can become more challenging. If you do need to comment out a code block that contains multi-line comments, you can also consider using the #if 0 preprocessor directive, which we discuss in lesson 2.9 -- Introduction to the preprocessor. #if 0 Summary Pro Tip: Ctrl + '/'. You'll thank me later ;) My hero. If I wanted to leave a comment as to remind myself I’ve commented out a piece of code for x reason, would I write it like this? Is this easy to read for someone looking at the code and makes it look ‘tidy’ ? /* Commenting out until I have time to fix example code example code example code */ Is there a way of automatically aligning single line comments in Visual Studio 2019? Like this: [code] /* nice job with the tutorials these are pretty helpful */ I often comment out blocks of code by using the preprocessor directive "#if 0". Such blocks can be nested. One tip: Some editors like VS Code, Sublime, Atom allow this short cut for commenting. Select the lines of code/text to commented out and use "ctl+/" buttom. Its faster than commenting manually. In Visual Studio you can use "ctr+k+c" to comment and "ctrl+k+u" to uncomment. thanks for a great effort you made my day......... "// The following lines generate a random item based on rarity, level, and a weight factor." These "following lines" determine lines of certain function right? Or can you comment just a few statements in this way? Later on it is said that on statement level comments can be used only for "why" purpose not for "what" so that's why I am asking. HEllo I didn't get a part of it. the part that says : Commenting out code is a common thing to do while developing, so many IDEs provide support for commenting out a highlighted section of code. How you access this functionality varies by IDE. 
commenting out some codes can be easily done by the ways you talked about.so what is these supports that the IDE can provide for us ? I don't get it this part would be thankful if you can explain it. thanks in advance! Many IDEs allow you to highlight a section of code (with the mouse or keyboard) and then hit a magic key combination to comment/uncomment that selection. It's not hard to do it manually, but if you're commenting out a significant chunk of code it can save time. Hi, can you give me an example of comment out at the library level, and inside the library level I am finding it hard to differentiate the two? Thanks One English grammar point: I think the verb "revert" has inclusively the meaning of "return or go back" inside, so you don't have to add "back" to it. "If you can’t get your new code to work, you can always delete the new code and uncomment the old code to revert back to what you had before." Agreed. Removed the extraneous word. Thanks! Hello teacher, We can use "Ctrl+Shift+/" (forwarding slash) in Visual Studio to comment out and uncomment out automatically by selecting parts of the code we would like to do that. It's very interesting to play around with the C++code. std::cout << "Hello World!\n"; //works std::cout << "\n"; //works too But both of the code below give an error. std::cout << "Hello World!\n\n"; std::cout << "Hello World!"; "\n"; So basically this says: std::cout //only works once and std::cout << "\n"; //has to be used by itself to skip a line I have questions about code blocks though. Why do I see the red box on the editor and not get a beep? I like the red box since of clearly makes the line of the error. But how can you turn it off and on? Can I get a beep instead of the red box? There's nothing wrong with You can happily mix line feeds and letters I can't help you with your code blocks question. List of possible typos (and some questions): "the comments can simply be aligned (usually to a tab stop), like so:" ** aligned by spaces? If user changes tab width... tab stop will shift as well? "The statements above represent our first encounters with snippets of code." ** Is code in 0.7 and 0.8 (cin's .clear(), ignore(), get()) isn't a snippet because it isn't an concept (abstract idea)? "Since this is not inside the comment */ is not considered part of the comment, the compiler will try to compile it." ** ``Since "this is ... */" is not ...`` -- enhance readability by "" ? "// This function uses newton's method to approximate the root of a given equation." ** ``Newton's`` (one below is capitalized)? "The user (possibly someone else, or you if you’re trying to reuse code you’ve already previously written) can tell at a glance whether the code is relevant to what he or she is trying to accomplish." ** ``... to what this person is trying ...`` ** "Second, within a library, program, or function ..." ** Maybe your insert "First" and "Third": ** ``First, for a given library, program, or function, ...``; ** ``Third, at the statement level, ...`` "... assign a percentage. This percentage is ..." ** Maybe split this, and unite "This percentage is" and "used to calculate a letter grade." ? Looks uneven. "The compiler won't let you run if there are compiler errors." ** ``run the program if`` ? "Commenting out the broken code will ensure the broken code doesn’t execute and cause problems until you can fix it." ** ``(and cause problems)``. Funny: when i fix it, it will execute and cause problems! 
"consider using the #if 0 preprocessor directive," ** Please, add the <code> tag (zero VS o). PS: The avatar below is brilliant. > tabs/spaces I don't know, test it, probably varies between editors. > snippet Lesson updated to say "one the the first encounters". > not inside the comment I added quotes. > Newton Capitalized. Comments don't always follow proper grammar. If there are many "mistakes" in a lesson, but they're consistent, don't be bothered by them. > he or she Genders are a mess, there is no good solution. Won't change. > First, Second, Third Updated. > Percentage Updated. > Compiler Updated, sentence was wrong, the compiler doesn't run the program. > #if 0 Added code tags. Hi sir! "The statements above represent on of our first encounters" ** ``one of our`` Ayy, thanks! Thank you so much for providing nice tutorial Found a typo under 'Proper use of comments'. The first comment should say (asterisks mark the erroneous word): // This program *calculates* the student's final grade based on his test and homework scores. "... based on THEIR test...", sir Jonas! I don't really understand this concept at all Dear Teacher, Please let me say you that following program works in Visual Studio 2017, 2019, and online compilers Regards Behavior is undefined. Make sure you followed lesson 0.10 and 0.11. Mr. nascardriver, Please accept my many thanks for you responded to my message and that immediately. Many more for your message is instructive. I appreciate that "behavior is undefined". However after I have configured V.S. 2017 and 2019 according to lessons 0.10 and 0.11 (setting "Disable Language Extensions" to Yes (/Za), and "Warning level" to Level4 (/W4)) warnings are many. In V.S. 2017 of the kind "'Project5.exe' (Win32): Loaded 'C:\Windows\SysWOW64\KernelBase.dll'. Cannot find or open the PDB file." and in V.S. 2019 RC of the kind "'Project-comment.exe' (Win32): Loaded 'C:\Windows\SysWOW64\KernelBase.dll'." Only name in .dll file is different in warnings. With regards and friendship Georges Theodosiou # include <iostream> Everything from here to the end of the line is ignored there is an error in this line sir , you should put // before the things that you do not want the compiler to compile , the line should be : # include <iostream> //Everything from here to the end of the line is ignored Badreddine Boukheit, Please accept my thanks for your response and many more for your instruction. Problem is that this program works. Regards. May be ,you did not rebuild your program before running it. That's why it might be working. Try rebuilding it. Mr. Kaladin, Please accept my thanks for your comment and many more for your suggestion. In turn I suggest you read Mr. NASCAR driver's above comment: "Behavior is undefined. Make sure you followed lesson 0.10 and 0.11." Regards. Thanks for the tip Georges! Mr. Todd Riemenschneider Please let me express my thanks for your comment and many more for your thanks. Also let me a greater tip: Undefined behavior is the greatest problem I face in learning programming. Regards. if you have a bit of code that you will frequently comment out and put back in, you can just put /**/ below the code you want commented out, then when you need it commented out, you can just place /* above the code, like Thanks this is a smart way to comment out thing Hey? I think that a comment if for you and other people is understandable what you want to comment, it is valid, for me this is valid and, it is a good way to comment. Good luck in this bro! 
PD: Remember put before '}', put 'return 0;' And if you wan't to pause the program put on first line '#include <conio.h>' and before of return 0; , 'getche();', or if you have an error put '_getche();' > And if you wan't to pause the program put on first line '#include <conio.h>' Don't do this. <conio.h> is a Windows-exclusive header, it won't work on other platforms. Save my name, email, and website in this browser for the next time I comment.
https://www.learncpp.com/cpp-tutorial/comments/
CC-MAIN-2021-17
refinedweb
2,535
73.58
package org.enhydra.dods.cache;21 22 import java.util.HashMap ;23 import org.enhydra.dods.statistics.CacheStatistics;24 25 /**26 * DODSHashMap class implements Hash map (for storing data objects), and 27 * provides statistics about the cache (query number, cache hits number, their28 * get/set/increment methods, percents of used cache, cache hits,...).29 *30 * @author Tanja Jovanovic31 * @version 1.0 15.09.2003.32 */33 public class DODSHashMap extends HashMap implements CacheStatistics {34 35 /**36 * Total number of times the cache was accessed.37 */38 protected int cacheAccessNum = 0;39 40 /**41 * Number of queries performed on the cache successfully.42 */43 protected int cacheHitsNum = 0;44 45 /**46 * Constructor (int).47 *48 * @param maxSize Maximal number of objects in DODSLRUCache.49 */50 DODSHashMap() {51 super();52 clearStatistics();53 }54 55 /**56 * Returns total number of times the cache was accessed.57 *58 * @return total number of times the cache was accessed.59 */60 public int getCacheAccessNum() {61 return cacheAccessNum;62 } 63 64 /**65 * Sets total number of times the cache was accessed.66 *67 * @param num Total number of times the cache was accessed.68 */69 public void setCacheAccessNum(int num) {70 this.cacheAccessNum = num;71 } 72 73 /**74 * Increases total number of times the cache was accessed.75 */76 public void incrementCacheAccessNum(int num) {77 cacheAccessNum += num;78 } 79 80 /**81 * Returns number of queries performed on the cache successfully.82 *83 * @return Number of queries performed on the cache successfully.84 */85 public int getCacheHitsNum() {86 return cacheHitsNum;87 }88 89 /**90 * Sets number of queries performed on the cache successfully.91 *92 * @param cacheHitsNum Number of queries performed on the cache successfully.93 */94 public void setCacheHitsNum(int cacheHitsNum) {95 this.cacheHitsNum = cacheHitsNum;96 }97 98 /**99 * Increases number of queries performed on the cache successfully for one.100 */101 public void incrementCacheHitsNum(int num) {102 cacheHitsNum += num;103 }104 105 /**106 * Returns how much cache is currently used. This value is given in percents.107 * If cache is unbounded, method returns 100%.108 *109 * @return Percents - how much cache is currently used.110 */111 public double getUsedPercents() {112 int maxCacheSize = -1;113 114 if (maxCacheSize < 0) {115 return 100;116 }117 int temp = size() * 10000;118 double res = temp / maxCacheSize;119 120 return res / 100;121 }122 123 /**124 * Returns how much queries performed on the cache were successful.125 * This value is given in percents.126 *127 * @return Percents - how much queries performed on the cache were128 * successful.129 */130 public double getCacheHitsPercents() {131 if (cacheAccessNum == 0) {132 return 0;133 }134 int temp = cacheHitsNum * 10000;135 double res = temp / cacheAccessNum;136 137 return res / 100;138 }139 140 /**141 * Clears statistics.142 */143 public void clearStatistics() {144 this.cacheAccessNum = 0;145 this.cacheHitsNum = 0;146 }147 }148 Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ |
http://kickjava.com/src/org/enhydra/dods/cache/DODSHashMap.java.htm
CC-MAIN-2017-17
refinedweb
480
58.38
Plant Physiol, March 2000, Vol. 122, pp. 845-852 Unité Mixte de Recherche 5544, Centre National de la Recherche Scientifique (S.M., C.C., J.-J.B.), and Ecole Supérieure de Technologie des Biomolécules de Bordeaux (C.C.), Université Victor Segalen Bordeaux 2, 146, rue Léo Saignat-Case 92, 33076 Bordeaux cédex, France Plastids rely on the import of extraplastidial precursor for the synthesis of their own lipids. This key phenomenon in the formation of plastidial phosphatidylcholine (PC) and of the most abundant lipids on earth, namely galactolipids, is poorly understood. Various suggestions have been made on the nature of the precursor molecule(s) transferred to plastids, but despite general agreement that PC or a close metabolite plays a central role, there is no clear-cut answer to this question because of a lack of conclusive experimental data. We therefore designed experiments to discriminate between a transfer of PC, 1-acylglycero phosphorylcholine (lyso-PC), or glycerophosphorylcholine. After pulse-chase experiments with glycerol and acetate, plastids of leek (Allium porrum L.) seedlings were purified. The labels of the glycerol moiety and the sn-1- and sn-2-bound fatty acids of plastidial lipids were determined and compared with those associated with the extraplastidial PC. After import, plastid lipids contained the glycerol moiety and the fatty acids esterified to the sn-1 position originating from the extraplastidial PC; no import of sn-2-bound fatty acid was detected. These results rule out a transfer of PC or glycerophosphorylcholine, and are totally explained by an import of lyso-PC molecules used subsequently as precursor for the synthesis of eukaryotic plastid lipids. Galactolipids are the major lipids of photosynthetic tissues, and therefore are the most abundant lipids on earth (Gounaris and Barber, 1983). Their biosynthesis in higher plants involves two different pathways that coexist or not depending on the plant involved (for review, see Mongrand et al., 1998). The prokaryotic pathway leads to galactolipid synthesis by using only the plastidial enzyme machinery, and differs greatly in this respect from the eukaryotic pathway, which requires close cooperation between the endoplasmic reticulum (ER) and chloroplasts (for review, see Browse and Somerville, 1991). Fatty acids synthesized in the plastid stroma are exported to ER and acylated to glycerophosphate to form phosphatidic acid, which is further converted to phospholipids and particularly to phosphatidylcholine (PC). Some lipids are then transferred (under an unknown form) to chloroplasts, where they account for the presence of PC in these organelles and contribute to the synthesis of the plastidial glycolipids monogalactosyl diacylglycerol (MGDG), digalactosyldiacylglycerol (DGDG), and sulfoquinovasyl diacylglycerol. Therefore, the import of lipids from ER membranes to plastids is a major phenomenon of the plant lipid metabolism since it contributes approximately 50% of the total galactolipid formation when the prokaryotic and eukaryotic pathways are operative, and 100% in the fairly common case (for review, see Mongrand et al., 1998) when no prokaryotic synthesis of galactolipids occurs. That a lipid import is required for plastid lipid synthesis is no longer a matter of debate, but the nature of the lipid link between the ER and the chloroplasts remains unknown. 
On one hand, plastids contain PC (in the outer leaflet of the envelope outer membrane [Dorne et al., 1985]), but are devoid of CDP-choline diacylglycerol choline-phosphotransferase activity (Joyard and Douce, 1976). One the other hand, several in vivo experiments have evidenced a PC/galactolipid precursor/product relationship in plants (e.g. Roughan, 1970; Slack et al., 1977; Ohnishi and Yamada, 1980; Browse et al., 1986; Williams and Khan, 1996; Mongrand et al., 1997). Therefore, there is general agreement that PC, or a close metabolite, has to be imported from ER membranes to chloroplasts, and that after lipid import, plastidial PC is used as a substrate for eukaryotic galactolipid synthesis (for reviews, see Roughan and Slack, 1982; Somerville and Browse, 1991; Maréchal et al., 1997. It follows that three reasonable hypotheses may be proposed for the chemical form of the lipid link between endomembranes and plastids: (a) glycerophosphorylcholine, but so far no data have established or even suggested a role of this molecule in the plastidial PC synthesis or in the galactolipid accumulation; (b) PC, which for years was thought to be the link between the ER and the plastids. This assumption received some experimental support but now needs to be re-investigated (Mongrand et al., 1997; for reviews, see Kader, 1996; Moreau et al., 1998); or (c) 1-acylglycero phosphorylcholine (lyso-PC), a transfer of which was only recently hypothesized and was immediately supported by several lines of evidence in vitro (Bessoule et al., 1995) and in vivo (Mongrand et al., 1997). Nevertheless, no conclusive data have until now been obtained. We decided to investigate the nature of the lipid link between the ER and the chloroplasts. The rationale of the in vivo experiments described in this paper was to label the glycerol moiety and both fatty acids of the lipids located in the donor (extraplastidial) compartment, to purify at various chase times the acceptor compartment (plastids), and to pay special attention to the label associated with the fatty acids in the sn-1 and sn-2 positions of the glycerol backbones, as well as with the glycerol moieties, of plastidial PC and galactolipids. For the first time to our knowledge, the labels associated with lipids in the donor compartment have been determined. We describe the nature of the lipid transferred in vivo from extraplastidial membranes to plastids. Materials High-performance thin-layer chromatography (HP-TLC) plates were Silicagel 60 F254 (Merck, Rahway, NJ). Autoradiography was performed using hyperfilm MP (Amersham, Buckinghamshire, UK). Na-[1-14C]acetate (2 GBq/mmol) was obtained from Commissariat á l'Énergie Atomique (Saclay, France). Na-[3H]acetate (93.2 GBq/mmol), [2-3-3H]glycerol (7.4 GBq/mmol), [14C(U)]glycerol (5.47 GBq/mmol), and [1-14C]oleoyl-coenzyme A (CoA) (2 GBq/mmol) were obtained from DuPont-NEN (Les Ulis, France). Lipases and all other reagents were from Sigma Chemical (St. Louis). Plant Materials and Pulse/Chase Labeling of Leek Seedlings Leek (Allium porrum var Furor) seeds stored overnight at 4°C were washed five times with distilled water and grown for 15 d at room temperature on a previously described growth medium: 5% (w/v) agar in Heller's solution (see Moreau et al., 1988). Routinely, 6 g of seedlings were gently uprooted, and Na-[1-14C]acetate (8.9 MBq), [2-3-3H]glycerol (7.4 MBq), or both Na-[3H]acetate (55.5 MBq) and [14C(U)]glycerol (455 kBq) were supplied for 2 h. 
Seedlings were then rinsed eight times with deionized water, and the chase was carried out by adding 1 mL of 0.46 M Na-acetate, pH 5.5, 1 mL of 16 mM glycerol, or both 1 mL of 0.46 M Na-acetate, pH 5.5, and 1 mL of 8 mM glycerol. Seedlings were then replanted at 30°C in low-melting agar (1.5% [w/v] in Heller's solution). At various times, 1 g of seedlings was sampled and cut into small pieces. After that, 0.1 g was used to extract total lipids and 0.9 g to isolate chloroplasts. Extraction of Total Lipids Green tissues were weighed (approximately 0.1 g) and ground in a glass-glass tissue grinder with 6 mL of chloroform:methanol:formic acid (10:10:1, v/v). The homogenate was transferred to a screw-capped centrifuge tube and stored overnight at 20°C. The extraction procedure was continued by adding 2.2 mL of chloroform:methanol:water (5:5:1, v/v). The organic phase was washed with 6 mL of 0.2 M H3PO4 and 1 M KCl. Lipids were recovered in the organic phase, dried, and redissolved in 1 mL of chloroform:methanol (2:1, v/v). An aliquot of the lipid extract was evaporated in a scintillation vial, and radioactivity was determined by liquid scintillation counting. Isolation of Chloroplasts, in Vitro Labeling, and Extraction of Chloroplastic Lipids All operations were carried out at 4°C. Leek seedlings were weighed and then sliced into small pieces using fine scissors in a homogenization buffer 50 mM Tris, pH 7.5, 0.33 M sorbitol, 5 mM EDTA, and 0.1% (w/v) bovine serum albumin (BSA), as described by Joy and Mills (1987). The homogenate was then strained through two layers of Miracloth (Calbiochem-Novabiochem, San Diego) and centrifuged for 5 min at 3,000g. The pellet was suspended in the homogenization buffer and loaded onto a discontinuous Percoll gradient consisting of 5 mL of 80% (v/v) and 10 mL of 40% (v/v) Percoll. After centrifugation at 5,000g for 10 min, intact chloroplasts were collected at the 40% to 80% interface, diluted with 20 mL of the homogenization buffer and further centrifuged for 10 min at 3,000g. The pellet was resuspended in 1 mL of 50 mM Tris, pH 7.5, and 0.33 M sorbitol (buffer A). The protein content was determined according to the method of Bradford (1976) using BSA as a standard. To determine the labeling of chloroplastic lipids in vitro, purified chloroplasts (700 µg) were incubated with 2.65 nmol of [1-14C]oleoyl-CoA in 700 µL of buffer A at room temperature for 1 h. After incubation, chloroplasts were spun down for 10 min at 3,000g and the pellet was resuspended in 500 µL of buffer A. Aliquots of 100 µg of chloroplasts were incubated with various amounts of unlabeled oleoyl-CoA (0-10 mmol) for 1.5 h. After incubation, lipids was extracted and lipase digestion of chloroplastic PC was carried out as described below. To extract chloroplastic lipids, chloroplasts were placed in 2 mL of chloroform:methanol (2:1, v/v). The volume of the aqueous phase was completed with water to 0.5 mL. After vortexing, the organic phase was isolated and the aqueous phase was re-extracted with 2 mL of chloroform. The lipid extract was evaporated to dryness and redissolved in 1 mL of chloroform:methanol (2:1). An aliquot of the lipid extract was evaporated in a scintillation vial, and radioactivity was determined by liquid scintillation counting. Analysis of Labeled Lipids and Lipase Digestion Individual polar lipids were purified from the extracts by monodimensional HP-TLC using the solvent system described by Vitiello and Zanetta (1978). 
Neutral lipids were separated by the solvent system described by Juguelin et al. (1986). Lipids were then located by spraying the plates with a solution of 0.1% (w/v) primuline in 80% (v/v) acetone, followed by visualization under UV light. After autoradiography, the silica gel zones corresponding to individual lipids were scraped off, and the radioactivity associated with the lipids was determined by liquid scintillation counting. The radioactivity associated with fatty acids esterified to sn-1 and sn-2 positions of total PC and chloroplastic PC was determined by lipase digestion. Under the conditions used, the phospholipase A2 specificity was determined as described in Mongrand et al. (1997). After HP-TLC, lipid spots were scraped off and sonicated (15 min) in 200 µL of 50 mM Tris-HCl, pH 8.9, and 5 mM CaCl2. Reactions were started by the addition of 0.2 unit of phospholipase A2. Incubations were performed for 15 min at 37°C. After incubation, 2 mL of chloroform:methanol (2:1) was added to stop reactions and to start lipid extraction. The organic phase was washed with 1 mL of 0.2 M H3PO4 and 1 M KCl. The aqueous phase was re-extracted by 2 mL of chloroform. Both of the organic phases were combined, evaporated, and lipids were redissolved in a minimal volume of chloroform:methanol (2:1). Lipids were resolved by HP-TLC as described above. After autoradiography, the silica gel zones corresponding to lysolipids and free fatty acids were scraped from the plates and the radioactivity was determined by liquid scintillation counting. The radioactivity associated with fatty acids esterified to the sn-1 and sn-2 positions of galactolipids was determined by Rhizopus arrhizus lipase digestion as described in Mongrand et al. (1997). For each pulse-chase experiment, both of the phospholipase A2 and R. arrhizus lipase digestions were carried out at least three times per chase time. Calculation of the Label Associated with Extraplastidial and Chloroplastic PC in Total Lipid Extract At each pulse/chase time, the label associated with chloroplastic PC in the chloroplastic lipid extract was determined and expressed as a percentage of the radioactivity incorporated into galactolipids (MGDG plus DGDG). Since the galactolipids are located exclusively in the plastids (Douce and Joyard, 1979), this percentage was multiplied by the label associated with galactolipids in the total lipid extract to obtain the label associated with chloroplastic PC in the total lipid extract. The radioactivity in the extraplastidial PC was calculated by subtracting these values from the label of total PC. The study was carried out with leek seedlings, which were previously shown to be 18:3 plants (Mongrand et al., 1997, 1998), i.e. a plant in which no prokaryotic synthesis of galactolipids occurs (for reviews, see Somerville and Browse, 1991; Roughan and Slack, 1982). It follows that in all the experiments reported in this paper, both galactolipids and plastidial PC entirely originate from an extraplastidial pool of PC. In Vivo Labeling of Extraplastidial PC The in vivo labeling of the acyl chains and of the glycerol moiety of extraplastidial PC in 15-d-old leek seedlings was studied by pulse/chase experiments using acetate and/or glycerol as labeled substrates. After a 2-h pulse with labeled glycerol, around 36% of total lipid label was associated with extraplastidial PC (Fig. 1a). 
Comparison with the radioactivity associated with total PC (38% of the total lipid label after a 2-h pulse [Mongrand et al., 1997]) showed that immediately after the pulse, labeled PC was mainly located in the extrachloroplastic compartment. During the chase, the label associated with the glycerol moiety of extraplastidial PC decreased from 36% ± 2.4% to 21% ± 1.2% of total labeled lipids. Using labeled acetate instead of labeled glycerol, basically identical results were obtained: around 30% of the total label was associated with the extraplastidial PC after the pulse, and its radioactivity decreased during the chase from 29.1% ± 4.3% to 11.2% ± 2.3% of total radioactivity (Fig. 1b). This decrease affected the label of the fatty acids esterified to the sn-1 and sn-2 position of the glycerol backbone (Fig. 2). However, it can be noted that after a 2-hour pulse and at the various chase times, the label of sn-2-bound fatty acids of extraplastidial PC was always higher than that of sn-1 bound fatty acids (Fig. 2). From five separate experiments, we observed a reproducible decrease in the label of sn-2-bound fatty acids of extraplastidial PC from 17.0% ± 3.7% (pulse) to 6.9% ± 1.6% (96-h chase) and a label decrease from 12.1% ± 1.8% to 4.4% ± 1.6% in the case of sn-1-bound fatty acids. In Vivo Import of Labeled Molecules into Plastids as a Function of Time After the pulse with radioactive glycerol, 2.6% ± 0.72% of the total label was found in chloroplastic PC (Fig. 3a). During the first 24 h following the pulse, the radioactivity incorporated into chloroplastic PC increased from 2.6% ± 0.72% to 6.0% ± 1.41%, and then reached a plateau. These results indicated an import of glycerol-labeled precursor into chloroplasts during the chase. As shown below, the plateau observed after a 24-h chase is correlated to the synthesis of galactolipids from plastidial PC, and reflects the steady state between import and metabolism. The variation in the label of the chloroplastic PC acyl chains was also studied as a function of time by supplying leek seedlings with labeled acetate. In contrast with the results observed using glycerol, the total radioactivity associated with the acyl chains of chloroplastic PC remained almost constant during the pulse/chase, and represented approximately 3.5% of the total lipid label (Fig. 3b). Therefore, unexpectedly, no import of labeled fatty acids into chloroplastic lipids seemed to occur during the chase. This apparent lack of label import into plastids resulted from the superimposition of two phenomena that were clearly evidenced when the label of the fatty acids esterified to the sn-1 and to the sn-2 positions of chloroplastic PC was studied (Fig. 4). After the pulse, the label of the fatty acids esterified to the sn-1 position was repeatedly 2 times lower than that of the sn-2-bound fatty acids: 33% ± 6% and 66% ± 6% of the plastidial PC label were associated to the sn-1 and sn-2 positions, respectively (i.e. approximately 1.2% and 2.4% of the total lipid label, respectively). During the chase, an increase in the label associated with the sn-1 position of chloroplastic PC was observed (from 33% ± 6% after the pulse to 46% ± 5% of the plastidial PC label after 96 h of chase), whereas the fatty acid radioactivity in the sn-2 position decreased from 66% ± 6% after the pulse to 54% ± 5% after 96 h of the chase. These two phenomena were prominent during the first 24 h following the pulse. 
These results indicated an import of labeled fatty acids esterified to the sn-1 position of plastidial PC during the chase, whereas no import of labeled fatty acids esterified to the sn-2 position seemed to occur. After demonstrating an in vivo differential labeling of sn-1- and sn-2-bound fatty acids of chloroplastic PC, we analyzed the kinetics of the fatty acid labeling of MGDG and DGDG. Results (Fig. 5) showed that during the chase, the increase in the fatty acid label due to an import of labeled molecules from the extraplastidial compartment did not affect the two acylable positions of the glycerol backbone to the same extent. Whereas the fatty acid label bound to the sn-2 position of the galactolipids remained almost constant during the chase, the radioactivity associated with fatty acids esterified to the sn-1 position increased from 2.7% ± 0.8% after the pulse to 9.1% ± 1.1% after a 96-h chase in MGDG, and from 1.7% ± 1.2% to 4.8% ± 2% in DGDG. During the chase, the glycerol labeling of galactolipids increased (Fig. 5), as did the acetate label associated with fatty acids esterified to the sn-1 position of galactolipids. When the molecular species of lipids were analyzed, it appeared that while palmitic acid accounted for 25% of the labeled fatty acids associated with PC after the pulse, only labeled 18:2 and 18:3 fatty acids were esterified to MGDG after the chase. This result is in agreement with the fatty composition of MGDG, which does not contain palmitic acid (for review, see Browse and Somerville, 1991). In contrast to MGDG, DGDG usually contains 16:0 fatty acids and, in good agreement, labeled palmitic acid was found to be esterified to DGDG after a 96-h chase. We also determined the total imported fatty acid label associated with either the sn-1 or the sn-2 position of the glycerol backbones of plastidial PC, MGDG, and DGDG. The results (Fig. 6) showed that no import of labeled fatty acid associated with the sn-2 position occurred during the chase and that the acetate labeling of eukaryotic lipids in plastids resulted from an import of radioactivity exclusively associated with the sn-1-bound fatty acids. This import matched the label decrease associated with the sn-1-bound fatty acids of PC in the donor compartment (shown in Fig. 2). Moreover, the ratio of the radioactivities associated with the sn-1 fatty acids and with the glycerol of the chloroplastic lipids remained constant during the chase (Fig. 6, inset). Our results clearly showed that: (a) the eukaryotic lipids imported into plastids were labeled by a concomitant import of glycerol and of sn-1-bound fatty acids occurring at the same rate, and (b) the sn-2 position of the eukaryotic plastid lipids synthesized during the chase was esterified by unlabeled fatty acid and not by labeled fatty acids originating from the sn-2 position of extraplastidial PC. Indeed, the decrease in the radioactivity associated with the sn-2-bound fatty acids of extraplastidial PC during the chase (Fig. 2) was accompanied by an increase of the same order in the radioactivity associated with free fatty acids (from 35.2% ± 2.2% to 43.7% ± 4.8%, see Fig. 7). 
Therefore, when expressed as percentage of the total radioactivity incorporated into lipids, the decrease in the label of fatty acids esterified to the sn-1 and sn-2 positions of extraplastidial PC matched the increase in the label of plastidial lipids and of free fatty acids, respectively, which is in agreement with the fact that the total amount of radioactivity did not vary greatly during the chase (see also Mongrand et al., 1997). In addition, 70% of the fatty acids esterified to the sn-1 position of PC during the pulse were transferred to plastids during the chase (47% in MGDG and 23% in DGDG), while 30% remained associated with the extraplastidial PC. These results are in good agreement with those obtained with Arabidopsis leaves: 342 molecules were transferred to plastids, while 131 molecules of PC remained in the extraplastidial compartment (Browse et al., 1986). Using the same approach (mass analysis of lipids from unlabeled plants), we obtained similar proportions in leek seedlings: 532 molecules were transferred (310 in MGDG and 167 in DGDG), while 192 molecules of PC remained in the extraplastidial compartment (Mongrand, 1998). As mentioned above, the label associated with the fatty acids esterified to the sn-2 position of plastidial lipids remained constant and did not decrease during the chase (see Fig. 6), strongly suggesting that no acyl exchange occurred in the sn-2 position of the plastidial lipids. This point was also examined in vitro. After incubation of purified chloroplasts with [14C]oleoyl-CoA (first incubation), plastids were spun down to eliminate unreacted labeled oleoyl-CoA and incubated in the presence or absence of unlabeled oleoyl-CoA (second incubation). Under these conditions, and as already observed (Bessoule et al., 1995), PC was the only labeled lipid. The activity was very low and the weak radioactivity was almost exclusively associated with the sn-2 position of PC (Table I). The addition of unlabeled oleoyl-CoAwhatever the amount useddid not induce a decrease in the specific radioactivity of chloroplastic PC, clearly showing that no acyl exchange occurred in vitro in addition to in vivo. It follows that the weak labeling of plastidial PC during the incubation involved an acylation of a low amount of endogenous lyso-PC rather than an acyl exchange. In addition, the data gathered from these in vitro experiments are in agreement with results from the in vivo pulse chase experiments, since they suggest that no extensive remodeling of lipids occurred after their import into plastids. The aim of this study was to investigate in vivo the lipid trafficking from the extraplastidial compartment (donor compartment) to the chloroplasts (acceptor compartment). Since in this kind of experiment (e.g. Browse et al., 1986), the chase is followed over a period of days, it might be inferred that the transfer is quite a slow mechanism. Nevertheless, not only the constant rate of the transfer (k) but also the amount of molecules involved (A) must be taken into account (v = kA). In the present study, we show that, like Arabidopsis leaves (Browse et al., 1986), approximately 70% to 75% of the PC located in ER membranes is involved in the plastid lipid biosynthesis in leek seedlings (see above). Since PC represents 40% to 45% of total lipids located in ER membranes of leek seedlings (Moreau et al., 1998), it appeared that during the course of the experiments described in this paper, 15% to 20% of total ER lipids were transferred to plastids during the first 24 h of chase. 
There were three hypotheses concerning the nature of the molecules transferred: (a) glycerophosphorylcholine, (b) PC molecules, or (c) lyso-PC. The first hypothesis may be ruled out for several reasons. First, chloroplastic membranes are devoid of glycerophosphorylcholine acyltransferase activity (Bessoule et al., 1995), so it seems unlikely that an import of glycerophosphorylcholine in plastids leads to a PC synthesis. Second, since fatty acids synthesized during the chase were unlabeled, and even if someas yet unobservedglycerophosphorylcholine acyltransferase activity was present in the plastids, a differential label variation of glycerol and fatty acids of plastid lipids would be expected, and this was not found. Third, even if some fatty acids remaining labeled during the chase unexpectedly acylated glycerophosphorylcholine in the plastids, this would lead to a similar label variation of the fatty acids esterified to the sn-1 and sn-2 positions of eukaryotic plastid lipids, and this was also not observed. Regarding the transfer of PC, the label of fatty acids esterified to the sn-2 position of extraplastidial PC was always higher than that of the fatty acids esterified to the sn-1 position. It follows that if PC were to be transferred as a whole from the ER to the plastids, the rate of incorporation of labeled fatty acids in the sn-2 position of the imported lipids (V2 = k [PCERsn-2]) would be higher than that in the sn-1 position (V1 = k [PCERsn-1]), resulting in a higher import of labeled fatty acids esterified to the sn-2 position than to the sn-1 position. Our data are therefore not compatible with a PC transfer, unless a special pool of extraplastidial PC (as yet never evidenced) specifically and exclusively labeled at the sn-1 position (because V2 = 0, see Fig. 6) was transferred to plastids. Therefore, a transfer of PC appears highly unlikely. A transfer of lyso-PC has also been proposed (Bessoule et al., 1995). According to this hypothesis, lyso-PC formed from extraplastidial PC reaches the plastids, where it is acylated by a lyso-PC acyl-CoA acyltransferase. In agreement with this proposal, this enzyme has been evidenced in the plastid envelope, and its properties are highly compatible with the formation of plastid lipids (Bessoule et al., 1995). Our data add new in vivo evidence that this pathway is operative. The study of the label variation of extraplastidial and plastidial PC and galactolipids during the chase and the comparison of the labels associated with glycerol and with the acyl moieties of these lipids establish, for the first time to our knowledge, that the glycerol moiety is transferred concomitantly withand at the same rate asfatty acids esterified to the sn-1 position. Furthermore, fatty acids originating from the sn-2 position of extraplastidial PC are not associated with plastid lipids. These results, which strongly suggest a transfer of lyso-PC molecules between ER and plastids, underline the prominent role of the plastidial lyso-PC:acyl-CoA acyltransferase in plastidial lipid synthesis. Purification of this protein is now under way in our laboratory. The helpful reading of the manuscript by Dr. Ray Cooke is gratefully acknowledged. Received June 15, 1999; accepted November 9, 1999. 1 This work was in part supported by the Conseil Régional d'Aquitaine (France). S.M. was supported by a grant from the Ministère de l'Education Nationale de la Recherche et des Technologies. 
2 Present address: Laboratory of Plant Molecular Biology, The Rockefeller University, 1230 York Avenue, New York, NY 10021-6399. * Corresponding author; e-mail Jean-Jacques.Bessoule{at}biomemb.u-bordeaux2.fr This article has been cited by other articles:
http://www.plantphysiol.org/cgi/content/full/122/3/845
crawl-002
refinedweb
4,628
51.99
You can extend the behaviour of symbols to make them "intelligent". They can have a script of actions to follow in response to key and mouse clicks, as well as a a script to control what they do in general. There are many reasons to want to do this. You will see some reasons why in our application examples. The ActionScript 3 is object oriented. This means you can add functionality, such as movement programs, onto existing items, such as Buttons and MovieClips. To do this you create an ActionScript file. This is a separate program that you will add into your project. The program is then attached to a symbol of the appropriate type and it's program will then affect the behaviour of the symbol. The basic steps are: Interpreting keys is similar to receiving Mouse events. You create add an event listener for KeyboardEvent type events. Most of the time you want to add the listener to the stage, but it is possible to add it to other things, like text fields. For each key there are two major events - a KEY_DOWN and a KEY_UP event. If you want to do something just once when a key is pressed, you usually only listen for the KEY_UP and do it when the key is released. If you want to repeat an action you use the KEY_DOWN and set a flag that says the key is pressed and perform an action in your update (ENTER_FRAME) event behaviour. You unset the flag when you hear a KEY_UP that says the key was released. Each key has a special code associated with it. See the Capturing Keyboard Input and Keyboard Object Reference sections for more details. A particle system is a combination of a generator and objects. The generator introduces little independent animations called particles into a scene. You see them all the time in movies and video games. The generator controls things like: where the animation started, how fast the particle was moving when it started, how buoyant the particle is, and how long the particle lives. Many particle systems are custom made for a particular situation, but it is possible to make one sufficiently abstract to represent such diverse things as fog, explosions, smoke, and sparkles simply by adjusting the generator and the pictures in its animation. Here is the system you will make in the lab: The little red ball represents the generator. Currently it moves when you press the arrow keys. Its initial position can be controlled by moving it around on the stage. We will start by making a moveable generator, then we will create intelligent particles and finally we will make the particles appear where the generator is. stage.addEventListener(KeyboardEvent.KEY_DOWN, keyDown); function keyDown(event:KeyboardEvent): void { if (event.keyCode == Keyboard.UP) this.y -= 5; if (event.keyCode == Keyboard.DOWN) this.y += 5; if (event.keyCode == Keyboard.RIGHT) this.x += 5; if (event.keyCode == Keyboard.LEFT) this.x -= 5; } //Key flags. var up_pressed:Boolean = false; var down_pressed:Boolean = false; var left_pressed:Boolean = false; var right_pressed:Boolean = false; //KEY_DOWN listener. //This listener will set flags corresponding to keys of interest. stage.addEventListener(KeyboardEvent.KEY_DOWN, setKeysDown); function setKeysDown(event:KeyboardEvent): void { if (event.keyCode == Keyboard.UP) up_pressed = true; if (event.keyCode == Keyboard.DOWN) down_pressed = true; if (event.keyCode == Keyboard.RIGHT) right_pressed = true; if (event.keyCode == Keyboard.LEFT) left_pressed = true; } //ENTER_FRAME listener. 
//The listener function will run at the timeline's framerate this.addEventListener(Event.ENTER_FRAME, mainLoop); function mainLoop(e:Event):void { if (up_pressed == true) y -= 5; if (down_pressed == true) y += 5; if (right_pressed == true) x += 5; if (left_pressed == true) x -= 5; } //KEY_UP listener. //This listener will unset flags for keys that have been released. stage.addEventListener(KeyboardEvent.KEY_UP, setKeysUp); function setKeysUp(event:KeyboardEvent): void { if (event.keyCode == Keyboard.UP) up_pressed = false; if (event.keyCode == Keyboard.DOWN) down_pressed = false; if (event.keyCode == Keyboard.RIGHT) right_pressed = false; if (event.keyCode == Keyboard.LEFT) left_pressed = false; } //package helps define the folder where this external code is located package Scripts { //Bring in definitions for the things we want to use. import flash.display.MovieClip; //The particle will be based on a MovieClip. import flash.events.Event; //This allows us to listen for frame refresh. import flash.events.MouseEvent; //This allows us to listen for mouse clicks. import Math; //Add Particle features to a movie clip by "extending" button behaviour. public class Particle extends flash.display.MovieClip { //Particle State var velX:Number; //How fast the particle is moving in the X direction var velY:Number; //How fast the particle is moving in the Y direction var gravity:Number = 9.8; //This will modify velY to simulate gravity var buoyancy:Number; //This reduces the effect of gravity. var lifespan:Number; //How many frames the particle lasts var age:Number = 0; //Counts how many frames the particle has existed //Constructor: Allows set up of custom particles. //Default values will be used if the constructor is not explicitly called. public function Particle(newX:Number = Number.MAX_VALUE, newY:Number = Number.MAX_VALUE, newVelX:Number = 10, newVelY:Number = -5, newBuoyancy:Number = .1, newLifespan:Number = 60) { //Only set x and y if they are not the default values. if (newX != Number.MAX_VALUE && newY != Number.MAX_VALUE) { x=newX; y=newY; } velX=newVelX; velY=newVelY; buoyancy=newBuoyancy; lifespan=newLifespan; addEventListener(Event.ENTER_FRAME, ParticleUpdate); } //On every new frame this function will update the particles function ParticleUpdate(e:Event):void { //Update the particle's position this.x += velX; this.y+=velY; //Add the force of gravity to the particle's velocity. velY+=gravity*buoyancy; age++; //Once a particle is older than its lifespan, play its death animation if (age == lifespan) { play(); } //When the death animation is over, clean up //or the program may run out of memory. if (age == (lifespan + 15) ) { this.parent.removeChild(this); } } } } //ENTER_FRAME listener. //The listener function will run at the timeline's framerate this.addEventListener(Event.ENTER_FRAME, mainLoop); var fps:Number = 30; //Animation frame rate var rate:Number = 5; //Number of frames between new particles var count:Number = 0; //Counts number of frames since last particle var sourceX:Number = 0; //X Offset to where particles should be generated var sourceY:Number = 0; //Y Offset to where particles should be generated //Grab the particle script. 
var CParticle:Class = getDefinitionByName("Scripts.Particle") as Class; function mainLoop(e:Event):void { if (up_pressed == true) y -= 5; if (down_pressed == true) y += 5; if (right_pressed == true) x += 5; if (left_pressed == true) x -= 5; count++; //if it's time to make a new particle if (count >= rate) { count = count - rate; //Create a particle var sourceVelX = 10*Math.random()-5; var sourceVelY = -5*Math.random(); var sourceBuoyancy = .01; var sourceLifespan = 50; var instance:DisplayObject = new CParticle(sourceX, sourceY, sourceVelX,sourceVelY, sourceBuoyancy, sourceLifespan); //Add it to this movie clip's display list. this.addChild(instance); } } Here is a sample of the game you will make (plus a few undocumented extras): This one is structurally simpler, but more interesting to play. It still requires only two files. Main Timeline Code: //The comments explain how the code works //Create comments by adding "//" before text on a line //the function below stops the application from going to the next frame. // the ; at the end of the line marks the end of the statement. stop(); //Create variables for the score and high score and set them to 0. var score:Number = 0; var highScore:Number = 0; var lvl:Number = 0; //A list to hold the alien ships. //This list would allow us to: // remove all ships from the screen // check to see if a ship was hit by a bullet instead of a mouse click var ShipList:Array = new Array(); //Add a listener for when Flash starts this frame addEventListener(Event.ENTER_FRAME, mainEnterFrame); //This is the function called when we enter this frame function mainEnterFrame(event:Event):void { // "{}" or curly brackets mark the beginning and end of a block of code // usually for a function or if statement var alienSpawn:Number = Math.random(); //if the number is above the cutoff then add an alien to the game if (alienSpawn < lvl / 200.0 + .01) { //the makeObject function will add an object to the stage at a position makeObject("Scripts.Alien", -40, Math.random()*440); } } //makeObject is not built-in to Flash... we need to write it ourselves. function makeObject(className:String, startX:Number, startY:Number):void { //find the class "className" in Flash so you can create a new instance var myClass:Class = getDefinitionByName(className) as Class; //declare new object using the dynamically obtained class name //Here, "this" refers to the current frame - also known as the referrer. //The current frame will be responsible for the new object we create. var instance:DisplayObject = new myClass(startX,startY,this); //Add the ship to a list of ships ShipList.push(instance); //add the object as a child to the stage (display object container) this.addChild(instance); } function gameOver() { //game over //update highscore if this is a new highscore if (score > highScore) { highScore = score; } //Clear the screen of Alien Ships var instance; while (ShipList.length > 0) { instance = ShipList.pop(); instance.remove(); } //reset the score and level to 0 score = 0; lvl = 0; //Update the score text scoreText.text = "Score: " + score + " Highscore: " + highScore + " Level: " + lvl; } Alien Ship (Button) code: //package helps define the folder where this external code is located package Scripts { import flash.display.SimpleButton; //The ship will be based on a button. import flash.events.Event; //This allows us to listen for frame refresh. import flash.events.MouseEvent; //This allows us to handle mouse clicks. 
import Math; //Add Alien ship behaviours to a button by "extending" button behaviour. public class Alien extends SimpleButton { var referrer; //you need this later to adjust the score. var speed:Number; //controls ship speed //Constructor function for Alien class //This is executed whenever you make a "new" Alien. // startX: initial horizontal position // startY: initial vertical position // refer: reference to the stage public function Alien(startX:Number, startY:Number, refer) { //Randomly set the size and speed of each ship. //Math.random() generates numbers between 0 and 1 inclusive. var scale = 0.2 + Math.random() * 0.8; scaleX = 0.3 + scale; scaleY = 0.3 + scale; //Set the speed so it reflects the size of the ship. speed = 5.0 * scale; //save the initial position of the ship x = startX; y = startY; //save the reference to the stage referrer = refer; //Add listener for frame refresh so we can update the ship's position this.addEventListener(Event.ENTER_FRAME, shipUpdate); this.addEventListener(MouseEvent.CLICK, shipHit); } //This is the ship click handler //When a ship is clicked you should update the score and remove the ship public function shipHit(event:MouseEvent):void { //Add points to the score. referrer.score = referrer.score + 1; //Move to a new level every 10 hits. referrer.lvl = Math.floor(referrer.score/10.0); //Update the score text referrer.scoreText.text = "Score: " + referrer.score + " Highscore: " + referrer.highScore + " Level: " + referrer.lvl; //remove from list of Bad Guy Ships (used to clear screen when game ends) referrer.ShipList.splice(referrer.ShipList.indexOf(this),1); //free the object and its dynamically allocated components remove(); } //This is where most of the game logic is. // Update position // if an alien crosses the screen completely game is over // when game is over set highscore, clear score, and clear board. function shipUpdate(event:Event):void { //move the alien to the right x = x+speed; //Uncomment this to increase difficulty //y = y+Math.cos(x/20)*speed; //test to see if the alien has moved across the screen completely if(x > 640) { //This function clears the board. referrer.gameOver(); } } //function that removes an object properly public function remove() { //remove listeners this.removeEventListener(Event.ENTER_FRAME, shipUpdate); this.removeEventListener(MouseEvent.CLICK, shipHit); //remove from parent item's display list this.parent.removeChild(this); //delete object from memory delete this; } } } There are many places to add behaviour. For example: For this lab assignment I want you to follow the instructions for this lab and create a particle system and a game. You can combine the two - use the particle system with appropriate settings to create an explosion. Submit published .html and .swf files, and all your resources in a folder or zip file with the lab number and your name on it. This is due at the beginning of your next lab. Have fun!
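A closing hint for the explosion idea mentioned above: instead of a steady stream, emit one short burst of particles at the moment the explosion should happen. The sketch below reuses the CParticle class and the sourceX/sourceY offsets from the generator code earlier in this lab; the numbers are only a starting point, so tune them to taste.

//Emit one burst of particles, e.g. from inside a collision handler.
var burstSize:Number = 20;  //particles per explosion
for (var i:Number = 0; i < burstSize; i++)
{
    var vx:Number = 10*Math.random() - 5;  //spread left and right
    var vy:Number = -8*Math.random();      //mostly upward
    var p:DisplayObject = new CParticle(sourceX, sourceY, vx, vy, .05, 30);
    this.addChild(p);
}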
http://www.cs.uregina.ca/Links/class-info/325/flash-scripting2/
CC-MAIN-2018-22
refinedweb
2,040
57.16
Today we are releasing the first Community Technology Preview of the Roslyn Project! What is Roslyn? In the past, our compilers have acted as black boxes – you put source text in and out the other end comes an assembly. All of that rich knowledge and information that the compiler produces is thrown away and unavailable for anyone else to use. As Soma mentions in his blog, a part of the Visual Studio languages team is working on a project called Roslyn with a goal to rewrite the C# and VB compilers and language services in managed code. With a clean, modern, managed codebase our team can be more productive, innovate faster, and deliver more features sooner and with better quality. More importantly, we are opening up the C# and Visual Basic compilers and exposing all that rich information and code analysis to be available for your use. We expose a public API surface and provide extension points in the C# and VB language services. This opens up new opportunities for VS extenders to write powerful refactorings and language analysis tools, as well as allow anyone to incorporate our parsers, semantic engines, code generators and scripting in their own applications. Download the October 2011 CTP The CTP and supporting materials can be downloaded from: The main goal of this early preview is to gather feedback on the API design and to introduce the C# Interactive window (also known as REPL, or Read-Eval-Print-Loop). This first CTP is intended for preview-use only and does not allow redistribution of the Roslyn components or allow use in a production environment. The CTP installs on Visual Studio 2010 SP1. It also requires the Visual Studio 2010 SP1 SDK. Getting Started After the installation succeeds, the best place to start is to open Start Menu -> Microsoft Codename Roslyn CTP -> Getting Started. To get started, the “Roslyn Project Overview” document gives a look at the compiler API – how to work with syntax and semantics of your program. Several walkthrough documents are also included to provide a deep dive into various aspects of the Roslyn APIs. The CTP ships with quite a few samples for Visual Studio Extensions, compiler API, code issues, refactorings and so on. Most of the samples are provided for both C# and Visual Basic. You can open the sample source code from the Getting Started page. We also install several new project templates available in the New Project dialog: These templates will help you to get started on a new Visual Studio extension that uses Roslyn. Reference Assemblies The Roslyn assemblies are also installed in the GAC. Switch to the Full Profile (instead of the Client Profile) to be able to also reference the Services assemblies (which contain the IDE support). C# Interactive window You can invoke the C# Interactive window from View -> Other Windows -> C# Interactive Window. The Interactive window is powered by the new C# language service. The architecture of Roslyn is flexible enough to allow many of the IDE features such as IntelliSense and refactorings to work the same in a normal editor and in the Interactive window. At this time, the Interactive window is only available for C#. We’re working hard on providing the VB Interactive at a future time. C# Script File (.csx) Editing Support The CTP introduces a concept of a C# Script File. You can create a .csx file through File -> New File (or also use any other editor such as notepad): <scriptfilename>.csx. 
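A script file can hold loose statements and using directives directly, with no class or Main method around them. As a small illustration (the file name and contents here are made up, and remember that only a subset of the language is implemented in this CTP):

// hello.csx
using System;

int x = 40;
int y = 2;
Console.WriteLine(x + y);  // prints 42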
You can also copy chunks of code from a script file and send them to the C# Interactive Window (using the right-click context menu or a keyboard shortcut). The editor for the script files is also powered by the new language services. Hence it is important to keep in mind that .csx scripts will only support the part of the language already implemented in the Roslyn compilers. For more details, see the “Introduction to Scripting” walkthrough. Quick sample of the Roslyn API Here’s a sample of compiling and executing a small program using the Roslyn API. using Roslyn.Compilers; using Roslyn.Compilers.CSharp; ... var text = @"class Calc { public static object Eval() { return calc = compiledAssembly.GetType("Calc"); MethodInfo eval = calc.GetMethod("Eval"); string answer = eval.Invoke(null, null).ToString(); Assert.AreEqual("42", answer); Note: At this stage, only a subset of the language features has been implemented in the current CTP. We’re moving forward at a fast pace, but features such as Linq query expressions, attributes, events, dynamic, async are not yet implemented. To see a full list of non-implemented language features, see the Roslyn forums. Although not all the language features are supported, the shape of the public API is mostly complete, so we encourage you to write extensions and tools against the Syntax, Symbols, and Flow and Region Analysis APIs. We’re very excited to get an early preview of this technology in your hands and we welcome your feedback, ideas and suggestions. Use the forums to ask questions and provide feedback, Microsoft Connect to log bugs and suggestions, and use the #RoslynCTP hashtag on Twitter. Thanks, Kirill Osenkov QA (Roslyn Services Team) Twitter: @KirillOsenkov Hi Kirill and Team! I just took a quick look at the Roslyn API example above and it looks great. As someone who often has to write software engineering tools as extensions for Visual Studio this API seems like it has the potential to save me lots of time. However, I notice that Roslyn currently only supports VB and C#. I was just curious if it is within the scope of the Roslyn project to support C and C++ at some point. Thanks, Dave So using Roslyn could we write a compiler extension that allows us to add a [NotifyChanged] attribute to an automatic property that then gets turned into a full property that raises the PropertyChanged event at compile time? That would be pretty sweet! Hi Dave, Thanks for the comment! Roslyn is for VB and C# only. Enjoy! Kirill Jonathan: transforming a syntax tree to replace [NotifyChanged] with the property implementation is definitely possible. However we haven't thought about building this kind of extensibility into the actual compiler yet (that would be something like PreSharp as opposed to the currently existing PostSharp project). Right now we just want to focus on building a high quality compiler, we'll have to think carefully about metaprogramming sometime later in the future. This all very, very cool stuff – I love the C# Interactive window. Was that just a really nice side-effect of the new compiler, i.e. a way to test it interactively? Can we expect it to stay in future versions? Just out of interest, do you see Rosalyn competing with reflection emit in the future? Essentially they're both ways of dynamically creating assemblies, right? Mike – C# Interactive window is here to stay 🙂 Roslyn will certainly compete with Reflection.Emit as a more comfortable way to generate code at runtime. 
However it is important to realize that with Reflection.Emit you can create code that can never be output from a C# compiler. And Roslyn can only compile valid C#, so you could say that programs compilable with Roslyn is a subset of programs compilable with Reflection.Emit. Kirill what is the Roslyn project roadmap? I tried today to install a new VM with Only Win7 and VS11 Preview + Roslyn. It seems that the Roslyn CTP is not design at all to be installed as an extension of VS2011. Is there a way to install it anyway, or should i have to install VS2010 + SP1 + Async CTP 3 to use Roslyn ? It would be great if we could have DLR support. Writing Roslyn scripts in IronPython would be so much cleaner! Wow Roslyn works like my old friend Clipper and its CodeBlock, but now in .NET sadf How do I open the C# REPL _outside_ Visual Studio? ( analogous to F# 's `fsi.exe` ) @Matt Hickford: At the moment, the C# Interactive is only supported inside Visual Studio. We do have some ideas for this, but nothing is implemented at this time. looks amazing, can I use this to find all dependencies/usage of a certain class or method ? I recall thinking this was built into vs2012, but checking on it now, it seems to still be stuck at ctp level. Has this been abandoned? I downloaded Roselyn for VS 2012 and managed to import libraries etc..all works fine, but it often crashes the whole Visual Studio.. So far it seems to happen when I paste long string e.g. 30 characters+ long . Trying to avoid pasting long strings wherever possible. Hi It seems to crash when importing external references and these get changed while working. So I found a workaround that is almost as fast as reimporting just 1 reference. reset the interactive and then reimport them i.e. # reset recompile other references and then right click on project and "reset interactive from project" and all references are now updated highlight the using statements and "execute in interactive"
https://blogs.msdn.microsoft.com/visualstudio/2011/10/19/introducing-the-microsoft-roslyn-ctp/
CC-MAIN-2016-40
refinedweb
1,513
63.8
Serialization of functions may seem like a security concern. However, it is also a major need. Tools like Java RMI, Spark, and Akka make at least part of their name on this ability. Where flexibility is a need and systems are less vulnerable, distribution of functions are not necessarily a bad thing. The concepts presented here require Scala 2.11 or higher. A few use cases to get started: - Distributing custom functions with pre-written code - Creating localization and centrality in systems that may be spread over multiple Linux containers using tools such as Mesos - Executing user code in a trusted, secure, non-networked, and isolated environment - Tools such as JSFiddle but with Scala Follow me through reflection because, well, I may need to explain it to someone soon. Reflection Scala let’s a programmer manipulate the Abstract Syntax Tree and change elements of code. There is an article on the Scala website about this. The tool here is reflection, the ability of a program to manipulate itself. Programs which use Just In Time compiling are much more easily manipulated than those that do not. The type of reflection done at run time is Runtime Reflection.. — Heather Miller, Eugene Burmako, Philipp Haller at Scala Reflection could be useful in building ETL parsing tools, tuning code for operating systems, or changing query builders to fit certain databases. Now for my specific use case. I need to be able to serialize and pass code to children processes running separate JVMs. Why? To reduce management issues and improve flexibility in software I am writing for work. Scala Macros Scala macros look like functions but manipulate the symbol trees. Their uses include such tasks as tuning functions, CSV Parsing and generating data validators. The data validation link provides a nice overview of macros prior to digging into the Scala website. It is possible to build a basic tree from code in this way. Scala 2.11.0 Macros In Scala 2.11.0, we create a method and link it to the macro using macro with T coming from generics belonging to the object enclosing the function. This linking allows us to define elements of the macro. import c.universe._ import scala.reflect.macros object Enclosure[T]{ def myFunction[V](T => V) : Enclosure[V] = macro Func_Impl.myFunction[V,T] } Our function takes in a type T and produces a type V. It is defined in the Object FuncImpl by myFunction. We then define the implementation (this is a basic and slight deviation from the Scala website). Here we create a type at runtime. Therefore, our type is weak. The type tag comes from the previously imported c package. We then define a context (usually universal), the expression transforming the data, and the return object Func_Impl{ def myFunction(V : c.WeakTypeTag , T : c.WeakTypeTag) (c : Context) (c.Expr[ T => V): c.Expr[Enclosure[T]] = ... } The code here can be reified and used as needed following the explicit structure where data type T is transformed to V. The code here is meant as an introduction to the Scala Website which goes much more in depth on this subject including how to manipulate symbol trees and types. Scala 2.11.8 Macros Scala 2.11 changed Macros quite a bit. There are now whitebox and blackbox macros with whitebox macros having more ambiguous signatures than blackbox macros. The transformations completed by blackbox macros is understood by their input and output types without understanding their inner workings (think of testing code). 
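To make the 2.11-style def/impl pairing concrete, here is a small self-contained blackbox macro. The object and method names (Debug, trace) and its behaviour are my own illustration rather than anything from the text above; the point is just the shape: a macro def that hands its argument to an implementation taking a blackbox.Context and returning a new expression.

import scala.language.experimental.macros
import scala.reflect.macros.blackbox

object Debug {
  //Usage: Debug.trace(2 + 2) prints the source of the expression, then returns its value.
  def trace[T](expr: T): T = macro traceImpl[T]

  def traceImpl[T](c: blackbox.Context)(expr: c.Expr[T])(implicit tag: c.WeakTypeTag[T]): c.Expr[T] = {
    import c.universe._
    val source = show(expr.tree)  //the argument, rendered back to source text at compile time
    c.Expr[T](q"""{ println("evaluating: " + $source); ${expr.tree} }""")
  }
}

The q"..." interpolator inside the implementation is a quasiquote.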
Creation of macros changed significantly as well with implementations divided into whitebox and blackbox packages under scala.reflect. Tags have changed as well with changes documented on the scala website. Quasi Quotes Quasi quotes take the difficulty out of Scala Macros for our task. They take harder to write code and let us build and use trees from them. We simply write the code, parse it with Scala’s reflection tools, and then return the function through evaluation. The quasi quote effects compilation. import scala.reflection.tools.ToolBox import scala.reflect.runtime.universe.{Quasiquote, runtimeMirror} val f = q"def myFunction(i : Int) : Int = i + 1" val wrapper = "object FunctionWrapper { " + f + "}" val symbol = tb.define(tb.parse(wrapper).asInstanceOf[tb.u.ImplDef]) // Map each element using user specified function val func = tb.eval("$symbol.f _") func(1) //result should be 2 The code here created a quasiquote, generated a wrapper to be parsed, generated the symbol tree, and obtained the function from it. Serialization With a cursory view of macros, we can now look at serialization. It is actually quite simple. Serialization here just means that we take a string and serialize it. Classes do not extend the Serializable trait. Any does not extend Serializable either. Therefore, it is necessary to find alternate means to write serialize the code. Some blogs recommend approaches such as shim functions which may better suit your needs. However, String is serializable and, as long as the functions are defined, quasiquotes are useful here. Just ensure that any other libraries requiring linking to are already in the class path. Serialization is simple. @SerialVersionUID(10L) class OurClass (val code : String,val v2 : Int, val v3 : Double) extends Serializable{ override def toString = s"{code:'$code',v2: '$v2',v3: '$v3'}" } Security Mechanisms This is by no means secure or, in many cases, a bright idea. However, when possible and when the usefulness outweighs the problems, there are some tools to deploy in ensuring a bit of security. - Write and pass check sums with a Hamming, Adler, or other function - Encrypt transmissions - Ensure security (permissions by user modes,passwords, and user names) - Isolating environments that execute this sort of code - Only running and generating code passed in this manner via internal networks - If absolutely necessary, use a VPN Conclusion There are certainly more complex cases but this is a starter for reflection, macros, quasi-quotes, and serialization, a way to tie things together. The linked resources should prove useful for more depth.
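For reference, here is one way the toolbox sketch shown earlier can be written so that it compiles and runs as-is. The pieces the fragments leave implicit are spelled out, most importantly the toolbox instance itself; note that ToolBox lives in scala.tools.reflect (not scala.reflection.tools) and needs scala-compiler on the classpath.

import scala.reflect.runtime.universe._
import scala.reflect.runtime.{currentMirror => mirror}
import scala.tools.reflect.ToolBox

object QuasiquoteDemo extends App {
  val tb = mirror.mkToolBox()  //the toolbox the earlier fragments assume

  val f = q"def myFunction(i: Int): Int = i + 1"
  val wrapper = q"object FunctionWrapper { $f }"

  //Compile the wrapper object and get back a symbol we can refer to in later trees.
  val symbol = tb.define(wrapper.asInstanceOf[tb.u.ImplDef])

  //Evaluate a reference to the generated method, eta-expanded to a function value.
  val func = tb.eval(q"$symbol.myFunction _").asInstanceOf[Int => Int]

  println(func(1))  //2
}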
https://dadruid5.com/tag/quasiquotes/
CC-MAIN-2020-34
refinedweb
997
56.86
The complete web automation library for end-to-end testing. Project description The complete web automation library. SeleniumBase is an all-in-one framework for reliable browser automation, end-to-end testing, reports, charts, presentations, website tours, and visual testing. Tests are run with pytest. Browsers are controlled by WebDriver. 🚀 Start | 🦚 Features | 🖥️ CLI | 👨🏫 Examples 📗 API | 📊 Reports | 📱 Mobile | ⏺️ Recorder 🤖 CI | 🌏 Translate | 🗺️ Tours | 🖼️ VisualTest 💻 Console Scripts | 🌐 Grid | 🏃 NodeRunner ♻️ Boilerplates | 🗾 Locales | 🗄️ PkgManager 📑 Presenter | 📈 ChartMaker | 🛂 MasterQA pytest test_swag_labs.py --mobile (Above: test_swag_labs.py in Mobile Mode.) (Below: Same test running in Demo Mode.) pytest test_swag_labs.py --demo Quick Start: 🚀 Add Python and Git to your System PATH. Create a Python virtual environment. Install SeleniumBase: pip install seleniumbase (Add --upgradeOR -Uto upgrade an installation.) (Add --force-reinstallto upgrade dependencies.) git clone cd SeleniumBase/ pip install . # Normal installation pip install -e . # Editable install - Type seleniumbaseor sbaseto) * Download a webdriver: SeleniumBase can download webdrivers to the seleniumbase/drivers folder with the install command: sbase install chromedriver - You need a different webdriver for each browser to automate: chromedriverfor Chrome, edgedriverfor Edge, geckodriverfor Firefox, and operadriverfor Opera. - If you have the latest version of Chrome installed, get the latest chromedriver (otherwise it defaults to chromedriver 2.44 for compatibility reasons): sbase install chromedriver latest - If you run a test without the correct webdriver installed, the driver will be downloaded automatically. (See seleniumbase.io/seleniumbase/console_scripts/ReadMe/ for more information on SeleniumBase console scripts.) Create and run tests: sbase mkdir DIRcreates a folder with sample tests: sbase mkdir ui_tests cd ui_tests/ That folder will have the following files: ui_tests/ │ ├── __init__.py ├── my_first_test.py ├── parameterized_test.py ├── pytest.ini ├── requirements.txt ├── setup.cfg ├── test_demo_site.py └── boilerplates/ │ ├── __init__.py ├── base_test_case.py ├── boilerplate_test.py ├── page_objects.py └── samples/ │ ├── __init__.py ├── google_objects.py └── google_test.py -.) If you've cloned SeleniumBase from GitHub, you can also run tests from the SeleniumBase/examples/ folder: cd examples/ pytest my_first_test.py pytest test_swag_labs.py") - By default, CSS Selectors are used for finding page elements. - If you're new to CSS Selectors, games like Flukeout can help you learn. - Here are some common For the complete list of SeleniumBase methods, see: Method Summary Learn More: Automatic WebDriver abilities:SeleniumBase automatically handles common WebDriver actions such as spinning up web browsers and saving screenshots during test failures. (Read more about customizing test runs.) Simplified code:SeleniumBase uses simple syntax for commands, such as: self.type("input", "dogs\n") The same command with regular WebDriver is very messy: (And it doesn't include SeleniumBase smart-waiting.) from selenium.webdriver.common.by import By element = self.driver.find_element(by=By.CSS_SELECTOR, value="input") element.clear() element.send_keys("dogs") element.submit() As you can see, the old WebDriver way is not efficient! Use SeleniumBase to make testing much easier! (You can still use self.driver in your code.) You can interchange pytest with nosetests for most tests, but using pytest is recommended. 
( chrome is the default browser if not specified.) pytest my_first_test.py --browser=chrome nosetests test_suite.py --browser=firefox All Python methods that start with test_ will automatically be run when using pytest or nosetests on a Python file, (or on folders containing Python files). You can also be more specific on what to run within a file by using the following: (Note that the syntax is different for pytest vs nosetests.) pytest [FILE_NAME.py]::[CLASS_NAME]::[METHOD_NAME] nosetests [FILE_NAME.py]:[CLASS_NAME].[METHOD_NAME] No more flaky tests:SeleniumBase methods automatically wait for page elements to finish loading before interacting with them (up to a timeout limit). This means you no longer need random time.sleep() statements in your scripts. Automated/manual hybrid mode:SeleniumBase includes a solution called MasterQA, which speeds up manual testing by having automation perform all the browser actions while the manual tester handles validatation. Feature-Rich:For a full list of SeleniumBase features, Click Here. Detailed Instructions: Use Demo Mode to help you see what tests are asserting.: pytest my_first_test.py --demo Pytest includes test discovery. If you don't specify a specific file or folder to run from, pytest will search all subdirectories automatically for tests to run based on the following matching criteria: Python filenames that start with test_ or end with _test.py. Python methods that start with test_. The Python class name can be anything since SeleniumBase's BaseCase class inherits from the unittest.TestCase class. You can see which tests are getting discovered by pytest by using: pytest --collect-only -q You can use the following calls in your scripts to help you debug issues: import time; time.sleep(5) # Makes the test wait and do nothing for 5 seconds. import ipdb; ipdb.set_trace() # Enter debugging mode. n = next, c = continue, s = step. import pytest; pytest.set_trace() # Enter debugging mode. n = next, c = continue, s = step. To pause an active test that throws an exception or error, add --pdb: pytest my_first_test.py --pdb The code above will leave your browser window open in case there's a failure. (ipdb commands: 'n', 'c', 's' => next, continue, step). Here are some useful command-line options that come with pytest: -v # Verbose mode. Prints the full name of each test run. -q # Quiet mode. Print fewer details in the console output when running tests. -x # Stop running the tests after the first failure is reached. --html=report.html # Creates a detailed pytest-html report after tests finish. --collect-only | --co # Show what tests would get run. (Without running them) -n=NUM # Multithread the tests using that many threads. (Speed up test runs!) -s # See print statements. (Should be on by default with pytest.ini present.) --junit-xml=report.xml # Creates a junit-xml report after tests finish. --pdb # If a test fails, pause run and enter debug mode. (Don't use with CI!) -m=MARKER # Run tests with the specified pytest marker.:PASSWORD@SERVER:PORT # (Use authenticated proxy server.) --agent=STRING # (Modify the web browser's User-Agent string.) --mobile # (Use the mobile device emulator while running tests.) --metrics=STRING # (Set mobile .) (For more details, see the full list of command-line options here.) During test failures, logs and screenshots from the most recent test run will get saved to the latest_logs/ folder. 
Those logs will get moved to archived_logs/ if you add --archive_logs to command-line options, An easy way to override seleniumbase/config/settings.py is by using a custom settings file. Here's the command-line option to add to tests: (See examples/custom_settings.py) --settings_file=custom_settings.py (Settings include default timeout values, a two-factor auth key, DB credentials, S3 credentials, and other important settings used by tests.) To pass additional data from the command-line to tests, add --data="ANY STRING". Inside your tests, you can use self.data to access that. Test Directory Customization:.) These files specify default configuration details for tests. (For nosetest runs, you can also specify a .cfg file by using --config. Example nosetests [MY_TEST.py] --config=[MY_CONFIG.cfg]) As a shortcut, you'll be able to run sbase mkdir [DIRECTORY] to create a new folder that already contains necessary files and some example tests that you can run. sbase mkdir ui_tests cd ui_tests/ pytest test_demo_site.py Logging / Results from Failing Tests:. During test runs, past results get moved to the archived_logs folder if you have ARCHIVE_EXISTING_LOGS set to True in settings.py, or if your run tests with --archive-logs. If you choose not to archive existing logs, they will be deleted and replaced by the logs of the latest test run. Creating Visual Test Suite Reports: (NOTE: Several command-line args are different for Pytest vs Nosetests) Pytest Reports: Using --html=report.html gives you a fancy report of the name specified after your test suite completes. pytest test_suite.py --html=report.html You can also use --junit-xml=report.xml to get an xml report instead. Jenkins can use this file to display better reporting for your tests. pytest test_suite.py --junit-xml=report.xml: pytest test_suite.py --alluredir=allure_results Changing the User-Agent: If you wish to change the User-Agent for your browser tests (Chromium and Firefox only), you can add --agent="USER AGENT STRING" as an argument on the command-line. pytest user_agent_test.py --agent="Mozilla/5.0 (Nintendo 3DS; U; ; en) Version/1.7412.EU" Building Guided Tours for Websites: examples/tour_examples folder). It's great for prototyping a website onboarding experience. Production Environments & Integrations: Here are some things you can do to set up a production environment for your testing: You can set up a Jenkins build server for running tests at regular intervals. For a real-world Jenkins example of headless browser automation in action, check out the SeleniumBase Jenkins example on Azure or the SeleniumBase Jenkins example on Google Cloud. You can use the Selenium Grid to scale your testing by distributing tests on several machines with parallel execution. To do this, check out the SeleniumBase selenium_grid folder, which should have everything you need, including the Selenium Grid ReadMe, which will help you get started. If you're using the SeleniumBase MySQL feature to save results from tests running on a server machine, you can install MySQL Workbench to help you read & write from your DB more easily..) source = self.get_page_source() head_open_tag = source.find('<head>') head_close_tag = source.find('</head>', head_open_tag) everything_inside_head = source[head_open_tag+len('<head>'):head.type(selector, text) # updates the text from the specified element with the specified value. An exception is raised if the element is missing or if the text field is not editable. 
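For instance, a login step in a test method might look like this (the selectors and values here are only an illustration):

self.type("input#username", "my_user")
self.type("input#password", "secret123\n")  # the trailing \n submits the form, as in the earlier example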
Example: self.type(_true(myvar1 == something) self.assert_equal(var1, var2) Useful Conditional Statements (with creative examples in action) is_element_visible(selector) # is an element visible on a page if self.is_element_visible('div#warning'): print("Red Alert: Something bad might be happening!") is_element_present(selector) # is an element present on a page if self.is_element_present('div#top_secret img.tracking_cookie'): self.contact_cookie_monster() # Not a real SeleniumBase method else: current_url = self.get_current_url() self.contact_the_nsa(url=current_url, message="Dark Zone Found") # Not a real SeleniumBase method: self.switch_to_window(1) # This switches to the new tab (0 is the first one) ProTip™: iFrames follow the same principle as new windows - you need to specify the iFrame if you want to take action on something in there self.switch_to_frame('ContentManagerTextBody_ifr') # Now you can act inside the iFrame # .... Do something cool (here) self.switch_to_default_content() # Exit the iFrame when you're done Handling Pop-Up Alerts What if your test makes an alert pop up in your browser? No problem. You need to switch to it and either accept it or dismiss, add --disable-csp on the command-line. next example, JavaScript creates a referral button on a page, which is then clicked: deferred deferred asserts come in. Here's the example: from seleniumbase import BaseCase class MyTestClass(BaseCase): def test_deferred_asserts(self): self.open('') self.wait_for_element('#comic') self.deferred_assert_element('img[alt="Brand Identity"]') self.deferred_assert_element('img[alt="Rocket Ship"]') # Will Fail self.deferred_assert_element('#comicmap') self.deferred_assert_text('Fake Item', '#middleContainer') # Will Fail self.deferred_assert_text('Random', '#middleContainer') self.deferred_assert_element('a[name="Super Fake !!!"]') # Will Fail self.process_deferred_asserts() deferred_assert_element() and deferred_assert_text() will save any exceptions that would be raised. To flush out all the failed deferred asserts into a single exception, make sure to call self.process_deferred_asserts() at the end of your test method. If your test hits multiple pages, you can call self.process_deferred_asserts() before navigating to a new page so that the screenshot from your log files matches the URL where the deferred. Wrap-Up Congratulations on getting started with SeleniumBase! Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/seleniumbase/1.50.1/
CC-MAIN-2020-50
refinedweb
1,936
51.24
Check out this quick tour to find the best demos and examples for you, and to see how the Felgo SDK can help you to develop your next app or game! The AdMobRewardedVideo item allows monetizing your app with rewarded video ads on Android and iOS. More... Rewarded videos are skippable full-screen videos shown to the user. You can reward the user for watching a full video until the end. You can choose how this reward looks like: give away some virtual currency, virtual items or unlock features within your app or game are popular examples. Note: In contrast to interstitials, it is not possible to have more than one AdmobRewardedVideo item in your code. This restriction comes from the native AdMob SDK for iOS and Android. Here is an example how you can reward the user with virtual currency after he watched a rewarded video ad: import Felgo 3. For more information on the other available ad types and how to add this component to your app, see a rewarded video is dismissed by the user, either while it was still playing or has already finished. If the user watched the video until the end, rewardedVideoRewarded will also be called. This signal was introduced in Felgo 2.13.0. See also rewardedVideoRewarded, loadRewardedVideo(), and showRewardedVideoIfLoaded(). This handler is called if a rewarded video can not be loaded, e.g. due to a network error. This signal was introduced in Felgo 2.13.0. See also loadRewardedVideo() and showRewardedVideoIfLoaded(). This handler is called after the user clicked an ad while the video was playing and the app is going to be moved to the background to display the ad, e.g. in a browser. This signal was introduced in Felgo 2.13.0. See also loadRewardedVideo() and showRewardedVideoIfLoaded(). This handler is called when a rewarded video ad is displayed. At this point in time, the video might still be loading, rewardedVideoStarted will be emitted when it actually starts playing. This signal was introduced in Felgo 2.13.0. See also rewardedVideoStarted, loadRewardedVideo(), and showRewardedVideoIfLoaded(). This handler is called after the loadRewardedVideo() request has finished and the rewarded video is ready to display. This signal was introduced in Felgo 2.13.0. See also loadRewardedVideo() and showRewardedVideoIfLoaded(). This handler is called when the user finished watching the video and can be rewarded by the app. This signal was introduced in Felgo 2.13.0. See also rewardedVideoClosed, loadRewardedVideo(), and showRewardedVideoIfLoaded(). This handler is called when a rewarded video has started playing. This signal was introduced in Felgo 2.13.0. See also rewardedVideoOpened, loadRewardedVideo(), and showRewardedVideoIfLoaded(). Call this method to start downloading a rewarded video ad in the background. When finished, the rewardedVideoReceived signal will be emitted. This method was introduced in Felgo 2.13.0. See also rewardedVideoReceived and showRewardedVideoIfLoaded(). This method displays a rewarded video ad that was previously requested via a call to the loadRewardedVideo() method. For example, it can be called directly after rewardedVideoReceived is emitted: import Felgo 3() } } } } } This method was introduced in Felgo 2.13.0. See also loadRewardedVideo() and rewardedVideoReceived.
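Putting the documented pieces together, a minimal usage sketch might look like the following. The adUnitId property and the test ad unit id are assumptions added for illustration on top of the signals and methods described above; treat this as a sketch rather than the page's original example:

import Felgo 3.0
import QtQuick 2.0

App {
  property int coins: 0  // reward bookkeeping for this sketch only

  AdMobRewardedVideo {
    id: rewardedVideo
    adUnitId: "ca-app-pub-3940256099942544/5224354917"  // Google's public test id for rewarded video

    onRewardedVideoReceived: showRewardedVideoIfLoaded()
    onRewardedVideoRewarded: coins += 10  // grant the reward, e.g. virtual currency
  }

  Component.onCompleted: rewardedVideo.loadRewardedVideo()
}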
https://felgo.com/doc/felgo-admobrewardedvideo/
CC-MAIN-2020-40
refinedweb
517
58.89
1.1 Glossary

This document uses the following terms:

Central Administration site: A SharePoint site that an administrator can use to manage all of the sites and servers in a server farm that is running SharePoint Products and Technologies.

content deployment: The act of exporting content from a source system and importing it to a destination system.

editor: The user who last modified an item or document in a SharePoint list.

event receiver: A structured modular component that enables built-in or user-defined managed code classes to act upon objects, such as list items, lists, or content types, when specific triggering actions occur.

export server: A server that serves as the source of a data export operation.

farm: A group of computers that work together as a single system to help ensure that applications and resources are available. Also referred to as server farm.

HTTP POST: An HTTP method.

import job: A timer job that is used to import data from a content migration package to a remote server.

operator account: The account of the user who is managing the import process for a deployment package.

site-relative URL: A URL that is relative to the site that contains a resource and does not begin with a leading slash (/).

timer job: A built-in SharePoint object that can perform various tasks within the environment on a scheduled or one-time event basis.

XML schema definition (XSD): The World Wide Web Consortium (W3C) standard language that is used in defining XML schemas. Schemas are useful for enforcing structure and constraining the types of data that can be used validly within other XML documents. XML schema definition refers to the fully specified and currently recommended standard for use in authoring XML schemas.

MAY, SHOULD, MUST, SHOULD NOT, MUST NOT: These terms (in all caps) are used as defined in [RFC2119]. All statements of optional behavior use either MAY, SHOULD, or SHOULD NOT.
https://docs.microsoft.com/en-us/openspecs/sharepoint_protocols/ms-cdeploy/3b05d090-87b5-426a-b6f2-108a99d66090
CC-MAIN-2021-17
refinedweb
320
61.16
Unlike the other object types we've looked at in listallthethings so far, there does not seem to be a Component.listAllByGroup type of blcli command available. But the pattern is going to be similar as we've found with the other workspace objects. With the other object types the listAllByGroup command contained calls to convert a group path to id, use that id with one of the findAll commands in the namespace and then get the name or DBKey of the returned objects. We look in the Component namespace and see a findAllByComponentGroup which takes the ComponentGroup id. Let's check the ComponentGroup namespace (or SmartComponentGroup) for the groupNameToId command and we see it's there. Great. The namespaces usually contain the basics like getName, getDBKey, getId. Putting together the series of blcli commands: blcli_execute SmartComponentGroup groupNameToId "/Workspace/All Components" # or ComponentGroup.groupNameToId blcli_storelocal componentGroupId blcli_execute Component findAllByComponentGroup ${componentGroup} false blcli_execute Component getDBKey blcli_execute Utility setTargetObject blcli_execute Utility listPrint blcli_storelocal componentKeys That was pretty easy. Components are like Servers in that they don't need to exist in a workspace folder (unlike Jobs, DepotObjects and Templates). However they are associated with Templates and Servers and sometimes we want to list all the Components associated with a server, regardless of template or list all the components associated with a template. We might also want to list if the Component is "valid" which means the discovery conditions are met. A component becomes invalid if the Component was discovered on a server and then later something changed on the server or in the discovery conditions and that component (server) no longer meets the discovery conditions. For example - you run discovery for one of the out-of-the-box compliance templates, like the DISA STIG, for your Windows 2008 servers. A number of components are created, one for each of your 2008 servers. Later, you upgrade a handful of the 2008 servers to 2012. You re-run discovery for the 2008 Windows STIG template and the components of that template for the now 2012 servers should be flagged as invalid because the discovery condition is that the server is Windows 2008 and now it's Windows 2012. We will also get the full name, and associated device (normally the component name includes the device). Also remember that it's possible to have more than one component for a template on a single server - in the event you are using Components to model an application that has multiple instances on a single system, eg the BladeLogic Application Server or an Oracle Database. The point of the training session here on components is to provide some examples of what information I might want to retrieve about a component in my script. Let's get into the examples. I'll look in my trust Unreleased blcli commands and documentation reference in the Component namespace and see if there are some commands that look like they will do what I want. I'm really just reading the name, looking at the inputs and trying it to see if I get what I want. First I want to pass a server name and get all the components and the associated templates. I need something in the Server space to convert the name to an id or DBKey - yes that exists. Now in the Component namespace I need to see if there's something to list the components by server. I see a couple: findAllLatestByDevice and findAllLatestDBKeysByDevice. 
Those look pretty good. The first one returns the component objects, I'd have to run a Component.getDBKey and then dump the list. That's not too bad. The second one returns a message Command execution failed. java.lang.IllegalStateException: Must be on app server. Well I am on an appserver so I'm not sure why that's happening. Welcome to the unreleased commands. So I'll use the first one. I want to get the template; I see a Component.getTemplateKey and I can feed that into some of the commands I used in List All The Component Templates to get the group path to the template, and I see a Component.getName, and I see Component.isValid. I'll script all that up: blcli_execute Server getServerIdByName ${serverName} blcli_storelocal serverId blcli_execute Component findAllLatestByDevice ${serverId}TemplateKey blcli_storelocal templateKey blcli_execute Template findByDBKey ${templateKey} blcli_execute Template getName blcli_storelocal templateName blcli_execute Template getGroupId blcli_storelocal templateGroupId blcli_execute Group getQualifiedGroupName 5008 ${templateGroupId} blcli_storeenv templateGroupPath echo "${componentName},${isValid},${templateGroupPath}/${templateName}" done Now for the list of Components and their associated servers for a template. I see a couple versions of Component.findAllByTemplate - one takes the template key, the other the template id. Since I already know how to get the template key (Template.getDBKeyByGroupAndName). Then I'll follow pretty much the same pattern as above with whatever blcli calls I need to get the component and associated server info. template="/Workspace/MyTemplates/TestTemplate1" blcli_execute Template getDBKeyByGroupAndName "${template%/*}" "${template##*/}" blcli_storelocal templateKey blcli_execute Component findAllByTemplate ${templateKey}DeviceId blcli_storelocal deviceId blcli_execute Server getServerNameById ${deviceId} blcli_storelocal serverName echo "${componentName},${isValid},${serverName}" done Since this is starting to get repetitive (which is good that we can follow the same patters between workspaces) I like to throw in something new here and there to keep it interesting. At the top I have: template="/Workspace/MyTemplates/TestTemplate1" blcli_execute Template getDBKeyByGroupAndName "${template%/*}" "${template##*/}" What's going on with that second line ? The Template.getDBKeyByGroupAndName command takes the template group and name as inputs. I have a variable named template and I passed in some gibberish to my blcli command. If you recall NSH (what BSA uses for its command line shell) is based on ZSH, which is a Unix shell like bash, tcsh, csh, etc. What's happening here is parameter expansion. From the article: $. So the first one is matching the /* in the string /Workspace/MyTemplates/TestTemplate1 from the end so just '/TestTemplate1' and removing that substring from the overall string and returns just the folder path (/Workspace/MyTemplates). The second one is matching */ out of the string from the beginning and because of the ## it's matching everything, so '/Workspace/MyTemplates' and deleting that from the string. If there was just one # then it would return 'Workspace/MyTemplates/TestTemplate'. This is the same thing as using the dirname and basename commands from the Unix shell. The advantage of using the parameter substitution is you don't need to spawn off a child process to use it and it's cool. 
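If you want to play with the substitutions on their own, you can paste something like this into an NSH prompt (expected output in the comments):

template="/Workspace/MyTemplates/TestTemplate1"
echo "${template%/*}"    # /Workspace/MyTemplates   (same idea as dirname)
echo "${template##*/}"   # TestTemplate1            (same idea as basename)
echo "${template#*/}"    # Workspace/MyTemplates/TestTemplate1  (single #, shortest match)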
Hopefully that was a quick and fun diversion into shell scripting since we are seeing a lot of the same kind of command sequences when we are listing out the various objects.
https://communities.bmc.com/community/bmcdn/truesight-server-automation/blog/2018/01/11/list-all-the-components
CC-MAIN-2019-43
refinedweb
1,110
52.7
There? Hi Doug, Thanks for asking :] When do you will release a fix for the formulas issues on SP2 (there’s a lot of people asking me about it in Brazil… And I mean "important" people). Best, Jomar Hi Jomar, For information on our approach to formula support in SP2’s implementation of ODF 1.1, see these posts: As covered there, we’ll be looking closely at Open Formula when it is approved and published. Personally, I think Open Formula looks very promising, and it will be good to have a published standard for formulas in future versions of ODF. Did you have any thoughts on the topics covered in the blog post above? I’d be interested in knowing, for example, whether or not you agree that standards conformance is important for enabling standards-based interoperability. Short answer: I do agree that standards conformance is the foundation for standards-based interoperability. Conformance is not enough, as we are reminded in many different ways, but a standard (evolved and improved as reality demands) is the proper foundation for resolving interoperabilty. Longer story:In the case of Excel 2007 handling of table-cell formulas, there is a complicated tension and it would be good that it was better understood, whether or not people agree with how is was resolved in Excel 2007 ODF support. It seems to me that the Be Predictable principle and the Fail Hard principle is in a tug-of-war with the Principle of least astonishment and the Fail Quietly/Softly principle. (That last tractor-puller was a surprise arrival in this match.) The predictability is great for Excel 2007 ODF spreadsheets brought back into or interchange with Excel 2007 ODF. It also strikes me that there is no question that the Excel approach qualifies under the definition of ODF and is even further supported by thise formulas being drawn from a documented and open international standard. (Bugs we will ignore on both sides of this equation) On the other hand, there is no question that users of other products are massively surprised by (1) their spreadsheets having their formulas lost when interchanged with Excel 2007 and (2) not being able to handle the formulase received in Excel 2007’s ODF output. Whether this is something that Microsoft is supposed to fix, as in Jomar Silva’s view, is not that obvious to me. I recall in my own experience how hard we worked to avoid solving problems that were not of our own creation. (We avoided dealing with authentication and authorization in one agreement for precisely that reason, rather than attempt yet-another private solution). Here the problem is that the prevalent ODF spreadsheet implementations tend to use the same private namespace for an unstandardized formula-and-functions scheme. It is unclear whether those implementations conform with each other and whether there is any way to tell. It’s my impression that the Fail Hard decision was the chosen alternative over attempting to accept these other unstandardized formulas and have errors that would be unnoticed in the results or that, when noticed, would be inexplicable and difficult to resolve. As long as I am creating my own fantasy about the state of affairs, it is also worth noting that when you already have a spreadsheet formula system that you know works, using that in the first Excel ODF support has a certain economy and helps one be confident in that initial implementation. 
It might even be a demonstration of the least-that-could-possible-work agility principle, and the future will determine whether the "but no simpler" line was crossed. It is very unclear to me how much it would take to become confident in addressing interoperability with the uses of no-namespace and unstandardized namespaces and whether there is any reason to do that when the long-awaited OpenFormula addition to ODF is expected real-soon-now. Although there are those who think interoperability (with which anointed non-standardized implementation?) should have been a no-brainer, I speculate that the uncertain opportunity cost of pursuing that might have been frightening, especially with OpenFormula lurking in the wings and certain implementations claiming existing support for an unratified ODF 1.2. I cannot fault the Microsoft approach as incorrect, and it is far too early to declare it to be unsuccessful. I was at the year-ago DII meeting where the guiding principles were announced and their application to spreadsheet formulas described. I applauded the principles and understood the reasoning for formulas. How this would impact various groups of users and non-users (who still want to interoperate) of Office 2007 did not surface in my consciousness. I little intelligence (pun intended) on that level of consideration. Hi Doug, nice post ( and nice illustrations ). Can you tell us about the advance of the fix in ODF formula handling in Office 2007 ( you know, the "square-bracket-thing" that prevents any real interoperability between spreadsheets generated by Microsoft(TM) Office 2007 and the Rest Of The World(TM) ODF spreadsheets ). Thanks in advance. There is a problem with your diagrams, in how theory maps to reality. Remember: It is not about applications, it is about file formats. There may be N (large value of N, >> 5) applications, but regardless of how big N is, the question is, are there any "islands" within which interoperabity (or more specifically, file format implementation compatibility, or standard interpretation, or whatever you want to call it) is "perfect", i.e. known to be good, and for which round-trip open/close preserves the complete contents of the XML encoding of the file? The question then reduces to, "How many such ‘islands’ are there?" If the number is small, then the theoretical argument is just that – theoretical. If the number is small, the question is, is it small enough to justify the effort to achieve the compatiblity work, and to do so reliably? (Hint – the answer is: very few islands, like less than 5, and yes, the work is more than justified.) But all of this ignores alternative approaches, which hit more of the "bullet points" in your Guiding Priniciples. Are those Principles in order of importance? If you are informed of an approach which goes further down the list, and in fact hits all but the second last one, would there not be a compelling argument that that would be a better approach? The gist of the alternative method of preserving everything in an ODF spreadsheet without risking introducing errors (because of different ideas of what 1+2 results in, for instance), is to make the cell contents on imported cells used in foreign namespace formulas read-only (e.g. FOOBAR:=formula(foo,bar) means that "foo" and "bar" are protected from being modified, by default.) The problem of interpreting formulas is only encountered when formulas are interpreted. In reading in a spreadsheet, both the formulas AND THE LAST RESULTING VALUES are present. 
The previous values are by definition "right". If the input to the formula doesn’t change, e.g. by being protected against being changed by the user, then the formula NEVER needs to be re-interpreted. This would mean: – ODF 1.1 is supported – the results are predictable (a first for MS ;-)) – user intent is preserved – visual fidelity is preserved And, as a bonus, the foreign namespace formulas *and values* survive a round-trip completely intact. Q.E.D. So, what do you think of this proposal? Do you not anticipate a demand for this sort of functionality in a spreadsheet application, claiming to "do" ODF 1.1? Name Withheld @Dennis, the tractor-pull analogy is right on. The principles are indeed often in tension/opposition with one another on certain details. There’s always room for people to debate a given approach is more or less astonishing than another approach, of course. But our Excel program managers felt strongly, based on their long experience with real customer issues, that if you open a spreadsheet someone sends you and see different results from the calculation than the sender saw that would be much worse experience than if you saw the same results they sent, but you could not easily recalculate them. With the current state of ODF 1.1 spreadsheet interop there is just no easy way for someone to know if they are seeing the same calculation results that someone else saw. And the astonishment could be severe if, for example, you accept someone’s bid to remodel your kitchen for $10,000 and then later they tell you that in their favorite spreadsheet application the total came to $15,000. Thanks for your comment, Franco. Regarding formulas, note the responses to Jomar and Dennis above. I think it’s important that we all remember there is NO published standard for ODF spreadsheet formulas yet. Nor is there any de-facto standard that everyone agrees on. In truth, among all of the other ODF implementations real-world complex spreadsheets don’t interoperate very well either. The only way an implementer can try to make ODF spreadsheet interop work today (and by work I mean not just for trivial cases, but really work reliably for real-world complex spreadsheets) is by the “spaghetti diagram” method, with all of the complexity and risk of bugs that entails. No implementer we know of has attempted that, and the very point of my post was to explain why we don’t think this is a good approach, and why we think the standards-based approach is better. In the case of spreadsheet formulas, help is on the way — OpenFormula is under development for use with ODF 1.2. In the meantime, we should not pretend that all was well with spreadsheet interop before SP2 came along. And (in my opinion) we would be collectively better off to spend our energies on solving the problem instead of complaining about it. FYI, I’d like to keep this thread on-topic around the concepts covered in this post, as outlined in my comment policy. There are already 118 comments on the two threads about formulas that I linked to above, so I’ll only be letting more comments on that topic appear here if they truly add something new to the discussion that has not already come up in those threads. I appreciate the post, very good, because it raises these aspects which are often overlooked. Visually I would rather frame it in terms of convergence, a spiral. You stressed the need to keep a strong compatibility with legacy formats in DIS 29500. 
So how does your interoperability process function when the competitor in your diagram above is your past product? How is it possible to "get it right" rather than to support 1983 legacy bugs? Y2K was as you know a strict standard compliance bug. A format lock-in often apllies not only to the implementation but the "corpus of existing documents". The deliberate choice to implement formula totally different from competitors ("Nor is there any de-facto standard that everyone agrees on."), what will that imply for the future: No openformula because the MS-ODF legacy has to be supported? Or incompatibility of your old formats because of the literal approach? I doubt someone would ever find a magic bullet to interoperability and user satisfaction. It is more kneading to converge. For convergence commercial and public pressure seems helpful, and of course a GATT style process to make that happen. In terms of visual fidelity NLnet and Opendoc Society prepare an interesting project: Still, as a user, I have to say that discarding formulas in spreadsheets *sucks*, and it goes a long way towards "astonishing" (as in "least astonishment") in a very negative way anyone who is not aware of this limitation. OpenFormula is still under development, but it’s already used in a major implementation of ODF (OpenOffice), so it’s not likely to evolve much in incompatible ways. There are ways to handle such a situation. You could have implemented it, and provided a service pack if changes do happen. Or if it really bothered you, then you could have included a checkbox & warning to enable/disable it according to the user’s needs. It would not have been the first time Microsoft implemented a standard before its finalization (e.g. the IE team is already implementing parts of HTML5), if it’s beneficial to the user. Still no committee draft on OpenFormula spec in OASIS. Will OASIS even allow ODF 1.2 to continue in the standardization proces when it is relying heavily on a specifcation still in draft form ? @A Nonymous, The concept of an implementation round-tripping content that it doesn’t understand is something we considered during initial planning of our ODF implementation. It was also one of the topics raised during the roundtable discussions at the DII event in Redmond last July. The problem is that if you don’t understand a piece of content, then you can’t reliably know whether other changes you’ve made in areas that you do understand may have invalidated the content. Consider, for example, a spreadsheet like this: Suppose we open that spreadsheet in an implementation that round-trips content it doesn’t understand, and we insert a header row. What happens? The cells all shift down, but the formula gets round-tripped without a change. So when you open the modified spreadsheet back in the original application, you see a different result than it had before: @Andre, you’re right, things get very messy when you have to deal with not only multiple products, but multiple versions of each one. If your application can understand cell addressing (you know, the "atoms" of spreadsheets, duh), then the question can be reduced to: Can we process relativistic moves (like inserting lines, columns, etc), by blindly manipulating the cell addresses, while leaving the formula results untouched? The answer is yes. The issue of round-tripping is not nearly so complicated as you make it sound, when the interpretation (of cells *values* and *formulas*) is factored out. 
Adding a line, shifting rows, means updating the formula (from "=SUM(B1:B3)" to "=SUM(B2:B4)", while literally not touching the displayed result. If the original cell B2, which was shifted to B3, had in fact been the text-formatted "2" instead of the integer ‘2’, there still be no need to *calculate* the formula result – since it is a foreign namespace formula. The underlying engine for formula triggers would need to be smart enough to know that cells are being "tracked", rather than a formula literally being modified. Caveat developer. Name Withheld @dmahugh: technically, the choice to discard formulas and keep only the latest known good result is valid; however, there is no question that formulas etc. get destroyed on save if you open an ODF spreadsheet made with OOo Calc in MS Excel. On the other hand, other spreadsheets seem to largely fall to another solution: formula is kept, but not interpreted, and preceded with its namespace. You’ll tell me, this destroys visual fidelity (by default) because instead of the last known value, you now have a string the spreadsheet can’t compute. Stop me if I’m wrong though, the last known good value is saved in the file, can be read, and is "merely squashed" by the spreadsheet application. Now, and THAT would have probably prevented a LOT of criticism, didn’t you add a small alert box saying: "This file contains formulas in a format that Excel can’t parse." Choice 1 "discard formulas, only keep values" Choice 2 "discard values, display formulas for manual editing" Check box "remember my choice" text link to help page explaining the reasons and solutions And that would have been ALL: have your cake, and eat it too. Frankly, how hard would it have been? > In my next post, I’ll cover our testing strategy and methodology in more detail. What else would you like to know about how Office approaches document format interoperability? Doug: could you shed some light if and how the MS Office-team and the ODF-converter team deal with interop testing between 2007sp2 and the plug-in? Since MS provides technical and architectural guidance for the OpenXML/ODF converter project, I assume both teams somehow share the same methodology on testing two different products, so it could be helpful for other ODF-developers as well… Thanks, Bart @A Nonymous, this sounds potentially fragile to me, because the consumer would not be taking full responsibility for the integrity of the content, but would still be modifying certain aspects of it. @Mitch, your suggestion is essentially a combination of various approaches used in existing implementations. Have you suggested that the current versions of Symphony (1.2) and OpenOffice.org (3.1) should implement this multiple-choice approach as well? In any event, I think the best path forward is for all of us to focus on making Open Formula as interoperable as possible. The wide variety of approaches that people have suggested indicates a need for an agreed-upon standard in this area. @Bart, that’s a good question, I’ll plan to address it in my next post. @Doug: yup, they should. The fact that they don’t is no reason not to one-up them, is it? You could, like, innovate… If they do imitate you, then interoperability wins anyway! Thing is, their approach (keep the formula) may allow a patient user to re-write a new formula based on the old one and get his (proper) results (in essence, to recreate user intent). Agreed, it’s far from being the best solution, but there’s no actual loss: result can be found again without external reference. 
Current Excel system’s would mean that a spreadsheet named LifeTheUniverseAndEverything.odc woud merely contain 42 in Excel – the question would be scrapped. And them Vogons are not patient. Now, another (non-interactive, but maybe more workflow-disruptive) solution would be a new error message: #NAMESPACE:42 (namespace is not know; last known good value is 42), which would be technically correct (error:can’t parse the formula), informative (error caused by unknown namespace), non-destructive (formula and namespace appear in edit mode, like in other spreadsheet programs, and last good value is kept after error message) and forward looking (the day that namespace is supported, formula can be computed again). Or both solutions, I don’t know? It could also be added to ODF 1.2 (how to manage formulas from different namespaces), and it doesn’t prevent OpenFormula’s development. Of course, it would mean some careful thinking: how does one work with a spreadsheet that uses several formula namespaces? Although the only thing taken into account by a formula is the ‘last known good value’, there could be problems such as floating point precision (some may accept double 32-bit float precision, others be limited to single 32-bit precisions, others may use 256-bit precision…), or dates (oxml-f may accept 1902-02-29, odf-of certainly won’t) to work out… I do think those questions should be addressed as soon as possible; destroying a formula ain’t a good solution, converting it from one format to another isn’t either (ceil() and floor() in MS Office for example are mathematically wrong on negative values; a parameter exists in oooc and OF to deal with import, but there’s no way to export floor() or ceil from those namespaces to MS formulas correctly), and merely keeping the last known good value… well, if last known good value is 256-bit precision and is read on a spreadsheet program that has single 32-bit precision, then if Excel’s solution is used, not only will you lose the formula, you’ll also lose the actual value (loss of precision). I dunno, maybe I’m overlooking something, what do you think? @Doug: it’s the second time I try to post this one… So what? Should you limit your software to a level of service equivalent to other suites? How about, innovation? The current solution that Excel uses has one slight problem. Take the file named LifeTheUniverseAndEverything.odt, by author: Deep Thought; it contains a very complex set of oooc formulas. Excel only keeps 42. If you’d rather ‘bypass’ the prompt I indicated in the first comment, then use: #NAMESPACE:42 as returned value. – it keeps the original formula and namespace in formula space (for manual porting) – it keeps the computed value in displayed results – error message is explicit Rationale: since all formulas are disabled anyway, marking all of them as such won’t create a cascade effect (except if there are several formula namespaces used in the spreadsheet – doubtful). Ah – of course, that removes ‘visual fidelity’. But then you wouldn’t lose the equation behind the ’42’ result. And them Vogons ain’t patient. Um, "potentially fragile"? Which part of this is fragile? (I will refrain from commenting on this comment appearing to be an application of "fear, uncertainty, and doubt".) Presuming of course, that the original suggestion, making the formulas and input cells READ-ONLY, means the integrity of THAT content is unimpeachable. 
And I am presuming that the application (excel) is already doing these other things (tracking cell relocation, and cell modification), already. If relocation is handled (it is), that is most of the battle. If it is NOT the case that excel relies exclusively on modifications of either input values or formulas, to trigger formula calculation, then excel is grossly flawed. If it IS the case (that it relies exclusively), then the rest of the battle is won. My questions are: – Whose call at Microsoft is it, to do this or not do this? – If this was considered and discarded, I fail to see the rationale for doing so. Can you elaborate on that decision? – Whose call at Microsoft is it, to revisit the question of using this method to support foreign namespaces in read-only, round-trip proof ways? Name Withheld
https://blogs.msdn.microsoft.com/dmahugh/2009/06/05/standards-based-interoperability/
char mName [51]; is a so-called "C-string". It's an array of 51 characters, the last of which is reserved for the delimiter character '\0'. std::cin.getline extracts characters from the stream (input) and stores them in a C-string, in this case mName. It continues extracting until a delimiting character is reached, or until n characters have been extracted (where n is the second parameter, in this case 50). std::fixed sets the floatfield format flag for the specified stream, which means you're using fixed floating-point notation. std::setprecision sets the decimal precision for outputted floating-point values for the specified stream. std::cin is an istream object that's declared in the std namespace. As for std::cin.ignore and then std::cin.get(): std::cin.ignore() returns a reference to the istream object (it returns *this, itself). The object that is returned (itself) then calls its member function get(). The signature is: istream& ignore (streamsize n = 1, int delim = EOF);
http://www.cplusplus.com/forum/general/121190/
Axis2 Integration meeting - Feb 22, 2007

Teleconference on Axis2 Integration into WTP - Feb 22, 2007

Attendance
- Chris Brealey
- Kathy Chan
- Lahiru Sandakith

Agenda
- Progress on the RFEs:
  - KC: With the latest code submitted to RFE 168765, I can use Axis2 runtime preference page to install Axis2 runtime (needs to run Ant command on <Axis2_install>/webapp directory), I can also add Axis2 facet in a Web project and run through a bottom-up scenario. However, I ran into problems with skeleton and client scenario. See RFE 168765 for latest status.
- Discuss design issues related to Axis2 install location
  - Should use Axis2 binary distribution, not Webapp directory
  - There should not be a need for the user to do an Ant build since we should not be copying axis2.war to the project we are adding Axis2 facet to.
- Discuss design issues related to adding Axis2 facet
  - We all agreed that we should not be replacing WebContent when adding Axis2 facet, should merge Axis2 servlet with existing content.
  - LS: The code is currently not changing Java output directory when installing facet.
  - KC: I noticed that in the latest code attachments, hot deploy (updating of Java classes that implements the Web service) is already working. So this might not be needed.
  - CB: Axis2 facet should not depend on the jst.web facet.
  - KC: Instead, the Axis2 service runtime should require axis2.core and jst.web, Axis2 client runtime should either require axis2.core and jst.web, or axis2.core and jst.utility (similar to Axis1).
  - LS: When Axis2 jars are added to lib/ via the wizard, will the jars appear on the build path?
  - CB: Yes. This is handled by resource listeners deep in the WTP platform.
  - LS: Can the Axis2 facet ever conflict with other facets and leave a project in an inconsistent state?
  - CB: In theory, yes, but it's very unlikely. When a known conflict between facets exists, it is expressed in the facet extensions.
  - LS: Is it OK if some of my plugins depend on Java 5? If so, do I need to write code to handle cases where a user runs Eclipse on a Java 1.4.x JRE?
  - CB: Nope. Other components are starting to prereq Java 5 as well.
  - KC: Where does the code live?
    - Axis2 runtime management code should live in WST. Eg:
      - Axis2 installation location preferences page.
      - Core utility methods for copying jars from Axis2.
      - Core utility methods for simplifying invocation of the Axis2 emitters.
    - Axis2 development tools should live in JST, in the axis2.core plugin. Eg:
      - Axis2 facet.
      - Axis2 emitter preferences page.
      - Axis2 emitter preference objects.
  - KC: Need to prepare stable bottom-up, top-down and client scenarios with user specified Axis2 install location and adding Axis2 facet in time for the March 5 EclipseCon. Let's aim at getting a stable driver by early next week and tutorial by Wednesday next week.
  - LS: OK. Will have first draft of design document soon too. Will attach to wiki.
  - LS: How do we proceed with Eclipse legal's question about the obscure "This is the cute way of making the namespaces columns editable" comment?
  - CB: You've traced the pedigree of the comment and the code around it back to Apache code, however, where that code came from isn't clear. Tell Sharon/Janet what you've discovered so far. If worse comes to worse, we can do a clean-room (Kathy) reimplementation of the suspect code. Remember that we can only staple "EPL" on code that (1) you have invented, not copied, and (2) you have authorized Eclipse to license (which you've done).
- Schedule for other M6 RFEs (168937, 168938, 168939)
  - No time to discuss
- Outstanding Axis2 RFEs and defects
- Next meeting rescheduled from March 8 to Feb 28 (same time), because of EclipseCon.
http://wiki.eclipse.org/Axis2_Integration_meeting_-_Feb_22%2C_2007
To measure the execution time of a program, use either the time.clock() or time.time() function. The Python docs state that time.clock() should be used for benchmarking purposes (note that time.clock() was removed in Python 3.8; time.perf_counter() is its modern replacement).

import time
t0 = time.clock()
print("Hello")
t1 = time.clock()
print("Time elapsed: ", t1 - t0)  # CPU seconds elapsed (floating point)

This will give the output −

Time elapsed: 0.0009403145040156798

You can also use the timeit module to get a proper statistical analysis of a code snippet's execution time. It runs the snippet multiple times and returns the timing of each run; the shortest of these is usually the most representative. You can use it as follows −

def f(x):
    return x * x

import timeit
timeit.repeat("for x in range(100): f(x)", "from __main__ import f", number=100000)

This will give the output −

[2.0640320777893066, 2.0876040458679199, 2.0520210266113281]
https://www.tutorialspoint.com/How-do-I-get-time-of-a-Python-program-s-execution
- NAME - SYNOPSIS - ABSTRACT - INTRODUCTION - DESCRIPTION - TSV FORMAT - INTERFACE TO OTHER SOFTWARES - AUTHOR - SEE ALSO NAME Data::Table - Data type related to database tables, spreadsheets, CSV/TSV files, HTML table displays, etc. SYNOPSIS News: The package now includes "Perl Data::Table Cookbook" (PDF), which may serve as a better learning material. To download the free Cookbook, visit # some cool ways to use Table.pm use Data::Table; $header = ["name", "age"]; $data = [ ["John", 20], ["Kate", 18], ["Mike", 23] ]; $t = new Data::Table($data, $header, 0); # Construct a table object with # $data, $header, $type=0 (consider # $data as the rows of the table). print $t->csv; # Print out the table as a csv file. $t = Data::Table::fromCSV("aaa.csv"); # Read a csv file into a table object ### Since version 1.51, a new method fromFile can automatically guess the correct file format # either CSV or TSV file, file with or without a column header line # e.g. # $t = Data::Table::fromFile("aaa.csv"); # is equivalent. print $t->html; # Display a 'portrait' HTML TABLE on web. use DBI; $dbh= DBI->connect("DBI:mysql:test", "test", "") or die $DBI::errstr; my $minAge = 10; $t = Data::Table::fromSQL($dbh, "select * from mytable where age >= ?", [$minAge]); # Construct a table form an SQL # database query. $t->sort("age", 0, 0); # Sort by col 'age',numerical,ascending print $t->html2; # Print out a 'landscape' HTML Table. $row = $t->delRow(2); # Delete the third row (index=2). $t->addRow($row, 4); # Add the deleted row back as fifth row. @rows = $t->delRows([0..2]); # Delete three rows (row 0 to 2). $col = $t->delCol("age"); # Delete column 'age'. $t->addCol($col, "age",2); # Add column 'age' as the third column @cols = $t->delCols(["name","phone","ssn"]); # Delete 3 columns at the same time. $name = $t->elm(2,"name"); # Element access $t2=$t->subTable([1, 3..4],['age', 'name']); # Extract a sub-table $t->rename("Entry", "New Entry"); # Rename column 'Entry' by 'New Entry' $t->replace("Entry", [1..$t->nofRow()], "New Entry"); # Replace column 'Entry' by an array of # numbers and rename it as 'New Entry' $t->swap("age","ssn"); # Swap the positions of column 'age' # with column 'ssn' in the table. $t->colMap('name', sub {return uc}); # Map a function to a column $t->sort('age',0,0,'name',1,0); # Sort table first by the numerical # column 'age' and then by the # string column 'name' in ascending # order $t2=$t->match_pattern('$_->[0] =~ /^L/ && $_->[3]<0.2'); # Select the rows that matched the # pattern specified $t2=$t->match_pattern_hash('$_{"Amino acid"} =~ /^L-a/ && $_{"Grams \"(a.a.)\""}<0.2')); # use column name in the pattern, method added in 1.62 $t2=$t->match_string('John'); # Select the rows that matches 'John' # in any column $t2=$t->clone(); # Make a copy of the table. 
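    # A couple of extra illustrative calls, using the same $t as above
    $t->setElm(0, "name", "Joe");           # Modify a single element (row 0, column 'name')
    $t->addRow({name=>"Anna", age=>25});    # Append a row given as a hash reference (since 1.60)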
$t->rowMerge($t2); # Merge two tables $t->colMerge($t2); '], 0); sub average { # this is an subroutine calculate mathematical average, ignore NULL my @data = @_; my ($sum, $n) = (0, 0); foreach $x (@data) { next unless $x; $sum += $x; $n++; } return ($n>0)?$sum/$n:undef; } $t2 = $t->group(["Department","Sex"],["Name", "Salary"], [sub {scalar @_}, \&average], ["Nof Employee", "Average Salary"]); # For each (Department,Sex) pair, calculate the number of employees and average salary $t2 = $t2->pivot("Sex", 0, "Average Salary", ["Department"]); # Show average salary information in a Department by Sex spreadsheet ABSTRACT This perl package uses perl5 objects to make it easy for manipulating spreadsheet data among disk files, database, and Web publishing. A table object contains a header and a two-dimensional array of scalars. Four class methods Data::fromFile, Data::Table::fromCSV, Data::Table::fromTSV, and Data::Table::fromSQL allow users to create a table object from a CSV/TSV file or a database SQL selection in a snap. Table methods provide basic access, add, delete row(s) or column(s) operations, as well as more advanced sub-table extraction, table sorting, record matching via keywords or patterns, table merging, and web publishing. Data::Table class also provides a straightforward interface to other popular Perl modules such as DBI and GD::Graph. The most updated version of the Perl Data::Table Cookbook is available at We use Data::Table instead of Table, because Table.pm has already been used inside PerlQt module in CPAN. INTRODUCTION A table object has three data members: 1. $data: a reference to an array of array-references. It's basically a reference to a two-dimensional array. 2. $header: a reference to a string array. The array contains all the column names. 3. $type = 1 or 0. 1 means that @$data is an array of table columns (fields) (column-based); 0 means that @$data is an array of table rows (records) (row-based); Row-based/Column-based are two internal implementations for a table object. E.g., if a spreadsheet consists of two columns lastname and age. In a row-based table, $data = [ ['Smith', 29], ['Dole', 32] ]. In a column-based table, $data = [ ['Smith', 'Dole'], [29, 32] ]. Two implementations have their pros and cons for different operations. Row-based implementation is better for sorting and pattern matching, while column-based one is better for adding/deleting/swapping columns. Users only need to specify the implementation type of the table upon its creation via Data::Table::new, and can forget about it afterwards. Implementation type of a table should be considered volatile, because methods switch table objects from one type into another internally. Be advised that row/column/element references gained via table::rowRef, table::rowRefs, table::colRef, table::colRefs, or table::elmRef may become stale after other method calls afterwards. For those who want to inherit from the Data::Table class, internal method table::rotate is used to switch from one implementation type into another. There is an additional internal assistant data structure called colHash in our current implementation. This hash table stores all column names and their corresponding column index number as key-value pairs for fast conversion. This gives users an option to use column name wherever a column ID is expected, so that user don't have to use table::colIndex all the time. E.g., you may say $t->rename('oldColName', 'newColName') instead of $t->rename($t->colIndex('oldColName'), 'newColIdx'). 
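To make the row-based/column-based distinction concrete, here is a small illustrative sketch based on the Smith/Dole example above; the values shown in the comments are what one would expect:

    use Data::Table;

    # the same spreadsheet, built with either internal layout
    my $rowBased = new Data::Table([ ['Smith', 29], ['Dole', 32] ], ['lastname', 'age'], 0);  # type 0: rows
    my $colBased = new Data::Table([ ['Smith', 'Dole'], [29, 32] ], ['lastname', 'age'], 1);  # type 1: columns

    # accessors behave identically, whatever the current internal type is
    print $rowBased->elm(1, 'age'), "\n";   # 32
    print $colBased->elm(1, 'age'), "\n";   # 32
    print $rowBased->nofRow, "\n";          # 2
    print $colBased->nofCol, "\n";          # 2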
DESCRIPTION Field Summary - data refto_arrayof_refto_array contains a two-dimensional spreadsheet data. - header refto_array contains all column names. - type 0/1 0 is row-based, 1 is column-based, describe the orientation of @$data. Package Variables - $Data::Table::VERSION - - @Data::Table::OK see table::match_string, table::match_pattern, and table::match_pattern_hash Since 1.62, we recommend you to use $table->{OK} instead, which is a local array reference. - @Data::Table::MATCH see table::match_string, table::match_pattern, and table::match_pattern_hash Since 1.67, we return the matched row indices in an array. Data::Table::MATCH is this array reference. Here is an example of setting a max price of 20 to all items with UnitPrice > 20. $t_product->match_pattern_hash('$_{UnitPrice} > 20'); $t_product->setElm($t_product->{MATCH}, 'UnitPrice', 20); - %Data::Table::DEFAULTS Store default settings, currently it contains CSV_DELIMITER (set to ','), CSV_QUALIFER (set to '"'), and OS (set to 0). see table::fromCSV, table::csv, table::fromTSV, table::tsv for details. Class Methods Syntax: return_type method_name ( [ parameter [ = default_value ]] [, parameter [ = default_value ]] ) If method_name starts with table::, this is an instance method, it can be used as $t->method( parameters ), where $t is a table reference. If method_name starts with Data::Table::, this is a class method, it should be called as Data::Table::method, e.g., $t = Data::Table::fromCSV("filename.csv"). Conventions for local variables: colID: either a numerical column index or a column name; rowIdx: numerical row index; rowIDsRef: reference to an array of column IDs; rowIdcsRef: reference to an array of row indices; rowRef, colRef: reference to an array of scalars; data: ref_to_array_of_ref_to_array of data values; header: ref to array of column headers; table: a table object, a blessed reference. Table Creation - table Data::Table::new ( $data = [], $header = [], $type = 0, $enforceCheck = 1) create a new table. It returns a table object upon success, undef otherwise. $data: points to the spreadsheet data. $header: points to an array of column names. Before version 1.69, a column name must have at least one non-digit character. Since verison 1.69, this is relaxed. Although integer and numeric column names can now be accepted, when accessing a column by integer, it is first interpreted as a column name. $type: 0 or 1 for row-based/column-based spreadsheet. $enforceCheck: 1/0 to turn on/off initial checking on the size of each row/column to make sure the data arguement indeed points to a valid structure. In 1.63, we introduce constants Data::Table::ROW_BASED and Data::Table::COL_BASED as synonyms for $type. To create an empty Data::Table, use new Data::Table([], [], Data::Table::ROW_BASED); - table table::subTable ($rowIdcsRef, $colIDsRef, $arg_ref) create a new table, which is a subset of the original. It returns a table object. $rowIdcsRef: points to an array of row indices (or a true/false row mask array). $colIDsRef: points to an array of column IDs. The function make a copy of selected elements from the original table. Undefined $rowIdcsRef or $colIDsRef is interpreted as all rows or all columns. The elements in $colIDsRef may be modified as a side effect before version 1.62, fixed in 1.62. If $arg_ref->{useRowMask} is set to 1, $rowIdcsRef is a true/false row mask array, where rows marked as TRUE will be returned. 
Row mask array is typically the Data::Table::OK set by match_string/match_pattern/match_pattern_hash methods. - table table::clone make a clone of the original. It return a table object, equivalent to table::subTable(undef,undef). - table Data::Table::fromCSV ($name_or_handler, $includeHeader = 1, $header = ["col1", ... ], {OS=>$Data::Table::DEFAULTS{'OS'}, delimiter=>$Data::Table::DEFAULTS{'CSV_DELIMITER'}, qualifier=>$Data::Table::DEFAULTS{'CSV_QUALIFIER'}, skip_lines=>0, skip_pattern=>undef, encoding=>$Data::Table::DEFAULTS{'ENCODING'}}) create a table from a CSV file. return a table object. $name_or_handler: the CSV file name or an already opened file handler. If a handler is used, it's not closed upon return. To read from STDIN, use Data::Table::fromCSV(\ CSV file was generated. 0 for UNIX, 1 for PC and 2 for MAC. If not specified, $Data::Table::DEFAULTS{'OS'} is used, which defaults to UNIX. Basically linebreak is defined as "\n", "\r\n" and "\r" for three systems, respectively. optional name argument delimiter and qualifier let user replace comma and double-quote by other meaningful single characters. encoding let you specify an encoding method of the csv file. This option is added to fromCSV, fromTSV, fromFile since version 1.69. The following example reads a DOS format CSV file and writes a MAC format: $t = Data::Table:fromCSV('A_DOS_CSV_FILE.csv', 1, undef, {OS=>1}); $t->csv(1, {OS=>2, file=>'A_MAC_CSV_FILE.csv'}); open(SRC, 'A_DOS_CSV_FILE.csv') or die "Cannot open A_DOS_CSV_FILE.csv to read!"; $t = Data::Table::fromCSV(\*SRC, 1); close(SRC); The following example reads a non-standard CSV file with : as the delimiter, ' as the qaulifier my $s="col_A:col_B:col_C\n1:2, 3 or 5:3.5\none:'one:two':'double\", single'''"; open my $fh, "<", \$s or die "Cannot open in-memory file\n"; my $t_fh=Data::Table::fromCSV($fh, 1, undef, {delimiter=>':', qualifier=>"'"}); close($fh); print $t_fh->csv; # convert to the standard CSV (comma as the delimiter, double quote as the qualifier) # col_A,col_B,col_C # 1,"2, 3 or 5",3.5 # one,one:two,"double"", single'" print $t->csv(1, {delimiter=>':', qualifier=>"'"}); # prints the csv file use the original definition The following example reads bbb.csv file (included in the package) by skipping the first line (skip_lines=>1), then treats any line that starts with '#' (or space comma) as comments (skip_pattern=>'^\s*#'), use ':' as the delimiter. $t = Data::Table::fromCSV("bbb.csv", 1, undef, {skip_lines=>1, delimiter=>':', skip_pattern=>'^\s*#'}); Use the optional name argument encoding to specify file encoding method. $t = Data::Table::fromCSV("bbb.csv", 1, undef, {encoding=>'UTF-8'}); - table table::fromCSVi ($name, $includeHeader = 1, $header = ["col1", ... ]) Same as Data::Table::fromCSV. However, this is an instant method (that's what 'i' stands for), which can be inherited. - table Data::Table::fromTSV ($name, $includeHeader = 1, $header = ["col1", ... ], {OS=>$Data::Table::DEFAULTS{'OS'}, skip_lines=>0, skip_pattern=>undef, transform_element=>1, encoding=>$Data::Table::DEFAULTS{'ENCODING'}}) create a table from a TSV file. return a table object. $name: the TSV file name or an already opened file handler. If a handler is used, it's not closed upon return. To read from STDIN, use Data::Table::fromTSV(\ TSV file was generated. 0 for UNIX, 1 for P C and 2 for MAC. If not specified, $Data::Table::DEFAULTS{'OS'} is used, which defaults to UNIX. Basically linebreak is defined as "\n", "\r\n" and "\r" for three systems, respectively. 
transform_element let you switch on/off \t to tab, \N to undef (etc.) transformation. See TSV FORMAT for details. However, elements are always transformed when export table to tsv format, because not escaping an element containing a tab will be disasterous. optional name arugment encoding enables one to provide an encoding method when open the tsv file. See similar examples under Data::Table::fromCSV; Note: read "TSV FORMAT" section for details. - table table::fromTSVi ($name, $includeHeader = 1, $header = ["col1", ... ]) Same as Data::Table::fromTSV. However, this is an instant method (that's what 'i' stands for), which can be inherited. - table Data::Table::fromFile ($file_name, $arg_ref = {linesChecked=>2, allowNumericHeader=>0, encoding=>$Data::Table::DEFAULTS{'ENCODING'}}) create a table from a text file. return a table object. $file_name: the file name (cannot take a file handler). linesChecked: the first number of lines used for guessing the input format. The delimiter will have to produce the same number of columns for these lines. By default only check the first 2 lines, 0 means all lines in the file. $arg_ref can take additional parameters, such as OS, has_header, delimiter, transform_element, etc. Encoding allows one to specify encoding methods used to open the file, which defaults to UTF-8. fromFile is added after version 1.51. It relies on the following new methods to automatically figure out the correct file format in order to call fromCSV or fromTSV internally: fromFileGuessOS($file_name, {encoding=>'UTF-8'}) returns integer, 0 for UNIX, 1 for PC, 2 for MAC fromFileGetTopLines($file_name, $os, $lineNumber, {encoding=>'UTF-8'}) # $os defaults to fromFileGuessOS($file_name), if not specified returns an array of strings, each string represents each row with linebreak removed. fromFileGuessDelimiter($lineArrayRef) # guess delimiter from ",", "\t", ":"; returns the guessed delimiter string. fromFileIsHeader($line_concent, $delimiter, $allowNumericHeader) # $delimiter defaults to $Data::Table::DEFAULTS{'CSV_DELIMITER'} returns 1 or 0. It first ask fromFileGuessOS to figure out which OS (UNIX, PC or MAC) generated the input file. The fetch the first linesChecked lines using fromFileGetTopLines. It then guesses the best delimiter using fromFileGuessDelimiter, then it checks if the first line looks like a column header row using fromFileIsHeader. Since fromFileGuessOS and fromFileGetTopLines needs to open/close the input file, these methods can only take file name, not file handler. If user specify formatting parameters in $arg_ref, the routine will skip the corresponding guess work. At the end, fromFile simply calls either fromCSV or fromTSV with $arg_ref forwarded. So if you call fromFile({transform_element=>0}) on a TSV file, transform_elment will be passed onto fromTSV calls internally. fromFileGuessOS finds the linebreak that gives shortest first line (in the priority of UNIX, PC, MAC upon tie). fromFileGuessDelimiter works based on the assumption that the correct delimiter will produce equal number of columns for the given rows. If multiple matches, it chooses the delimiter that gives maximum number of columns. If none matches, it returns the default delimiter. fromFileIsHeader works based on the assumption that no column header can be empty or numeric values. However, if we allow numeric column names (especially integer column names), set {allowNumericHeader => 1} - table Data::Table::fromSQL ($dbh, $sql, $vars) create a table from the result of an SQL selection query. 
It returns a table object upon success or undef otherwise. $dbh: a valid database handler. Typically $dbh is obtained from DBI->connect, see "Interface to Database" or DBI.pm. $sql: an SQL query string or a DBI::st object (starting in version 1.61). $vars: optional reference to an array of variable values, required if $sql contains '?'s which need to be replaced by the corresponding variable values upon execution, see DBI.pm for details. Hint: in MySQL, Data::Table::fromSQL($dbh, 'show tables from test') will also create a valid table object. Data::Table::fromSQL now can take DBI::st instead of a SQL string. This is introduced, so that variable binding (such as CLOB/BLOB) can be done outside the method, for example: $sql = 'insert into test_table (id, blob_data) values (1, :val)'; $sth = $dbh->prepare($sql); $sth->bind_param(':val', $blob, {ora_type => SQLT_BIN}); Data::Table::fromSQL($dbh, $sth); - table Data::Table::fromSQLi ($dbh, $sql, $vars) Same as Data::Table::fromSQL. However, this is an instant method (that's what 'i' stands for), whic h can be inherited. Table Access and Properties - int table::colIndex ($colID) translate a column name into its numerical position, the first column has index 0 as in as any perl array. return -1 for invalid column names. Since 1.69, we allow integer to be used as a column header. The integer $colID will first be checked against column names, if matched, the corresponding column index is returned. E.g., if column name for the 3rd column is "1", colIndex(1) will return 2 instead of 1! In such case, if one need to access the second column, one has to access it by column name, i.e., $t->col(($t->header)[1]). - int table::nofCol return number of columns. - int table::nofRow return number of rows. - int table::lastCol return the index of the last columns, i.e., nofCol - 1. - int table::lastRow return the index of the last rows, i.e., nofRow - 1; This is syntax sugar. # these two are equivalent foreach my $i (0 .. $t->lastRow) foreach my $i (0 .. $t->nofRow - 1) - bool table::isEmpty return whether the table has any column, introduced in 1.63. - bool table::hasCol($colID) returns whether the colID is a table column, introduced in 1.63. - bool table::colName($colNumericIndex) returns the column name for a numeric column index, notice the first column has an index of 0. Introduced in 1.68. - scalar table::elm ($rowIdx, $colID) return the value of a table element at [$rowIdx, $colID], undef if $rowIdx or $colID is invalid. - refto_scalar table::elmRef ($rowIdx, $colID) return the reference to a table element at [$rowIdx, $colID], to allow possible modification. It returns undef for invalid $rowIdx or $colID. - array table::header ($header) Without argument, it returns an array of column names. Otherwise, use the new header. - int table::type return the implementation type of the table (row-based/column-based) at the time, be aware that the type of a table should be considered as volatile during method calls. Table Formatting - string table::csv ($header, {OS=>$Data::Table::DEFAULTS{'OS'}, file=>undef, delimiter=>$Data::Table::DEFAULTS{'CSV_DELIMITER'}, qualifier=>$Data::Table::DEFAULTS{'CSV_QAULIFIER'}}) return a string corresponding to the CSV representation of the table. $header controls whether to print the header line, 1 for yes, 0 for no. optional named argument OS specifies for which operating system the CSV file is generated. 0 for UNIX, 1 for P C and 2 for MAC. If not specified, $Data::Table::DEFAULTS{'OS'} is used. 
Basically linebreak is defined as "\n", "\r\n" and "\r" for three systems, respectively. if 'file' is given, the csv content will be written into it, besides returning the string. One may specify custom delimiter and qualifier if the other than default are desired. - string table::tsv return a string corresponding to the TSV representation of the table. $header controls whether to print the header line, 1 for yes, 0 for no. optional named argument OS specifies for which operating system the TSV file is generated. 0 for UNIX, 1 for P C and 2 for MAC. If not specified, $Data::Table::DEFAULTS{'OS'} is used. Basically linebreak is defined as "\n", "\r\n" and "\r" for three systems, respectively. if 'file' is given, the tsv content will be written into it, besides returning the string. Note: read "TSV FORMAT" section for details. - string table::html ($colorArrayRef_or_colorHashRef = ["#D4D4BF","#ECECE4","#CCCC99"], $tag_tbl = {border => '1'}, $tag_tr = {align => 'left'}, $tag_th = {align => 'center'}, $tag_td = {col3 => 'align="right" valign="bottom"', 4 => 'align="left"'}, $l_portrait = 1 ) return a string corresponding to a 'Portrait/Landscape'-style html-tagged table. $colorArrayRef_or_colorHashRef: If a hash reference is provided, it will take three CSS class names for odd data rows, even data rows and for the header row. The default hash is {even=>"data_table_even", odd=>"data_table_odd", header=>"data_table_header"). If a hash reference is not found, a reference to an array of three color strings is expected to provided for backgrounds for even-row records, odd-row records, and -der row, respectively. A default color array ("#D4D4BF","#ECECE4","#CCCC99") will be used if $colors isn't defined. Before version 1.59, the parameter can only accept an array reference. $tag_tbl: a reference to a hash that specifies any legal attributes such as name, border, id, class, etc. for the TABLE tag. $tag_tr: a reference to a hash that specifies any legal attributes for the TR tag. $tag_th: a reference to a hash that specifies any legal attributes for the TH tag. $tag_td: a reference to a hash that specifies any legal attributes for the TD tag. Notice $tag_tr and $tag_th controls all the rows and columns of the whole table. The keys of the hash are the attribute names in these cases. However, $tag_td is column specific, i.e., you should specify TD attributes for every column separately. The key of %$tag_td are either column names or column indices, the value is the full string to be inserted into the TD tag. E.g., $tag_td = {col3 => 'align=right valign=bottom} only change the TD tag in "col3" to be <TD align=right valign=bottom>. $portrait controls the layout of the table. The default is 1, i.e., the table is shown in the "Portrait" style, like in Excel. 0 means "Landscape". Since version 1.59, tbody and thead tags are added to the portrait mode output. Attention: You will have to escape HTML-Entities yourself (for example '<' as '<'), if you have characters in you table which need to be escaped. You can do this for example with the escapeHTML-function from CGI.pm (or the HTML::Entities module). use CGI qw(escapeHTML); [...] $t->colMap($columnname, sub{escapeHTML($_)}); # for every column, where HTML-Entities occur. - string table::html2 ($colors = ["#D4D4BF","#ECECE4","#CCCC99"], $specs = {'name' => '', 'border' => '1', ...}) This method is deprecated. It's here for compatibility. It now simple call html method with $portrait = 0, see previous description. 
return a string corresponding to a "Landscape" html-tagged table. This is useful to present a table with many columns, but very few entries. Check the above table::html for parameter descriptions. Table Operations - int table::setElm ($rowIdx, $colID, $val) modify the value of a table element at [$rowIdx, $colID] to a new value $val. It returns 1 upon success, undef otherwise. In 1.68, setElm can manipulate multiple elements, i.e., $rowIdx and $colIdx can be references to an index array, and setElm() will modifies all cells defined by the grid. $t->setElm([0..2], ['ColA', 'ColB'], 'new value'); $t->setElm(0, [1..2], 'new value'); # puts a limit on the price of all expensive items $t_product->match_pattern_hash('$_{UnitPrice} > 20'); $t_product->setElm($t_product->{MATCH}, 'UnitPrice', 20); - int table::addRow ($rowRef, $rowIdx = table::nofRow, $arg_ref = {addNewCol => 0}) add a new row ($rowRef may point to the actual list of scalars, or it can be a hash_ref (supported since version 1.60)). If $rowRef points to a hash, the method will lookup the value of a field by ts column name: $rowRef->{colName}, if not found, undef is used for that field. The new row will be referred as $rowIdx as the result. E.g., addRow($aRow, 0) will put the new row as the very first row. By default, it appends a row to the end. In 1.67, we support {addNewCol => 1}, if specified, a new column will be automatically created for each new element encountered in the $rowRef. # automatically add a new column "aNewColumn" to $t, in order to hold the new value $t->addRow({anExistingColumn => 123, aNewColumn => "XYZ"}, undef, {addNewCol => 1}); # $t only had one column, after this call, it will contain a new column 'col2', in order to hold the new value $t->addRow([123, "XYZ"], undef, {addNewCol => 1}); It returns 1 upon success, undef otherwise. - refto_array table::delRow ( $rowIdx ) delete a row at $rowIdx. It will the reference to the deleted row. - refto_array table::delRows ( $rowIdcsRef ) delete rows in @$rowIdcsRef. It will return an array of deleted rows in the same order of $rowIdcsRef upon success. upon success. - int table::addCol ($colRef, $colName, $colIdx = numCol) add a new column ($colRef points to the actual data), the new column will be referred as $colName or $colIdx as the result. E.g., addCol($aCol, 'newCol', 0) will put the new column as the very first column. By default, append a column to the end. It will return 1 upon success or undef otherwise. In 1.68, $colRef can be a scalar, which is the default value that can be used to create the new column. E.g., to create a new column with default value of undef, 0, 'default', respectively, one can do: $t->addCol(undef, 'NewCol'); $t->addCol(0, 'NewIntCol'); $t->addCol('default', 'NewStringCol'); - refto_array table::delCol ($colID) delete a column at $colID return the reference to the deleted column. - arrayof_refto_array table::delCols ($colIDsRef) delete a list of columns, pointed by $colIDsRef. It will return an array of deleted columns in the same order of $colIDsRef upon success. - refto_array table::rowRef ($rowIdx) return a reference to the row at $rowIdx upon success or undef otherwise. - refto_arrayof_refto_array table::rowRefs ($rowIdcsRef) return a reference to array of row references upon success, undef otherwise. - array table::row ($rowIdx) return a copy of the row at $rowIdx upon success or undef otherwise. 
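As a quick illustration of how the editing methods above combine (a sketch only; the data values are made up):

    use Data::Table;

    my $t = new Data::Table([ ['John', 20], ['Kate', 18], ['Mike', 23] ], ['name', 'age'], 0);

    $t->setElm(0, 'age', 21);               # John is now 21
    $t->addRow(['Lisa', 30], 1);            # insert a row so that it becomes row index 1
    $t->addCol(undef, 'email');             # add an 'email' column, filled with undef (default-value form, since 1.68)
    my $gone = $t->delRow($t->lastRow);     # remove the last row; $gone is a reference to its values
    print $t->csv(1);                       # print what is left, header line included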
- refto_hash table::rowHashRef ($rowIdx) return a reference to a hash, which contains a copy of the row at $rowIdx, upon success or undef otherwise. The keys in the hash are column names, and the values are corresponding elements in that row. The hash is a copy, therefore modifying the hash values doesn't change the original table. - refto_array table::colRef ($colID) return a reference to the column at $colID upon success. - refto_arrayof_refto_array table::colRefs ($colIDsRef) return a reference to array of column references upon success. - array table::col ($colID) return a copy to the column at $colID upon success or undef otherwise. - int table::rename ($colID, $newName) rename the column at $colID to a $newName (the newName must be valid, and should not be identical to any other existing column names). It returns 1 upon success or undef otherwise. - refto_array table::replace ($oldColID, $newColRef, $newName) replace the column at $oldColID by the array pointed by $newColRef, and renamed it to $newName. $newName is optional if you don't want to rename the column. It returns 1 upon success or undef otherwise. - int table::swap ($colID1, $colID2) swap two columns referred by $colID1 and $colID2. It returns 1 upon success or undef otherwise. - int table::moveCol($colID, $colIdx, $newColName) move column referred by $colID to a new location $colIdx. If $newColName is specified, the column will be renamed as well. It returns 1 upon success or undef otherwise. - int table::reorder($colIDRefs, $arg_ref) Rearrange the columns according to the order specified in $colIDRef. Columns not specified in the reference array will be appended to the end! If one would like to drop columns not specified, set $arg_ref to {keepRest => 0}. reorder() changes the table itself, while subTable(undef, $colIDRefs) will return a new table. reorder() might also runs faster than subTable, as elements may not need to be copied. - int table::colMap ($colID, $fun) foreach element in column $colID, map a function $fun to it. It returns 1 upon success or undef otherwise. This is a handy way to format a column. E.g. if a column named URL contains URL strings, colMap("URL", sub {"<a href='$_'>$_</a>"}) before html() will change each URL into a clickable hyper link while displayed in a web browser. - int table::colsMap ($fun) foreach row in the table, map a function $fun to it. It can do whatever colMap can do and more. It returns 1 upon success or undef otherwise. colMap function only give $fun access to the particular element per row, while colsMap give $fun full access to all elements per row. E.g. if two columns named duration and unit (["2", "hrs"], ["30", "sec"]). colsMap(sub {$_->[0] .= " (".$_->[1].")"; } will change each row into (["2 hrs", "hrs"], ["30 sec", "sec"]). As show, in the $func, a column element should be referred as $_->[$colIndex]. - int table::sort($colID1, $type1, $order1, $colID2, $type2, $order2, ... ) sort a table in place. First sort by column $colID1 in $order1 as $type1, then sort by $colID2 in $order2 as $type2, ... $type is 0 for numerical and 1 for others; $order is 0 for ascending and 1 for descending; In 1.62, instead of memorize these numbers, you can use constants instead (notice constants do not start with '$'). Data::Table::NUMBER Data::Table::STRING Data::Table::ASC Data::Table::DESC Sorting is done in the priority of colID1, colID2, ... It returns 1 upon success or undef otherwise. Notice the table is rearranged as a result! 
This is different from perl's list sort, which returns a sorted copy while leave the original list untouched, the authors feel inplace sorting is more natural. table::sort can take a user supplied operator, this is useful when neither numerical nor alphabetic order is correct. $Well=["A_1", "A_2", "A_11", "A_12", "B_1", "B_2", "B_11", "B_12"]; $t = new Data::Table([$Well], ["PlateWell"], 1); $t->sort("PlateWell", 1, 0); print join(" ", $t->col("PlateWell")); # prints: A_1 A_11 A_12 A_2 B_1 B_11 B_12 B_2 # in string sorting, "A_11" and "A_12" appears before "A_2"; my $my_sort_func = sub { my @a = split /_/, $_[0]; my @b = split /_/, $_[1]; my $res = ($a[0] cmp $b[0]) || (int($a[1]) <=> int($b[1])); }; $t->sort("PlateWell", $my_sort_func, 0); print join(" ", $t->col("PlateWell")); # prints the correct order: A_1 A_2 A_11 A_12 B_1 B_2 B_11 B_12 - table table::match_pattern ( (should use $t->{OK} after 1.62) stores $_->[$colIndex]. E.g., match_pattern('$_->[0]>3 && $_->[1]=~/^L') retrieve all the rows where its first column is greater than 3 and second column starts with letter 'L'. Notice it only takes colIndex, column names are not acceptable here! - table table::match_pattern_hash ( stores a reference to ${column_name}. match_pattern_hash() is added in 1.62. The difference between this method and match_pattern is each row is fed to the pattern as a hash %_. In the case of match_pattern, each row is fed as an array ref $_. The pattern for match_pattern_hash() becomes much cleaner. If a table has two columns: Col_A as the 1st column and Col_B as the 2nd column, a filter "Col_A>2 AND Col_2" is written before as $t-match_pattern('$_->[0] > 2 && $_->[1] <2'); where we need to figure out $t->colIndex('Col_A') is 0 and $t->colIndex('Col_B') is 1, in order to build the pattern. Now you can use column name directly in the pattern: $t->match_pattern_hash('$_{Col_A} >2 && $_{Col_B} <2'); This method creates $t->{OK}, as well as @Data::Table::OK, same as match_pattern(). - table table::match_string ($s, $caseIgnore, $countOnly) return a new table consisting those rows contains string $s in any of its fields upon success, undef otherwise. if $caseIgnore evaluated to true, case will is be ignored (s/$s/i). If $countOnly is set to 1, it simply returns the number of rows that matches the string without making a new copy of table. $countOnly is 0 by default. Side effect: @Data::Table::OK stores a reference to a true/false array for the original table rows. Side effect: @Data::Table::MATCH stores a reference to an array containing all row indices for matched rows. Using it, users can find out what are the rows being selected/unselected. The $s string is actually treated as a regular expression and applied to each row element, therefore one can actually specify several keywords by saying, for instance, match_string('One|Other'). - table table::rowMask($mask, $complement). - table table::iterator({$reverse => 0}) Returns a reference to a enumerator routine, which enables one to loop through each table row. If $reverse is set to 1, it will enumerate backward. The convenience here is each row is fetch as a rowHashRef, so one can easily access row elements by name. my $next = $t_product->iterator(); while (my $row = $next->()) { # have access to a row as a hash reference, access row number by &$next(1); $t_product->setElm($next->(1), 'ProductName', 'New! '.$row->{ProductName}); } In this example, each $row is fetched as a hash reference, so one can access the elements by $row->{colName}. 
Be aware that the elements in the hash is a copy of the original table elements, so modifying $row->{colName} does not modify the original table. If table modification is intended, one needs to obtain the row index of the returned row. $next->(1) call with a non-empty argument returns the row index of the record that was previously fetched with $next->(). In this example, one uses the row index to modify the original table. - table table::each_group($colsToGroupBy, $funsToApply) Primary key columns are specified in $colsToGroupBy. All rows are grouped by primary keys first (keys sorted as string). Then for each group, subroutines $funToAppy is applied to corresponding rows. $funToApply are passed with two parameters ($tableRef, $rowIDsRef). All rows sharing the key are passed in as a Data::Table object (with all columns and in the order of ascending row index) in the first parameter. The second optional parameter contains an array of row indices of the group members. Since all rows in the passed-in table contains the same keys, the key value can be obtained from its first table row. - table table::group($colsToGroupBy, $colsToCalculate, $funsToApply, $newColNames, $keepRestCols) Primary key columns are specified in $colsToGroupBy. All rows are grouped by primary keys first. Then for each group, an array of subroutines (in $funsToAppy) are applied to corresponding columns and yield a list of new columns (specified in $newColNames). $colsToGroupBy, $colsToCalculate are references to array of colIDs. $funsToApply is a reference to array of subroutine references. $newColNames are a reference to array of new column name strings. If specified, the size of arrays pointed by $colsToCalculate, $funsToApply and $newColNames should be i dentical. A column may be used more than once in $colsToCalculate. $keepRestCols is default to 1 (was introduced as 0 in 1.64, changed to 1 in 1.66 for backward compatibility) introduced in 1.64), otherwise, the remaining columns are returned with the first encountered value of that group. E.g., an employee salary table $t contains the following columns: Name, Sex, Department, Salary. (see examples in the SYNOPSIS) $t2 = $t->group(["Department","Sex"],["Name", "Salary"], [sub {scalar @_}, \&average], ["Nof Employee", "Average Salary"], 0); Department, Sex are used together as the primary key columns, a new column "Nof Employee" is created by counting the number of employee names in each group, a new column "Average Salary" is created by averaging the Salary data falled into each group. As the result, we have the head count and average salary information for each (Department, Sex) pair. With your own functions (such as sum, product, average, standard deviation, etc), group method is very handy for accounting purpose. If primary key columns are not defined, all records will be treated as one group. $t2 = $t->group(undef,["Name", "Salary"], [sub {scalar @_}, \&average], ["Nof Employee", "Average Salary"], 0); The above statement will output the total number of employees and their average salary as one line. - table table::pivot($colToSplit, $colToSplitIsStringOrNumeric, $colToFill, $colsToGroupBy, $keepRestCols) Every unique values in a column (specified by $colToSplit) become a new column. undef value become "NULL". $colToSplitIsStringOrNumeric is set to numeric (0 or Data::Table:NUMBER), the new column names are prefixed by "oldColumnName=". The new cell element is filled by the value specified by $colToFill (was 1/0 before version 1.63). 
Note: yes, it seems I made an incompatible change in version 1.64, where $colToSplitIsStringOrNumber used to be $colToSplitIsNumeric, where 0 meant STRING and 1 meant NUMBER. Now it is opposite. However, I also added auto-type detection code, that this parameter essentially is auto-guessed and most old code should behave the same as before. When primary key columns are specified by $colsToGroupBy, all records sharing the same primary key collapse into one row, with values in $colToFill filling the corresponding new columns. If $colToFill is not specified, a cell is filled with the number of records fall into that cell. $colToSplit and $colToFill are colIDs. $colToSplitIsNumeric is 1/0. $colsToGroupBy is a reference to array of colIDs. $keepRestCols is 1/0, by default is 0. If $keepRestCols is off, only primary key columns and new columns are exported, otherwise, all the rest columns are exported as well. E.g., applying pivot method to the resultant table of the example of the group method. $t2->pivot("Sex", 0, "Average Salary",["Department"]); This creates a 2x3 table, where Departments are use as row keys, Sex (female and male) become two new columns. "Average Salary" values are used to fill the new table elements. Used together with group method, pivot method is very handy for accounting type of analysis. If $colsToGroupBy is left as undef, all rows are treated as one group. If $colToSplit is left as undef, the method will generate a column named "(all)" that matches all records share the corresponding primary key. - table table::melt($keyCols, $variableCols, $arg_ref) The idea of melt() and cast() are taken from Hadley Wickham's Reshape package in R language. A table is first melt() into a tall-skiny format, where measurements are stored in the format of a variable-value pair per row. Such a format can then be easily cast() into various contingency tables. One needs to specify the columns consisting of primary keys, columns that are consider as variable columns. The output variable column is named 'variable' unless specified by $arg_ref{variableColName}. The output value column is named 'value', unless specified in $arg_ref{valueColName}. By default NULL values are not output, unless $arg_ref{skip_NULL} is set to false. By default empty string values are kept, unless one sets skip_empty to `. For each object (id), we measure variable x1 and x2 at two time points $t = new Data::Table([[1,1,5,6], [1,2,3,5], [2,1,6,1], [2,2,2,4]], ['id','time','x1','x2'], Data::Table::ROW_BASED); # id time x1 x2 # 1 1 5 6 # 1 2 3 5 # 2 1 6 1 # 2 2 2 4 # melting a table into a tall-and-skinny table $t2 = $t->melt(['id','time']); #id time variable value # 1 1 x1 5 # 1 1 x2 6 # 1 2 x1 3 # 1 2 x2 5 # 2 1 x1 6 # 2 1 x2 1 # 2 2 x1 2 # 2 2 x2 4 # casting the table, &average is a method to calculate mean # for each object (id), we calculate average value of x1 and x2 over time $t3 = $t2->cast(['id'],'variable',Data::Table::STRING,'value', \&average); # id x1 x2 # 1 4 5.5 # 2 4 2.5 - table table::cast($colsToGroupBy, $colToSplit, $colToSplitIsStringOrNumeric, $colToCalculate, $funToApply) see melt(), as melt() and cast() are meant to use together. The table has been melten before. cast() group the table according to primary keys specified in $colsToGroupBy. For each group of objects sharing the same id, it further groups values (specified by $colToCalculate) according to unique variable names (specified by $colToSplit). Then it applies subroutine $funToApply to obtain an aggregate value. 
For the output, each unique primary key will be a row, each unique variable name will become a column, and the cells are the calculated aggregated values. If $colsToGroupBy is undef, all rows are treated as within the same group. If $colToSplit is undef, a new column "(all)" is used to hold the results. (The examples below use an employee table $t with columns Name, Sex, Department and Salary, as in the group() example above; its construction is not shown.) # get a Department x Sex contingency table, get average salary across all four groups print $t->cast(['Department'], 'Sex', Data::Table::STRING, 'Salary', \&average)->csv(1); Department,female,male IT,55000,73600 HR,86000,85000 # get average salary for each department print $t->cast(['Department'], undef, Data::Table::STRING, 'Salary', \&average)->csv(1); Department,(all) IT,70500 HR,85666.6666666667 # get average salary for each gender print $t->cast(['Sex'], undef, Data::Table::STRING, 'Salary', \&average)->csv(1); Sex,(all) male,75500 female,75666.6666666667 # get average salary for all records print $t->cast(undef, undef, Data::Table::STRING, 'Salary', \&average)->csv(1); (all) 75555.5555555556 Table-Table Manipulations - int table::rowMerge ($tbl, $argRef) Append all the rows in the table object $tbl to the original rows. Before 1.62, the merging table $tbl must have the same number of columns as the original, and the columns must be in exactly the same order. It returns 1 upon success, undef otherwise. The table object $tbl should not be used afterwards, since it becomes part of the new table. Since 1.62, you may provide {byName =>1, addNewCol=>1} as $argRef. If byName is set to 1, the columns in $tbl do not need to be in the same order as they are in the first table; instead the column name is used for the matching. If addNewCol is set to 1 and $tbl contains a new column name that does not already exist in the first table, this new column will be automatically added to the resultant table. Typically, you want to specify these two options simultaneously. - int table::colMerge ($tbl, $argRef) Append all the columns in table object $tbl to the original columns. Table $tbl must have the same number of rows as the original. It returns 1 upon success, undef otherwise. Table $tbl should not be used afterwards, since it becomes part of the new table. Since 1.62, you can specify {renameCol => 1} as $argRef. This is to auto-fix any column name collision. If $tbl contains a column that already exists in the first table, it will be renamed (by a suffix _2) to avoid the collision. - table table::join ($tbl, $type, $cols1, $cols2, $argRef) Join two tables. The following join types are supported (defined by $type): 0: inner join 1: left outer join 2: right outer join 3: full outer join In 1.62, instead of memorizing these numbers, you can use constants instead (notice constants do not start with '$'). Data::Table::INNER_JOIN Data::Table::LEFT_JOIN Data::Table::RIGHT_JOIN Data::Table::FULL_JOIN $cols1 and $cols2 are references to arrays of colIDs, where rows with the same elements in all listed columns are merged. In the result table, columns listed in $cols2 are deleted before a new table is returned. The implementation is a hash-join; the running time should be linear with respect to the sum of the number of rows in the two tables (assuming both tables fit in memory). If the non-key columns of the two tables share the same name, the routine will fail, as the result table cannot contain two columns of the same name. In 1.62, one can specify {renameCol=>1} as $argRef, so that the second column will be automatically renamed (with suffix _2) to avoid collision.
If you would like to treat the NULLs in the key columns as empty strings, set {NULLasEmpty => 1}. If you do not want to treat NULLs as empty strings, but you would still like the NULLs in the two tables to be considered as equal (but not equal to ''), set {matchNULL => 1}. Obviously if NULLasEmpty is set to 1, matchNULL will have no effect. Internal Methods All internal methods are mainly implemented for use by other methods in the Table class. Users should avoid using them. Nevertheless, they are listed here for developers who would like to understand the code and may derive a new class from Data::Table. - int table::rotate Convert the internal structure of a table between row-based and column-based. Returns 1 upon success, undef otherwise. - string csvEscape($string, {delimiter=>, qualifier}) Encode a scalar into a CSV-formatted field. Optional named arguments: delimiter and qualifier, in case the user wants to use characters other than the defaults. The default delimiter and qualifier are taken from $Data::Table::DEFAULTS{'CSV_DELIMITER'} (defaults to ',') and $Data::Table::DEFAULTS{'CSV_QUALIFIER'} (defaults to '"'), respectively. Please note that this function only escapes one element in a table. To escape a whole table row, you need join($delimiter, map {csvEscape($_)} @row) . $endl; $endl refers to End-of-Line, which you may or may not want to add, and it is OS-dependent. Therefore, the csvEscape method is kept in the simplest form as an element transformer. - refto_array parseCSV($string) Break a CSV-encoded string into an array of scalars (check it out, we did it the cool way). Optional argument size: specify the expected number of fields after csv-split. Optional named arguments: delimiter and qualifier, in case the user wants to use characters other than the defaults. The default delimiter and qualifier are taken from $Data::Table::DEFAULTS{'CSV_DELIMITER'} (defaults to ',') and $Data::Table::DEFAULTS{'CSV_QUALIFIER'} (defaults to '"'), respectively. - string tsvEscape($rowRef) Encode a scalar into a TSV-formatted string. TSV FORMAT There is no standard for the TSV format as far as we know. The CSV format can't handle binary data very well, therefore we choose the TSV format to overcome this limitation. We define TSV based on the MySQL convention. "\0", "\n", "\t", "\r", "\b", "'", "\"", and "\\" are all escaped by '\' in the TSV file. (Warning: MySQL treats '\f' as 'f', and it's not escaped here.) Undefined values are represented as '\N'. However, you can switch off this transformation by setting {transform_element => 0} in the fromTSV or tsv method. Before, if a cell read 'A line break is \n', it was read in as 'A line break is [return]' in memory. When using the tsv method to export, it is transformed back to 'A line break is \n'. However, if it is exported as a csv, the [return] will break the format. Now if transform_element is set to 0, the cell is stored as 'A line break is \n' in memory, so that csv export will be correct. However, do remember to set {transform_element => 0} in the tsv export method as well, otherwise the cell will become 'A line break is \\n'. Be aware that transform_element controls column headers as well. INTERFACE TO OTHER SOFTWARES Spreadsheet is a very generic type, therefore the Data::Table class provides an easy interface between databases, web pages, CSV/TSV files, graphics packages, etc. Here is a summary (partially repeating the above) of some classic usages of Data::Table.
Interface to Database and Web use DBI; $dbh= DBI->connect("DBI:mysql:test", "test", "") or die $DBI::errstr; my $minAge = 10; $t = Data::Table::fromSQL($dbh, "select * from mytable where age >= ?", [$minAge]); print $t->html; Interface to CSV/TSV $t = fromFile("mydata.csv"); # after version 1.51 $t = fromFile("mydata.tsv"); # after version 1.51 $t = fromCSV("mydata.csv"); $t->sort(1,1,0); print $t->csv; Same for TSV Interface to Excel XLS/XLSX Read in two tables from the NorthWind.xls file, and write them out in XLSX format. See the Data::Table::Excel module for details. use Data::Table::Excel; my ($tableObjects, $tableNames)=xls2tables("NorthWind.xls"); $t_category = $tableObjects[0]; $t_product = $tableObjects[1]; tables2xlsx("NorthWind.xlsx", [$t_category, $t_product]); Interface to Graphics Package use GD::Graph::points; $graph = GD::Graph::points->new(400, 300); $t2 = $t->match('$_->[1] > 20 && $_->[3] < 35.7'); my $gd = $graph->plot($t->colRefs([0,2])); open(IMG, '>mygraph.png') or die $!; binmode IMG; print IMG $gd->png; close IMG; AUTHOR It was first written by Zhou in 1998, significantly improved and maintained by Zou since 1999. The authors thank Tong Peng and Yongchuang Tao for valuable suggestions. We also thank those who kindly reported bugs, some of them are acknowledged in the "Changes" file. This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself. Please send bug reports and comments to: easydatabase at gmail dot com. When sending bug reports, please provide the version of Table.pm, the version of Perl. SEE ALSO DBI, GD::Graph, Data::Table::Excel.
https://metacpan.org/pod/release/EZDB/Data-Table-1.70/Table.pm
CC-MAIN-2015-18
refinedweb
8,169
56.35
Red Hat Bugzilla – Bug 203046 evolution cannot create new MH subfolders Last modified: 2007-11-30 17:11:40 EST Description of problem: Evolution cannot create new folders in an MH Mail tree Version-Release number of selected component (if applicable): evolution-2.6.3-1.fc5.5 How reproducible: My evolution environment has been with me for a while. It began with an import of an MH tree of legacy mail that I've had forever. This tree is configured as a secondary mail account which is not my default. I like its structure, and I've continued to use its folders through FC3 and FC4. Since installing FC5 I've been unable to add new subfolders. For example I have a top-level folder called 'ccb'. Attempting to add a folder 'foo' below 'ccb' pops a dialog that says: Store root /bcl/cbox/home/ccb/Mail/ is not an absolute path The controlling terminal also reports: DEBUG: ccb/foo (mh:///bcl/cbox/home/ccb/Mail#ccb/foo) (evolution-2.6:5638): evolution-mail-WARNING **: Error occurred while existing dialogue active: Store root /bcl/cbox/home/ccb/Mail/ is not an absolute path OK. It's unpleasant. So I pop the Preferences, select Mail Accounts, select this particular account and press Edit to bring up the Account Editor. I hit the Receiving Email tab and then Browse for a configuration path. The file picker dialog does not allow the selection of a directory. I select my Mail directory and the only buttons available are Cancel and Open. I hit Open expecting that the selected directory will be returned to the application as the desired Configuration Path. Nope. The file picker dialog opens the Mail directory and shows me its contents. The file picker will not allow the selection of any directory at all... If I select a regular file, only then does it report the file back to the Receiving Email dialog. This is of course wrong. It appears from a quick look that this is not the only thing wrong with Evolution. Should it be deprecated? Steps to Reproduce: 1. Described above 2. 3. Actual results: Described above. Expected results: I expect to be able to use existing MH folder trees with Evolution. I expect to be able to select directories with the Gnome file picker when the dialog that popped the file picker needs a directory. Additional info: Apologies for taking so long to respond. Closing as UPSTREAM since there's an equivalent upstream bug report. I will continue to track the problem there. Please refer to [1] for further updates. Note that this is somewhat fixed in Rawhide. I replaced the file chooser button with a directory chooser button in the Mail Accounts preferences. Unfortunately I can't apply this patch to stable Fedora Core releases because it breaks the ability to choose a single mbox file. [1]
https://bugzilla.redhat.com/show_bug.cgi?id=203046
CC-MAIN-2017-30
refinedweb
482
56.96
Hi to All. I know this is not 100% a Xamarin question, but I do not know if there is any limitation for Xamarin Forms and System.Linq. I have classes like this: public class SubMenu { public string Title { get; set; } public ObservableCollection<MyStrings> MyStrings { get; set; } } public class MyStrings { public string Title { get; set; } public string Description { get; set; } } I have a list of SubMenu(s) which is bound to a list view. I want to search in my "MyStrings". My code: var a = SubMenus.Select(x => x.MyStrings.Select(y => y.Title.ToLower() == txt.ToLower())).ToList(); gives me the exception "System.ArgumentNullException: Value cannot be null. Parameter name: source". Any help? Thanks in advance That's not really a LINQ issue, that just means one of your ObservableCollections is not getting initialized. Either the SubMenus list or the MyStrings list in a SubMenu has not been set to a new ObservableCollection.
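A minimal sketch of the fix the reply points at could look like this (SubMenus and txt are assumed to exist as in the question; the collection initializer and the null filter are the illustrative parts):

using System;
using System.Collections.ObjectModel;
using System.Linq;

public class MyStrings
{
    public string Title { get; set; }
    public string Description { get; set; }
}

public class SubMenu
{
    public string Title { get; set; }

    // Initialize the collection so LINQ never receives a null source.
    public ObservableCollection<MyStrings> MyStrings { get; set; }
        = new ObservableCollection<MyStrings>();
}

// Null-tolerant, case-insensitive search across all sub-menus:
var matches = SubMenus
    .Where(x => x.MyStrings != null)                 // skip uninitialized menus
    .SelectMany(x => x.MyStrings)                    // flatten to one sequence of MyStrings
    .Where(y => string.Equals(y.Title, txt, StringComparison.OrdinalIgnoreCase))
    .ToList();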
https://forums.xamarin.com/discussion/150144/linq-nested-search-in-lists
CC-MAIN-2019-26
refinedweb
160
55.34
This is my first on-my-own java project I have been working on using the Eclipse program, I have an issue, the eclipse is having an Error with my code somewhere, I can't figure out why.. maybe it's just because i'm a noob? idk, but anyhow, I am trying to make an age calculator only using the year you were born and the current year, (2012), so here's the code.. please help me figure out what's wrong and what I can do to fix it, thank you for your help! import java.util.Scanner; public class Detector { public static void main(String[] args){ int number, answer, i = 2012; Scanner age = new Scanner(System.in); System.out.println("Enter your year of birth:"); number = age.nextInt(); answer = i - age; System.out.println("You are/will be this age: " +answer); } }
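For what it's worth, the compile error comes from answer = i - age;, which subtracts the Scanner object itself instead of the year that was read into number. A minimal corrected sketch, keeping the variable names from the post:

import java.util.Scanner;

public class Detector {
    public static void main(String[] args) {
        int number, answer, i = 2012;
        Scanner age = new Scanner(System.in);
        System.out.println("Enter your year of birth:");
        number = age.nextInt();   // the year typed by the user
        answer = i - number;      // subtract the year, not the Scanner object
        System.out.println("You are/will be this age: " + answer);
    }
}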
http://www.javaprogrammingforums.com/whats-wrong-my-code/15948-age-calculator.html
CC-MAIN-2015-40
refinedweb
144
72.66
CRYPTO_THREAD_run_once, CRYPTO_THREAD_lock_new, CRYPTO_THREAD_read_lock, CRYPTO_THREAD_write_lock, CRYPTO_THREAD_unlock, CRYPTO_THREAD_lock_free, CRYPTO_atomic_add - OpenSSL thread support. CRYPTO_THREAD_run_once() returns 1 on success, or 0 on error. CRYPTO_THREAD_lock_new() returns the allocated lock, or NULL on error. CRYPTO_THREAD_lock_free() returns no value. The other functions return 1 on success, or 0 on error. On Windows platforms the CRYPTO_THREAD_* types and functions in the openssl/crypto.h header depend on types made available by windows.h, so the application will likely need to include windows.h before openssl/crypto.h wherever use of CRYPTO_THREAD_* types and functions is required. You can find out if OpenSSL was configured with thread support: #include <openssl/opensslconf.h> #if defined(OPENSSL_THREADS) /* thread support enabled */ #else /* no thread support */ #endif SEE ALSO crypto(7) Licensed under the OpenSSL license (the "License"). You may not use this file except in compliance with the License. You can obtain a copy in the file LICENSE in the source distribution or at <https://www.openssl.org/source/license.html>.
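The extraction above drops the manual's worked example, so here is a small illustrative sketch of how the documented calls fit together (the counter, function names and error handling are mine, not from the manual):

#include <openssl/crypto.h>

static CRYPTO_ONCE once = CRYPTO_ONCE_STATIC_INIT;
static CRYPTO_RWLOCK *lock;

/* Runs exactly once, even if several threads race into the functions below. */
static void init_lock(void)
{
    lock = CRYPTO_THREAD_lock_new();
}

/* Atomically add 1 to *counter, returning the new value (0 on failure). */
int my_increment(int *counter)
{
    int result = 0;

    if (!CRYPTO_THREAD_run_once(&once, init_lock) || lock == NULL)
        return 0;
    if (!CRYPTO_atomic_add(counter, 1, &result, lock))
        return 0;
    return result;
}

/* Explicit write-locking around a non-atomic update. */
int my_update(int *value, int newval)
{
    if (!CRYPTO_THREAD_run_once(&once, init_lock) || lock == NULL)
        return 0;
    if (!CRYPTO_THREAD_write_lock(lock))
        return 0;
    *value = newval;
    return CRYPTO_THREAD_unlock(lock);
}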
https://www.zanteres.com/manpages/CRYPTO_THREAD_read_lock.3ssl.html
CC-MAIN-2022-33
refinedweb
129
60.21
So far, I have written in my last four posts the basics you should know about modules in C++20. Only a few questions about modules are still open. In this post, I address these open questions: templates in modules, the linkage of modules, and header units. I assume in this post that you know my previous posts about modules; if not, make yourself comfortable and read them first. I often hear the question: how are templates exported by modules? When you instantiate a template, its definition must be available. This is the reason that template definitions are hosted in headers. Conceptually, the usage of templates has the following structure. // templateSum.h template <typename T, typename T2> auto sum(T fir, T2 sec) { return fir + sec; } // sumMain.cpp #include <templateSum.h> int main() { sum(1, 1.5); } The main program directly includes the header templateSum.h into the program sumMain.cpp. The call sum(1, 1.5) triggers the so-called template instantiation. In this case, the compiler generates from the function template sum a function sum that takes an int and a double. If you want to see this process live, play with the example on C++ Insights. With C++20, templates can and should be in modules. Modules have a unique internal representation that is neither source code nor assembly. This representation is a kind of abstract syntax tree (AST). Thanks to this AST, the template definition is available during template instantiation. In the following example, I define the function template sum in the module math. // mathModuleTemplate.ixx export module math; export namespace math { template <typename T, typename T2> auto sum(T fir, T2 sec) { return fir + sec; } } // clientTemplate.cpp #include <iostream> import math; int main() { std::cout << std::endl; std::cout << "math::sum(2000, 11): " << math::sum(2000, 11) << std::endl; std::cout << "math::sum(2013.5, 0.5): " << math::sum(2013.5, 0.5) << std::endl; std::cout << "math::sum(2017, false): " << math::sum(2017, false) << std::endl; } The command line to compile the program is no different from the one in the post "C++20: Structure Modules". Consequently, I skip it; the three calls print 2011, 2014, and 2017. With modules, we get a new kind of linkage. So far, C++ supports two kinds of linkage: internal linkage (names that can only be referred to from within their translation unit) and external linkage (names that can be referred to from other translation units). Modules introduce module linkage: names with module linkage can only be referred to from within the same module. A small variation of the previous module math makes my point. Imagine I additionally want to tell the user of my function template sum which type the compiler deduces as the return type. // mathModuleTemplate1.ixx module; #include <iostream> #include <typeinfo> #include <utility> export module math; template <typename T> // (2) auto showType(T&& t) { return typeid(std::forward<T>(t)).name(); } export namespace math { // (3) template <typename T, typename T2> auto sum(T fir, T2 sec) { auto res = fir + sec; return std::make_pair(res, showType(res)); // (1) } } Instead of just the sum of the numbers, the function template sum returns a std::pair (1) consisting of the sum and a string representation of the type of res. I deliberately did not put the function template showType (2) into the exported namespace math (3). Consequently, invoking it from outside the module math is not possible. showType uses perfect forwarding to preserve the value categories of the function argument t. The function typeid queries information about the type at run-time (runtime type identification, RTTI).
// clientTemplate1.cpp #include <iostream> import math; int main() { std::cout << std::endl; auto [val, message] = math::sum(2000, 11); std::cout << "math::sum(2000, 11): " << val << "; type: " << message << std::endl; auto [val1, message1] = math::sum(2013.5, 0.5); std::cout << "math::sum(2013.5, 0.5): " << val1 << "; type: " << message1 << std::endl; auto [val2, message2] = math::sum(2017, false); std::cout << "math::sum(2017, false): " << val2 << "; type: " << message2 << std::endl; } Now, the program displays the value of the summation and a string representation of the automatically deduced type. Neither the GCC compiler nor the Clang compiler supports the next feature, which may become one of the favourite features regarding modules. Header units are a smooth way to transition from headers to modules. You just have to replace the #include directive with a new import directive. #include <vector> => import <vector>; #include "myHeader.h" => import "myHeader.h"; First, import respects the same lookup rules as include. This means in the case of the quotes ("myHeader.h") that the lookup first searches in the local directory before it continues with the system search path. Second, this is way more than text replacement. In this case, the compiler generates something module-like out of the import directive and treats the result as if it were a module. The import statement makes all exportable names of the header available, and the exportable names include macros. Importing these synthesized header units is faster and comparable in speed to precompiled headers. There is one drawback with header units: not all headers are importable. Which headers are importable is implementation-defined, but the C++ standard guarantees that all standard library headers are importable headers. The C headers are excluded; they are just wrapped in the std namespace. For example, <cstring> is the C++ wrapper for <string.h>. You can easily identify the wrapped C headers because the pattern is: xxx.h becomes cxxx. With this post, I have completed my story about modules and, in particular, about the big four in C++20. With my next post, I take a closer look at the core language features in C++20 that are not as prominent as concepts or modules. I start with the three-way comparison operator.
https://modernescpp.com/index.php/c-20-open-questions-to-modules
CC-MAIN-2021-43
refinedweb
970
66.03
Inheritance can be done in a number of ways. Till now, we have come across different types of inheritance in different examples. The different types of inheritance which we have come across are: Single Inheritance In single inheritance, a class inherits from one other class. Multilevel Inheritance In this type of inheritance, one class inherits from another class, and that base class in turn inherits from some other class. Hierarchical Inheritance In hierarchical inheritance, more than one class inherits from a single base class. Multiple Inheritance In this chapter, we will be studying multiple inheritance. In multiple inheritance, a class can inherit from more than one class. In simple words, a class can have more than one parent class. This type of inheritance is not present in Java. Suppose we have to make two classes A and B the parent classes of class C; then we have to define class C as follows. class C: public A, public B { // code }; Let's see an example of multiple inheritance. #include <iostream> using namespace std; class Area { public: int getArea(int l, int b) { return l * b; } }; class Perimeter { public: int getPerimeter(int l, int b) { return 2*(l + b); } }; class Rectangle : public Area, public Perimeter { int length; int breadth; public: Rectangle() { length = 7; breadth = 4; } int area() { return Area::getArea(length, breadth); } int perimeter() { return Perimeter::getPerimeter(length, breadth); } }; int main() { Rectangle rt; cout << "Area : " << rt.area() << endl; cout << "Perimeter : " << rt.perimeter() << endl; return 0; } Area : 28 Perimeter : 22 In this example, class Rectangle has two parent classes, Area and Perimeter. Class 'Area' has a function getArea(int l, int b) which returns the area. Class 'Perimeter' has a function getPerimeter(int l, int b) which returns the perimeter. When we created the object 'rt' of class Rectangle, its constructor got called and assigned the values 7 and 4 to its data members length and breadth respectively. Then we called the function area() of the class Rectangle which returned getArea(length, breadth) of the class Area, thus calling the function getArea(int l, int b) and assigning the values 7 and 4 to l and b respectively. This function returned the area of the rectangle of length 7 and breadth 4. Similarly, we returned the perimeter of the rectangle through the class Perimeter. Let's see one more example. #include <iostream> using namespace std; class P1 { public: P1() { cout << "Constructor of P1" << endl; } }; class P2 { public: P2() { cout << "Constructor of P2" << endl; } }; class A : public P2, public P1 { public: A() { cout << "Constructor of A" << endl; } }; int main() { A a; return 0; } Constructor of P2 Constructor of P1 Constructor of A Here, when we created the object 'a' of class 'A', its constructor got called. As seen before, the compiler first calls the constructors of the parent classes. Since class 'A' has two parent classes, 'P1' and 'P2', the constructors of both these classes will be called before executing the body of the constructor of 'A'. The order in which the constructors of the two parent classes are called depends on the following code. class A : public P2, public P1 The order in which the constructors are called depends on the order in which their respective classes are inherited. Since we wrote 'public P2' before 'public P1', the constructor of P2 will be called before that of P1.
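One extra illustration of that last point (mine, not from the page): the base-class order in the class definition decides construction order even if the member-initializer list names the bases in a different order.

// Reusing P1 and P2 from the example above.
class B : public P2, public P1 {
public:
    B() : P1(), P2() {   // initializer list mentions P1 first...
        // ...but P2 is still constructed first, because the class
        // declares "public P2, public P1".
        cout << "Constructor of B" << endl;
    }
};

// Creating a B object prints: Constructor of P2, Constructor of P1, Constructor of B.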
https://www.codesdope.com/cpp-multiple-inheritance/
CC-MAIN-2022-40
refinedweb
558
60.24
Now that we've used a module, statistics, it would be a good time to explain some import syntax practices. As with many things in programming, there are many ways to import modules, but there are certainly some best practices. So first, when you import a module, you are basically loading that module into memory. Think of a module like a script. Many if not most modules are just a single python script. So, when you go to import it, you use the file name. This can help keep code clean and easy to read. Many python developers just program everything in 1 script. Other developers, say from a language like java are going to be very used to doing lots of imports with a file for each type of job that's happening. Just like there are many ways to import, there are many more ways to program. So let's talk about basic importing: import statistics Above, we have referenced the statistics module and loaded it into memory under the statistics object. This will allow us to reference any of the functions within the statistics module. To do so, we will need to mention statistics, followed by a period, then the function name. A simple exhibition of the mean function from statistics could look like this: import statistics example_list = [5,2,5,6,1,2,6,7,2,6,3,5,5] print(statistics.mean(example_list)) The generated output from this will be the mean, or average, of the list. That is the simplest way to import and use modules, but there are many other methods. In the video, we cover each one specifically, but here are a bunch of examples: Sometimes, however, you will see people use the "as" statement in their imports. This will allow you to basically rename the module to whatever you want. People generally do this to shorten the name of the module. Matplotlib.pyplot is often imported as plt and numpy is often imported as np, for example. import statistics as s print(s.mean(example_list)) Above, we've imported statistics as the letter 's.' This means whenever we wish to reference the statistics module, we just need to type 's' instead of statistics. What if you don't even want to type that S though? Well there's an app for that! You can just import each function within the module you plan to use: from statistics import mean # so here, we've imported the mean function only. print(mean(example_list)) # and again we can do as from statistics import mean as m print(m(example_list)) Above, you can see that we no longer had to type any reference to the statistics module, then you saw that we could even import the functions "as" something else. What about more functions? from statistics import mean, median # here we imported 2 functions. print(median(example_list)) What if we want to use the as as well? from statistics import mean as m, median as d print(m(example_list)) print(d(example_list)) What if we want to just import everything from statistics like we did initially, but we don't want to type the statistics because we have fat fingers and this will just slow us down?. from statistics import * print(mean(example_list))
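One caveat worth adding (my note, not part of the original tutorial): from module import * pulls every public name into your namespace, so two star-imports can silently shadow each other. A quick sketch, assuming NumPy is installed:

from statistics import *   # brings in mean, median, ...
from numpy import *        # numpy also defines mean(), which now shadows statistics.mean

values = [5, 2, 5, 6, 1, 2, 6, 7, 2, 6, 3, 5, 5]
print(mean(values))        # whichever mean() was imported last wins

# Explicit imports keep the two apart:
import statistics
import numpy as np
print(statistics.mean(values))
print(np.mean(values))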
https://pythonprogramming.net/module-import-syntax-python-3-tutorial/?completed=/statistics-python-3-module-mean-standard-deviation/
CC-MAIN-2019-26
refinedweb
548
63.8
Microsoft ASP.NET 2.0 Membership API Extended Working with big applications requires extending the Microsoft ASP.NET 2.0 Membership API to handle more detailed member records. by Bilal Haidar Apr 30, 2007 Page 1 of 5 Microsoft ASP.NET 2.0 shipped with a complete membership API that allows developers to manage the application's users and their roles. However, this API best suits small to medium web sites due to its limitation in expressing a detailed member record. Fortunately, the Membership API is built on the provider model so you can extend it. You can use the technique discussed here to overcome this limitation and extend the Microsoft ASP.NET 2.0 Membership API to accommodate custom member records with a solution that works on top of the Membership API without requiring any change in the API. If you're not already familiar with the provider model in ASP.NET 2.0, I highly recommend the following link: Provider Model in Depth. ASP.NET Member and Role Management Overview In the days of ASP.NET 1.x, managing an application's members and roles was a hectic job, especially when most of the middle and higher-level applications needed that kind of management. You would usually end up creating your own membership API to use in any application you were working on. Microsoft ASP.NET 2.0 provides many new features, such as the Membership API, where you no longer need to worry about membership management in any application you develop. As with the other new features in ASP.NET 2.0, Microsoft integrated the Membership API into the .NET Framework and you can access all its objects and methods from one namespace reference, System.Web.Security. The Membership API provides many ready-made features that you had always needed to build in ASP.NET 1.x and that took hundreds of lines of code to accomplish. For example, the Membership API has the Login control. This control contains the username and password fields used to authenticate every user who tries to access a secure area inside the web application. In ASP.NET 1.x, you had to add this control to each application you developed. You would end up creating a User control or a Server control so you would not need to repeat your work again and again. ASP.NET 2.0 provides a Role Management API that works with the Membership API to provide a full solution for the authentication and authorization needed for most web applications you develop. I will not spend more time on the Membership API controls in this article; you can find many online resources and articles to get more information. The Membership API works fine with small web applications. But a problem arises when working with huge applications. For example, the MembershipUser class, which is found in the Membership API and represents a member saved in the application's database, contains a limited number of properties. This class does not support the First Name and Last Name properties, for example. Usually, in middle to large-scale applications, a member's record requires the presence of a lot more properties, and those properties are not all currently found in the MembershipUser class. Although the Membership API presents a generic member's record, Microsoft built the Membership API upon the provider model, so you can easily solve that limitation and extend the current Membership API to serve your needs.
In addition to the Membership API's need for more properties, it has another important limitation: by default it only works with Microsoft SQL Server and Active Directory. This last issue is not mainly a limitation just because Microsoft built the Membership API upon the provider model. Another provider can easily replace the model with any database implementation available. Ways to Solve the Problem You can choose a number of ways to overcome the limitation of the MembershipUser properties in ASP.NET 2.0's Membership API. This article will focus mainly on extending the default database that ships with the new database-related features in ASP.NET 2.0, so that you can store additional related information about a member in the web application. Figure 1. Membership Hierarchy: The figure shows the relationship of the various layers in the Membership API class hierarchy. The Membership API provider model consists of the MembershipProvider, derived from ProviderBase, which is the base provider for all the new provider-based features in ASP.NET 2.0. The SqlMembershipProvider and ActiveDirectoryMembershipProvider represent concrete implementations of the MembershipProvider class. The Membership class contains a set of static methods that provide the entire functionality of the Membership API to the user-interface layer in any web application. In addition, the MembershipUser class discussed above represents a single member in the Membership API database. The above can be better understood by having a look at the Membership API class hierarchy (see Figure 1). A typical scenario to overcome the limitation of the Membership API is to develop a new provider that inherits from the MembershipProvider, where you override the existing methods and add more functionality as the application requires, but in this article I will show you an entirely different approach. Before I get to my new approach, here is a brief breakdown, through a simple example, of how the scenario mentioned above works so you can see the difference. In the user-interface layer, an ASP.NET page gathers all the required information about the member, using the Profile object to store the additional data. The page calls the static method CreateUser, which is part of the Membership class. In the new membership provider, the CreateUser method would still function as before by adding the member's record into the default database; however, it will also be responsible for adding the additional member-related data that was previously saved in the profile object into a new table added to the database that will hold the additional related data on the member's record. In this article, I will extend the Membership API in a completely different way. I will show you how to wrap the current Membership API without touching it or even inheriting from it. The basic idea is to create a wrapper over the methods that ship with the Membership API. This way you are extending the set of these methods that affect the data collected for a member's record. By extending the set, you get a richer environment to work with; the default Membership API methods are still usable in other places where there is no need to create, update, delete, and get a member's record from a database.
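To make that wrapping idea concrete, here is a hedged sketch (the class name, extra fields and helper method are illustrative assumptions, not code from the article): the wrapper calls the standard Membership.CreateUser() and then stores the extra fields in a custom table.

using System;
using System.Web.Security;

public static class ExtendedMembership
{
    public static MembershipUser CreateUser(
        string userName, string password, string email,
        string firstName, string lastName)
    {
        // 1. Let the standard Membership API create the core record.
        MembershipUser user = Membership.CreateUser(userName, password, email);

        // 2. Store the additional member data in a custom table,
        //    keyed by the provider's user key.
        SaveExtendedProfile((Guid)user.ProviderUserKey, firstName, lastName);

        return user;
    }

    private static void SaveExtendedProfile(Guid userKey, string firstName, string lastName)
    {
        // Illustrative placeholder: an INSERT into the custom member-details table goes here.
    }
}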
https://www.devx.com/codemag/Article/34492
CC-MAIN-2021-43
refinedweb
1,131
52.29
Air Quality Sensor MQ-135 w/ Breakout Board Digital & Analog Output - Was RM22.00 RM15.00 - Product Code: Gas MQ-135 - Availability: In Stock The MQ-135 is an air quality gas sensor with both digital and analog outputs on its breakout board. Here is a simple diagram of how your sensor should be wired to your Arduino. We also wanted to convert the voltage readings of 0-1023 from the sensor to a 0.0-5.0 value which would reflect the true voltage being read. Run this code on your Arduino and you will be ready to detect changes in the level of detectable gasses! const int gasSensor =0; void setup() { Serial.begin(9600); // sets the serial port to 9600 } void loop() { float voltage; voltage = getVoltage(gasSensor); Serial.println(voltage); delay(1000); } float getVoltage(int pin) { return (analogRead(pin) * 0.004882814); // This equation converts the 0 to 1023 value that analogRead() // returns, into a 0.0 to 5.0 value that is the true voltage // being read at that pin. } Notes: - After doing some research about this sensor it was discovered that while the MQ-135 can detect all of the gasses listed in its datasheet, it cannot distinguish between them. If you are looking to specifically target one gas, it might be better to find a different sensor. - This sensor also uses a heater to warm itself up. It has been advised not to use this with a small battery source as it will quickly drain your battery. Tags: Gas Sensor
http://qqtrading.com.my/sensors/air-quality-sensor-mq-135-breakout-board
CC-MAIN-2019-39
refinedweb
242
66.64
Okay so I'm having a problem with this. My homework is to take this array of numbers and increment each one by one. Sounded simple at first and then I realized that in the array the address of variables were assigned to an element in the array. My question is..how do I increment a value of address if what I put in is the address? #include <stdio.h> #define SZ 7 int* a[SZ]; int x, y, z; void populate() { x = 1; y = 2; z = 3; a[0] = &x; a[1] = &y; a[2] = &z; //Addresses of int variables assigned to array elements } void add1each() { //My homework is here. for(int i = 0; i < sizeof(a); i++) { //How do I increment values of addresses? } } int main(int argc, char* argv[]) { populate(); printall(); add1each(); printall(); return 0; }
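For what it's worth, the usual approach is to dereference each pointer and increment the int it points to, and to loop over SZ elements rather than sizeof(a), which counts bytes, not elements. A sketch of what the missing body could look like (assuming unassigned elements stay NULL, as they do for a global array):

void add1each(void)
{
    for (int i = 0; i < SZ; i++) {
        if (a[i] != NULL) {   /* unassigned slots of a global array are NULL */
            (*a[i])++;        /* increment the int this element points to */
        }
    }
}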
https://www.daniweb.com/programming/software-development/threads/358503/help-with-increasing-value-of-an-address
CC-MAIN-2018-43
refinedweb
133
72.66
05 June 2012 08:04 [Source: ICIS news] SINGAPORE (ICIS)--The company is expected to run the plant at 90% of capacity throughout the month after reducing its run rate on 1 June, the source said, adding that the unit was previously running at 100%. BD spot prices have plummeted since February this year when prices hit $4,000/tonne (€3,200/tonne) CFR (cost and freight) northeast (NE) Asia. In the week ended 1 June, BD spot prices were at $1,850-1,900/tonne CFR NE Asia, down by $600/tonne since 4 May, ICIS data showed. The company’s production loss of 1,000 tonnes during this period is not expected to have a significant impact on pricing, industry sources said. Titan Chemicals is the latest regional producer to cut the operating rate of its BD plant because of poor market conditions, they added. Other BD producers who have reduced their operating rates include ($1 = €0
http://www.icis.com/Articles/2012/06/05/9566567/malaysias-titan-chemicals-to-run-bd-unit-at-90-in.html
CC-MAIN-2014-10
refinedweb
160
55.27
The QTextTable class represents a table in a QTextDocument. #include <QTextTable> Inherits QTextFrame. Note: All functions in this class are reentrant. Rows and columns within a QTextTable can be merged and split using the mergeCells() and splitCell() functions. However, only cells that span multiple rows or columns can be split. (Merging or splitting does not increase or decrease the number of rows and columns.) splitCell() splits the specified cell at row and column into an array of multiple cells with dimensions specified by numRows and numCols. Note: It is only possible to split cells that span multiple rows or columns, such as rows that have been merged using mergeCells(). This function was introduced in Qt 4.1. See also mergeCells().
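As an illustrative sketch (not taken from the page), this is how a table is typically created and its cells merged and split with the functions just described:

#include <QTextDocument>
#include <QTextCursor>
#include <QTextTable>

void buildTable(QTextDocument *doc)
{
    QTextCursor cursor(doc);
    QTextTable *table = cursor.insertTable(3, 4);   // 3 rows, 4 columns

    table->mergeCells(0, 0, 1, 2);   // merge row 0, columns 0-1 into one cell
    table->splitCell(0, 0, 1, 1);    // split the merged cell back into 1x1 cells
}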
http://doc.qt.nokia.com/4.6-snapshot/qtexttable.html
crawl-003
refinedweb
120
67.86
- Related research - Requirements and Definitions - Related Work - Roadmap - Design - Disclaimer - Old design notes Trac is strong in basic, individual and small-team task management but lacks features for heavy-duty project management a la Microsoft Project, Project Manager Workbench, etc. This page discusses those missing features and how they can best be realized. In Trac, "project" is sometimes used synonymously with "installation." As used on this page, it is a set of related tasks and deadlines. Perhaps a software project has a Design Phase, an Alpha Release, a Beta Release, and a General Release. Each phase would have a milestone with a target date and tickets to complete the work for that phase. Related research Scheduling activities in a project is an area of active research in operational research (or operations research, OR) (cf. Project Scheduling) and has been established to be NP hard. There are several variations on the problem. - RCPSP - Resource-Constrained Project Scheduling Problem - RCMPSP - Resource-Constrained Multi-Project Scheduling Problem - m_PRCPSP - Preemptable RCPSP. A RCPSP where each task may be broken (preempted) m times during scheduling. (Generally, m is limited to 1 both for simplicity of the algorithm and because there is real cost in practical task switching.) A range of techniques have been brought to bear on RCPSP. They can be broadly categorized as: - Exact solutions. Attempts to find an optimal schedule. Due to the polynomial nature of the problem these are only possible or practical for a small number of activities (e.g., a few dozen). - Heuristic approximations. Attempts to find a good solution in reasonable time for a realistic number of tasks (e.g., hundreds). - Metaheuristics. More abstract approaches such as genetic algorithms, tabu search, simulated annealing, ant colony optimization, and particle swarm optimization. All of these algorithmic approaches also have other dimensions such as the number of threads that are used, the method they use to prioritize activities, etc. To be practical for implementation as a Trac plugin, it seems likely our implementation should not require heavy-weight, opaque abstractions or multiple threads. Furthermore, we desire an algorithm which can update a schedule as part of a ticket change listener rather than having to completely recompute a schedule for each, individual change. PSPLIB provides a standard set of program scheduling problems to test the various algorithms performance against one another and the optimal solution. Kolisch and Hartmann1 tested dozens of algorithms and variations using the PSPLIB data. Other papers of note include: - Resource allocation and planning for program management, Kabeh Vaziri, Linda K. Nozick, Mark A. Turnquist, December 2005 Much of the project scheduling literature treats task durations as deterministic. In reality, however, task durations are subject to considerable uncertainty and that uncertainty can be influenced by the resources assigned. The purpose of this paper ... - Stochastic rollout and justification to solve the resource-constrained project scheduling problem, Ningxiong Xu, Linda Nozick, Orr Bernstein, Dean Jones, December 2007 The key question addressed by the resource-constrained project scheduling problem (RCPSP) is to determine the start times for each activity such that precedence and resource constraints are satisfied while achieving some objective. Priority rule-based ... 
- Activity scheduling in the dynamic multi-project setting: choosing heuristics through deterministic simulation, Robert C. Ash, December 1999 Requirements and Definitions Project management support software should help us answer a few basic questions: - When will my project be done? (That is, what is the forecasted completion date?) - How much Earned Value has the project achieved? - How do incurred costs compare to Earned Value (Cost Performance Factor)? Properties A project management system must track properties of (information about) tasks, resources, and milestones. Task Properties Answering the questions listed above requires recording certain properties of a task. - Original Work Estimate - The amount of work believed to be needed to complete the task, as of before the task was started. Typically work will be expressed in hours of effort of a resource. - Current Work Estimate - The total time the task is expected to take, as of now. Like Original Work Estimate, but may be modified. - Expended effort - The amount of work expended against this task so far. - Assigned resources - The individuals who will work on this task. - % Effort - Per resource, the amount of the resource's time that will be spent on this task. If the resource is working on a 2-day task 50% of the time, we expect it to take 4 days. - % Complete - The amount of the task that has been done already. If there are 10 subtasks with 10 hours original work estimate and 6 are done, the task is 60% complete. - Dependencies - Other tasks that affect when this task can be started or finished. - Task Hierarchy - A "Work Breakdown Structure" is often used to give a hierarchical organization of all tasks on a project. For example, interior painting of a new house might be broken down into paint the living room, paint the dining room, etc. In a WBS, task 1.1.2 would have subtasks 1.1.2.1 and 1.1.2.2. We propose to express the hierarchy but not force explicit numbering. We propose that each task have a single parent, which is the next level up in a WBS. Dependencies Task dependencies can be quite complex. Project management generally involves four types of dependencies between tasks or activities: - Finish-to-Start (FS) - Task B cannot start until task A finishes. This is the most common. Sometimes A is referred to as B's Predecessor. - Start-to-Start (SS) - Task B cannot start until task A starts. - Finish-to-Finish (FF) - Task B cannot finish until task A finishes. - Start-to-Finish (SF) - Task B cannot finish until task A starts. Milestones The essential property of a milestone is its date. Trac tickets which are part of a milestone are implicitly due by the milestone date. In general project management, milestones have FS dependencies with some tasks. Together with dependencies between tasks, this dictates when tasks must be done to keep a project on schedule. Trac milestones are not first-class objects (they lack the history and comments that tickets have). An alternative to Trac milestones is to create a custom ticket type that can be used to set deadlines and have tickets which are required for that ticket scheduled as if they had a milestone with a due date set. To avoid conceptual conflicts with Trac milestones, I've called that ticket type an inchpebble. Trac milestones may still be used to group tickets, but a milestone may have many inchpebbles which set intermediate dates. Task Scheduling There are two fundamental ways to approach scheduling. One is to assume that all tasks are small and to divide total estimates by resource rate. Then, one compares earned value to cost, computes CPF (Cost Performance Factor), and computes an estimated end time.
This is the far simpler approach, but fails to capture tasks that only some people can work on and other complexities. The other approach is to create an actual plan that shows when each resource will work on tasks. The basic answer to "When will my project be done?" is generally displayed in a Gantt chart which shows tasks, their dependencies, their duration (scaled by resource availability), and milestones. Two threads on the Trac Users mailing list, suggest that a Gantt chart is a fundamental requirement for project management but scheduling tasks is even more fundamental and a necessary precondition for producing a Gantt chart. Consider a grossly-simplified project to design a new electronic device. This project might have the following tickets: With this simple task list -- supported by core Trac tickets and milestones -- we'll know when we are done because all the tickets are closed but we have no way to project when that will be. If all the tasks could be done concurrently, the project length is the length of the longest task. And since Trac assigns a date to the milestone, we can sort of work backwards from that to determine when work needs to start for the milestone to be met. However, core Trac has no way to record the time that a task should take. We need a field to hold Work. From here we can guess that the shortest the project could be is one week. However, we know we cannot begin circuit board layout without a schematic. We need a field to hold Predecessors. For such a short task list we can manually inspect and see that assembly (8h) follows firmware (40h) follows schematic (24h) and the shortest project time is 72 hours (9 man-days) and we can take nine days from the milestone date to know when we have to start work to finish on time. However, a project of any complexity may have many, many more tasks and manual inspection is impractical. We can begin to do more effective project management if we have a task scheduler than can work backwards from the milestone, consider work and dependencies, and calculate task start and end dates and project start date. If the milestone for this work is May 11, the schedule might look like the following. This is computed as follows: - Assemble units is the last step and is due when the milestone is scheduled - Firmware, packaging design, and the circuit board all must be done before assembly or 1 day (the length of the assembly task) before the milestone - Schematic is due the length of the circuit board task before the circuit board's end To finish on time, the project must start on May 3, the earliest start time for any task. However, the preceding schedule assumes that the motherboard and daughter board are independent tasks. If there is only one board designer, only one board at a time can be worked on. If we assign the same resource to the board designs, either one must be done before the other can begin (even in the absence of an FS dependency) or the time for both must be extended and the resource spread between them concurrently. With 100% applied to each task, the motherboard and daughter board are worked on serially. With 50% applied to each task, the board designs are worked on in parallel but take twice as long. Calendar The schedule above isn't accurate unless we have a 7-day work week. If we assume that the work week is Monday to Friday with Saturday and Sunday off, firmware has to finish Friday, May 8 and schematic has to finish Friday, May 1 so the project must begin April 29. 
Applying calendar information, our task list looks like the following. Microsoft Project supports many calendars in a project. Roughly speaking, the project calendar has working hours per day and working days per week (e.g., 9-5, Monday-Friday) and resource calendars can override that (e.g., Ethan is on vacation August 15-19). Gantt Charts Dependency and Duration Once we have a task scheduler, we can simplify presentation of task scheduling with a Gantt chart. The black bars extending from the bottom of circuit board and packaging design indicate how much those tasks can slip without affecting the deliverable. Should talk about lag here. Earned Value To compute earned value, we need unchanged original estimates, and probably also some way to estimate remaining time. Resource Description and Allocation We need a proposal for how to describe available resources within Trac. If tasks are tickets and tickets have owners who work on them, it seems reasonable that a resource is a Trac user. There may be information we need to track for a resource that is not part of the user configuration. It might also be necessary or desirable to allow tickets to have multiple owners. Or, we might want to have a way to have a list of (resource,hours) pairs independent of the owner. Related Work Trac Core There are a couple of tickets (31, 886) and some discussion about ticket linking and relationships in the Trac core. We may be able to get a lot done while we wait for that work to come to fruition. Trac Plugins Gantt Charts Trac has several Gantt chart plugins: - TracJsGantt - A data-driven plugin which presents ticket status in a display much like Microsoft Project's WBS format. - GanttCalendarPlugin - provides nice views of tickets. It is noted as "not stable". It uses "Completed [%]" (quantized to 5% increments), "Start" (YYYY/MM/DD), and "End" (YYYY/MM/DD) custom fields. It seems to include an administrative calendar, perhaps where holidays and such can be recorded. Pale Purple's Virtual Planner is an alternative visualization tool. Dependencies MasterTicketsPlugin supports FS dependency (but calls it blocks and blocked by). The SubTickets page talks about adding composition type dependency (parent/child relationships). SubticketsPlugin and ChildTicketsPlugin provide composition type dependency that can be used for WBS. The Trac Dependency plugin was created in August 2009 and shows promise as a more complete and flexible solution than upgrading MasterTickets. Time and Scheduling DateFieldPlugin has some helpful wrappers around custom fields to validate them as dates. May I draw your attention to SchedulingToolsPlugin? I was not aware of this page, but implemented some of its ideas in some kind of prototype. It currently has a scheduler, resource availability and Gantt chart in a simple fashion. I would like to enhance it, maybe we can join efforts? -- viola TeamCalendarPlugin keeps track of user availability. Data Exchange The TicketImportPlugin can import tasks exported from Microsoft Project as a CSV file. There is a patch to import dependencies so Microsoft Project can feed MasterTickets. A fact of life is that Microsoft products like Exchange and Outlook are present in many environments. It would be nice if we could get availability information from Exchange to feed a project calendar. Just reading from the group calendar would probably be enough. That way users would only have to put in their vacations, etc. in one place. dotProject is an open-source, web-based project management system.
A quick review suggests its task management isn't as good as Trac. It has a Gantt chart but wikipedia notes "as of version 2.0 the task dependencies feature is not complete". It has a troubled history but was forked last year as Web2Project. I reviewed 15 Useful Project Management Tools. It mentions Trac. None of the tools obviously have scheduling in them. Jira sounded promising but doesn't seem to have scheduling. Their chart examples don't include a Gantt. Basecamp is popular and promising: Basecamp works For years project management software was about charts, graphs, and stats. And you know what? It didn’t work. Pictures and numbers don’t get projects done. Basecamp tackles project management from an entirely different angle: A focus on communication and collaboration. Basecamp brings people together. But it is also commercial (which doesn't really help the Trac community). The description above makes it sound like we shouldn't expect a Gantt chart in Basecamp. ;-) I don't see automatic scheduling. They have a nice feature where you can subscribe to milestone updates via iCalendar.. Roadmap There seems to be a consensus that grandiose project management features for Trac should be implemented with a combination of plugins which provide useful functionality on their own. The following plan assumes that approach. - Tasks will be represented as tickets. Additional, non-core, data will be needed. (In object-oriented terms, you might say that Task is a subclass of Ticket.) - We need a way to express WBS relationships, assuming MasterTickets will be used to express dependencies. - If we need more than FS dependencies, we will need to extend MasterTickets to express dependency type. - Note that there is a patch to TicketImportPlugin that imports WBS relationships. It is known to work with MasterTickets and Subtickets plugins and may work with others. - TimingAndEstimation can be the basis of recording estimates. - The plugin should be extended to support original and revised estimates, and to store default estimates in the database or configuration or derive them from history for experience-based scheduling (rather than have rules to use default values for tickets with tiny estimates). - A simple Gantt chart can be implemented to show dependencies and schedule based on manually-entered due dates. - TracJsGanttPlugin provides such a display. - Basic scheduling (ignoring resource conflicts and availability) can be provided based on - core Trac tickets, - MasterTickets, - TimingAndEstimation, and - custom fields to hold assigned and calculated dates A new Scheduling plugin can build on these plugins (or an interface that hides them), and create an as-late-as-possible schedule working back from a milestone. - We need a resource calendar plugin to allow describing resources and their availability. This should support notions of normal work hours, holidays, etc. - Additional dependency types can be added, perhaps by enhancing or forking MasterTickets. - Additional Gantt display options (e.g., critical path) can be added independent of additional dependency types. - The scheduler can be enhanced to take into account resource conflicts. Design To maximize the flexibility in mixing and matching plugins to provide features for project management, I propose to leverage Trac's Component Architecture to specify a number of interfaces in a tracpm namespace. In this way a Gantt chart, a workload chart, a task scheduler, etc. 
can all use the tracpm interfaces regardless of whether, for example, Subtickets or Childtickets (or even ticket decomposition in a future Trac core) are used to represent parent/child relationships. IProjectTask Tasks for project management will be based on tickets but an abstract interface allows us to decouple a scheduler or other PM tool from the implementation of non-core ticket features like recording estimates and progress. One user may choose to implement IProjectTask on top of TimingAndEstimation and another on top of TracHours. An IProjectTask has the following properties: - id - Numeric ID (Inherent in Trac) - work - Man-hours of work to complete task - risk - Relative risk. An integer from 0 to 100. How likely it is that work is accurate. Zero means no risk; work is certain to be accurate. - priority - Relative priority. An integer. - percentComplete - How much of work is done? (This can be computed from time remaining vs. total estimate or time worked vs. total estimate. We will not, necessarily, store percent complete. It is intuitive for display and analysis but difficult for data entry.) - resource - Name of resource assigned to this task - percentEffort - How much of resource's time is spent on the task. - duration - How long will resource take to complete work with percentEffort. For example, a 16-hour task with 50% effort will take 4 days. - assignedStart - An explicit, user-specified constraint on when the task must start (a date) - assignedFinish - An explicit, user-specified constraint on when the task must finish (a date) - computedStart - The result of scheduling this task based on constraints. Equals assignedStart if that field is set. - computedFinish - The result of scheduling this task based on constraints. Equals assignedFinish if that field is set. - dependencies - A list of other tasks (by id) and the dependencies of this task on them. assignedStart and assignedFinish are likely mutually exclusive (that is, only one can be set; though both could be set if percentEffort was allowed to be computed). Each dependency specifies: - task - The ID of the task this task depends on (I'd really like a better name here. parent is wrong. origin? other?) - type - Dependency type (FS, SS, SF, or FF) - lag - Offset of this task's anchor relative to the anchor of task. (Lag may be negative.) Whether the anchor for the dependency is the start or end of the task depends on the dependency type. For example, if Task B has a FS dependency on Task A with a lag of 1 day, then Task B starts 1 day after Task A finishes. Or, if Task B has an SS dependency in Task A with a lag of 1 day, then Task B starts 1 day after Task A starts. (lag is not scaled by percentEffort.) IProjectCalendar The essential feature of a project calendar is that it knows when work is not done (weekends and holidays) so that task duration can account for that down time. A slightly more sophisticated implementation would track individual resource availability so that the schedule can account for vacations and such. An IProjectCalendar should provide the following methods: Used to have startFromFinish() and finishFromState() here but those are likely scheduler functions. I think what we need here is hoursAvailable(resource, date) which a scheduler can call on each date and subtract the result from the work it is trying to schedule. IProjectResource Describes resource availability. May need calendar options for individuals. Some may want progress rates and costs, and some may not - this is surely controversial. 
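As an illustration of how such interfaces might be declared with Trac's Component Architecture (a hedged sketch; the method signatures here are assumptions, not a settled design):

# Hypothetical tracpm extension-point declarations, following Trac's
# convention of declaring interface methods without "self".
from trac.core import Interface

class IProjectTask(Interface):
    """Abstract view of a ticket as a schedulable task."""

    def work(ticket):
        """Return the remaining man-hours of work for the ticket."""

    def dependencies(ticket):
        """Return a list of (task_id, type, lag) tuples, with type one of FS, SS, SF, FF."""

class IProjectCalendar(Interface):
    """Knows when work can happen."""

    def hoursAvailable(resource, date):
        """Return the number of hours the resource can work on the given date."""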
IProjectSchedule

Computing a schedule involves determining the computedStart and computedFinish for a set of tasks, taking into account the dependencies between tasks, the resources assigned to those tasks, and the resource availability. Where tasks do not have an assignedStart or assignedFinish, the computed schedule prioritizes tasks to keep resources at or below availability. However, assigned dates may force overloaded resources. These overloads can then be reported and either manually resolved or resolved with a resource leveling module apart from the scheduler.

The following rules control the scheduling:

- Choosing a method
  - If using As Soon As Possible, tasks without predecessors should be done today (or at the start of the project, if that's later), their successors when they are done, etc.
  - If using As Late As Possible, tasks without successors should be done at the end of the project and their predecessors done right before they must start, etc.
  - It may be desirable to override the method for individual tasks so exploratory work is done ASAP and clean-up work is done ALAP, regardless of the overall method being used.
- Handle assigned dates
  - It is an error for a task to have both an assignedStart and an assignedFinish.
  - If a task has an assignedStart, the computedStart is the assignedStart and the computedFinish is the computedStart plus the duration.
  - If a task has an assignedFinish, the computedFinish is the assignedFinish and the computedStart is the computedFinish minus the duration.
  - NOTE: Once a task is begun, its assignedStart is set from the computedStart so recomputing a schedule doesn't change its start. We may have to deal with split tasks, however, if a task is begun then put down and resumed later.
- Handle dependencies (This must be done iteratively for all of a task's dependencies. Several FS dependencies may produce different computedStart dates, with the latest one being used. Interaction between other types of dependencies is complex.)
  - If a task has an FS dependency on another task, its computedStart is the other task's computedFinish plus lag. The task's computedFinish is computedStart plus duration.
  - If a task has an SS dependency on another task, its computedStart is the other task's computedStart plus lag. The task's computedFinish is computedStart plus duration.
  - If a task has an FF dependency on another task, its computedFinish is the other task's computedFinish plus lag. The task's computedStart is the computedFinish minus duration.
  - If a task has an SF dependency on another task, its computedFinish is the other task's computedStart plus lag. The task's computedStart is the computedFinish minus duration.
  - NOTE: A scheduling algorithm which handled only FS dependencies would be a very useful first step. (A rough sketch of such an FS-only pass appears at the end of this page.)
- Handle resource limitations
  Tasks assigned to the same resource, but with no other dependency between them and no assigned dates, must be sequenced to keep from overloading the resource. A comparison function can be used to determine which task should go first (as in many sorting algorithms): sequenceTasks(taskA, taskB) would return -1 if taskA should go first, 1 if taskB should go first, or 0 if it doesn't matter. The scheduler should not be aware of the policy implemented in the comparison function. Possible criteria for sequencing the tasks include:
  - priority - More important/urgent work goes first
  - risk - Riskier work goes first
  - work - A function may favor short tasks ("low hanging fruit") or long ones (which inherently have more risk).
  - task type - Fix bugs before doing new features
  - percentComplete - Finishing something that's partially done is better than starting something new
  - "fit" - It's better to start a 2-day task on Wednesday and hold a 4-day task for the next week than to break up the longer task across a weekend.

Other tracpm functions

I'm not sure what interface to put these in, but some other functions that the API should hide are:

- Finding predecessors (immediate and indirect)
- Finding successors (immediate and indirect)
- Finding descendants (children and further generations)
- Finding ancestors (immediate and further generations)
- Finding all related tickets (predecessors, successors, descendants, ancestors, and possibly those that share a resource)
- Finding all the tickets due in a time range, begin..end.
  - If begin is not specified, all due by end.
  - If end is not specified, all that start on or after begin.

Schedule Scenarios

It might be helpful or interesting to consider saving schedules or scenarios. If we stored resource assignment, start, and finish data in a table keyed by schedule and ticket, we could store multiple possible schedules and choose to display different ones. An initial release could have only a single schedule and no facility for creating alternatives. A later refinement could add scenario support.

If a start or finish date was configured for a task, the schedule would copy that data and not allow editing or recomputation of those dates; other tasks would flow around that fixed time. If a task lacked a start or finish date, the scheduling of that task would be fluid and computed by the scheduler. - Greg Troxel

Old design notes

(I'm reworking these into a better flow for the whole document.)

What do we need in the user interface? A "chart these" button on a report page would be very nice. It would be nice to be able to create task dependencies graphically using the Gantt chart as a GUI, or to change a milestone date by dragging it along the chart.

Reader Feedback

Use this section to provide additional comments and/or suggestions.

Suggestion(s) by Jay Walsh

While I like the idea of where you are going with this, the concepts and "design" section have taken a turn towards specific implementations. I would suggest a combination of making these ideas configurable, as well as extending various classes and providers to be extensible/replaceable.

- For example, one may not want a "work" field, since man-hours of effort may have little value in their project. Instead, they may wish to use "days" for calendar time, or "dollars" for contractor costs.
- Instead of "risk", I might want business value.
- For any of this metadata, it would seem logical to support "calculated fields" for the children, such as in the ValuePropagationPlugin.
- Another area is the "providers": the ability to assign a provider to return the value in a custom or default field.
  - For example, for % Complete, a straight calculation of hours done/hours total may work in most cases, but in my specific case we need to provide it manually on some wiki page, which a provider looks up, or something.
  - Or, a more pointed example: for "calculated completion date", let's assume I want to use an Evidence Based Scheduling approach, in which case I may also want to replace the Gantt views with a probable-ship-dates graph. Or maybe not... The ability to do so would be key, however.

It almost seems like any custom or default field added to this concept should be able to specify its very own provider, optionally.

More Feedback Here
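To close the loop on the scheduling rules above, here is a rough, hedged sketch of the FS-only "useful first step" mentioned in the IProjectSchedule section: an as-soon-as-possible pass that ignores resource limits, assigned dates, lag, and the other dependency types. The field names and the hours_available() helper are assumptions carried over from the interface sketch earlier on this page, not an existing plugin's API.

    # Rough ASAP scheduler sketch: FS dependencies only, day granularity,
    # no resource leveling, no assigned dates, and no cycle detection.
    from datetime import timedelta

    def hours_available(resource, day):
        # Stand-in for IProjectCalendar.hours_available(): 8h on weekdays.
        return 0 if day.weekday() >= 5 else 8

    def finish_from_start(start, resource, hours):
        """Walk forward from `start` until `hours` of work fit in the calendar."""
        day, remaining = start, hours
        while remaining > 0:
            remaining -= hours_available(resource, day)
            if remaining > 0:
                day += timedelta(days=1)
        return day

    def schedule_asap(tasks, project_start):
        """tasks maps id -> {'work': hours, 'resource': name, 'depends_on': [ids]}.
        Fills in computedStart/computedFinish in place and returns tasks."""
        def visit(tid):
            task = tasks[tid]
            if 'computedFinish' in task:
                return task['computedFinish']
            # FS rule: start the day after the latest predecessor finishes
            # (zero lag, day granularity); otherwise start at project_start.
            start = max([visit(dep) + timedelta(days=1)
                         for dep in task['depends_on']] or [project_start])
            task['computedStart'] = start
            task['computedFinish'] = finish_from_start(start, task['resource'],
                                                       task['work'])
            return task['computedFinish']
        for tid in tasks:
            visit(tid)
        return tasks

For example, with project_start on a Monday, two independent 8-hour tasks both land on Monday (there is no resource leveling yet) and a third task depending on both lands on Tuesday. Supporting SS/SF/FF dependencies, lag, assigned dates, and the sequenceTasks() policy would layer onto this same traversal.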
http://trac-hacks.org/wiki/ProjectManagementIdeas?version=112
CC-MAIN-2014-23
refinedweb
4,586
54.73
27 June 2013 21:08 [Source: ICIS news]

HOUSTON (ICIS)--US crude oil futures rose on Thursday for the fourth day in a row, in tandem with a surging stock market and better-than-expected economic data.

August West Texas Intermediate (WTI) futures closed at $97.05/bbl, up $1.55, and are once again flirting with $100/bbl. ICE Brent futures also finished up $1.16 to $102.82 on production glitches.

The four-day rally marks a significant turnaround from last week, when Chairman Ben Bernanke hinted that the US Federal Reserve may ease its economic stimulus programme, known as quantitative easing or QE2, as the US economy improves. Oil prices fell afterwards.

August WTI futures dropped briefly in early morning trading, but then rallied through the afternoon on government data that showed that weekly jobless claims declined and personal spending and incomes rose. While the data suggested modest improvements in the US economy, crude futures were also buoyed by a stock market rally that took the Dow Jones Industrial Average back above 15,000.
http://www.icis.com/Articles/2013/06/27/9682688/us-crude-rises-1.55-on-strong-economic-data-stock-rally.html
CC-MAIN-2014-35
refinedweb
177
63.29
The class contains:

- Two double data fields named width and height that specify the width and height of the rectangle. The default values are 1 for both width and height.
- A string data field named color that specifies the color of a rectangle. Hypothetically, assume that all rectangles have the same color. The default color is white.
- A no-arg constructor that creates a default rectangle.
- A constructor that creates a rectangle with the specified width and height.
- The accessor and mutator methods for all three data fields.
- A method named getArea() that returns the area of this rectangle.
- A method named getPerimeter() that returns the perimeter of this rectangle.

My question is that I keep getting this error: RectangleDemo.java:76: reached end of file while parsing. I'm not sure where to put the missing bracket, and I don't know if it will work after I add a bracket. Could someone please help me? This is what I have:

public class RectangleDemo {

    private class Rectangle {

        private double height;
        private double width;
        private String color;

        public Rectangle(double wid, double high){
            height = high;
            width = wid;
        }

        public Rectangle(){
            height = 1;
            width = 1;
            color = "White";
        }

        public void setHeight(double high){
            height = high;
        }

        public void setWidth(double wid){
            width = wid;
        }

        public void setColor(String col){
            color = col;
        }

        public double getArea(){
            return height*width;
        }

        public double getPerimeter(){
            return 2*(height + width);
        }

        public void getColor(){
            System.out.println("Color is: " + color +"\n");
            return;
        }
    }

    public class RectangleDemo {

        public static void main(String[] args){

            Rectangle box1 = new Rectangle();
            Rectangle box2 = new Rectangle(4, 40);
            Rectangle box3 = new Rectangle(3.5, 35.9);

            String Color = "Red";

            box1.setColor(Color);
            box2.setColor(Color);
            box3.setColor(Color);

            box1.getColor();
            box2.getColor();
            box3.getColor();

            System.out.println("The perimeter of the first box is: " + box1.getPerimeter() + "\n");
            System.out.println("The perimeter of the second box is: " + box2.getPerimeter() + "\n");
            System.out.println("The perimeter of the third box is: " + box3.getPerimeter() + "\n");

            System.out.println("The area of the first box is: " + box1.getArea() + "\n");
            System.out.println("The area of the second box is: " + box2.getArea() + "\n");
            System.out.println("The area of the third box is: " + box3.getArea() + "\n");
        }
    }
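For context, here is a hedged sketch of one way the posted code could be rearranged so that it compiles: the second, nested public class RectangleDemo declaration is removed, Rectangle becomes a static nested class (so it can be used from the static main method), and the final closing brace the compiler was complaining about ends the single outer class. The method bodies follow the spirit of the original post; treat it as an illustrative sketch rather than the definitive answer (for instance, it makes getColor() return the String, as the assignment's accessor wording suggests, and it omits a few of the original demo statements for brevity).

    // Sketch only: one possible compilable rearrangement of the posted code.
    public class RectangleDemo {

        // Nested helper class; it could equally live in its own Rectangle.java file.
        static class Rectangle {
            private double width = 1;
            private double height = 1;
            private String color = "White";

            Rectangle() { }

            Rectangle(double width, double height) {
                this.width = width;
                this.height = height;
            }

            void setWidth(double width)   { this.width = width; }
            void setHeight(double height) { this.height = height; }
            void setColor(String color)   { this.color = color; }
            String getColor()             { return color; }
            double getArea()              { return width * height; }
            double getPerimeter()         { return 2 * (width + height); }
        }

        public static void main(String[] args) {
            Rectangle box2 = new Rectangle(4, 40);
            Rectangle box3 = new Rectangle(3.5, 35.9);
            box2.setColor("Red");
            box3.setColor("Red");

            System.out.println("Box 2: color " + box2.getColor()
                    + ", perimeter " + box2.getPerimeter()
                    + ", area " + box2.getArea());
            System.out.println("Box 3: color " + box3.getColor()
                    + ", perimeter " + box3.getPerimeter()
                    + ", area " + box3.getArea());
        }   // closes main
    }       // closes RectangleDemo -- the brace the original code was missing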
https://www.daniweb.com/programming/software-development/threads/280390/design-a-class-named-rectangle-to-represent-a-rectangle
CC-MAIN-2017-26
refinedweb
350
59.4