OpenBSD Will Not Fix PRNG Weakness
snake-oil-security writes "Last fall Amit Klein found a serious weakness in the OpenBSD PRNG (pseudo-random number generator), which allows an attacker to predict the next DNS transaction ID. The same flavor of this PRNG is used in other places, such as the OpenBSD kernel network stack. Several other BSD operating systems copied the OpenBSD code for their own PRNG, so they're vulnerable too: Apple's Darwin-based Mac OS X and Mac OS X Server, and also NetBSD, FreeBSD, and DragonFlyBSD. All the above-mentioned vendors were contacted in November 2007. FreeBSD, NetBSD, and DragonFlyBSD committed a fix to their respective source code trees; Apple refused to provide any schedule for a fix; and OpenBSD decided not to fix it. OpenBSD's coordinator stated, in an email, that OpenBSD is completely uninterested in the problem and that the problem is completely irrelevant in the real world. This was highlighted recently when Amit Klein posted to the BugTraq list."
then exploit it (if you can) (Score:5, Insightful)
nothing says "fix it" faster than a few thousand compromised hosts
release a PoC that gets r00t, inform the security lists and stand back
that's what full disclosure is for.
if it isn't exploitable then BSD can fix it at leisure
or if that's not quick enough, and as it's Open Source, YOU fix it if you are that concerned
now somebody call the whhaaambulance
Re:then exploit it (if you can) (Score:5, Informative)
Anyway, besides rudely just posting a link like that in response, I was going to say that proof-of-concept code has already been published, and his point is that FreeBSD, NetBSD, and DragonFlyBSD have fixes available. Apple is currently working on a fix for OS X. OpenBSD is not planning to fix this. More info can be found in my parent link.
Re: (Score:2, Insightful)
You probably made a typo in a closing tag. Anyway, there's a reason why we have a "preview" button.
Re: (Score:2)
Re:then exploit it (if you can) (Score:4, Informative)
Quantum mechanics delivers true randomness, at least according to the standard interpretation.
Re: (Score:2, Interesting)
Re: (Score:2)
Re: (Score:2, Informative)
I would have thought all algorithmic solutions to random number generation would suffer the same flaw as described in the text. Be in deep shit if it worked any other way.
Re: (Score:2)
Re: (Score:3, Informative)
Re: (Score:2)
Re: (Score:3, Interesting)
Imagine a spin-1/2-particle (e.g. an electron). Such a particle has the peculiar property that if you measure its spin along any chosen axis, you'll always get either 1/2 ("spin up") or -1/2 ("spin down").
OK, let's assume we have just measured the spin in the z direction and got +1/2. Let me first note that this is stable: if we measure the z-spin of the same particle again (assuming it didn't interact in between), we will again get +1/2 each time. That is, once we measure +1/2 in z
Re:then exploit it (if you can) (Score:5, Funny)
Re: (Score:2)
Sure, it's called "The Laws of Physics".
The only problem is, nothing can calculate the result faster than our universe is already doing. It's hard to make something that can calculate the behavior of a quark that is smaller than a quark.
Re: (Score:2)
Kinda hard to make some if nothing in our world is truly random.
Almost all processes in a computer are truly random. The number of electrons crossing this particular trace per second? It's certainly not constant.
The trick in computers is keeping the RAM and the harddisk from going random too fast, such that a temporary illusion of determinism can be achieved. A crossing cosmic-ray particle will flip every last bit at random -- it'll just take sufficiently many centuries that you can imagine your bits to be stable zeros and ones.
Back in the eighties I generated trul
Re: (Score:3, Informative)
Because it is not part of the standard PC architecture ?
Re:then exploit it (if you can) (Score:5, Informative)
Re: (Score:3, Insightful)
True, but if you roll one yourself, the P in PRNG no longer has a second meaning of 'predictable.'
It does if you don't do it right, and you're unlikely to do it right unless you're a cryptographic expert. Just because your algorithm isn't published doesn't mean a competent codebreaker won't be able to crack it [slashdot.org].
Re: (Score:2)
Uh what (Score:4, Insightful)
Re: Uh what (Score:5, Informative)
Thus, it is my guess that even if the attack vectors are deemed serious enough, the OpenBSD team has decided that it doesn't matter, since these protocols were never designed for security anyway, and that one should use DNSSEC and/or IPSEC (or TLS) if one actually wants to be secure (it does raise the question as to why they decided to use a PRNG for those fields from the beginning, though). My second guess is that they don't even consider the attack vectors serious, though, since they probably require a cracked router to be effective anyway.
Indeed, if they do require a cracked router, then I don't see the issue to begin with. One of the attacks was that the attacker could inject data into a TCP stream and such things, and if he has a router cracked, then I'm pretty sure he could forge all the data he wants anyway, without using any particular software attack at all, and likewise with DNS data.
Re: (Score:2, Informative)
Re: (Score:2)
Re: Uh what (Score:5, Informative)
The exploit described in the paper doesn't require a cracked router, just a malicious website. Once you can inject fake DNS entries for bankofamerica.com or ebay.com on some ISP's DNS server, the exploit has paid for itself.
Re: (Score:2)
They'd more likely be used to compromise the user (Score:3, Insightful)
Not everything is about compromising someone's computer.
Re:Uh what (Score:5, Interesting)
It is entirely believable to me. Back in 1995 I told Marc Andreessen at Netscape that he had a serious problem with the random number generator used to choose session keys for SSL. There was simply not enough randomness going in for there to be 128 bits going out.
Marc had every reason to listen to me, I had broken SSL 1.0 in ten minutes when he tried to demonstrate it at MIT. But it took several weeks to drill the problem into his thick skull.
So they eventually asked me for a description of how to do the thing right.
A year later the exact same bug was discovered independently. By this time they had hired some competent crypto people. I spoke to Taher about the problem later and his explanation was that they found the design note on the PRNG which was so comprehensive that they didn't think it necessary to check the actual code.
Re:Uh what (Score:5, Funny)
That's because he's so l33t he can pick a Slashdot id at random every time he posts.
Re: (Score:2, Informative)
On SSL/TLS and similar security/crypto issues, he is always interesting and more likely to be right than not.
On supporting large scientific computing platforms, he is always interesting and more likely to be right than not. His system administration c.v. is impressive.
On interpreting the experiments performed
Re: (Score:2)
Re: (Score:2, Funny)
Re: (Score:2)
Re: (Score:3, Informative)
Marc's 128 bit encryption used a random seed with 24 bits worth of ergodicity. So it was only 24 bit secure.
And SSL 1.0 had no integrity protections whatsoever, which would have been pretty bad even if he wasn't using a stream cipher. So even if he used a 256 bit cipher it would have been broken.
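To put the 24-bit figure in perspective, a quick back-of-the-envelope calculation shows why such a keyspace is trivially brute-forceable (the trial rate here is an assumed figure for illustration, not a measured one):

```python
# 24 bits of seed entropy means at most 2**24 distinct session keys can
# ever be generated, regardless of the cipher's nominal 128-bit key length.
seed_space = 2 ** 24          # 16,777,216 candidate seeds
trial_rate = 1_000_000        # assumed key trials per second, for scale

print(seed_space)             # 16777216
print(seed_space / trial_rate)  # well under a minute to exhaust
```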
What makes you think this is my only Slashdot id?
Oh and in response to the AC in the other thread, no my job title is not
Re: (Score:2)
I have noticed that people are complete and utter idiots about two very important cryptographic algorithms: PRNGs and hash functions. I can't believe the number of people who still use a simple MD5 hash for software download verification. First, it isn't signed, so all someone has to do is alter both the hash and the code. Secondly, even if it were, it's not very hard to make two pieces of code, one innocuous and one malicious, that both have the same MD5 hash, and this has been true for years.
DNS cache pois
MD5 (Score:2)
I'm calling you on this claim.
Re: (Score:2, Interesting)
If BSD used the GPL, then Apple still wouldn't be providing a fix, because they wouldn't be using OSS at all. Neither licence is better than the other in this regard.
I don't agree with the trolling from either camp. The licence you release your code under is a matter of personal choice.
Re: (Score:3, Interesting)
While I would agree with you on the matter of trolling, it really gets old when BSD users trumpet it constantly, whereas in my experience GPL supporters tend to realise there are limitations. Of course, I'm sure it is seen the same way across the bridge.
Re: (Score:3, Interesting)
Out of the four items you mention, only one is GPL. You could have done much better to suggest such examples as GCC et al.
The great thing about the BSD license is that when people do contribute back (and they do, even big companies like Apple), you know it's because they *want* t
Re: (Score:2)
Re: (Score:2)
Software freedom is better when its inalienable. (Score:4, Interesting)
So, in other words, the grandparent poster's point is valid and the larger, more important issue remains: proprietary derivatives of non-copylefted free software use the free software community as a market instead of treating us as equals.
Nobody "has" to under the GPL; to the degree that what you said is true, the same is true of the GPL. Statements like yours ignore all the choices that lead up to distributing source code. There's nothing in the GPL that compels conveyance. There are only conditions in the GPL that compel source code conveyance with object code conveyance. It's trivially easy to not improve GPL-covered software, or to not distribute the improved version. The larger issue here is whether the free software community owes Apple anything. We don't. If they want to join us and work with us, great; if not, they can write their own software. The GPL helps ensure that when people and organizations convey copies of programs they do so as equals. NeXT (now owned by Apple) already tried distributing GCC derivative software without distributing complete corresponding source code when GCC was under GPLv2. It made NeXT look like an ass and put at risk their ability to distribute GCC at all. NeXT later rectified the situation by distributing complete corresponding source code in compliance with GPLv2.
Re: (Score:3, Informative)
Oh, and captain hater, last time I checked, the fix would be shared [apple.com].
Re: (Score:2)
Because that is why they aren't using webkit, apache, samba, cups (or employ the guy who writes it), and several others in their default install.
....none of which touch proprietary hardware or deal with DRM.
Re: (Score:3, Insightful)
Re: (Score:2) [opensource.org]
Re: (Score:2, Insightful)
Apple are free to release their putative fix to the community, or not - their free choice. That's one more freedom, relative to being obliged to release any changes they make which lead to a binary release outside of Apple, as the GPL would oblige.
There are plenty of folk who see that as a feature not a flaw.
Re: (Score:2)
Re: (Score:2, Informative)
Re: (Score:2)
(So possibly the GPLv3 is compatible with the Affero license... but the resulting code must be released under Affero.)
Re: (Score:2)
Re:Uh what ... yeah (Score:4, Insightful)
Don't conflate "things you want" with "freedom", please.
Re: (Score:2)
I personally don't care
Re: (Score:2)
Re: (Score:3, Insightful)
Re: (Score:2)
GPL has the same flaw, ya know. (Score:3, Insightful)
( * which only says something about making the code, and thus the fix, available if the code, or compiled version thereof, is distributed. )
The difference is trivial, isn't it? In both cases an existing fix would not automatically be contributed back.
Re: (Score:2)
Freedom doesn't me
Re: (Score:2)
And yes, Affero GPLv3 would indeed make this apply to Google if they were using it as a server-side solution. But if it's in-house only - they still don't have to contribute back. If they (or anyone) develop for a specific client only, then only that client
Re: (Score:2)
Re: (Score:2)
The phrase "security through obscurity" has a well established meaning in the discussion of security measures. It refers specifically to systems that are only secure if the design is not known to the attacker.
Specific passwords (or other shared secrets like symmetric keys) are not part of the design. The design merely says that you use one, not which one you use - and security of the shared secret is only based on keeping which key / password
Why this is so bad: DNS cache poisoning (Score:2, Informative)
OpenBSD secure?! (Score:4, Interesting)
Oh for Bob's sake! (Score:2, Insightful)
But when the PRNG for a non-MS operating system is shown to have a similar (but not identical) problem, it's "irrelevant"?
Troll? Redundant? (Score:2)
Perception is as important as actuality (Score:2, Insightful)
Can someone say how hard a fix would be? Surely, for the sake of a bit of work, they are committing a public relations blunder!
Re: (Score:2)
The next thing you know, the perception will be that anything *nix or open source is not really interested in security.
Remember that it is easier to lose reputation than to gain it.
Re: (Score:2, Insightful)
Nobody forces you to use OpenBSD, and nobody prevents you from patching it yourself. They are entirely in their rights to say "No" even if it is a stupid thing to do.
XYZ Attacks were also unthinkable a while ago. (Score:2, Informative)
But I wanted to show that most of today's security threats
were first perceived as hard to exploit or totally unthinkable, even minor security problems
which were later upgraded to the status of a serious threat, because the first look turned out to be wrong.
So when developers commit themselves to building the most secure OS, and then on the other hand show such disinterest
Strike 2, OpenBSD. (Score:5, Insightful)
First they refused to implement WPA (despite the other BSDs having it), because it "doesn't provide real security" and "just use IPSEC".
Now they're refusing to address a weakness in their network stack (despite the other BSDs addressing it), again with the implication that everybody should just jump to IPSEC. What if you're in a situation where an IPSEC rollout is impractical or impossible?
Whatever happened to defense in depth? Whatever happened to "secure by default"? Whatever happened to constructive paranoia, such as randomizing of libc addresses, that was unlikely to have any real impact on security but was a nice extra, just in case? Why must I now upgrade to NetBSD to get security features that are lacking in OpenBSD? Isn't the shoe on the wrong foot?
What happened? Was there a change of management? Is OpenBSD under the thumb of a douchebag patch manager lately? Is this going to go away at some point? Those of us that sleep with OpenBSD firewalls like a gun under our pillow are taking notice.
Re: (Score:2, Insightful)
Re: (Score:3, Insightful)
Umm, they're completely correct to take this stance. WPA is far inferior to IPSEC, security-wise. It's OpenBSD's job to help insulate you from insecure technologies. We could easily say, "Just because FreeBSD allows one-character passwords, OpenBSD should, too!" And you know what? We'd be wrong to think in that way.
What happened? Was there a change of management? Is OpenBSD
Re:Strike 2, OpenBSD. (Score:5, Insightful)
So, OpenBSD is refusing to put a locking mechanism on the doorknob because it wants to make people use a deadbolt. Me, I'd want both; if it turned out my deadbolt had a defect and was thus easily defeated, the doorknob lock would at least provide some security.
Theo is slow to change, but he will. (Score:5, Interesting)
Basically, he's very conservative, very resistant to change, and don't forget that's one of the things that made OpenBSD what it was to begin with... but if it really matters he'll come around.
Re: (Score:3, Interesting)
My impression is that this is an overstatement. OpenBSD will get WPA when someone writes it well enough for it to get in. Although the current devs don't want to write it themselves (as they don't feel they need it), they have left the door open for someone else to write it.
"doesn't provide real security" and "just use IPSEC" aren't reasons why it won't get in at all but reasons why that particular developer(s) isn't going to bother writing it themselves. OpenBSD is probably
Re: (Score:3, Interesting)
They're doctrinaire, sure (Score:3, Interesting)
OpenBSD won't fix? (Score:2)
What you really mean is 'Theo doesn't use this feature, so it can't possibly be important to anyone else in the world'. OBSD is a one-man show.
How many people actually use PRNG? (Score:2)
Re: (Score:3, Interesting)
Re: (Score:3, Informative)
It's my understanding that urandom often uses data from interrupts, keyboard input, device controllers etc. to increase the entropy of the random numbers it produces.
Hardware random number generators are not considered pseudo-random. As I understand it they usually ampli
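As an aside, the PRNG-vs-entropy distinction is easy to demonstrate from userland. This sketch uses Python's random module as a stand-in for an algorithmic PRNG and os.urandom for a kernel entropy-fed source:

```python
import os
import random

# A deterministic PRNG: seeding it identically reproduces the stream.
random.seed(1234)
first = [random.randrange(2 ** 16) for _ in range(5)]
random.seed(1234)
second = [random.randrange(2 ** 16) for _ in range(5)]
assert first == second          # fully predictable given the seed

# The kernel entropy pool: no internal state an attacker can re-seed.
a, b = os.urandom(16), os.urandom(16)
assert a != b                   # equal only with probability 2**-128
```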
Code excerpt for the curious... (Score:5, Funny)
random vs pseudo-random? (Score:2)
Re: (Score:2)
Re: (Score:2)
The rest of it either isn't necessarily random, or isn't necessarily cheap enough / fast enough. And PRNGs can be made hard enough to guess that no one will. It's kind of like how RSA is possible to crack, if someone guesses the right prime factors, but with a sufficiently large key size, you can get to where all of the matter in the Universe, assembled into chips that vaguely resemble today's p
From the forum post: (Score:2)
But it gets more interesting. Several other BSD operating systems
copied the OpenBSD code for their own IP ID PRNG, so they're
vulnerable too. This is particularly so with Apple's Mac OS X,
Mac OS X Server and Darwin, but also with NetBSD, FreeBSD and
DragonFlyBSD (the 3 latter O/S however only use this PRNG when
the kernel flag net.inet.ip.random_id is set to 1; it is 0 by
default, resulting in a sequential counter to be used instead...).
This is really a ways out of my depth, but my naive understanding is that the PRNG is a problem because it is not actually random, and can therefore be predicted. Yet, the above states that the other BSDs in particular don't even use the randomization by default, and instead use the most predictable sequence possible. Am I missing something, or doesn't that mean the other BSDs are significantly more at risk (for whatever value of 'at risk' this threat actually corresponds to)?
-Ted
Why doesn't software trust /dev/[u]random ? (Score:3, Informative)
I had an interesting discussion with Amit regarding all the hacks people (including the Bind people) do to try to roll their own random number generator, and it prompted me to review our own IP randomization code (and the 'off' default). After review I was decidedly uneasy about its security, mainly because it was trying to use an algorithmically generated cycle for a tiny namespace (16 bits, actually 15 the way it was coded). The problem with the IP sequence space is that you can't just randomize it; you also have to ensure that sequence numbers are not immediately repeated. DNS has similar issues.
I gave up trying to improve the algorithm, decided to throw in the towel, and allocated 128KB of memory to do a look-ahead running shuffle of the 65536 possible sequence numbers using the system's PRNG. It's not possible to do better than that, frankly. We also decided to turn on IP randomization by default.
So that brings me back to the question: why the hell doesn't bind have an option to use the system PRNG? Not all systems have a good random number generator, but I trust ours far more than the junk coded into bind. For that matter, I don't really mind if bind ate another 128K of memory to secure its own sequence space, if that is what it takes.
I know enough about cryptology to know that I am not a cryptographer. But regardless of that, I can still get a good feel for someone else's code, and what BIND does scares me. They need to change their code to default to something more secure, even if it is memory intensive. If they want to give their users the option to use the less memory-intensive algorithm, that's fine with me, but the default needs to be more secure.
DNS has its own design issues, but that is no excuse for software to exacerbate them.
-Matt
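The look-ahead running shuffle described above might be sketched as follows. This is an illustrative Python rendition, not the actual kernel code: random stands in for the system PRNG, and the 65536-entry table corresponds to the 128KB buffer mentioned. Because the table always holds a permutation of the full 16-bit space, no ID can be emitted twice in a row:

```python
import random

class IpIdShuffle:
    """Illustrative look-ahead running shuffle over the 16-bit IP ID
    space. The table is always a permutation of 0..65535, so consecutive
    emissions, which come from distinct slots, can never repeat."""

    def __init__(self, lookahead=4096):
        self.table = list(range(65536))   # ~128KB if stored as uint16_t[]
        self.pos = 0
        self.lookahead = lookahead

    def next_id(self):
        # Swap the current slot with a random slot inside the look-ahead
        # window, emit the value now at the current slot, then advance.
        j = (self.pos + random.randrange(self.lookahead)) % 65536
        self.table[self.pos], self.table[j] = self.table[j], self.table[self.pos]
        value = self.table[self.pos]
        self.pos = (self.pos + 1) % 65536
        return value

gen = IpIdShuffle()
ids = [gen.next_id() for _ in range(1000)]
assert all(0 <= i < 65536 for i in ids)
assert all(x != y for x, y in zip(ids, ids[1:]))   # no immediate repeats
```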
Re:So much for high security (Score:5, Insightful)
The OpenBSD guys are pretty defensive about security. If they say it is not a problem, I am inclined to believe them.
Re: (Score:2, Informative)
OpenBSD's argument is that a patch would not make it more secure... so your point is moot.
Re: (Score:3, Insightful)
Re:If the OpenBSD devs say it isn't a security fla (Score:2, Insightful)
I see you don't remember how OpenBSD developers downplayed the remote root vulnerability in the mbuf code, until Core Security gave them a working exploit.
And this is the mega-randomness the OpenBSD team was so proud of.
Re:What?? (Score:5, Informative)
This could potentially provide a platform for attacks involving prediction of IP sequences and thus TCP data injection attacks.
Where is local machine access required for that? It could enable attacks on the network traffic itself, merely by knowing which operating systems are involved in it.
Re: (Score:2)
Your dollar, your time (Score:2, Insightful)
As we can see, even Microsoft can't seem to be vigilant on everything at once.
And the question to ask would be, what alternative? OpenBSD has (yet another) theoretical vulnerability. Is it one that affects the things you use obsd for?
MSWxxx has yet another real vulnerability. Is it one that affects what you use MSWxxx for?
It's better to allocate your time to be vigilant on things that matter (to you).
Re: (Score:3, Insightful)
If flawed, predictable PRNG code is so 'irrelevant in the real world' why does even Microsoft seek to improve upon it?
"Strengthens the cryptography platform with a redesigned random number generator, which leverages the Trusted Platform Module (TPM), when present, for entropy and complies with the latest standards. The redesigned RNG uses the AES-based pseudo-random number generator (PRNG) from NIST Special Publication 800-90 by default. The Dual Elliptical Curve (Dual EC) PRNG from SP 800-90 is also availa
Re: (Score:2)
Because they have like six Turing award winners working for them including Butler Lampson? Of the top fifty people in network security you will find about a quarter work for Microsoft, more than for any other company, including IBM, RSA and VeriSign. They have the cash and they use it to buy the best.
Microsoft's problem is that you can't buy your way out of a shitty legacy code base in
Re: (Score:2)
which leverages the Trusted Platform Module (TPM)
I smell marketing droid oil. I do favor fixing security issues, but as soon as the TPM becomes involved, rational assumptions vanish. MS has a history of *fixing* things to include new technologies they are having a hard time pushing. TPM is a huge technology for them that they have had an incredibly difficult time pushing. Microsoft needs this technology to win for their game plan to succeed. Trusted Computing in general and remote control of custome
Using hardware to assist a PRNG != lock-in (Score:2)
Your assertion that using hardware to reduce the determinism, and thus the predictability, of a PRNG must be some sort of strategy to lock hardware and software together betrays an ignorance of the problems that comp
Re: (Score:2)
Still Alive, BSD version, sung to the tune of Jonathan Coulton's "Still Alive" from the game "Portal," originally vocalised by Ellen McLain in character as GLaDOS. I be asserting me fair use right of parody, yarr!
This was a triumph,
I'm logging a note here: Huge success,
We had to dummynet the heavy traffic,
BSD Unix (R),
We code what we must because we can,
For the good of all of us,
Including vendors as well,
But there's no sense crying over closed source
http://it.slashdot.org/story/08/02/10/0136236/openbsd-will-not-fix-prng-weakness?sdsrc=prev
The whole process is very simple to get to grips with, and it shouldn't take too long before you wonder how you were able to get anything done before! There are huge gains to be made from TDD—namely, the quality of your code improving, but also clarity and focus on what it is that you are trying to achieve, and the way in which you will achieve it. TDD also works seamlessly with agile development, and can best be utilized when pair-programming, as you will see later on.
In this tutorial, I will introduce the core concepts of TDD, and will provide examples in Python, using the nosetests unit-testing package. I will additionally offer some alternative packages that are also available within Python.
What Is Test-Driven Development?
TDD, in its most basic terms, is the process of implementing code by writing your tests first, seeing them fail, then writing the code to make the tests pass. You can then build upon this developed code by appropriately altering your test to expect the outcome of additional functionality, then writing the code to make it pass again.
You can see that TDD is very much a cycle, with your code going through as many iterations of tests, writing, and development as necessary, until the feature is finished. By implementing these tests before you write the code, it brings out a natural tendency to think about your problem first. While you start to construct your test, you have to think about the way you design your code. What will this method return? What if we get an exception here? And so on.
By developing in this way, it means you consider the different routes through the code, and cover these with tests as needed. This approach allows you to escape the trap that many developers fall into (myself included): diving into a problem and writing code exclusively for the first solution you need to handle.
The process can be defined as such:
- Write a failing unit test
- Make the unit test pass
- Refactor
Repeat this process for every feature, as is necessary.
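As a tiny illustration of that cycle (the slugify helper here is a made-up example, not part of the tutorial's project), one iteration might look like this — the test alone fails, and the function below it is the simplest code that makes it pass:

```python
import unittest

# Step 1: a failing test for a (hypothetical) slug-making helper.
class TestSlugify(unittest.TestCase):
    def test_slugify_lowercases_and_joins_words(self):
        self.assertEqual("hello-world", slugify("Hello World"))

# Step 2: the simplest code that makes the test pass.
def slugify(text):
    return "-".join(text.lower().split())

# Step 3: refactor if needed, then repeat with the next requirement.
```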
Agile Development With Test-Driven Development
TDD is a perfect match for the ideals and principles of the Agile development process, which strives to deliver incremental updates to a product with true quality, as opposed to quantity. The confidence in your individual units of code that unit testing provides means that you meet this requirement to deliver quality, while eradicating issues in your production environments.
TDD comes into its own when pair programming, however: it lets you mix up your development workflow as the pair sees fit. For example, one person can write the unit test, see it fail, and then allow the other developer to write the code to make the test pass.
The roles can either be switched each time, each half day, or every day as you see fit. This means both parties in the pair are engaged, focused on what they are doing, and checking each other's work at every stage. This translates to a win in every sense with this approach, I think you'd agree.
TDD also forms an integral part of the Behaviour-Driven Development process, which is, again, writing tests up front, but in the form of acceptance tests. These ensure a feature "behaves" in the way you expect from end to end. More information can be found in an upcoming article here on Tuts+ that will cover BDD in Python.
Syntax for Unit Testing
The main methods that we make use of in unit testing for Python are:
assert: base assert allowing you to write your own assertions
assertEqual(a, b): check a and b are equal
assertNotEqual(a, b): check a and b are not equal
assertIn(a, b): check that a is in the item b
assertNotIn(a, b): check that a is not in the item b
assertFalse(a): check that the value of a is False
assertTrue(a): check the value of a is True
assertIsInstance(a, TYPE): check that a is of type "TYPE"
assertRaises(ERROR, a, args): check that when a is called with args that it raises ERROR
There are certainly more methods available to us, which you can view—see the Python Unit Test Docs—but, in my experience, the ones listed above are among the most frequently used. We will make use of these within our examples below.
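To make these concrete, here is a small, contrived test case exercising several of the assertions listed above:

```python
import unittest

class AssertionExamples(unittest.TestCase):
    def test_common_assertions(self):
        self.assertEqual(2 + 2, 4)                       # values are equal
        self.assertNotEqual("spam", "eggs")              # values differ
        self.assertIn(3, [1, 2, 3])                      # membership
        self.assertNotIn(4, [1, 2, 3])                   # non-membership
        self.assertTrue(1 < 2)                           # truthy condition
        self.assertFalse(1 > 2)                          # falsy condition
        self.assertIsInstance(3.14, float)               # type check
        self.assertRaises(ZeroDivisionError, lambda: 1 / 0)  # expected error
```

Run it like any other test file, e.g. with nosetests or python -m unittest.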
Installing and Using Python's Nose
Before starting the exercises below, you will need to install the
nosetest test runner package. Installation of the
nosetest runner is straightforward, following the standard "pip" install pattern. It's also usually a good idea to work on your projects using virtualenvs, which keep the packages you use for various projects separate. If you are unfamiliar with pip or virtualenvs, you can find documentation on them here: VirtualEnv, PIP.
The pip install is as easy as running this line:
$ pip install nose
Once installed, you can execute a single test file.
$ nosetests example_unit_test.py
Or execute a suite of tests in a folder.
$ nosetests /path/to/tests
The only standard you need to follow is to begin each test method's name with "test_", to ensure that the nosetest runner can find your tests!
Options
Some useful command line options that you may wish to keep in mind include:
-v: gives more verbose output, including the names of the tests being executed.
-s or --nocapture: allows output of print statements, which are normally captured and hidden while executing tests. Useful for debugging.
--nologcapture: allows output of logging information.
--rednose: an optional plugin, which can be downloaded here, but provides colored output for the tests.
--tags=TAGS: allows you to place an @TAG above a specific test to only execute those, rather than the entire test suite.
Example Problem and Test-Driven Approach
We are going to take a look at a really simple example to introduce both unit testing in Python and the concept of TDD. We will write a very simple calculator class, with add, subtract and other simple methods as you would expect.
Following a TDD approach, let's say that we have a requirement for an
add function, which will determine the sum of two numbers, and return the output. Let's write a failing test for this.
In an empty project, create two Python packages, app and test. To make them Python packages (and thus support importing of their files in the tests later on), create an empty file called __init__.py in each directory. This is Python's standard structure for projects and must be done to allow items to be imported across the directory structure. For a better understanding of this structure, you can refer to the Python packages documentation. Create a file named test_calculator.py in the test directory with the following contents.
import unittest

class TddInPythonExample(unittest.TestCase):
    def test_calculator_add_method_returns_correct_result(self):
        calc = Calculator()
        result = calc.add(2, 2)
        self.assertEqual(4, result)
Writing the test is fairly simple.
- First, we import the standard unittest module from the Python standard library.
- Next, we need a class to contain the different test cases.
- Finally, a method is required for the test itself, with the only requirement being that its name begins with "test_", so that it may be picked up and executed by the nosetest runner, which we will cover shortly.
With the structure in place, we can then write the test code. We initialize our calculator so that we can execute its methods. Following this, we can call the add method which we wish to test, and store its return value in the variable result. Once this is complete, we can make use of unittest's assertEqual method to ensure that our calculator's add method behaves as expected.
Now you will use the nosetests runner to execute the test. You could execute the test using the standard unittest runner, if you wish, by adding the following block of code to the end of your test file.
if __name__ == '__main__':
    unittest.main()
This will allow you to run the test using the standard way of executing Python files, $ python test_calculator.py. However, for this tutorial you are going to make use of the nosetests runner, which has some nice features, such as the ability to discover and run all the tests under a directory.
$ nosetests test_calculator.py
E
======================================================================
ERROR: test_calculator_add_method_returns_correct_result (test.test_calculator.TddInPythonExample)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/user/PycharmProjects/tdd_in_python/test/test_calculator.py", line 6, in test_calculator_add_method_returns_correct_result
    calc = Calculator()
NameError: global name 'Calculator' is not defined
----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (errors=1)
From the output nosetests has given us, we can see that the problem relates to us not importing Calculator. That's because we haven't created it yet! So let's go and define Calculator in a file named calculator.py under the app directory, and import it:
class Calculator(object):
    def add(self, x, y):
        pass
import unittest
from app.calculator import Calculator

class TddInPythonExample(unittest.TestCase):
    def test_calculator_add_method_returns_correct_result(self):
        calc = Calculator()
        result = calc.add(2,2)
        self.assertEqual(4, result)

if __name__ == '__main__':
    unittest.main()
Now that we have Calculator defined, let's see what nosetests indicates to us now:
$ nosetests test_calculator.py
F
======================================================================
FAIL: test_calculator_add_method_returns_correct_result (test.test_calculator.TddInPythonExample)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/user/PycharmProjects/tdd_in_python/test/test_calculator.py", line 9, in test_calculator_add_method_returns_correct_result
    self.assertEqual(4, result)
AssertionError: 4 != None
----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (failures=1)
So, obviously, our add method is returning the wrong value, as it doesn't do anything at the moment. Handily, nosetests gives us the offending line in the test, so we can confirm what we need to change. Let's fix the method and see if our test passes now:
class Calculator(object):
    def add(self, x, y):
        return x+y
$ nosetests test_calculator.py
.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK
Success! We have defined our add method and it works as expected. However, there is more work to do around this method to ensure that we have tested it properly.
We have fallen into the trap of just testing the case we are interested in at the moment.
What would happen if someone were to add anything other than numbers? Python will actually allow for the addition of strings and other types, but in our case, for our calculator, it makes sense to only allow adding of numbers. Let's add another failing test for this case, making use of the assertRaises method to test if an exception is raised here:

import unittest
from app.calculator import Calculator

class TddInPythonExample(unittest.TestCase):
    def setUp(self):
        self.calc = Calculator()

    def test_calculator_add_method_returns_correct_result(self):
        result = self.calc.add(2,2)
        self.assertEqual(4, result)

    def test_calculator_returns_error_message_if_both_args_not_numbers(self):
        self.assertRaises(ValueError, self.calc.add, 'two', 'three')

if __name__ == '__main__':
    unittest.main()
You can see from above that we added the test, and we are now checking for a ValueError to be raised if we pass in strings. We could also add more checks for other types, but for now we'll keep things simple. You may also notice that we've made use of the setUp() method. This allows us to put things in place before each test case. So, as we need our Calculator object to be available in both test cases, it makes sense to initialize it in setUp(). Let's see what nosetests indicates to us now:
$ nosetests test_calculator.py
.F
======================================================================
FAIL: test_calculator_returns_error_message_if_both_args_not_numbers (test.test_calculator.TddInPythonExample)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/user/PycharmProjects/tdd_in_python/test/test_calculator.py", line 15, in test_calculator_returns_error_message_if_both_args_not_numbers
    self.assertRaises(ValueError, self.calc.add, 'two', 'three')
AssertionError: ValueError not raised
----------------------------------------------------------------------
Ran 2 tests in 0.001s

FAILED (failures=1)
Clearly, nosetests indicates to us that we are not raising the ValueError when we expect to be. Now that we have a new failing test, we can code the solution to make it pass.
class Calculator(object):
    def add(self, x, y):
        number_types = (int, long, float, complex)

        if isinstance(x, number_types) and isinstance(y, number_types):
            return x + y
        else:
            raise ValueError
From the code above, you can see that we've added a small check on the types of the values, ensuring they match what we want. An alternative approach would be duck typing: simply attempt the operation as if the arguments were numbers, and "try/except" the errors that would be raised otherwise. In our case, though, strings can also be concatenated with the plus symbol, and we only want to allow numbers, so we must check up front. Using the built-in isinstance function allows us to ensure that the provided values can only be numbers.
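As a side note, the duck-typing alternative mentioned above might look something like the following sketch (DuckCalculator is a hypothetical name, not part of the tutorial; it still needs an explicit string check because strings happily support the plus operator):

```python
class DuckCalculator(object):
    """Sketch of a duck-typed add: attempt the operation and translate
    failures into the ValueError our tests expect."""

    def add(self, x, y):
        # Strings support '+' (concatenation), so duck typing alone
        # would not reject them; they need an explicit check.
        if isinstance(x, str) or isinstance(y, str):
            raise ValueError('strings are not valid operands')
        try:
            return x + y
        except TypeError:
            raise ValueError('operands must support numeric addition')

print(DuckCalculator().add(2, 2))  # -> 4
```

Either style would satisfy the tests above; the explicit isinstance check just makes the accepted types easier to read at a glance.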
To complete the testing, there are a couple of different cases that we can add. As there are two arguments, either one could potentially not be a number. Add test cases to cover all the scenarios.

import unittest
from app.calculator import Calculator

class TddInPythonExample(unittest.TestCase):
    def setUp(self):
        self.calc = Calculator()

    def test_calculator_add_method_returns_correct_result(self):
        result = self.calc.add(2,2)
        self.assertEqual(4, result)

    def test_calculator_returns_error_message_if_both_args_not_numbers(self):
        self.assertRaises(ValueError, self.calc.add, 'two', 'three')

    def test_calculator_returns_error_message_if_x_arg_not_number(self):
        self.assertRaises(ValueError, self.calc.add, 'two', 3)

    def test_calculator_returns_error_message_if_y_arg_not_number(self):
        self.assertRaises(ValueError, self.calc.add, 2, 'three')

if __name__ == '__main__':
    unittest.main()
When we run all these tests now, we can confirm that the method meets our requirements!
$ nosetests test_calculator.py
....
----------------------------------------------------------------------
Ran 4 tests in 0.001s

OK
Other Unit Test Packages
Another popular test runner is pytest (also invoked as py.test). I've found pytest to be useful when executing single tests, as opposed to a suite of tests.
To install the pytest runner, follow the same pip install procedure that you followed to install nose. Simply execute $ pip install pytest and it will grab the latest version and install it to your machine. You can then execute the runner against your suite of tests by providing the directory of your test files, $ py.test test/, or you can provide the path to the test file you wish to execute: $ py.test test/test_calculator.py.
$ py.test test/test_calculator.py
================================================================= test session starts =================================================================
platform darwin -- Python 2.7.6 -- py-1.4.26 -- pytest-2.6.4
collected 4 items

test/test_calculator.py ....

============================================================== 4 passed in 0.02 seconds ===============================================================
An example of pytest's output when printing from within your tests or code is shown below. This can be useful for quickly debugging your tests and seeing some of the data being manipulated. NOTE: you will only be shown output from your code on errors or failures in your tests; otherwise pytest suppresses any output.
$ py.test test/test_calculator.py
================================================================= test session starts =================================================================
platform darwin -- Python 2.7.6 -- py-1.4.26 -- pytest-2.6.4
collected 4 items

test/test_calculator.py F...

====================================================================== FAILURES =======================================================================
________________________________________ TddInPythonExample.test_calculator_add_method_returns_correct_result _________________________________________

self = <test.test_calculator.TddInPythonExample testMethod=test_calculator_add_method_returns_correct_result>

    def test_calculator_add_method_returns_correct_result(self):
        result = self.calc.add(3, 2)
>       self.assertEqual(4, result)
E       AssertionError: 4 != 5

test/test_calculator.py:11: AssertionError
---------------------------------------------------------------- Captured stdout call -----------------------------------------------------------------
X value is: 3
Y value is: 2
Result is 5
========================================================= 1 failed, 3 passed in 0.03 seconds ==========================================================
unittest
Python's built-in unittest package, which we have used to create our tests, can execute the tests itself and gives nice output. This is useful if you don't wish to install any external packages and want to keep everything in the standard library. To use it, simply add the following block to the end of your test file.
if __name__ == '__main__':
    unittest.main()
Execute the test using $ python test/test_calculator.py. Here is the output you can expect:
$ python test/test_calculator.py
....
----------------------------------------------------------------------
Ran 4 tests in 0.004s

OK
Debug Code With PDB
Often when following TDD, you will encounter issues with your code and your tests will fail. There will be occasions where, when your tests do fail, it isn't immediately obvious why. In such instances, it will be necessary to apply some debugging techniques to understand exactly how the code is manipulating the data and why it is not producing the outcome you expect.
Fortunately, when you find yourself in such a position, there are a couple of approaches you can take to understand what the code is doing and rectify the issue to get your tests passing. The simplest method, and one many beginners use when first writing Python code, is to add print statements to the code.
Debug With Print Statements
If you deliberately alter the calculator code so that it fails, you can get an idea of how debugging your code will work. Change the code in the add method of app/calculator.py to actually subtract the two values.
class Calculator(object):
    def add(self, x, y):
        number_types = (int, long, float, complex)

        if isinstance(x, number_types) and isinstance(y, number_types):
            return x - y
        else:
            raise ValueError
When you run the tests now, the test which checks that your add method correctly returns 4 when adding 2 and 2 fails, as it now returns 0. To check how it is reaching this conclusion, you can add some print statements to verify that the method is receiving the two values correctly and to inspect the result. This will lead you to conclude that the addition logic is incorrect. Add the following print statements to the code in app/calculator.py.
class Calculator(object):
    def add(self, x, y):
        number_types = (int, long, float, complex)

        if isinstance(x, number_types) and isinstance(y, number_types):
            print 'X is: {}'.format(x)
            print 'Y is: {}'.format(y)
            result = x - y
            print 'Result is: {}'.format(result)
            return result
        else:
            raise ValueError
Now when you execute nosetests against the tests, it nicely shows you the captured output for the failing test, giving you a chance to understand the problem and fix the code to perform addition rather than subtraction.
$ nosetests test/test_calculator.py
F...
======================================================================
FAIL: test_calculator_add_method_returns_correct_result (test.test_calculator.TddInPythonExample)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/user/PycharmProjects/tdd_in_python/test/test_calculator.py", line 11, in test_calculator_add_method_returns_correct_result
    self.assertEqual(4, result)
AssertionError: 4 != 0
-------------------- >> begin captured stdout << ---------------------
X is: 2
Y is: 2
Result is: 0

--------------------- >> end captured stdout << ----------------------

----------------------------------------------------------------------
Ran 4 tests in 0.002s

FAILED (failures=1)
Advanced Debug With PDB
As you start to write more advanced code, print statements alone may not be enough, and they quickly become tiresome to scatter around and clean up later. Because debugging is such a common part of writing code, tools have evolved to make debugging Python code easier and more interactive.
One of the most commonly used tools is pdb (the Python Debugger). pdb is included in the standard library and simply requires adding one line where you would like to stop program execution and enter pdb; this line is typically known as the "breakpoint". Using our failing code in the add method, try adding the following line before the two values are subtracted.
class Calculator(object):
    def add(self, x, y):
        number_types = (int, long, float, complex)

        if isinstance(x, number_types) and isinstance(y, number_types):
            import pdb; pdb.set_trace()
            return x - y
        else:
            raise ValueError
If using nosetests to execute the test, be sure to pass the -s flag, which tells nosetests not to capture standard output; otherwise your test will just hang and not give you the pdb prompt. Using the standard unittest runner and pytest does not require such a step.
With the pdb line in place, when you execute the test, execution will break at the point at which you placed the pdb call and allow you to interact with the code and the variables currently loaded at that point of execution. When execution first stops and you are given the pdb prompt, try typing list to see where you are in the code and which line you are currently at.
$ nosetests -s
> /Users/user/PycharmProjects/tdd_in_python/app/calculator.py(7)add()
-> return x - y
(Pdb) list
  2     def add(self, x, y):
  3         number_types = (int, long, float, complex)
  4
  5         if isinstance(x, number_types) and isinstance(y, number_types):
  6             import pdb; pdb.set_trace()
  7  ->         return x - y
  8         else:
  9             raise ValueError
[EOF]
(Pdb)
You can interact with your code as if you were within a Python prompt, so try evaluating what is in the x and y variables at this point.
(Pdb) x
2
(Pdb) y
2
You can continue to "play" around with the code as you require to figure out what is wrong. You can type help at any point to get a list of commands, but the core set you will likely need are:
n: step forward to next line of execution.
list: show five lines either side of where you are currently executing to see the code involved with the current execution point.
args: list the variables involved in the current execution point.
continue: run the code through to completion.
jump <line number>: set the next line that will be executed, allowing you to skip ahead or jump back within the current frame.
quit / exit: stop pdb.
Conclusion
Test-Driven Development is a process that can be both fun to practice, and hugely beneficial to the quality of your production code. Its flexibility in its application to anything from large projects with many team members right down to a small solo project means that it's a fantastic methodology to advocate to your team.
Whether pair programming or developing by yourself, the process of making a failing test pass is hugely satisfying. If you've ever argued that tests weren't necessary, hopefully this article has swayed your approach for future projects.
Make TDD a part of your daily workflow today.
Heads Up!
If this article has whetted your appetite for the world of testing in Python, why not check out the book "Testing Python", written by this article's author and recently released on Amazon and other good retailers. Visit this page to purchase your copy of the book today, and support one of your Tuts+ contributors.
https://code.tutsplus.com/tutorials/beginning-test-driven-development-in-python--net-30137
|
Opened 7 years ago
Last modified 8 weeks ago
#9061 assigned New feature
formsets with can_delete=True shouldn't add delete field to extra forms
Description
Current behavior of formsets with can_delete=True is to add a delete field to every form. This behavior differs from what one would expect, however (why would one want a delete option on an "add" form?), as well as from that of the built-in admin. I've included a patch for formsets.py, but haven't bothered with patching tests yet.
Attachments (6)
Change History (17)
Changed 7 years ago by gsf
Changed 7 years ago by gsf
newer diff with edit to forms.models to avoid error on save
comment:1 Changed 7 years ago by SmileyChris
- Needs documentation unset
- Needs tests set
- Patch needs improvement unset
- Triage Stage changed from Unreviewed to Accepted
I thought brosner already had done this. Seems a valid issue, anyway.
comment:2 Changed 7 years ago by gsf
Having lived with this for a while, I can see the case where one might want a delete field on extra forms. I'd still argue that it shouldn't be the default, but it could be another option on formsets. Perhaps delete_extra or extra_delete?
comment:3 Changed 6 years ago by jkocherhans
- Resolution set to fixed
- Status changed from new to closed
I don't think this is the case on trunk anymore. I don't see delete checkboxes for "add" forms.
comment:4 Changed 6 years ago by Matthew
- Resolution fixed deleted
- Status changed from closed to reopened
The delete fields still appear here on all rows with the latest trunk, r11468.
from django import forms

class Form(forms.Form):
    test = forms.CharField()

FormSet = forms.formsets.formset_factory(Form, can_delete=1)
formset = FormSet(initial=[{'test': 'Some initial data for the first row'}])
The formset contains two rows, both of which have a delete checkbox.
comment:5 Changed 4 years ago by lukeplant
- Severity set to Normal
- Type set to New feature
Changed 4 years ago by oban
updated patch for 1.3.1
Changed 2 years ago by aaugustin
- Status changed from reopened to new
comment:9 Changed 3 months ago by danielward
- Owner changed from nobody to danielward
- Status changed from new to assigned
While I can understand how from a usability perspective this could be confusing, the ability to delete can also be a useful way to discard a new form entry rather than having to clear each populated field for the given form(s).
As a result, I propose adding a can_delete_extra option to formsets, which allows developers to decide whether they wish to omit deletion fields from extra forms without having to write any additional logic into their templates/views.
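To make the proposed semantics concrete, here is a small pure-Python sketch of the behaviour (no Django required; build_forms and the field lists are illustrative stand-ins for the real formset machinery, not actual Django APIs):

```python
def build_forms(initial, extra, can_delete=True, can_delete_extra=False):
    """Emulate which forms would receive a DELETE field under the
    proposed can_delete_extra option."""
    forms = []
    for i in range(len(initial) + extra):
        fields = ['name']
        is_extra = i >= len(initial)  # blank form beyond the initial data
        if can_delete and (can_delete_extra or not is_extra):
            fields.append('DELETE')
        forms.append(fields)
    return forms

# One initial form plus two extras: only the initial form can be deleted.
print(build_forms(initial=[{'name': 'first'}], extra=2))
# -> [['name', 'DELETE'], ['name'], ['name']]
```

With can_delete_extra=True the old behaviour (a DELETE field on every form) would be preserved.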
I'm about to attach the relevant patches. If accepted, I'm happy to provide a patch to public documentation for reference also.
Changed 3 months ago by danielward
Latest 'before' test to confirm behaviour
Changed 3 months ago by danielward
Potential solution by adding in 'can_delete_extra' option
Changed 3 months ago by danielward
Tests introduced to ensure all is well following introduction of potential solution
comment:10 Changed 2 months ago by timgraham
- Needs tests unset
comment:11 Changed 8 weeks ago by timgraham
- Patch needs improvement set
Left comments for improvement on the PR.
don't add delete field to formset extra forms
https://code.djangoproject.com/ticket/9061
|
Building a portfolio site with Contentful, Next.js and Netlify
- Fast ⚡️
- Secure 🔒
- Maintainable 🏗
- Easy to deploy 🚀
- Service Worker ⚙️
- Colour
I’m going to run that script to install the dependencies; that will also automatically save those dependencies to my package.json file.
I’ve gone ahead here and put in the scripts to build for production and develop the project locally.
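The scripts section of package.json might look something like the following (these script names and commands are illustrative guesses, not taken from the post):

```json
{
  "scripts": {
    "postinstall": "node ./getcontent.js",
    "dev": "next",
    "build": "next build && next export"
  }
}
```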
Now that we have that, let's run the project locally and visit localhost:3000 in the browser:
We need to use a custom .babelrc file here to utilize the import / export tokens available to us in that getcontent.js file.
Create a new folder from the root of the project for the JSON file to be written to—we will call that data:
The last step here before we can run our postinstall script is installing the dependencies:
Phew, ok, let’s run it!
Excellent, we have data from Contentful, written to JSON locally:
Now, we will display this data using a few React Components. To do that, let's create a components folder, enter it, and create the three main components we will be using:
Back to index.js, let's render our WorkFeed component and give it the data from Contentful:
Inside WorkFeed, we will loop over our data and render a WorkItem for every case study we have:
The last step on the Next.js side is to build the project and export it as a static site (I’ve added in another pull from Contentful, in case anything in our system has changed):
Thanks for reading!
http://brianyang.com/building-a-portfolio-site-with-contentful-next-js-and-netlify/
|
July 11, 2022 Single Round Match 833 Editorials
CellPhoneService
We just need to do the calculations described in the statement. One part of the calculations that may be tricky for beginners is the fee per each started minute of a call. If we have a call that takes S seconds, the number of minutes we'll be paying for can be computed by dividing S by 60 and rounding the result up to the nearest integer. This is sometimes mathematically written ceil(S/60), read "ceiling".
Ceiling of a fraction X/Y, where X, Y are positive integers, can be computed in integers using a nice formula: (X+Y-1) div Y.
(Try playing with the formula to see why it always works. Consider two cases: what happens when X is a multiple of Y? And what happens when it isn’t?)
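The formula is easy to sanity-check in a few lines of Python (a sketch; float(y) keeps the comparison correct even under Python 2's integer division):

```python
import math

def ceil_div(x, y):
    # Integer ceiling of x / y for positive integers, no floats needed.
    return (x + y - 1) // y

# Spot-check against math.ceil, including exact multiples of y.
for x in range(1, 200):
    for y in range(1, 20):
        assert ceil_div(x, y) == math.ceil(x / float(y))

print(ceil_div(61, 60))  # -> 2: a 61-second call is billed as 2 minutes
```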
public int payLeast(int[] calls, int P, int[] perMonth, int[] perCall, int[] perMinute) {
    int answer = 1 << 30;
    for (int p=0; p<P; ++p) {
        int current = perMonth[p];
        for (int call : calls) {
            current += perCall[p];
            int minutes = (call + 59) / 60;
            current += minutes * perMinute[p];
        }
        answer = Math.min( answer, current );
    }
    return answer;
}
ThePriceIsRightGuessing
The key observation: Suppose your guess is X > 1 and nobody else guessed X-1. Then your guess isn’t optimal and you should rather guess X-1 instead. The reason why is pretty obvious: if the guess X won you the item for prices from some interval [X,Y], guessing X-1 means that for prices from this interval you are still the one who got closest without going over, and additionally you also win if the price is X-1. So, with the new guess you win for any price in [X-1,Y].
We can repeat the above observation until it no longer applies. Thus, the optimal guess is always either 1, or one more than another player’s guess.
With N players this gives us just O(N) candidates for the optimal guess. We can test each of them and pick the best one among them. (Each test can be done separately in O(N) time, or we can sort the previously made guesses and then find our best guess in O(N log N) time total. The code shown below implements this faster option.)
public long guess(long[] previousGuesses, long MAX) {
    int N = previousGuesses.length;
    long[] sentinels = new long[N+2];
    for (int n=0; n<N; ++n) sentinels[n] = previousGuesses[n];
    sentinels[N] = 0;
    sentinels[N+1] = MAX+1;
    Arrays.sort( sentinels );
    long bestAnswer = -1, bestWinCount = 0;
    for (int n=0; n<=N; ++n) {
        long candidate = sentinels[n] + 1;
        long winCount = sentinels[n+1] - candidate;
        if (winCount > bestWinCount) {
            bestAnswer = candidate;
            bestWinCount = winCount;
        }
    }
    return bestAnswer;
}
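For tiny limits, the claim that only 1 and one-more-than-an-existing-guess matter can be brute-forced in Python (a sketch; it assumes exactly matching another player's guess counts as a loss, which is consistent with never needing such a guess):

```python
def wins(g, prev, MAX):
    # Prices p for which guess g is closest without going over:
    # g <= p and no other guess lies strictly between g and p (inclusive).
    if g in prev:  # assumption: an exact tie with another guess never wins
        return 0
    return sum(1 for p in range(g, MAX + 1)
               if not any(g < q <= p for q in prev))

def best_by_brute_force(prev, MAX):
    return max(range(1, MAX + 1), key=lambda g: wins(g, prev, MAX))

def best_by_candidates(prev, MAX):
    # The editorial's candidate set: 1, or one more than an existing guess.
    cands = [g for g in [1] + [q + 1 for q in prev] if 1 <= g <= MAX]
    return max(cands, key=lambda g: wins(g, prev, MAX))

prev = [3, 7, 8]
print(best_by_candidates(prev, 10))  # -> 4, winning prices 4, 5 and 6
```

On this instance the exhaustive search and the candidate-only search agree, as the observation predicts.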
Never3Steps
This problem is an exercise in dynamic programming. For each point on the way from (0, 0) to (X, Y) we want to compute the number of valid ways in which it can be reached, but this time we need to keep a more precise tally and know how many of these ways fall into each of several possible categories.
More precisely, while walking from (0, 0) to (X, Y), our state at any point during the journey consists of the following information:
- obviously, our coordinates (current x, current y).
- the direction (north or east) of our last movement
- the number of consecutive steps we took in that direction (1, 2, 3, or “more than 3”)
The knowledge of this extra information allows us to control that we never make exactly 3 steps in the same direction: whenever we are in a state (x, y, dir, 3), we have to continue in that direction for at least one more step – we may not turn at that point.
We’ll then use dynamic programming to compute, for each of these states, the number of ways in which it can be reached. The final answer is then the sum over all states where we reached (X, Y) and the number of last consecutive steps isn’t 3.
In the implementation below we do a forward-DP in which we always take a state whose values are already computed correctly and we apply all valid ways to make the next step.
public int count(int X, int Y) {
    long MOD = 1_000_000_007;
    long[][][][] dp = new long[X+2][Y+2][2][5];
    dp[0][0][0][0] = 1;
    for (int x=0; x<=X; ++x) for (int y=0; y<=Y; ++y) {
        // For all states where we are at (x, y) we already know
        // the correct values. Now we make the next step from there.

        // Handle the start from (0, 0) as a special case:
        // Reach both (0, 1) and (1, 0) with 1 step.
        if (x == 0 && y == 0) {
            dp[0][1][1][1] = 1;
            dp[1][0][0][1] = 1;
            continue;
        }

        // Make another step in the current direction.
        for (int s=1; s<=4; ++s) {
            int ns = Math.min(4, s+1);
            dp[x+1][y][0][ns] = (dp[x+1][y][0][ns] + dp[x][y][0][s]) % MOD;
            dp[x][y+1][1][ns] = (dp[x][y+1][1][ns] + dp[x][y][1][s]) % MOD;
        }

        // Make the first step in the opposite direction
        for (int s=1; s<=4; ++s) if (s != 3) {
            dp[x+1][y][0][1] = (dp[x+1][y][0][1] + dp[x][y][1][s]) % MOD;
            dp[x][y+1][1][1] = (dp[x][y+1][1][1] + dp[x][y][0][s]) % MOD;
        }
    }
    long answer = 0;
    for (int d=0; d<2; ++d) for (int s=0; s<5; ++s) if (s != 3) answer += dp[X][Y][d][s];
    answer %= MOD;
    return (int)answer;
}
FroggerAndNets
First, we can generate all the values L and R. For each pair L[i], R[i] we check whether the interval contains some stones and if it does not, we return -1. (A quick way to check each interval in constant time is to precompute, for each position, the nearest stone to the left and the nearest stone to the right.)
One way to solve this problem would be via dynamic programming: “for each catching attempt, for each of the good stones, what is the maximum sum of distances such that we end on that stone?”
The only problem with this solution: the above DP has O(NC) states and each transition takes O(N) time to evaluate, which is way too slow for N<=2000 stones and C<=10^6 intervals.
We need to make one more observation: For each interval it is sufficient to consider only the leftmost and the rightmost good stone. There is always an optimal solution in which Frogger only uses these stones.
Once we prove the above claim, we have effectively decreased N down to 2, which takes our time complexity down from O(N^2 C) to O(N+C). We include the proof below.
public int jump(String stones, int C, int minW, int seed) {
    int N = stones.length();
    int maxW = N-1;
    int[] L = new int[C];
    int[] R = new int[C];

    // generate L and R
    long state = seed;
    for (int c=0; c<C; ++c) {
        state = (state * 1103515245 + 12345) % (1L << 31);
        int w = minW + (int)(state % (maxW-minW+1));
        state = (state * 1103515245 + 12345) % (1L << 31);
        L[c] = (int)(state % (N-w));
        R[c] = L[c] + w;
    }

    // precompute the next stone left and right
    int lastSeen = N;
    int[] nextStone = new int[N];
    for (int n=N-1; n>=0; --n) if (stones.charAt(n) == 'O') {
        nextStone[n] = n;
        lastSeen = n;
    } else {
        nextStone[n] = lastSeen;
    }
    lastSeen = -1;
    int[] prevStone = new int[N];
    for (int n=0; n<N; ++n) if (stones.charAt(n) == 'O') {
        prevStone[n] = n;
        lastSeen = n;
    } else {
        prevStone[n] = lastSeen;
    }

    // for each catch, find the positions of the left and right good stone
    int[] leftmost = new int[C];
    int[] rightmost = new int[C];
    for (int c=0; c<C; ++c) {
        leftmost[c] = nextStone[ L[c] ];
        if (leftmost[c] > R[c]) return -1;
        rightmost[c] = prevStone[ R[c] ];
    }

    int[][] dp = new int[C][2];
    dp[0][0] = dp[0][1] = 0;
    for (int c=1; c<C; ++c) {
        dp[c][0] = Math.max( dp[c-1][0] + Math.abs( leftmost[c-1] - leftmost[c] ),
                             dp[c-1][1] + Math.abs( rightmost[c-1] - leftmost[c] ) );
        dp[c][1] = Math.max( dp[c-1][0] + Math.abs( leftmost[c-1] - rightmost[c] ),
                             dp[c-1][1] + Math.abs( rightmost[c-1] - rightmost[c] ) );
    }
    return Math.max(dp[C-1][0], dp[C-1][1]);
}
How to prove the claim that it’s enough to consider the two extremal stones for each interval?
We can do it iteratively: take any solution, take any step where Frogger uses a middle stone instead of one of the two extremes, and show that he can instead use one of the two extremes without making the solution worse. Starting from any optimal solution and repeating that argument then eventually produces an optimal solution that only uses extremal stones.
A solution where Frogger ends the whole process on some middle stone cannot be optimal: we can make the last jump longer by continuing in its direction until we reach the last good stone. Symmetrically, it's never optimal for Frogger to start on a middle stone.
Finally, suppose we have, somewhere in our solution, a sequence of jumps (stone A) -> (stone B) -> (stone C) such that B is some middle stone. Instead, let’s extend the jump from A to B farther in its original direction, all the way to the last available stone B’.
Without loss of generality, suppose the jump A->B was left to right. If C=B’ or C is to the right of B’, the total distance of the jumps A->B->C did not change (we are just going right from A to C). In all other cases, A->B’->C is obviously longer than A->B->C (by twice the distance between B’ and max(B,C)).
This concludes the proof.
WW
The obvious necessary and sufficient condition for a solution to exist is that each letter must have an even number of occurrences.
Next, we can observe:
- More than one edit is actually never needed. If we know the final position for each letter we want to move, we can simply move each letter directly to its final position, all during the same edit. The direct route for each letter is clearly cheaper or equal to the sum of costs of any sequence of movements of that letter with the same start and end.
- In an optimal solution, occurrences of the same letter will never change their relative positions. (Any pair of moves where two equal letters cross paths can be replaced by a shorter-or-equal pair of moves in which these two letters swap their destinations and thus maintain their relative order.)
Using the second observation we can now divide the letters into N pairs that must correspond to each other in the final WW arrangement. For each pair, one of them will be at some index x<N while the other will be at x+N.
Now, for each pair of letters and for each x we can compute the total cost of moving the pair of letters to the coordinates x and x+N. All that remains is picking a distinct x for each pair in such a way that the sum of costs is minimized. This is a pure assignment problem, also known as “minimum cost perfect matching”. We can apply one of the standard polynomial-time algorithms such as the Hungarian algorithm or any implementation of MinCost-MaxFlow.
public int rearrange(String S) {
    // verify solution existence
    if (S.length() % 2 == 1) return -1;
    int N = S.length() / 2;
    int[] counts = new int[256];
    for (int n=0; n<2*N; ++n) ++counts[ S.charAt(n) ];
    for (int i=0; i<256; ++i) if (counts[i] % 2 == 1) return -1;

    // divide the letters into N pairs
    int[][] pairs = new int[N][2];
    int L = 0;
    for (int i=0; i<256; ++i) if (counts[i] > 0) {
        int[] where = new int[ counts[i] ];
        for (int j=0, n=0; n<2*N; ++n) if (S.charAt(n) == i) where[j++] = n;
        int C = where.length / 2;
        for (int j=0; j<C; ++j) {
            pairs[L][0] = where[j];
            pairs[L][1] = where[j+C];
            ++L;
        }
    }

    // for each pair and each destination, compute the travel costs
    int[][] costs = new int[N][N];
    for (int a=0; a<N; ++a) for (int b=0; b<N; ++b) {
        int distance = Math.abs( pairs[a][0] - b ) + Math.abs( pairs[a][1] - (b+N) );
        costs[a][b] = distance * ((int)S.charAt( pairs[a][0] ));
    }

    // build the MCMF graph for the assignment problem and solve it
    MincostMaxflow solver = new MincostMaxflow(costs);
    long[] answer = solver.getFlow( solver.N-2, solver.N-1 );
    return (int)answer[1];
}
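For very small N, the final assignment step can be cross-checked by brute force over all permutations (a Python sketch; real inputs need the Hungarian algorithm or min-cost max-flow as described above, since the brute force is O(N!)):

```python
from itertools import permutations

def assignment_brute_force(costs):
    # costs[a][b]: cost of assigning pair a to destination slot b.
    # Try every one-to-one assignment and keep the cheapest total.
    n = len(costs)
    return min(sum(costs[a][b] for a, b in enumerate(perm))
               for perm in permutations(range(n)))

costs = [[4, 1, 3],
         [2, 0, 5],
         [3, 2, 2]]
print(assignment_brute_force(costs))  # -> 5 (slots 1, 0, 2)
```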
misof
|
https://www.topcoder.com/blog/single-round-match-833-editorials/
|
CC-MAIN-2022-33
|
refinedweb
| 2,149
| 67.08
|
Unique features of Azure page blobs
Azure Storage offers three types of blobs: block blobs, append blobs, and page blobs. Block blobs are composed of blocks and are ideal for storing text or binary files and for uploading large files efficiently. This article focuses on explaining the features and benefits of page blobs.
Page blobs are a collection of 512-byte pages, which provide the ability to read/write arbitrary ranges of bytes. Hence, page blobs are ideal for storing index-based and sparse data structures like OS and data disks for Virtual Machines and Databases. For example, Azure SQL DB uses page blobs as the underlying persistent storage for its databases. Moreover, page blobs are also often used for files with Range-Based updates.
Key features of Azure page blobs are the REST interface, the durability of the underlying storage, and the seamless migration capabilities to Azure. These features are discussed in more detail in the next section. In addition, Azure page blobs are currently supported on two types of storage: Premium Storage and Standard Storage. Premium Storage is designed specifically for workloads requiring consistent high performance and low latency, making premium page blobs ideal for high-performance databases. Standard Storage is more cost-effective for running latency-insensitive workloads.
Sample use cases
Let's discuss a couple of use cases for page blobs starting with Azure IaaS Disks. Azure page blobs are the backbone of the virtual disks platform for Azure IaaS. Both Azure OS and data disks are implemented as virtual disks where data is durably persisted in the Azure Storage platform and then delivered to the virtual machines for maximum performance. Azure Disks are persisted in Hyper-V VHD format and stored as a page blob in Azure Storage. In addition to using virtual disks for Azure IaaS VMs, page blobs also enable PaaS and DBaaS scenarios such as Azure SQL DB service, which currently uses page blobs for storing SQL data, enabling fast random read-write operations for the database. Another example would be if you have a PaaS service for shared media access for collaborative video editing applications, page blobs enable fast access to random locations in the media. It also enables fast and efficient editing and merging of the same media by multiple users.
First party Microsoft services like Azure Site Recovery, Azure Backup, as well as many third-party developers have implemented industry-leading innovations using page blob's REST interface. Following are some of the unique scenarios implemented on Azure:
- Application-directed incremental snapshot management: Applications can leverage page blob snapshots and REST APIs for saving the application checkpoints without incurring costly duplication of data. Azure Storage supports local snapshots for page blobs, which don't require copying the entire blob. These public snapshot APIs also enable accessing and copying of deltas between snapshots.
- Live migration of application and data from on-prem to cloud: Copy the on-prem data and use REST APIs to write directly to an Azure page blob while the on-prem VM continues to run. Once the target has caught up, you can quickly failover to Azure VM using that data. In this way, you can migrate your VMs and virtual disks from on-prem to cloud with minimal downtime since the data migration occurs in the background while you continue to use the VM and the downtime needed for failover will be short (in minutes).
- SAS-based shared access, which enables scenarios like multiple-readers and single-writer with support for concurrency control.
Page blob features
REST API
Refer to the following document to get started with developing using page blobs. As an example, look at how to access page blobs using Storage Client Library for .NET.
The following diagram describes the overall relationships between account, containers, and page blobs.
Creating an empty page blob of a specified size
To create a page blob, we first create a CloudBlobClient object, with the base URI for accessing the blob storage for your storage account (pbaccount in figure 1) along with the StorageCredentialsAccountAndKey object, as shown in the following example. The example then shows creating a reference to a CloudBlobContainer object, and then creating the container (testvhds) if it doesn't already exist. Then using the CloudBlobContainer object, create a reference to a CloudPageBlob object by specifying the page blob name (os4.vhd) to access. To create the page blob, call CloudPageBlob.Create passing in the max size for the blob to create. The blobSize must be a multiple of 512 bytes.
using Microsoft.WindowsAzure.StorageClient;

long OneGigabyteAsBytes = 1024 * 1024 * 1024;
// Create the blob client from your account's base URI and credentials.
CloudBlobClient blobClient = new CloudBlobClient(blobEndpoint,
    new StorageCredentialsAccountAndKey(accountName, accountKey));
CloudBlobContainer container = blobClient.GetContainerReference("testvhds");
// Create the container if it doesn't already exist.
container.CreateIfNotExists();
CloudPageBlob pageBlob = container.GetPageBlobReference("os4.vhd");
pageBlob.Create(16 * OneGigabyteAsBytes);
Resizing a page blob
To resize a page blob after creation, use the Resize API. The requested size should be a multiple of 512 bytes.
pageBlob.Resize(32 * OneGigabyteAsBytes);
Writing pages to a page blob
To write pages, use the CloudPageBlob.WritePages method. This allows you to write a sequential set of pages of up to 4 MB. The offset being written to must start on a 512-byte boundary (startingOffset % 512 == 0), and the range must end on a 512-byte boundary minus 1. The following code example shows how to call WritePages for a blob:
pageBlob.WritePages(dataStream, startingOffset);
As soon as a write request for a sequential set of pages succeeds in the blob service and is replicated for durability and resiliency, the write has committed, and success is returned back to the client.
The below diagram shows 2 separate write operations:
- A Write operation starting at offset 0 of length 1024 bytes
- A Write operation starting at offset 4096 of length 1024
Reading pages from a page blob
To read pages, use the CloudPageBlob.DownloadRangeToByteArray method to read a range of bytes from the page blob. This allows you to download the full blob or a range of bytes starting from any offset in the blob. When reading, the offset does not have to start on a multiple of 512. When reading bytes from a NUL page, the service returns zero bytes.
byte[] buffer = new byte[rangeSize]; pageBlob.DownloadRangeToByteArray(buffer, bufferOffset, pageBlobOffset, rangeSize);
The following figure shows a Read operation with a BlobOffset of 256 and a rangeSize of 4352. Data returned is highlighted in orange; zeros are returned for NUL pages.
If you have a sparsely populated blob, you may want to just download the valid page regions to avoid paying for egressing of zero bytes and to reduce download latency. To determine which pages are backed by data, use CloudPageBlob.GetPageRanges. You can then enumerate the returned ranges and download the data in each range.
IEnumerable<PageRange> pageRanges = pageBlob.GetPageRanges();
foreach (PageRange range in pageRanges)
{
    // Calculate the range size
    int rangeSize = (int)(range.EndOffset + 1 - range.StartOffset);
    byte[] buffer = new byte[rangeSize];

    // Read from the correct starting offset in the page blob and place the
    // data at bufferOffset in the buffer byte array
    pageBlob.DownloadRangeToByteArray(buffer, bufferOffset, range.StartOffset, rangeSize);

    // TODO: use the buffer for the page range just read
}
Leasing a page blob
The Lease Blob operation establishes and manages a lock on a blob for write and delete operations. This operation is useful in scenarios where a page blob is being accessed from multiple clients to ensure only one client can write to the blob at a time. Azure Disks, for example, leverages this leasing mechanism to ensure the disk is only managed by a single VM. The lock duration can be 15 to 60 seconds, or can be infinite. See the documentation here for more details.
In addition to rich REST APIs, page blobs also provide shared access, durability, and enhanced security. We will cover those benefits in more detail in the next paragraphs.
Concurrent access
The page blob REST API and its leasing mechanism allow applications to access a page blob from multiple clients. For example, let's say you need to build a distributed cloud service that shares storage objects with multiple users. It could be a web application serving a large collection of images to several users. One option for implementing this is to use a VM with attached disks. Downsides of this include (i) the constraint that a disk can only be attached to a single VM, which limits scalability and flexibility and increases risk: if there is a problem with the VM or the service running on the VM, then due to the lease, the image is inaccessible until the lease expires or is broken; and (ii) the additional cost of having an IaaS VM.
An alternative option is to use page blobs directly via the Azure Storage REST APIs. This option eliminates the need for costly IaaS VMs, offers full flexibility of direct access from multiple clients, simplifies the classic deployment model by eliminating the need to attach/detach disks, and eliminates the risk of issues on the VM. And it provides the same level of performance for random read/write operations as a disk.
Durability and high availability
Both Standard and Premium Storage are durable storage where the page blob data is always replicated to ensure durability and high availability. For more information about Azure Storage redundancy, see this documentation. Azure has consistently delivered enterprise-grade durability for IaaS disks and page blobs, with an industry-leading zero percent annualized failure rate. That is, Azure has never lost a customer's page blob data.
Seamless migration to Azure
For the customers and developers who are interested in implementing their own customized backup solution, Azure also offers incremental snapshots that only hold the deltas. This feature avoids the cost of the initial full copy, which greatly lowers the backup cost. Along with the ability to efficiently read and copy differential data, this is another powerful capability that enables even more innovations from developers, leading to a best-in-class backup and disaster recovery (DR) experience on Azure. You can set up your own backup or DR solution for your VMs on Azure using Blob Snapshot along with the Get Page Ranges API and the Incremental Copy Blob API, which you can use for easily copying the incremental data for DR.
Moreover, many enterprises have critical workloads already running in on-premises datacenters. For migrating the workload to the cloud, one of the main concerns would be the amount of downtime needed for copying the data, and the risk of unforeseen issues after the switchover. In many cases, the downtime can be a showstopper for migration to the cloud. Using the page blobs REST API, Azure addresses this problem by enabling cloud migration with minimal disruption to critical workloads.
For examples on how to take a snapshot and how to restore a page blob from a snapshot, please refer to the setup a backup process using incremental snapshots article.
|
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-pageblob-overview
|
CC-MAIN-2018-51
|
refinedweb
| 1,800
| 51.89
|
- 28 Aug, 2015 7 commits
- 27 Aug, 2015 2 commits
- 26 Aug, 2015 4 commits
- Rodrigo Souto authored
Links and images are saved with relative paths, but the profile domain wasn't being taken into account. Because of this, the links were broken on article visualization. This commit also fixes the URL when editing TinyMCE articles.
- 25 Aug, 2015 1 commit
Closes merge request !659
- 24 Aug, 2015 1 commit
- Tallys Martins authored
Signed-off-by:
Tallys Martins <tallysmartins@yahoo.com.br>
- 22 Aug, 2015 1 commit
fb_app: plugin to create FB page tabs and share to FB timeline Depends on !512 See merge request !513
- 21 Aug, 2015 12 commits
Login - Show a message if a user is not active This is the same feature described in merge request !656. I've fixed the unit tests now! See merge request !658
This reverts commit d8c77643, reversing changes made to 950a0708.
Allow article types to define when media panel should be displayed See merge request !633
Apply suggestions of @brauliobo. Change the scope of UserNotActive exception to User model namespace
Add site_tour plugin from gitlab.com/noosfero-plugins/site_tour See merge request !654
metadata: marks urls as html safe This also fixes a core bug of double-escaping urls. See merge request !653
- 19 Aug, 2015 3 commits
Make lefttopright only use CSS Add a solution that uses only CSS See merge request !651
- 18 Aug, 2015 1 commit
- Phillip Rohmberger authored
Currently translated at 1.6% (15 of 911 strings)
- 15 Aug, 2015 1 commit
- 13 Aug, 2015 7 commits
Put start and end dates as datetime for articles change the start_date and end_date from date to datetime See merge request !634
|
https://gitlab.com/softwarepublico/noosfero/-/commits/15eb68fef856a9b593e2e670f24b71b2d6b7ac62
|
CC-MAIN-2021-21
|
refinedweb
| 282
| 66.23
|
Issue Type: Bug Created: 2008-03-16T16:05:40.000+0000 Last Updated: 2011-05-08T06:59:09.000+0000 Status: Resolved Fix version(s): Reporter: Justin Plock (jplock) Assignee: Adam Lundrigan (adamlundrigan) Tags: - Zend_Gdata
Related issues: - ZF-11101
Attachments:
When trying to set the hidden property of a calendar event
$event = $this->_gdata->newEventEntry();
$event->title = $this->_gdata->newTitle($title);
$event->where = array($this->_gdata->newWhere($address));
$event->timezone = $this->_gdata->newTimezone($timezone);
$event->visibility = $this->_gdata->newVisibility(true);
$event->hidden = $this->_gdata->newHidden(false);
$cal = $this->_gdata->insertEvent($event, '…');
I get a "Property _hidden does not exist" error message because there is no _hidden property defined in Zend_Gdata_Calendar_EventEntry or Zend_Gdata_Kind_EventEntry. Can we add the following to either one of those classes?
protected $_hidden = null;
Posted by Ralph Schindler (ralph) on 2011-02-17T14:55:33.000+0000
Is this still an issue? Is there a suggested fix?
Posted by Justin Plock (jplock) on 2011-02-17T16:26:49.000+0000
Wow, I don't even remember submitting this issue, especially considering I haven't played around with Zend_Gdata yet. I'm not sure if it's still an issue or not. I'd think this would have been caught awhile ago if it was still an issue though.
Posted by Kim Blomqvist (kblomqvist) on 2011-02-20T00:52:47.000+0000
This is not implemented in Zend_Gdata. However, as far as I understand the Google Calendar API, the Hidden attribute does not have anything to do with the calendar +events+. There is only the gCal:hidden element, which indicates whether a +calendar+ is visible or not. This is more of a documentation issue, where ZF's Reference Guide misleadingly states that Hidden (which is not even provided) removes the +event+ from the Google Calendar UI.
To implement a delete feature for calendar events, Zend_Gdata_Kind_EventEntry should have $_deleted property which maps to gd:Deleted.
Posted by Adam Lundrigan (adamlundrigan) on 2011-05-07T17:31:35.000+0000
This was resolved in ZF-11101
Posted by Kim Blomqvist (kblomqvist) on 2011-05-07T18:28:48.000+0000
I guess this was left open if someone would like to implement that $_delete property ...
bq. To implement a delete feature for calendar events, Zend_Gdata_Kind_EventEntry should have $_deleted property which maps to gd:Deleted.
Posted by Adam Lundrigan (adamlundrigan) on 2011-05-07T22:12:46.000+0000
According to the docs for EventKind (…) it doesn't explicitly have gd:Deleted. Do you know if it actually does (ie: inherited from parent namespace or something?)
Posted by Kim Blomqvist (kblomqvist) on 2011-05-08T06:59:09.000+0000
Ah, that's true. But it is used by the kinds.…
|
https://framework.zend.com/issues/browse/ZF-2894?page=com.atlassian.streams.streams-jira-plugin:activity-stream-issue-tab
|
CC-MAIN-2016-36
|
refinedweb
| 453
| 58.18
|
See my new blog at .jeffreypalermo.com
I go back to work full time next Tuesday. My terminal leave is almost up, I'm a civilian, and my new office furniture will be arriving today. :)
Yesterday I was visiting with two colleagues of mine, and I was struck with a surprising reality: They were programming with ASP.NET but had no idea that they were using inheritance. One of the two was a former college professor of mine from whom I learned a great deal of database knowledge. I pondered on it for a while, and I think that there are probably many out there that don't yet know how to use the power of inheritance to improve their .Net web applications. I will now attempt to simply explain an easy way one can easily improve a page using inheritance.
First, when you write code in an ASP.NET page, you are writing code in a class that inherits from another class. Whether it's specified in the machine.config, web.config, or in the <%@ Page %> directive at the top of the page, your page has to inherit from a class that ultimately inherits from System.Web.UI.Page. When you use the code-behind or code-beside technique, your page inherits from your code-behind class, which inherits from System.Web.UI.Page. Look at the class. Why else do you see "System.Web.UI.Page" in every code-behind page? It makes your page contain the functionality of the Page class in the .Net framework. It makes your page a System.Web.UI.Page object. The Page object contains the properties for accessing Request, Response, Server, Cache, etc.
Suppose you want to implement tracking for some of your pages? You could put relevant code in every page, or make a static method in a shared class to track the current request. But why not put this functionality in a class by itself, say “TrackedPage”. When you declare TrackedPage, inherit from System.Web.UI.Page:
public class TrackedPage : System.Web.UI.Page{ }
and in your code-behind page, replace the reference to System.Web.UI.Page with “TrackedPage”. This will make your page a “TrackedPage” object and will benefit from any logic you put in TrackedPage. For instance,
public class TrackedPage : System.Web.UI.Page
{
    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        // logic to track this request
        Trace.Write("Tracking", "This request has been tracked.");
    }
}
will perform your tracking logic and write a line to the trace log for every page that inherits this custom class of yours. You can imagine all the functionality you can include. What if you have a custom user object that you need to be available for every request? Make it a property of your class here, and make this class the base page for all your pages.
I find that a lot of people make user controls to encapsulate common functionality. This is not a good technique. Instead, put it in base pages and inherit from them.
My project on GotDotNet makes heavy use of this technique, and it's easy to extend EZWeb because all you have to do is inherit from Palermo.EZWeb.UIPage, and you have all the functionality
|
http://codebetter.com/blogs/jeffrey.palermo/archive/2004/05/13/13531.aspx
|
crawl-002
|
refinedweb
| 543
| 67.65
|
I'm new to C programming, but not to programming. Can anyone see why this code isn't working? It seems to be the simplest program in the world, but it always returns a product of 0.00000. If anyone can spot what I'm doing wrong, please reply.
Thank you.
tzuch
Code:
#include <stdio.h>

float x, y, prod;
float fmu(float a, float b);

main()
{
    puts("Enter two floats to multiply");
    scanf("%f %f", &x, &y);
    prod = fmu(x, y);
    prfloatf("\nThe following is the product...%f", &prod);
    puts("\n\n\nEnter a single digit to exit");
    scanf("%d", &x);
    return 0;
}

float fmu(float a, float b)
{
    return (a*b);
}
|
http://cboard.cprogramming.com/c-programming/100943-anyone-see-why-doesn%27t-work.html
|
CC-MAIN-2015-27
|
refinedweb
| 113
| 75.2
|
07 August 2012 10:54 [Source: ICIS news]
SINGAPORE (ICIS)--
The maintenance period for the unit, which is operating at 100% at the moment, will last until 11 September, the source said.
The producer also operates a 310,000 tonne/year PVC facility at Yokkaichi and a 150,000 tonne/year PVC plant at Osaka that are currently running at 50-60% and 30-40% of capacity, respectively, the source said.
Taiyo Vinyl is operating its
Tosoh’s 550,000 tonne/year No 2 VCM plant at Nanyo, the largest of its three VCM plants with a combined capacity of 1.2m tonne/year, has remained shut after it was severely damaged in an explosion at the site on 13 Nove
|
http://www.icis.com/Articles/2012/08/07/9584536/japans-taiyo-vinyl-to-shut-chiba-pvc-plant-on-18-august.html
|
CC-MAIN-2014-35
|
refinedweb
| 123
| 57.81
|
Introducing Tamiat: Vue.js and Firebase based CMS
tamiat
Introducing the Tamiat headless CMS, created with Vue.js and Firebase. It is a front-end focused CMS which can be used as a starting point or be integrated into an existing project. The instructions for starting from scratch with Tamiat are pretty simple, and just a bit easier than fitting it into an existing project.
Making Tamiat your starting point
If you would like to start using Tamiat right away, begin by cloning the CMS repository and installing its dependencies.
- Clone the repo
git clone
Install the dependencies
npm install # or yarn
Log in to the Firebase console using your Google account and create a new Firebase project.
- In the authentication section, add a new user providing an email and a password.
- Set up your database's basic security rules by going to the database section and opening the rules tab. You can set your security rules as you like, but as a starting point you can make them like this:
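The rule snippet itself did not survive extraction; a minimal rule set matching the description that follows (public reads, authenticated writes) would be:

```json
{
  "rules": {
    ".read": true,
    ".write": "auth != null"
  }
}
```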
These rules mean that everyone can read from the database, but only authenticated users can write to it.
5. Copy your project configuration from WEB SETUP (in the authentication section) and paste it into the config.js file, replacing the existing one.
// replace the existing config object below
let config = {
  apiKey: "AIzaSyCnxuLX6AgMduDMLtSJVDNJhR8xuMNvs4Y",
  authDomain: "tamiat-demo.firebaseapp.com",
  databaseURL: "",
  projectId: "tamiat-demo",
  storageBucket: "",
  messagingSenderId: "188459960333"
};
- Run the local dev server.
npm run dev
This is the welcome page of the Tamiat CMS:
7. Access the admin interface panel by navigating to localhost:8080/admin.
Sign in with the email and password you've set up in step 3. Login and logout capabilities are ready to go.
Now you are in the admin panel, where by default you can create posts and fill in their details using a built-in editor, and change some general settings.
As said before, you can integrate Tamiat into an existing project by following these instructions.
The structure of the project is pretty straightforward: components and mixins are placed in different folders, along with some optional elements, such as the Signup.vue file. This file is for adding a new user, but you have to add it to your routes before you can use the form to add a new user to Firebase.
Head to the index.js file inside the router directory.
Import the component
import SignUp from '../components/Signup.vue';
Add a new route
{ path: '/signup', name: 'SignUp', component: SignUp },
Now you can head there, fill in the form, and submit it.
You have successfully signed up. You will have access to the admin page once your account has been approved. Then head to Firebase and check if the user's list has been updated.
Interested in using Tamiat or help expand it? Head to its repository and give it a go. Made by Mahmoud Nouman and contributors. Find Tamiat CMS on twitter.
|
https://vuejsfeed.com/blog/introducing-tamiat-vue-js-and-firebase-based-cms
|
CC-MAIN-2019-35
|
refinedweb
| 493
| 64.91
|
KVM_NLIST(3) NetBSD Library Functions Manual KVM_NLIST(3)
NAME
kvm_nlist -- retrieve symbol table names from a kernel image
LIBRARY
Kernel Data Access Library (libkvm, -lkvm)
SYNOPSIS
#include <kvm.h> #include <nlist.h> int kvm_nlist(kvm_t *kd, struct nlist *nl);
DESCRIPTION
If kd was created by a call to kvm_open() with a NULL executable image name, kvm_nlist() will use /dev/ksyms to retrieve the kernel symbol table.
RETURN VALUES
The kvm_nlist() function returns the number of invalid entries found. If the kernel symbol table was unreadable, -1 is returned.
FILES
/dev/ksyms
SEE ALSO
kvm(3), kvm_close(3), kvm_getargv(3), kvm_getenvv(3), kvm_geterr(3), kvm_getprocs(3), kvm_open(3), kvm_openfiles(3), kvm_read(3), kvm_write(3), ksyms(4) NetBSD 8.1 May 11, 2003 NetBSD 8.1
|
https://man.netbsd.org/NetBSD-8.1/kvm_nlist.3
|
CC-MAIN-2022-05
|
refinedweb
| 145
| 60.41
|
Custom Partitioner in Kafka Using Scala: Take Quick Tour!
In this article, we discuss when Kafka's default partitioner is not enough and how to build a custom partitioner in Kafka using Scala.
I'm assuming that you have sound knowledge of Kafka. Let's understand the behavior of the default partitioner.
The default partitioner follows these rules:
- If a producer provides a partition number in the message record, use it.
- If a producer doesn’t provide a partition number, but it provides a key, choose a partition based on a hash value of the key.
- When no partition number or key is present, pick a partition in a round-robin fashion.
So, you can use the default partitioner in three scenarios:
- If you already know the partition number in which you want to send a message record then use the first rule.
- When you want to distribute data based on the hash key, you will use the second rule of default partitioner.
- If you don’t care about which partition message record will be stored, then you will use the third rule of default partitioner.
There are two problems with the key:
- If the producer provides the same key for each message record, hashing will always give you the same hash number. However, the reverse does not hold: two different keys are not guaranteed to produce different hash numbers.
- The default partitioner uses the hash value of the key and the total number of partitions on a topic to determine the partition number. If you increase the number of partitions, the default partitioner will return a different partition number even if you provide the same key.
Now, you might have questions about how to solve this problem.
The answer to this question is very simple: you can implement your algorithm based on your requirements and use it in the custom partitioner.
You may also like: Kafka Internals: Topics and Partitions.
Kafka Custom Partitioner Example
Let’s create an example use-case and implement a custom partitioner. Try to understand the problem statement with the help of a diagram.
Assume we are collecting data from different departments. All the departments are sending data to a single topic named department. I planned five partitions for the topic. But, I want two partitions dedicated to a specific department, named IT, and the remaining three partitions for the rest of the departments. How would you achieve this?
You can solve this requirement, and any other type of partitioning needs by implementing a custom partitioner.
Kafka Producer
Let’s look at the producer code.
package com.knoldus
import java.util.Properties
import org.apache.kafka.clients.producer._
object KafkaProducer extends App {
val props = new Properties()
val topicName = "department"
props.put("bootstrap.servers", "localhost:9092,localhost:9093")
props.put("key.serializer","org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer","org.apache.kafka.common.serialization.StringSerializer")
props.put("partitioner.class", "com.knoldus.CustomPartitioner")
val producer = new KafkaProducer[String, String](props)
try {
for (i <- 0 to 5) {
val record = new ProducerRecord[String, String](topicName,"IT" + i,"My Site is knoldus.com " + i)
producer.send(record)
}
for (i <- 0 to 5) {
val record = new ProducerRecord[String, String](topicName,"COMP" + i,"My Site is knoldus.com " + i)
producer.send(record)
}
} catch {
case e: Exception => e.printStackTrace()
} finally {
producer.close()
}
}
The first step in writing messages to Kafka is to create a producer object with the properties you want to pass to the producer. A Kafka producer has three mandatory properties, as you can see in the above code:
bootstrap.servers: host/port pairs of Kafka brokers that the producer will use to establish a connection to the Kafka cluster. It is recommended to include at least two Kafka brokers, because if one broker goes down, the producer will still be able to connect to the cluster.
key.serializer: Name of the class that will be used to serialize the key.
value.serializer: Name of the class that will be used to serialize the value.
If you look at the rest of the code, there are only three steps:
- Create a KafkaProducer object.
- Create a ProducerRecord object.
- Send the record to the broker.
That is all that we do in a Kafka Producer.
Kafka Custom Partitioner
We need to create our class by implementing the Partitioner interface. Your custom partitioner class must implement three methods from the interface:
configure.
partition.
close.
Let’s look at the code.
package com.knoldus
import java.util
import org.apache.kafka.common.record.InvalidRecordException
import org.apache.kafka.common.utils.Utils
import org.apache.kafka.clients.producer.Partitioner
import org.apache.kafka.common.Cluster
class CustomPartitioner extends Partitioner {
val departmentName = "IT"
override def configure(configs: util.Map[String, _]): Unit = {}
override def partition(topic: String,key: Any, keyBytes: Array[Byte], value: Any,valueBytes: Array[Byte],cluster: Cluster): Int = {
val partitions = cluster.partitionsForTopic(topic)
val numPartitions = partitions.size
val it = Math.abs(numPartitions * 0.4).asInstanceOf[Int]
if ((keyBytes == null) || (!key.isInstanceOf[String]))
throw new InvalidRecordException("All messages must have department name as key")
if (key.asInstanceOf[String].startsWith(departmentName)) {
val p = Utils.toPositive(Utils.murmur2(keyBytes)) % it
p
} else {
val p = Utils.toPositive(Utils.murmur2(keyBytes)) % (numPartitions - it) + it
p
}
}
override def close(): Unit = {}
}
The configure and close methods are used for initialization and clean up. In our example, we don't have anything to initialize or clean up.
The partition method is the place where all the action happens. The producer will call this method for each message record. The input to this method is the key, the topic, and the cluster details. All we need to do is return an integer as the partition number. This is the place where we have to write our algorithm.
Algorithm
Let’s try to understand the algorithm that I have implemented. I am applying my algorithm in four simple steps.
- The first step is to determine the number of partitions and reserve 40% of them for the IT department. If I have five partitions for the topic, this logic will reserve two partitions for IT. The next question is, how do we get the number of partitions in the topic?
We got a cluster object as input, and its partitionsForTopic method will give us a list of all partitions. We then take the size of the list; that's the number of partitions in the topic. Finally, we set it to 40% of the number of partitions. So, if I have five partitions, it should be set to 2.
- If we don’t get a message Key, throw an exception. We need the Key because the Key tells us the department name. Without knowing the department name, we can’t decide that the message should go to one of the two reserved partitions or it should go to the other three partitions.
- The next step is to determine the partition number. If the key starts with IT, then we hash the key, take it modulo 2 (the number of reserved partitions), and use the result as the partition number. Using mod will make sure that we always get 0 or 1.
- If the key does not start with "IT", we take the hash modulo 3 (the number of remaining partitions), which gives a value between 0 and 2, and then add 2 to shift it into partitions 2 through 4.
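The arithmetic in these steps can be sketched in plain Python (not the article's Scala); the hash values below are made-up stand-ins for what `Utils.toPositive(Utils.murmur2(keyBytes))` would produce:

```python
# Hypothetical sketch of the partitioning arithmetic described above.
# key_hash stands in for Utils.toPositive(Utils.murmur2(keyBytes)).
def pick_partition(key, key_hash, num_partitions=5):
    reserved = int(abs(num_partitions * 0.4))  # 40% reserved => 2 of 5
    if key.startswith("IT"):
        return key_hash % reserved  # partition 0 or 1
    return key_hash % (num_partitions - reserved) + reserved  # partition 2, 3 or 4

print(pick_partition("IT-alice", 12345))  # → 1
print(pick_partition("HR-bob", 98765))    # → 4
```

Every "IT" key lands in the reserved range [0, 2), and every other key lands in [2, 5), which is exactly what the Scala partitioner above computes.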
Kafka Consumer
Let’s look at the consumer code.
package com.knoldus
import java.util
import java.util.Properties
import scala.jdk.CollectionConverters._
import org.apache.kafka.clients.consumer.KafkaConsumer
object KafkaConsumer extends App {
val props: Properties = new Properties()
val topicName = "department"
props.put("group.id", "test")
props.put("bootstrap.servers", "localhost:9092,localhost:9093")
props.put("key.deserializer","org.apache.kafka.common.serialization.StringDeserializer")
props.put("value.deserializer","org.apache.kafka.common.serialization.StringDeserializer")
val consumer = new KafkaConsumer[String, String](props)
try {
consumer.subscribe(util.Arrays.asList(topicName))
while (true) {
val records = consumer.poll(java.time.Duration.ofMillis(100))
for (record <- records.asScala) {
println("Topic: " + record.topic() + ", Offset: " + record.offset() +", Partition: " + record.partition())
}
}
} catch {
case e: Exception => e.printStackTrace()
} finally {
consumer.close()
}
}
A Kafka consumer has three mandatory properties as you can see in the above code:
bootstrap.servers: host:port pairs of the Kafka brokers that the consumer uses to establish an initial connection to the Kafka cluster. It is recommended to include at least two brokers, so that if one broker goes down the consumer can still connect to the cluster.
key.deserializer: Name of the class that will be used to deserialize key.
value.deserializer: Name of the class that will be used to deserialize a value.
If you look at the rest of the code, there are only two steps:
- Subscribe to the topic.
- Consume messages from the topic in a poll loop.
That is all that we do in a Kafka Consumer.
I hope you enjoyed this blog. You can now create a custom partitioner in Kafka using Scala. If you want the source code, please feel free to download it.
Thanks for reading!
This FAQ is coordinated by the JED Team here: ... t-Joomla-4. Feel free to send us more questions or answers.
A huge Thank You to our volunteers! This is the current Joomla 4 Beta: ... 4.0.0-beta
MAIN QUESTIONS
When is Joomla 4 going to be available? When do I have to migrate?
Joomla 4 stable is probably going to be released at the end of 2020. The Beta versions will help to test and plan the process. Please, join the effort, download the Beta, test it and report the issues that you find.
After the release, the last version of Joomla 3.10.x will be supported for two years. Joomla 3.10 will be supported until 2023. So, you can adopt Joomla 4 between 2021 and 2023.
The most up-to-date information regarding this timeline can be found in our project roadmap.
Are the extensions for Joomla 3.10 going to work on Joomla 4?
Joomla 4 is a major version; it is coming with new features, improvements and deprecations.
Joomla 3.10 is going to be the final version in the 3.x line, and it is going to have a compatibility layer; so, it is the closest version to be ready for the change. The change implies that every part of the installation must be checked before the migration to avoid any surprise. All extension developers must publish updated extensions for Joomla 4.
What are the potential backward compatibility issues in Joomla 4?
This article is the official reference of Potential backward compatibility issues in Joomla 4: ... n_Joomla_4
On top of the potential backward compatibility issues, the biggest change is the new templates based on Bootstrap 4 and other modern front-end tools.
What is Joomla 3.10 compatibility layer?
Joomla 3.10 and Joomla 4 will come with a compatibility layer to ease the transition to Joomla 4. The compatibility layer will be removed in Joomla 5.
For instance, the J-Classes are aliases to the new namespaced classes. `JRegistry` => `\Joomla\Registry\Registry`.
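As an illustration (in Python rather than PHP, with hypothetical names, and not Joomla's actual implementation), a compatibility class map is essentially just registering the legacy name as an alias for the new namespaced class:

```python
# Sketch of what a compatibility class map does; names are illustrative.
class Registry:  # stands in for the namespaced \Joomla\Registry\Registry
    def __init__(self):
        self.data = {}

JRegistry = Registry  # the legacy global name becomes an alias

r = JRegistry()
print(isinstance(r, Registry))  # → True: old code keeps working unchanged
```

Old code that instantiates the legacy name transparently gets the new class, which is why removing the layer in Joomla 5 breaks only code that never migrated.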
What is the Joomla 3.10 and the Update Checker?
The Update Checker is an extension that will be included on Joomla 3.10 to confirm if:
- The hosting meets the minimum requirements of Joomla 4.x, and
- The installed extensions that use the Update Server have versions compatible with Joomla 4.
For more information:
- ... te-checker
- ... quirement/
- ... ger_Update
No. All extensions must drop the use of Bootstrap 2.3 in the migration to Joomla 4.
ABOUT THE FRONT-END DEVELOPMENT
Are Page Builders compatible with Joomla 4?
The page builders are regular extensions, so each extension author must define how their page builder will support Joomla 4. In theory, a page builder could ease the transition between Joomla and Bootstrap versions. However, it is not yet clear whether a page builder could migrate pages automatically or whether the pages would have to be rebuilt.
Are extensions encouraged to be Framework Agnostic?
That would be ideal but supporting the core backend template, and core front-end template should be a good start. As for the front-end, it would be similar to now.
What has changed on the front-end? Bootstrap? jQuery?
Joomla 4 comes by default with Bootstrap 4 and plain JavaScript scripts. Joomla 4 includes jQuery for backwards compatibility and BS4 compatibility. jQuery is not used at Joomla 4 core level.
Is it defined that Joomla 5 will come with Bootstrap 5?
No. There are no plans about the Framework to be used in Joomla 5 yet.
How does the core team build the front-end files? What has changed?
TBD
ABOUT THE DATABASE CHANGES
Will Joomla use "NULL date defaults" going forward everywhere?
Yes. This is what is defined in Potential backward compatibility issues in Joomla 4.
Strict mode has been enabled. The following flags are now active by default in Joomla 4, and you may have to update your database queries accordingly. This will help us with future MySQL version upgrades and also aligns more closely with Postgres to enable easier compatibility with queries in both languages.
Code: Select all
'STRICT_TRANS_TABLES', 'ERROR_FOR_DIVISION_BY_ZERO', 'NO_AUTO_CREATE_USER', 'NO_ENGINE_SUBSTITUTION',
Will the Joomla 3 to Joomla 4 Updater convert all of the core tables to use "NULL date defaults"?
The Joomla 4 Updater will handle the change of "NULL date defaults" to the new schema of the Joomla 4 core tables. Please note that the way NULL values are managed changes.
These are the upcoming changes:
1. The 0000-to-NULL conversion.
2. The NULL-to-mandatory field type conversion: fields that were nullable on J3 are now mandatory.
For instance, check the following table of field changes:
J3 Table | J3 Field name | J3 Field type | => | J4 Table | J4 Field name | J4 Field type
---------|---------------|---------------|----|----------|---------------|---------------
User | registerDate | datetime No 0000-00-00 00:00:00 | => | User | registerDate | datetime No None
User | lastvisitDate | datetime No 0000-00-00 00:00:00 | => | User | lastvisitDate | datetime Yes NULL
Content | created | datetime No 0000-00-00 00:00:00 | => | Content | created | datetime No None
Content | modified | datetime No 0000-00-00 00:00:00 | => | Content | modified | datetime No None
Please, check the following PR for the Weblinks extension: Fix default value for datetime columns and make some nullable to adapt to CMS 4.0-dev and remove sqlsrv
Will older extensions that still use the "0000-00-00" date defaults break?
It depends.
The core upgrade is going to convert only the core tables, and the core upgrade isn't going to convert schema or content for tables which belong to 3rd-party extensions.
The 3rd-party extension upgrade for Joomla 4 can convert the tables to NULL dates. Extensions will continue to work if they use the API to write into core tables in the right way.
Will the Update Checker check all tables for invalid date defaults?
No, the Update Checker checks the hosting support for Joomla 4 and the availability of extensions for Joomla 4. It does not check the tables individually, only the availability of extension versions for Joomla 4. Indirectly, the result will be the same.
ABOUT PHP NAMESPACES AND AUTOLOADERS
Is the namespacing of extensions mandatory or recommended on Joomla 4?
It is recommended. The use of namespaces will be mandatory for Joomla 5.
Can we still use the current J classes and J helpers in Joomla 4? For instance, JText
Yes. Please take into account that the J-Classes will be removed in Joomla 5.
Is the use of namespaces mandatory in Joomla 4?
No
Can an extension without namespaces run on Joomla 4?
Yes
Is Joomla 4 organized by PSR4 autoloading?
Yes, the core extensions have a new `src` folder following the PSR4 namespace organization. However, the autoloader is not driven by a `composer.json` file.
Ref: [PSR-4: Autoloader]()
Do the legacy autoloaders (`jimport` and `JLoader`) work in Joomla 4?
Yes
Is Joomla 4 compatible with Composer and Packagist libraries?
Yes, an extension can be developed with Composer libraries, using PSR0, PSR2 or PSR4. Take into account that name and version collisions can occur between extensions, so techniques to avoid conflicts must be implemented.
What is going to be deprecated on Joomla 5?
Please, check the list of classes that will be deprecated in `libraries/classmap.php`.
How many autoloaders are active in Joomla 4 by default?
TBD
ABOUT THE EXTENSION DEVELOPMENT
Does Joomla 4 support MySQL 8 and prepared statements?
Yes
Will Joomla 4 support PHP 8 at the end of 2020?
Yes
Is there a new recommended extension directory organization?
Yes, this is the new extension organization:
- forms
- helpers
- src
-- Controller
-- Dispatcher
-- Helper
-- Model
-- Service
-- View
- tmpl
Can an extension follow the Joomla 3 extension organization?
Yes
What are the official reference extensions for Joomla 4?
- Weblinks for Joomla 4 ... ee/4.0-dev
- Patchtester for Joomla 4
TBD. Reference:
What are the services and providers in Joomla 4?
TBD
Does Joomla 4 have Container?
TBD
Does Joomla 4 have Dependency Injection?
TBD
ABOUT THE SECURITY FEATURES
What is Joomla 4.0's Content Security Policy (CSP)?
Tobias talked about Content Security Policy (CSP) yesterday at JAB 2020. We'll link the video.
How to securely use the Joomla secret
TBD
Best Regards
PoolResolution
A new GST variable compilation order and extension interface
Traditionally, in GST, variables are resolved by a formula built by extension over years, for backward compatibility and maintenance of autobiographical separation. TwistedPools is a new, intermixed variable search order, designed to be more intuitive and fit separation of concepts in running images, rather than in a historical context.
Existing search order
Based on STInST.STSymTable, and what I recall from my foray into the libgst resolver, here is the search order for non-instance variables:
- Class methods use the same resolution as instance methods, so for all of the following, always look at the plain class.
- Add the class's namespace, including all containing namespaces.
- Add the class's class pool (classVariableNames).
- Repeat the above two for each superclass in turn, up to nil.
- Add the class's shared pools, including all containing namespaces, besides the class pool.
- Repeat the above one for each superclass in turn, up to nil.
Expectations
How does the above violate expectations? Here are some simple examples.
Namespace current: MyLibrary [
  Eval [ MyLibrary at: #StandardOverrides put: (BindingDictionary from: {#Scape -> nil}) ]
  Object subclass: Foo [
    Exception := nil. Scape := nil.
    exception [
      "I expect to answer the above classvar, but instead answer Smalltalk.Exception."
      ^Exception
    ]
  ]
  Foo subclass: Bar [
    <import: StandardOverrides>
    scape [
      "I expect to answer the StandardOverrides Scape, but instead answer Foo classPool at: #Scape."
      ^Scape
    ]
  ]
] "end namespace MyLibrary"

Namespace current: MyProject.MyLibWrapper [
  Eval [
    "note this changes my superspace"
    MyProject at: #Exception put: Smalltalk.Exception
  ]
  MyLibrary.Foo subclass: Baz [
    exception [
      "After trivial reordering for pools-first, I expect to answer MyProject.Exception, but instead answer Foo classPool at: #Exception."
      ^Exception
    ]
  ]
] "end namespace MyProject.MyLibWrapper"
Finding a new search order
The idea of twisting the class-pool and shared-pool resolution together is trivial. The real difficulty comes from the definitions in MyProject.MyLibWrapper. Were they to appear in the MyLibrary namespace instead, the pools-first behavior would make more sense. However, since Baz is not in the same namespace as Foo, it deserves a different outcome.
As such, no simple series of walks up the inheritance tree paired with pool-adds will give us a good search order.
Our goal in the design of PoolResolution's default resolver is to maintain a sense of containment within namespaces, while still allowing reference to all inherited environments, as is traditionally expected.
Variable search fundamentals
This is the essential variable search algorithm for TwistedPools.
- Given a class, starting with the method-owning class:
- Search the class pool.
- Search this class's shared pools, combined using IPCA, left-to-right, removing any resulting pools that are any of this class's namespace or superspaces.
- Search this class's namespace and each superspace in turn before first encounter of a namespace that contains, directly or indirectly, the superclass. This means that if the superclass is in the same namespace or a subspace, no namespaces are searched.
- Move to the superclass, and repeat from #2.
This is IPCA, the inheritable pool combination algorithm.
- Start a new list.
- From right to left, descend into each given pool not marked #visited.
- Recurse into #2 for each superspace.
- Mark this pool as #visited, and add to the beginning of #1's new list.
- After all recursions exit, return the new list.
Obviously, this is a topological sort, and is explicitly modeled after CLOS class precedence.
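The five steps above can be sketched in Python (the pool/superspace representation here is an assumption for illustration; the real implementation operates on Smalltalk namespace objects):

```python
def ipca(pools, superspaces):
    """Sketch of the Inheritable Pool Combination Algorithm (IPCA).

    pools: shared pools as declared, left to right.
    superspaces: maps a pool or namespace to its enclosing namespaces.
    Returns the combined search order.
    """
    visited = set()
    order = []  # 1. start a new list

    def descend(given):
        # 2. from right to left, descend into each pool not marked visited
        for pool in reversed(list(given)):
            if pool in visited:
                continue
            # 3. recurse for each superspace of this pool
            descend(superspaces.get(pool, ()))
            # 4. mark visited and add to the beginning of the new list,
            #    so a pool is always searched before its superspaces
            visited.add(pool)
            order.insert(0, pool)

    descend(pools)
    return order  # 5. after all recursions exit

print(ipca(["ModB"], {"ModB": ["MyProject"],
                      "MyProject": ["MyCompany"],
                      "MyCompany": ["Smalltalk"]}))
# → ['ModB', 'MyProject', 'MyCompany', 'Smalltalk']
```

The prepend-after-recursion step is what makes this a topological sort: a pool always precedes its superspaces, and the left-to-right precedence of the declared pools is preserved.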
Combination details
While the add-namespaces step above could be less eager to add namespaces, by allowing any superclass to stop the namespace crawl, rather than just the direct superclasses, it is already less eager than the shared pool manager. The topological sort is an obviously good choice, but why not allow superclasses' namespaces to provide deletions as well as the pool-containing class? While both alternatives have benefits, I believe that an eager import of all superspaces, besides those that already contain the pool-containing class, would most closely match what's expected.
An argument could also be made that by adding a namespace to shared pools, you expect all superspaces to be included. However, consider the usual case of namespaces in shared pools: imports. While you would want it to completely load an alternate namespace hierarchy, I think you would not want it to inject Smalltalk early into the variable search. Consider this diagram:
      Smalltalk
          |
      MyCompany
       /      \
 MyProject   MyLibrary
   /   \         /
 ModA  ModB  MyLibModule
If you were to use ModB as a pool in a class in MyLibModule, I think it is most reasonable that ModB and MyLibrary be immediately imported, but MyCompany and Smalltalk wait until you reach that point in the namespace walk.
Another argument could be made to delay further the namespace walk, waiting to resolve until no superclass is contained in a given namespace, based on the idea of exiting a namespace hierarchy while walking superclasses, then reentering it. Disregarding the unlikelihood of such an organization, I still think it would be less confusing to resolve the hierarchy being left first, in case the interloping hierarchy introduces conflicting symbols of its own.
You may note my lack of objective argument regarding the above points of contention. That is because I don't have a formal proof. Convenient global name resolution is entirely a matter of feeling, because a formal programmer could always explicitly spell out the path to every variable.
Integrating namespace pools
I have an idea to add shared pools to namespaces, thereby allowing users to import external namespaces for every class in a namespace, rather than each class. If this is integrated, it would need to twist nicely.
Here is how I think it would best work: after searching any namespace, combine its shared pools using IPCA, removing all elements that are any of this namespace or its superspaces, and search the combination from left to right.
Sample implementation
PoolResolution is implemented in a git branch on master, for the STInST compiler only. As such, it does not apply to bindings unresolved at compile time, or code parsed directly from the REPL. You must load Compiler to try out PoolResolution.
Besides the PoolResolution hierarchy and changes to STSymTable, there are two important protocol items of note:
- PoolResolution>>current, a property defining the current resolution class.
- Behavior>>poolResolution, which answers the above by default but can be overridden on a class-by-class basis.
Pull the `pool-resolution` branch from git://nocandy.dyndns.org/smalltalk.git . The machine has slow uplink, so please don't clone it; use an existing smalltalk clone from elsewhere as a base, and add the above as a remote branch:
$ git remote add -t pool-resolution s11 git://nocandy.dyndns.org/smalltalk.git
$ git checkout --track -b pool-resolution s11/pool-resolution
On Nov 3, 2003, at 3:05 AM, Ralf W. Grosse-Kunstleve wrote:

> --- Harri Hakula <Harri.Hakula at arabianranta.com> wrote:
>> With the latest compiler from Apple everything builds even on Jaguar.
>> I got most of the tests I tried running, but had to compile everything
>> manually.
>
> After seeing Harri Hakula's posting I've upgraded our Mac to
> OS 10.2.8 followed by installing the August 2003 gcc update.
> gcc --version reports:
>
> gcc (GCC) 3.3 20030304 (Apple Computer, Inc. build 1493)
>
> I've used this gcc to compile Python 2.3 from scratch as a framework
> build. Using the current boost CVS I've made just one modification
> based on Sean Spicer's posting. See attached patch. Then:
>
> bjam -sPYTHON_VERSION=2.3 -sTOOLS=darwin "-sBUILD=debug <warnings>off"
> -sALL_LOCATE_TARGET=/net_coral/scratch1/rwgk/bjam
>
> This is a full success. Everything compiles and links without
> warnings or errors. Some tests run, but unfortunately most hang
> indefinitely until killed manually with -9. See attached log.
> For example, this is fine:
>
> bjam -sPYTHON_VERSION=2.3 -sTOOLS=darwin "-sBUILD=debug <warnings>off"
> -sALL_LOCATE_TARGET=/net_coral/scratch1/rwgk/bjam -sRUN_ALL_TESTS=1 dict
> ...found 2015 targets...
> ...updating 1 target...
> python-test-target
> /net_coral/scratch1/rwgk/bjam/bin/boost/libs/python/test/dict.test/
> darwin/debug/warnings-off/dict.test
> running...
> Done.
> ...updated 1 target...
>
> But this one hangs:
>
> bjam -sPYTHON_VERSION=2.3 -sTOOLS=darwin "-sBUILD=debug <warnings>off"
> -sALL_LOCATE_TARGET=/net_coral/scratch1/rwgk/bjam -sRUN_ALL_TESTS=1 list
>
> It hangs at this line:
>
> from list_ext import *
>
> According to the debugger the process hangs here:
>
> (gdb) where
> #0 0x90034728 in semaphore_wait_trap ()
> #1 0x90009c18 in pthread_mutex_lock ()
> #2 0x0054f1f0 in std::__default_alloc_template<true,
> 0>::allocate(unsigned long
> etc.
>
> Does this ring any bells?
>
> Harri, how did you get around this problem?
> Could you please post more details about your platform and the commands
> that you used to compile and link?

Unfortunately I've already upgraded to Panther on my system so I cannot
replicate everything. (And I accidentally wiped my /usr/local/src ... :-()
Luckily, I did save some notes.

Mac: Dual 1GHz, 10.2.8
Apple compilers: August 2003 gcc update
MacPython 2.3 compiled from source with ./configure --enable-framework --with-cxx=g++
boost: I am not sure about this, but I believe it was 1.30.2 and not a
snapshot from CVS. I compiled with the standard actions except I added the
-fabi-version=0 since the ublas tests require this -- I had this on
MacPython also.

I followed Jonathan Brandmeyer's advice on an earlier thread and linked
everything manually along the lines of this makefile snippet (this is
from his posting):

libboost_python.dylib: $(BOOST_OBJS)
	ld -w -d -u -o libboost_python.lo $^
	g++ -w -dynamic -undefined suppress -o $@ libboost_python.lo
	rm -f libboost_python.lo

I never got the dynamic loading working properly (unfortunately I did not
save the error messages) but this led to success:

g++ -undefined suppress -flat_namespace -bundle getting_started1.o
/usr/local/src/boost/bin/boost/libs/python/build/libboost_python.dylib/darwin/debug/shared-linkable-true/warnings-off/libboost_python.lo
-o getting_started1.so

I am afraid I was not employing any scientific method while trying to get
this to work. Having reread Jonathan Brandmeyer's message I don't
understand why I never tried this idea:

# python points to the python2.3 executable
# libboost_python.dylib is assumed to be on the standard library search
# path.
myextension.so: $(MYEXTENSION_OBJS)
	g++ -w -bundle -bundle_loader $(python) -o $@ $^ -lboost_python

Sorry for this stream-of-consciousness, too much teaching, not enough
sleep...

Cheers,
Harri Hakula
convert a string to a long integer
#include <stdlib.h>

long int strtol( const char *ptr, char **endptr, int base );
The strtol() function converts the string pointed to by ptr to an object of type long int. It recognizes a string containing:
The conversion ends at the first unrecognized character. A pointer to that character will be stored in the object to which endptr points, if endptr is not NULL.
The converted value. If the correct value would cause overflow, LONG_MAX or LONG_MIN is returned according to the sign, and errno is set to ERANGE. If base is out of range, zero is returned and errno is set to EDOM.
#include <stdlib.h>

void main()
{
    long int v;
    v = strtol( "12345678", NULL, 10 );
}
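For comparison, here is a rough Python illustration (not part of the original page) that calls libc's strtol through ctypes to show the endptr behaviour; it assumes a Unix-like system where the C library is loadable:

```python
import ctypes

# Load the C library (works on Unix-like systems; an assumption here).
libc = ctypes.CDLL(None)
libc.strtol.restype = ctypes.c_long
libc.strtol.argtypes = [ctypes.c_char_p,
                        ctypes.POINTER(ctypes.c_char_p),
                        ctypes.c_int]

s = b"12345678xyz"
end = ctypes.c_char_p()
value = libc.strtol(s, ctypes.byref(end), 10)
print(value)      # → 12345678
print(end.value)  # → b'xyz': conversion stopped at the first unrecognized character
```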
ANSI
atoi(), atol(), errno, itoa(), ltoa(), sscanf(), strtoul(), ultoa(), utoa()
Implements a specialised surface that represents the screen.
More...
#include <screen.h>
List of all members.
Implements a specialised surface that represents the screen.
It keeps track of any areas of itself that are updated by drawing calls, and provides an update method that blits the affected areas to the physical screen
Definition at line 42 of file graphics/screen.h.
Definition at line 30 of file screen.cpp.
Definition at line 34 of file screen.cpp.
Definition at line 38 of file screen.cpp.
[protected, virtual]
Adds a rectangle to the list of modified areas of the screen during the current frame.
Reimplemented from Graphics::ManagedSurface.
Definition at line 61 of file screen.cpp.
[inline, virtual]
Clear the current dirty rects list.
Definition at line 83 of file graphics/screen.h.
Clears the current palette, setting all entries to black.
Definition at line 123 of file screen.cpp.
Return the currently active palette.
Definition at line 103 of file screen.cpp.
Return a portion of the currently active palette.
Definition at line 108 of file screen.cpp.
[inline]
Returns true if there are any pending screen updates (dirty areas).
Definition at line 72 of file graphics/screen.h.
Marks the whole screen as dirty.
This forces the next call to update to copy the entire screen contents
Definition at line 70 of file screen.cpp.
[private]
Merges together overlapping dirty areas of the screen.
Definition at line 74 of file screen.cpp.
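The idea behind merging dirty areas can be sketched like this (a hypothetical illustration in Python, not the class's actual C++ implementation; rectangles are (left, top, right, bottom) tuples):

```python
def merge_dirty_rects(rects):
    """Repeatedly union any two overlapping rects until none overlap."""
    def intersects(a, b):
        return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

    def union(a, b):
        return (min(a[0], b[0]), min(a[1], b[1]),
                max(a[2], b[2]), max(a[3], b[3]))

    merged = list(rects)
    changed = True
    while changed:
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                if intersects(merged[i], merged[j]):
                    merged[i] = union(merged[i], merged[j])
                    del merged[j]
                    changed = True
                    break
            if changed:
                break
    return merged

print(merge_dirty_rects([(0, 0, 10, 10), (5, 5, 15, 15), (20, 20, 30, 30)]))
# → [(0, 0, 15, 15), (20, 20, 30, 30)]
```

Merging overlapping areas trades a slightly larger blit for fewer blit calls when the screen is updated.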
Set the palette.
Definition at line 113 of file screen.cpp.
Set a subsection of the palette.
Definition at line 118 of file screen.cpp.
Returns the union of two dirty area rectangles.
Definition at line 96 of file screen.cpp.
[virtual]
Updates the screen by copying any affected areas to the system.
Definition at line 42 of file screen.cpp.
List of affected areas of the screen.
Definition at line 47 of file graphics/screen.h.
06 June 2012 07:38 [Source: ICIS news]
BRISBANE (ICIS)--Asia’s June caprolactam (capro) contract offers at $2,530-2,580/tonne (€2,024-2,064/tonne), down by $50-90/tonne from May settlements, have failed to move negotiations forward because of a wide buy-sell gap, market sources said on Wednesday.
May settlements were at $2,620-2,630/tonne, down by $130-140/tonne from April. The prices are on a cost and freight (CFR) northeast (NE) Asia basis.
Most buyers said it is difficult to move negotiations forward, as the difference between spot and contract prices is more than $200/tonne.
Asian capro spot prices have fallen by more than 20% since 15 February, when prices were trading at $2,950-3,000/tonne CFR China, as a result of weak consumption and concerns over softening global demand.
Asian capro spot prices fell this week by more than $50/tonne week on week to the $2,300s/tonne CFR China, because of tumbling upstream Asian benzene prices that hit a six-month low on 4 June.
Sellers said they are willing to compromise and settle at lower prices as buyers were understandably bearish.
However, the sellers said they are not keen to slash their selling ideas to the current spot prices, as that will mean selling at way below their production costs, suggesting huge margin losses.
Capro is an intermediate primarily used in the production of nylon 6 (or polyamide 6) fibres, plastics and other polymeric materials.
|
Subject: [ggl] 'area' and 'within' implementations for spherical earth?
From: Barend Gehrels (barend)
Date: 2011-07-13 01:20:39
Hi Thomas,
> Ahh, I missed that. Yes within appears to work fine.
Good to hear.
Hmm, I'm afraid you found a bug indeed. With real-world data it will
probably not occur.
The central idea of Boost.Geometry is that polygons have the correct
orientation: they should be clockwise, and inner rings should be counter-clockwise.
This rule is general, unless you specify that polygons have
counter-clockwise order (in which case inner rings should be clockwise).
If you specify it wrong, you would get a negative area.
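To see why a wrongly oriented ring produces a negative area, here is a small illustration (in Python, unrelated to Boost.Geometry's actual code) using the signed shoelace formula, under the usual mathematical convention where counter-clockwise is positive:

```python
def signed_area(ring):
    """Signed shoelace area of a closed ring given as (x, y) points."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(ring, ring[1:] + ring[:1]):
        s += x1 * y2 - x2 * y1
    return s / 2.0

ccw = [(0, 0), (4, 0), (4, 4), (0, 4)]  # counter-clockwise square
print(signed_area(ccw))        # → 16.0
print(signed_area(ccw[::-1]))  # → -16.0: reversed orientation flips the sign
```

A clockwise-outer convention such as Boost.Geometry's default simply flips which orientation counts as positive; either way, the wrong winding shows up as a negative area.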
That is all in the general case. For spherical coordinate systems it is a little more
complicated: the calculations have to take the dateline into account. Apparently the
detection of that is not perfect, which is why your first area was
positive. Probably the same for the inner area.
If you call bg::correct(boxRegion), it should be OK. If you use polygons
that do not enter negative coordinates like this, it is also OK, though I
admit that this is a bug, so I will look at it in more detail soon.
OK, finally, what I really find strange is that if I change
spherical_equatorial<bg::degree> to cartesian, it (correctly) complains
about the forgotten namespace in assign_points and within. That is
really surprising - I don't change the header files, so the original file
should NOT have compiled... But it did, in MSVC, in GCC and in clang.
This is a mystery to me. I scanned for "using" and found only a few local
calls to using. So I don't know why it compiles.
Regards, Barend
Geometry list run by mateusz at loskot.net
#define trigPin 5
#define echoPin 3

#include <NewPing.h>
#define SONAR_NUM ...
NewPing(5, 3, MAX_DISTANCE), // Each sensor's trigger pin, echo pin, and max distance to ping.
Serial.print(i); Serial.print("="); Serial.print(cm[i]); Serial.print("cm ");
Serial.println();
delayMicroseconds(1000);
I can hear it clicking but I'm beginning to think it may be defective. Any Ideas?
NewPing sonar[SONAR_NUM] = { // Sensor object array. NewPing(5, 3, MAX_DISTANCE), // Each sensor's trigger pin, echo pin, and max distance to ping. };
Next I tried an example from the new Ping library:Code: [Select]#include <NewPing.h>.....And all I get in output are zeroes. (0=0cm). I can hear it clicking but I'm beginning to think it may be defective. Any Ideas?
Quite a bad sign. I cannot hear my (working) sensor and that's fine because the frequency is 40kHz, more than double of what a human ear is able to detect.
Also, no reason to open a new thread with an ultrasonic sensor question, you can just ask on the NewPing thread if you're trying to get your sensor to work with NewPing.
Thanks for the input, Tim. I tried the Simple Sketch mentioned in your last post with the same results- "Ping: 0cm". I double-checked my connections against the diagram using pins 11 and 12 for echo and trigger. Also, I have swapped jumper wires to no avail. Any other suggestions?
void loop() {
  int duration, distance;
  digitalWrite(trigPin, LOW); // Added
  delayMicroseconds(2); // added
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10); // changed to 10
....
RTL-SDR Tutorial: Listening to TETRA Radio Channels
NOTE: There is now a plugin available for SDR# that will decode TETRA fairly easily. It is still in beta and misses a few features found in telive. Check it out in this post.
TETRA is a trunked radio communications system that stands for "Terrestrial Trunked Radio". It is used heavily in many parts of the world, except for the USA. Recently, a software program called Tetra Live Monitor (telive) was released on GitHub. This software can be used along with the (patched) Osmo-TETRA software to monitor and listen to unencrypted TETRA communications.
Below we show a tutorial on how to listen to TETRA communications using a RTL-SDR RTL2832U software defined radio. This tutorial is based heavily on the telive_doc.pdf file that is written by the author of telive and included in the telive git download. Please refer to that pdf file for further details on how the software works. We have modified their tutorial slightly to make it a little easier to understand. As this code is still under heavy development if you have trouble please check their PDF file for modifications to the procedures.
Again, we reiterate: This tutorial is not a substitute for a thorough reading of the documentation. If you have trouble setting this software up, please refer to the telive documentation first, before asking any questions. It contains a comprehensive FAQ section which solves most of the common problems. The documentation can be found directly at
Decoding and Listening to TETRA Tutorial
Most of this tutorial is performed in Linux and we assume that you have some decent Linux experience. We also assume you have some experience with the RTL-SDR dongle and have a decent antenna capable of picking up TETRA signals in your area. If you don't have a RTL-SDR dongle yet see our Buy RTL-SDR dongles page.
Note: As of October 2016 there is now a Windows port of the Telive decoding software available. This may be an option for you if you prefer to run in Windows. More information here.
First, we will need to find some TETRA signals. The easiest way to do this is to open SDR# or another program like GQRX and look for them. TETRA signals are continuously broadcasting with a bandwidth of around 25 kHz. In most European countries they can be found at 390 - 470 MHz. In some countries they may be found around 850 MHz or 915 - 933 MHz. There may be several TETRA signals grouped in close proximity to one another. See the example images below.
An example audio clip of a TETRA signal recorded in NFM mode is shown below.
Once you have found some TETRA signals, record their frequencies. Now close SDR#, or whatever software you were using and boot into Linux. In this tutorial we use a 32-bit Ubuntu 14.04 virtual machine running on VMWare Player as our Linux system. Some of the commands may vary if you are using a different system.
Install the software
Note: There is now a telive live Linux image available. This will allow you to boot via a USB drive straight into a Linux OS with telive preinstalled. If you want the easy way out, or have trouble with the install script below, then try this image.
Note 2: As of October 2016 there is now a Windows port of the Telive decoding software available. This may be an option for you if you prefer to run in Windows. More information here.
This install script will automatically download the software and all the required prerequisites, including the RTL-SDR drivers. If you have problems, consult the documentation or try a manual install. Instructions for the manual install are shown at the end of this post.
sudo wget
sudo chmod 755 install_telive.sh
./install_telive.sh
Running the Software
- Open a terminal window and browse to ~/tetra/osmo-tetra-sq5bpf/src and run ./receiver1 1.
cd ~/tetra/osmo-tetra-sq5bpf/src ./receiver1 1
- Open a second terminal window or tab and open a specially sized xterm window using the following.
/usr/bin/xterm -font fixed -bg black -fg white -geometry 203x60
- In the xterm window, browse to ~/tetra/telive and run ./rxx.
cd ~/tetra/telive ./rxx
- Open another terminal window or tab and browse to /tetra/bin and run ./tetrad.
cd /tetra/bin ./tetrad
- Open another terminal window or tab and open GNU Radio Companion by typing the following.
gnuradio-companion
- In GNU Radio open the telive_1ch_simple_gr37.grc file which is found in ~/tetra/telive/gnuradio-companion.
- Execute the flowgraph by clicking on the play button icon on GNU Radio Companion toolbar.
- At the bottom of the screen that pops up look for the Frequency: text box and enter the centre frequency of the TETRA signal that you want to monitor. You can also click on the centre of the TETRA signal spikes in the Full Spectrum view to tune to a different signal.
- Enter the PPM offset of your RTL-SDR dongle in the ppm: text box.
- Finally adjust the SDR Input Gain setting for best reception.
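If you find yourself repeating the four-terminal dance above, the launch steps can be collected into one helper script. This is only a sketch: the paths are the ones used in this tutorial, and with DRY_RUN=1 (the default here) it only prints what would be started rather than starting anything.

```shell
#!/bin/sh
# Launcher sketch for the steps above.  With DRY_RUN=1 (default) it
# only echoes the commands; set DRY_RUN=0 to actually start them in
# the background.  Paths are those used in this tutorial.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@" &
    fi
}

run sh -c 'cd ~/tetra/osmo-tetra-sq5bpf/src && ./receiver1 1'
run /usr/bin/xterm -font fixed -bg black -fg white -geometry 203x60 \
    -e sh -c 'cd ~/tetra/telive && exec ./rxx'
run sh -c 'cd /tetra/bin && ./tetrad'
run gnuradio-companion ~/tetra/telive/gnuradio-companion/telive_1ch_simple_gr37.grc
```

Running the script with DRY_RUN=1 first lets you confirm the paths match your install before launching anything for real.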
At this point you should confirm that you see a strong rectangular TETRA signal in the FFT window that pops up. If you do, switch back to the first terminal window where you ran ./receiver1 1 and confirm that you see system data scrolling by. If there is no data scrolling by, try adjusting the gain and PPM offset in the FFT window.
If data is scrolling and the system is not encrypted you should start to hear voice audio. If a system is capable of encryption, the terminal window with the system data will show Air encryption: 1. However, note that even if it shows this, there is still a possibility that encryption has not been enabled.
Note that for a one channel receiver the frequency you tune to should be a control channel. The control channel frequency is the frequency shown in the top row of the Telive window in the green bar next to the word "Down:". By pressing "t" (lower case T) in the Telive window you can toggle between the usage identifier window and the frequency info window. By looking at the frequency info window you can find neighbour networks.
If you want to log all voice communications, you can do so by pressing "shift+R" (upper case R) in the telive window. This will log .ogg audio files to /tetra/out. You can also enable a text log by pressing "l" (lower case L), which will write to /tetra/log/telive.log. More options can be found by entering ? (question mark).
Telive is also capable of decoding SDS messages, which are used to send short text messages or radio locations. If the TETRA system you are monitoring sends radio locations via SDS, these can be automatically exported to a KML file stored at /tetra/log/tetra1.kml. If you open example_google_earth.kml, Google Earth will periodically read from /tetra/log/tetra1.kml and give you an updated map of radio locations. You can also set the TETRA_KML_INTERVAL environment variable, which defines how often the location file updates. The default is 30s, but be aware that decreasing the time can slow your system down.
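For example, to make the KML file refresh every 60 seconds instead of the default 30 (60 is an arbitrary example value), export the variable in the shell before starting telive:

```shell
# Hypothetical example: refresh /tetra/log/tetra1.kml every 60 seconds.
# Export the variable before starting ./rxx so telive inherits it.
export TETRA_KML_INTERVAL=60
echo "KML will update every ${TETRA_KML_INTERVAL}s"
```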
If you happen to close the GNU Radio FFT window and want to run the program again, you will need to restart the ./receiver1 1 program in the first terminal window.
To see how to monitor two or four TETRA channels simultaneously, refer to the telive_doc.pdf PDF file.
OLD MANUAL INSTRUCTIONS
Don't use these instructions unless you cannot use the automatic script install for some reason.
Install the RTL-SDR Linux Drivers
If you haven't done so already, follow the instructions at to install the Linux RTL-SDR drivers. Remember to blacklist the DVB-T drivers on Linux.
Install Prerequisites
sudo apt-get update
sudo apt-get install vorbis-tools
sudo apt-get install sox
sudo apt-get install alsa-utils
sudo apt-get install libncurses-dev
Note that if you use a different Linux OS, then some users have reported needing to also install the following extra dependencies:
sudo apt-get install git-core autoconf automake libtool g++ python-dev swig libpcap0.8-dev
sudo apt-get install cmake git libboost-all-dev libusb-1.0-0 libusb-1.0-0-dev libfftw3-dev swig python-numpy
Install GNU Radio 3.6
The TETRA decoding software requires installation of the older GNU Radio 3.6 (latest version is 3.7). The easiest way to do this is to run Marcus Leech's install script with the -o flag, to indicate that you want the old version:
cd ~
wget && chmod a+x ./build-gnuradio && ./build-gnuradio -o
This script will run for a few hours and should install GNU Radio 3.6 and all the drivers required to run the RTL-SDR on Linux. Note that if you already have GNU Radio 3.7 installed, we recommend installing 3.6 on a fresh Linux install, as the two versions may conflict.
Install libosmocore-sq5bpf
cd ~
git clone
cd libosmocore-sq5bpf
autoreconf -i
./configure
make
sudo make install
sudo ldconfig
Install osmo-tetra-sq5bpf
cd ~
git clone
cd osmo-tetra-sq5bpf
cd src
make
Install telive
cd ~
git clone
cd telive
make
sudo mkdir /tetra
sudo chown YOURUSER.YOURGROUP /tetra
sh install.sh
Where YOURUSER.YOURGROUP should be replaced with the username and group that you are currently logged in to on your Linux system. In most cases it can just be YOURUSER.YOURUSER. Run ls -l in your home directory to see what username and group your files are using.
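Rather than looking the names up by hand, you can let id(1) build the YOURUSER.YOURGROUP string for you:

```shell
# Build the YOURUSER.YOURGROUP string from id(1) and print the chown
# command you would run (printed rather than executed here, since
# creating and chowning /tetra needs root).
OWNER="$(id -un).$(id -gn)"
echo "sudo chown $OWNER /tetra"
```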
Install the TETRA Codecs
Note that if you are running a 64-Bit Linux version you will need to set your system to use a 32-bit compiler. The Appendix of the telive_doc.pdf file shows how to do this.
- Go to
- In the top right enter as a search term "en 300 395-2" and click the button to select Search Standards.
- Start the search.
- Find the search result labelled as REN/TETRA-05059.
- Click on the winzip icon (looks like a white page with a yellow file cabinet on it) to the right of the result to download en_30039502v010301p0.zip.
- Move this zip file into ~/osmo-tetra-sq5bpf/etsi_codec-patches.
- In a terminal browse to ~/osmo-tetra-sq5bpf/etsi_codec-patches.
- Unzip the file, making sure to unzip with lower case letters by using the following unzip command.
unzip -L en_30039502v010301p0.zip
- Use the codec.diff file to patch the codec files you just unzipped by typing the following patch command.
patch -p1 -N -E < codec.diff
- Open the c-code folder.
cd c-code
- Run make to compile the codecs.
make
- Copy the compiled files cdecoder and sdecoder to /tetra/bin by typing the following, or just by copy and pasting them in the Linux GUI.
cp cdecoder sdecoder /tetra/bin
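As a quick sanity check that the two codec binaries ended up where telive expects them (paths as used in this tutorial), you can run a small sketch like this:

```shell
# Check that both decoders exist and are executable; collect the
# result in a variable so it can be inspected after the loop.
status=""
for f in cdecoder sdecoder; do
    if [ -x "/tetra/bin/$f" ]; then
        status="$status ok:$f"
    else
        status="$status missing:$f"
    fi
done
echo "codec check:$status"
```

If either binary is reported missing, re-run the make and cp steps above before starting telive.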
Hi everybody.
I tried to install Gnuradio in my RP3, and this happened.
Starting all functions at: su 11.8.2019 18.23.56 +0300
SUDO privileges are required
Do you have SUDO privileges?y
Continuing with script
Installing prerequisites.
====> THIS MAY TAKE QUITE SOME TIME <=====
Unsupported Debian version 9.9
Is there any way to get that work or do i need something else than a raspberry pi to run it??.
Hi all
I’m trying to decode TETRA in Debian with a SDRPlay RSP2. I follow the steps above mentioned but when I run the last step in GNURadio I’m in doubt about the file I’d choose (the steps above indicate choose the file ~/tetra/telive/gnuradio-companion/telive_1ch_simple_gr37.grc but in my installation the path isn’t correct). The likest I found is ~/tetra/telive/gnuradio-companion/sdrplay/telive_1ch_simple_udp_xmlrpc_sdrplay_rsp1a.grc but I’m afraid it’s for SDRPlay RSP1A.
How can I run these in my system?
Thanks.
I installed and all went fine but the running!
When I arrive to the Tetra live receiver, it came grey and does not respond anymore to any click.
In the same time I get “PLL not locked”.
Where I can look to fix it? Obviously using GQRX the dongle work…
Thank you
I continued to test and experiment, with no luck.
The manual tell that “PLL not locked” can be a out of range frequency; no fix entering the frequency in the main windows (X then my 462.3000) and impossibility to use the window of Tetra Live Receiver, which not respond.
I get also a message:
The xterm executable “” is missing.
You can change this setting in your gnuradio.conf, in
section [grc], ‘xterm_executable’.
The message will show only once, and I don’t know if I need to fix it (and how!)
Thanks for a great tutorial, works great!
I just have one issue; when i activate the recording function, i can see that it saves the .ogg file in the terminal window, but it is not there in the tetra folder, so i created the /out folder, still nothing in it.
See below:
Encoding “/tetra/out/20180209/traffic_20180209_164526_idx7_callid0_0_0_0.wav” to
“/tetra/out/20180209/traffic_20180209_164526_idx7_callid0_0_0_0.ogg”
at quality 3,00
[ 98,0%] [ 0m00s remaining] |
Encoding of “/tetra/out/20180209/traffic_20180209_164526_idx7_callid0_0_0_0.ogg” done
FIle length: 0m 39,0s
Time: 0m 00,1s
Ratio: 445,3308
Bit rate: 22,0 kb/s
Does anyone have a solution?
Solved, it was in the root directory *facepalm*
Hi all,
i tried to run it on Pi3 on Exagear but i get this message starting:
Executing: “/home/pi/tetra/telive/gnuradio-companion/receiver_pipe/top_block.py”
linux; GNU C++ version 4.9.1; Boost_105500; UHD_003.007.003-0-unknown
Using Volk machine: sse4_2_32
/root/.gnuradio/prefs/vmcircbuf_default_factory: No such file or directory
vmcircbuf_createfilemapping: createfilemapping is not available
gr::vmcircbuf_sysv_shm: shmat (3): Invalid argument
gr::vmcircbuf_sysv_shm: shmat (3): Invalid argument
gr::vmcircbuf_sysv_shm: shmat (3): Invalid argument
terminate called after throwing an instance of ‘gr::signal’
>>> Done
On Lubuntu/RPi3 works fine 🙂
Did you install it with the script or manually?
Hi Eudardo,
it wasnt so easy get it running on RPi3.
I tried the “telive live Linux image” on a regular PC and it worked.
After a days of testing i tried copy the telive_1ch_simple_gr37_slow_udp.grc from the “telive live Linux image” (tru a USB stick) to RPi3 and run it from gnuradio and it worked 🙂
Hopefully i helped you!
Hi Eudardo,
i forgot to answer your original question 🙂
I installed with the script + the step mentioned above.
I’ll give it a try! Thank you so much.
Were you able to get it to work? What is the process you followed?
What´s about encrypted Tetra? Any patch?
Does anybody know or have source code,to set osmo for DMO (direct mode)? 😊
Hi, i have tried using Bootable 16GB USB stick for Tetra decoding on my 7 year Laptop, i have problems receiving no audio and nothink showing in Telive monitor window. GQRX is work really well from RTL 820T2 donge any help please ?
What is the password to install google heart in live telive
would like to know the root pass too
read the fucking manual!
I am getting error bug #528 when i press play in GNU Radio.
What do i need to do to ensure it connects with the rtl2832u device.
fixed-working now once i plug in the dongle just before launching the play button.
hello,
the automatic script doesn’t works with opensuse 42.1 , is it possible using it with that Linux?
thank U !
Hi,
I’ve just installed all steps, and everything works but the sound. I can’t hear anything , and if I check it , that’s appears to me. Can someone help me how to fix it to hear?
thank you
Found 1 device(s):
0: Realtek, RTL2838UHIDIR, SN: 00000001
Using device 0: Generic RTL2832U OEM
usb_claim_interface error -6
Failed to open rtlsdr device #0.
-9 means LIBUSB_ERROR_BUSY. Some other device/program is locking the device?
Does SDR# work?
Edit, I meant “-6 means LIBUSB_ERROR_BUSY”.
“Failed to open rtlsdr device #0.” means that the usb dongle is claimed by something else, probably the dvb-t driver. blacklist it (i can’t remember the exact command, but it’s in the telive docs) and reboot and it should work.
btw did you try the bootable usb pendrive image?
no need to set up anything, works right out of the box, and no VM is needed to run it.
you might want to turn the volume down, by default it is set to 100%
you’ve to blacklist some modules, google for it
Hi, trying to install the software appears…
————————————————————–
INSTALLING Gnuradio
Cannot add PPA: ‘ppa:gqrx/releases’.
Please check that the PPA name or format is correct.
—————————————————————
I’m begginer at ubuntu, and I need help how to resume.
thanks in advance
Install gqrx first from the offical site and then run the installer. Worked for me.
Any chance of porting this to an OS for mentally challenged and undereducated normal people and getting this into a form that doesn’t require a time machine to travel back 50 years to get the degree in programming neurosurgery robots on DIY rocket ships needed to operate this software? Thanks for considering. Linux-only software is modern racism.
Cmon guys. You can’t complain about software that is written by a volunteer for free. The fact is if Linux didn’t exist this software would be 100x harder to program and woulnd’t even exist in the first place
Linux-only software are only way for this.
N O it’s not, i have wavecom and that decodes tetra fine in windows
ok, but where did you download the wavecom Software ?
He’s probably just a troll
I just want to comment that this is shitty instructions that dont work..
“failed to find package ‘libzmq1-dev
Thanks for wasting 3 hours of my life fo
go back to looking at kids
HI, and thanks for this tutorial. Finaly found the time to install it and test it (using the install script) … and for most part it went well.
A do have a few questions, and I hope someone can help me:
I’m running Ubuntu 14.4 32bit on VMplayer with maximum number of cores (6) and 4 gb of ram, but when everything is running the fft control window gets frozen (turns to b&w) when decoding a signal. I can only change freq. and ppm at the start, and even then it’s sluggish. Is that normal, and what can I do to make it faster??
Decoding is running despite of the frozen control window, without any evident error messages but I only record very short (1-2 seconds max) voice massages that seem to be encrypted. I am not sure if the whole sistem is encrypted or if there is a problem with decoding the signal due to a slow system. (I have a phenom 1090t procesor with 20gb ram)
Thanks for your help!
Change number of cores to 1 and you should see improvement. Also make sure your have vmware tools running
Tnx … but in the mean time I “fixed” the issue by installing debian (as suggested on the radioreference forum). Under Debian everything is running much much smoother!
But the disappointment is that almost everything is encrypted 🙁
hey,
can anyone help me? Starting the receiver1 1 creates the following error :
” Traceback (most recent call last):
File “demod/python-3.7/simdemod2.py”, line 18, in
import osmosdr
File “/usr/local/lib/python2.7/dist-packages/osmosdr/__init__.py”, line 26, in
from osmosdr_swig import *
File “/usr/local/lib/python2.7/dist-packages/osmosdr/osmosdr_swig.py”, line 28, in
_osmosdr_swig = swig_import_helper()
File “/usr/local/lib/python2.7/dist-packages/osmosdr/osmosdr_swig.py”, line 24, in swig_import_helper
_mod = imp.load_module(‘_osmosdr_swig’, fp, pathname, description)
ImportError: /usr/local/lib/libgnuradio-osmosdr-0.1.5git.so.0.0.0: undefined symbol: hackrf_device_list
”
I tried to find a solution but I didn’t find anything. Can anyone help me?
Cheers
Alex
Hi,I installed on mac os El Capitan, Vm were Kali with linux 2.0 . Then I followed the guide . Everything ok, but when I run gnuradio this is the error message:
<>>
Preferences file: /home/user/.grc
Block paths:
/usr/share/gnuradio/grc/blocks
/home/user/.grc_gnuradio
Showing: “”
>>> Done
Showing: “/home/user/tetra/telive/gnuradio-companion/telive_1ch_simple_gr37.grc”
Generating: “/home/user/tetra/telive/gnuradio-companion/top_block.py”
Executing: “/home/user/tetra/telive/gnuradio-companion
thread[thread-per-block[9]: ]: thread_bind_to_processor failed with error: 22
thread[thread-per-block[5]: ]: thread_bind_to_processor failed with error: 22
thread[thread-per-block[8]: ]: thread_bind_to_processor failed with error: 22
thread[thread-per-block[12]: ]: thread_bind_to_processor failed with error: 22
thread[thread-per-block[11]: ]: thread_bind_to_processor failed with error: 22
thread[thread-per-block[6]: ]: thread_bind_to_processor failed with error: 22
thread[thread-per-block[7]: ]: thread_bind_to_processor failed with error: 22
thread[thread-per-block[10]: ]: thread_bind_to_processor failed with error: 22
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO…
Someone can help me understand the error ? Thanks.
I get exactly the same error message. Everything seems to work fine, but when I hit the playbutton for the flowgraph, the fft runs for a second and then freezes. Did you find any solution to this?
Just found out a solution to this problem. In my case it was due to the virtual machine in WM Player was set to 1 processor while my system is an Intel Haswell with 4 cores (8 logical cores).
Changing the virtual machine to 2 processors fixed it! (power off the guest in WM Player, right click your virtual machine, select settings, select processor and set it to an adequate number). Hope this is of any help!
it works on 32bit Kubuntu 14.04 with GRC from the official 3.7.2.1 ubuntu repo and compiled rtl drivers, except after 2 seconds or so it freezes with error ” file_sink write failed with error 21 ” – i guess this may be a buffer prob, as just running RTL FM from within bash plays ok. but with buffering errors, but in GQRX it works smoothly.. – i’m currently tweaking GRC to try to correct this.. anyone help plz ? am i on the right track ?
update – if i start GRC on a valid freq – on its own, no tetrad, receiver 1, no telive or no rxx … GRC starts , finds my hardware, looks good and apears to work without any apparent errors.
if i *- then* start tetrad and the other command line tools whilst GRC is running, they don’t error, but sadly they don’t see any output from GRC – i think there is some lack of comms between modules maybe.
so near.. yet so far…. i can taste it !
actual error message is:
Found Rafael Micro R820T tuner
Exact sample rate is:2000000.052982 Hz
thread[thread-per-block[4]: ]: file_sink write failed with error 21
ooooooooooooooooooooooooooooo
what is error 21 ?
I tried to get a system running a few weeks ago but got stuck with errors and a non working system. Have anyone made a working VM that they would like to share?
Hi all! Thks for the help in the last post, it works very fine.
The next question is how to decode SDS data, here are the telive log:
20150716 11:58:49 FUNC:SDS [010011110111111111111110000000001011000010010000000001000000001000000000011001111111111101101000000001100000000000000000000000000001000010000000000000000000000000000000000]
20150716 11:58:49 FUNC:SDSDEC [CPTI:1 CalledSSI:4223 CallingSSI:16775170 CallingEXT:0 UserData4: len:64 protoid:02(Simple Text Messaging) coding_scheme:01 DATA:[\x003\xFF\xB4\x03\x00\x00]] RX:1
20150716 11:58:49 FUNC:D-SDS DATA SSI:00004223 IDX:000 IDT:1 ENCR:0 RX:1
20150716 11:58:51 FUNC:SDS [010011110111111111111110000000001001001100000011000000110110010000110100000000000001000010000000000000000000000000000000000]
20150716 11:58:51 FUNC:SDSDEC [CPTI:1 CalledSSI:4223 CallingSSI:16775170 CallingEXT:0 UserData2: 0x30 0x30 0x36 0x43] RX:1
20150716 11:58:51 FUNC:D-SDS DATA SSI:00004223 IDX:000 IDT:1 ENCR:0 RX:1
20150716 11:59:45 FUNC:SDS [010011110111111111111110000000001001001100000011000000110110010000110100000000000001000010000000000000000000000000000000000]
20150716 11:59:45 FUNC:SDSDEC [CPTI:1 CalledSSI:4238 CallingSSI:16775170 CallingEXT:0 UserData2: 0x30 0x30 0x36 0x43] RX:1
the data is 0x30 0x30 0x36 0x43 ? how to decode? thks
I tried 5 times to get this working but I can’t. When I try to run it, in the first step, (./receiver1 1) I keep getting this error:
What can I do?
I saw that Christian said that he installed GRC 3.7 instead of 3.6, so now I’m trying the same. I’ll notice you. Thanks.
Anyway, any help will be huge.
(Sorry for my english)
For “
mkfifo: cannot create fifo ‘/tmp/fifo1’: File exists” error I deleted that file manually and it worked, but still getting
“
Traceback (most recent call last):”
File “demod/python-3.7/simdemod2.py”, line 18, in
import osmosdr
ImportError: No module named osmosdr
error. Any help?
Hey Eduardo,
What OS are you using?
I just built this all again onto another laptop with Debian 8.1 64bit over the weekend using a new install script that SQ5BPF (the author of Telive) posted up on Radio Reference here…
He posts the link to his script on page 33.
Because it is now using GNURadio 3.7 there is no need for the long build procedures anymore either 🙂
That whole thread is quite useful actually…
Thanks.
Finally I got my TETRA decoder working. It’s working on a VMWare machine. Ubuntu 14.04 32 bits, GRC 3.7.
Hi
I’m running the scripts in kali linux (no vm) and get this
BURST
#### could not find successive burst training sequence
found SYNC training sequence in bit #84
In vm works but in ‘real’ linux no, can anyine help me? thks
This normally (for me anyway) means that I either don’t have a strong enough signal or the signal I do have needs fine tuning in the FFT window.
I also got a lot of this when using a VM in VirtualBox. Switching to VM Player was much better as apparently VM Player handles the USB passthrough a lot better.
You dont mention if you’re using an RTL stick or not but might be worth checking that you are using at least USB2 ports and have the latest drivers installed etc….
Thaks for the reply!
I am using Terratec E4000 and works fine in VM but no in the laptop who have usb 2.0. and kali instaled in him. And work in 32 bits, i’´ll try what you mention of the fft window.
thks
HELP PLEASE! I
Did everything according to instructions. But in point 1 under “Running the software” I always get the following error message:
bash: cd: ~osmo-tetra-sq5bpf/src: No such file or directory
bash: ./receiver1: No such file or directory
How?
Hi, you missed the / between ~ and osmo-tetra-sq5bpf/src in: [email protected]:~$ cd ~osmo-tetra-sq5bpf/src
Sorry, but i mean this error message:
Traceback (most recent call last):
File “demod/python-3.7/simdemod2.py”, line 18, in
import osmosdr
ImportError: No module named osmosdr
Can anyone help?
I’ve done it! With the 3.7 gnuradio it now works! Yippi!
Hi,
this runs with GNU Radio 3.7? Everyone here supposed that 3.7 is not working and as a workaround you have to install the old version (3.6.5 or so)…
How did you get it working?
I did everything according to instructions. But instead gnuradio 3.6 I installed gnuradio 3.7. And then opened the file telive_1ch_gr37.grc. receiver1 I started with sudo ./receiver1 1, because otherwise I had an error message.
…and gnuradio I started with Sudo privileges too, because an error message.
Hey, Christian, does it work for you with GRC 3.7? Sure?
Okay. I did it as you said. It seems to work, the program runs.
But i can’t see info scrolling.
Okay. It works perfectly in my pc.
Hello..how resolve this problem…all work good, but when i open ./receiver1 1 apear this error:
Traceback (most recent call last):
File “demod/python/simdemod2.py”, line 78, in
tb = top_block()
File “demod/python/simdemod2.py”, line 49, in __init__
verbose=options.verbose)
File “/home/test/osmo-tetra-sq5bpf/src/demod/python/cqpsk.py”, line 260, in __init__
self.receiver.set_alpha(self._costas_alpha)
AttributeError: ‘mpsk_receiver_cc_sptr’ object has no attribute ‘set_alpha’
I tried many ways,but do not I realize that I supposed to do. sorry for my badd english and thank s
Hi,
I had the same error.
To “solve” I brutally commented out two lines in the python source code “cqpsk.py” in order to have:
#self.receiver.set_alpha(self._costas_alpha)
#self.receiver.set_beta(self._costas_beta)
However I’m not sure if this hack compromise the proper functioning of the program.
Hi, the REN/TETRA-05059 seem to be replaced with REN/TETRA-06132 on etsi.org
Hi everybody, I just wonder if I have a AIR ENCRYPTION = 1 system, and I think I shouldn’t get any voice or just sound from it. but I get something like broken voice or wrong decoder noises when somebody’s making a group call. Is this a real air encryption enabled system, or I made some mistake in the settings?
Air Encryption = 1 means that there has been a license for encryption applied to the network as a whole.
However (as is the case with a local network that I’ve been monitoring) that doesn’t mean that every transmission will be encrypted, its just that it has the capability to be if needed.
I actually get (surprisingly) quite a lot of unencrypted comms.
Encrypted stuff does just sound like garbled speech with a few whistles and other strange noises thrown in!
Unencrypted stuff should be nice and clear…
Dont know about other countries but in the UK attempting to decode a secure home office/public safety TETRA encryption algorithms (TEAs) type radio’s- Airwave tetra base terminal network (O2)- is a serious as carrying a firearm. The area of TETRA security is extensive costing millions of pounds in its development.i would not like to be the person caught hacking tetra.
Spooks, first you should have read the documentation that comes with the software. it’s not about breaking any cryptography here.
if it’s encrypted then you can’t listen to it with this software.
if its not encrypted then you can listen to it (even if the people setting up the system managed to convince their clients that it’s somehow “safe”. in this case telive makes a nice auditing tool)
simple as that.
HI, HAs anyone ever got this SW to run on a raspberry pi, if I would be grateful for some guidance.
Thanks
Peter
G6VBJ
My HP laptop with dual core 1.2GHz CPUs and 2GB of RAM will not run it so I doubt a Pi would I’m afraid.
Would be nice though 😉
I tried to run it on a Pi 2 last weekend (just to see what would happen). It took an age to install, and at the end of it all, Gnu Radio just froze it. So, I would say that is a pretty comprehensive no! 🙂
30 April 2009 12:00 [Source: ICIS news]
LONDON (ICIS news)--Here is Thursday’s midday European oil and chemical market summary from ICIS pricing.
CRUDE: June WTI: $51.80/bbl, up $0.83/bbl. June BRENT: $51.38/bbl, up $0.60/bbl
Prices continued to rise in tandem with buoyant global stock markets, with a weaker US dollar also adding support.
NAPHTHA: Open-spec spot cargoes were assessed in a $412-422/tonne CIF (cost, insurance and freight) NWE (northwest Europe) range.
BENZENE: One European May benzene deal was rumoured at $580/tonne CIF ARA (Amsterdam-Rotterdam-Antwerp).
STYRENE: European May styrene values were assessed at $820-850/tonne FOB (free on board)
TOLUENE: Bids and offers for May toluene were heard this morning at $525-545/tonne FOB
MTBE: Bids were heard in the morning at a factor of 1.31 and offers at 1.35, with market sources saying that the factors should come off. Gasoline barges traded between $488-490/tonne
XYLENES: Offers for May paraxylene were heard this morning at $1,120/tonne CIF Rotterdam, but no bids were forthcoming. The value range was adjusted to $1,000-1,030.
Caring For The Environment - Making getenv() Scale
By pgdh on Jun 14, 2005
Although a relatively minor contributor to OpenSolaris, I still have the satisfaction of knowing that every Solaris 10 process is using my code. But who in their right mind needs getenv(3C) to scale? Of course if you don't care about thread safety (as is currently the case with glibc version 2.3.5 -- and hence with Linux) your implementation might scale very nicely thankyou!
Sun on the other hand does care about thread safety (and we've been doing so for a long time). However we had rather assumed that no one in their right mind would need getenv() to scale, so our implementation was made thread safe by putting a dirty great mutex lock around every piece of code which manipulates environ. After all, as our very own Roger Faulkner is so fond of saying: "Correctness is a constraint; performance is a goal". And who cares about getenv() performance anyway?
But Who Really Cares?
Well it turns out that there are some significant applications which depend on getenv() scalability (and which scale wonderfully on Linux ... where thread safety is often ignored ... they are just very lucky that no one seems to be updating environ whilst anyone else is reading it). So Bart Smaalders filed bug 4991763 getenv doesn't scale and said he thought it was an excellent opportunity for my first putback. Thanks Bart!
For some time I've been saying: "If Linux is faster, it's a Solaris bug!" but somehow 4991763 didn't quite fit the bill. Firstly, I think an application which depends on getenv() scalability is broken. Secondly, Linux is just feeling lucky, punk. However, I do firmly believe that we should do all we can to ensure that Solaris runs applications well -- even those which really need some tuning of their own. I had also been itching for a chance to explore opportunities for lockless optimisations in libc, so all in all 4991763 was an opportunity not to be missed!
A Complete Rewrite
The existing implementation of getenv(), putenv(), setenv(), and unsetenv() was split across three files (getenv.c, putenv.c and nlspath_checks.c) with the global mutex _libc_environ_lock being defined elsewhere. Things had become fairly messy and inefficient so I decided on a complete rewrite.
NLSPATH security checks had introduced yet another global variable and a rather inefficient dance involving a mutex on every {get,put,set,unset}env() call just to ensure that clean_env() was called the first time in. In this instance it was an easy matter to remove the mutex from the fast path by doing a lazy check thus:
static mutex_t update_lock = DEFAULTMUTEX;
static int initenv_done = 0;

char *getenv(const char *name)
{
	char *value;

	if (!initenv_done)
		initenv();
	if (findenv(environ, name, 1, &value) != NULL)
		return (value);
	return (NULL);
}
The test was then repeated under the protection of the mutex in initenv() thus:
extern void clean_env();

static void initenv()
{
	lmutex_lock(&update_lock);
	if (!initenv_done || ... ) {
		/* Call the NLSPATH janitor in. */
		clean_env();
		. . .
		initenv_done = 1;
	}
	lmutex_unlock(&update_lock);
}
By rolling putenv.c into getenv.c I was able to eliminate the use of globals altogether, which in turn allowed the compiler to produce better optimised code.
Look, No Locks!
But the biggest part of the rewrite was to make the fast path of getenv() entirely lockless. What is not apparent above is that findenv() is entirely lockless.
Various standards define the global environment list pointer:
extern char **environ;
This has to be kept as a consistent NULL terminated array of pointers to NULL terminated strings. However, the standards say nothing about how updates are to be synchronised. More recent standards forbid direct updates to environ itself if getenv() and friends are being used.
Yet the requirement that environ is kept consistent is precisely what we need to implement a lockless findenv(). The big issue is that whenever the environ list is updated, anyone else in the process of scanning it must not see an old value which has been removed, or miss a value which has not been removed.
The traditional approach is to allocate an array of pointers, with environ pointing to the first element. When someone needs to add a new value to the environment list we simply add it to the end of the list. But how do we go about deleting values? And what if we need to add a new value when the allocated array is already full? If you care about threads, it's not long before you need to introduce some locking!

The new implementation contains two "smarts" which meet these challenges without introducing locks into the getenv() path ...
Double It And Drop The Old One On The Floor
When the new implementation needs a bigger environ list, it simply allocates a new one which is twice as large and copies the old list into it. The old list is never reused -- it is left intact for the benefit of anyone else who might happen to be still traversing it. This may sound wasteful, but the environment list rarely needs to be resized. The wastage is also bounded -- it is quite easy to prove mathematically that this strategy never consumes more than 3x the space required by an array of optimal size.
However, one teeny weeny issue with the "drop it on the floor" approach is that leak detectors can get a tad upset if they find allocated memory which is not referenced by anyone. With a view to keeping me on the straight and narrow -- but mostly to avert high customer support call volumes -- Chris Gerhard recommended that I keep a linked list of all internally dropped memory (just to keep those goody-goody leak detectors happy). I first met Chris in 1989. He was on my interview panel when I joined Sun. I do hope he feels he did a good job that day!
Overwrite With The First Entry And Increment
I was bouncing some other getenv() ideas around with Chris when he also gave me just what I needed for deletes. The old code just grabbed the global lock, found the element to be deleted, shifted the rest of the list down one slot (overwriting the victim), and then released the lock.
Chris had the bright idea of copying the first element in the list over the victim, and then incrementing the environ pointer itself. The worst case would be that the same element might be seen twice by another thread, but this is not a correctness issue.
This led to two further changes:
- New values are now added at the bottom of the environment list (with the environ pointer being decremented once the new value is in place).
- When a new double-sized environment list needs to be allocated, the old one is copied into the top of the new one (instead of the bottom) so that the list can then be grown downwards.
OK, Not Entirely Lockless
Obviously mutex lock protection is still needed to serialise all updates to the environment list. The new implementation has a lock for all seasons: update_lock (e.g. for updating initenv_done and for protecting environ itself). However the new getenv() is entirely lockless (i.e. once clean_env() has been called once).
Another important issue is that it is considered bad practice for system libraries to hold a lock while calling malloc(). For this reason the first two thirds of addtoenv() are inside a for(;;) loop. If it is necessary to allocate a larger environment array addtoenv() needs to drop update_lock temporarily. However this opens a window for another thread to modify environ in such a way that means we must retry. This loop is controlled by a simple generation number environ_gen (also protected by update_lock).
Guaranteeing Consistency
That's almost all there is to it. However in multiprocessor systems we still have to make sure that memory writes on one CPU happen in such a way that they don't appear out of sequence on another CPU. Of course this is taken care of automatically when we use mutex locks.
Consider the following code fragment to insert a new item:
environ[-1] = string; environ--;
It is vitally important that the two memory writes implied by this are seen in the same order by every CPU in the system. On SPARC today this doesn't matter, since all ABI-compliant binaries run in Total Store Order mode (i.e. stores are guaranteed to become visible in the order in which they are issued). But it is possible that future systems will use a more relaxed memory model.
However, this is not just a hardware issue, it is also a compiler issue. Without extra care the compiler might reorder the two stores, since the C language cares nothing for threads. I had quite a long discussion with a number of folk concerning "sequence points" and the use of volatile in C.
The eventual solution was this:
environ[-1] = string;
membar_producer();
environ--;
First, the function membar_producer() serves as a sequence point, guaranteeing that the C compiler will preserve the order of the preceding and following statements. Secondly, it issues any store barriers needed by the underlying hardware to guarantee the same effect as Total Store Order for the preceding and following instructions.
A Spirit Of Generosity
My new implementation was integrated into s10_67 yet despite my own extensive testing it caused a number of test suites to fail in later testing. This was tracked down to common test suite library code which updated environ directly. Yuck! Although this kind of thing is very much frowned on by the more recent standards it was felt that if our own test cases did it there was a good chance that some real applications did it too. So with some reluctance I filed 6183277 getenv should be more gracious to broken apps.
If someone else is going to mess with environ under your nose there's not a lot you can do about it. However it is fairly easy to detect the majority of cases (except for a few really nasty data races) by keeping a private copy my_environ which can be compared with the actual value of environ. If these two values are ever found to differ we just go back to square one and try again.
So the above fragment for adding an item now looks more like this:
if (my_environ != environ || ...)
        initenv();
. . .
my_environ[-1] = string;
membar_producer();
my_environ--;
environ = my_environ;
Conclusion
My second putback integrated into s10_71. Following this I had to fight off one other challenge from Rod Evans who filed 6178667 ldd list unexpected (file not found) in x86 environment. However this turned out to be not my fault, but a latent buffer overrun in ldd(1) exposed by different heap usage patterns. Of course, the engineer who introduced this heinous bug (back in January 2001) will remain anonymous (sort of). Still, he did buy me a beer!
Of course, the serious point is that when you change something which is used by everyone, it is possible that you expose problems elsewhere. It is to be expected that the thing last changed will get the blame. Such changes are not for the faint-hearted, but they can be a whole lot of fun!
My first experience of modifying Solaris actually resulted in two putbacks into the Solaris gate, but I learnt a great deal along the way. Dizzy with my success, I am now actively seeking other opportunities for lockless optimisations in libc!
Technorati Tags: OpenSolaris, Solaris
Your comment about having to check environ against my_environ to see if anyone has played with environ directly - are you talking about code actually moving the environ pointer, or just the strings being updated without using putenv? I'd have thought the former was pretty rare, the latter less so.
Of course the latter is fairly slow to trap so I'm assuming you meant the former.
Go on, blind us with your brilliance - how does the performance compare with the old version?
Posted by Philip Beevers on June 15, 2005 at 06:24 PM PDT #
Yes, I'm talking about code moving environ.
For example, such a thing might happen if someone implements their own local putenv(). However, by the time they call their putenv() my initenv() will probably have already been called due to process setup activity elsewhere.
One can assume that the first call to any putenv() implementation must result in a new environment list being allocated (because you have to assume the existing list is already full).
So if an application calls its own local putenv() and then calls my libc implementation (e.g. due to some library dependency), environ will have moved since I last saw it.
I, too, thought this would be rare. However, I fell foul of three Solaris test suites (all of which used the same broken support library ... and yes, I did file bugs against them)! But if we have code which does this, it is fairly likely that there are applications out there which are similarly broken.
To detect this case in a singlethreaded environment is trivial ... which is what I now do. However, all bets are off for multithreaded processes ... which was always the case with the old libc implementation.
Performance?
Well, my new getenv() scales linearly on multiprocessor hardware (the old version had zero scalability). It was also gratifying to find that the new implementation was slightly faster on Opteron in the singlethreaded case.
There are a few singlethreaded cases which are marginally slower on some SPARC platforms. However both SPARC and Opteron multithreaded scalability are dramatically improved (both for updates as well as reads).
Posted by Phil Harman on June 16, 2005 at 09:38 PM PDT #
Posted by guest on August 14, 2005 at 10:21 PM PDT #
Posted by Mark Brown on August 15, 2005 at 02:20 AM PDT #
In reply to Mark and the anonymous 83.28.66.114 ...
Well of course that's all very nice, except that some care about more than just POSIX.
The "MT-Safe" putback to the Solaris libc was made on 92/05/06 (according to the SCCS history). We didn't ship a user-level threads library until 1993, which means Solaris has always provided a thread safe getenv().
The Pthreads specification wasn't even ratified until 1995 (Solaris 2.5 was shipped with support for Pthreads in November 1995).
When I came to tune getenv(), dropping thread safety simply wasn't an option. We could argue until the proverbial cows come home about the relative merits of getenv() thread safety. But that's not the point. We said we were for it. We have always shipped it. We must assume there are people who depend on it.
So, quote whatever standards you will! Our getenv() has always been thread safe, but it is now also fast! What's the problem? It's certainly not a problem for Solaris! If people need getenv() to be thread safe, they know where to come; if they don't, I think we've shown that we're here to compete.
Posted by Phil Harman on August 15, 2005 at 06:20 AM PDT #
Posted by Wu Yongwei on August 24, 2005 at 12:14 PM PDT #
https://blogs.oracle.com/pgdh/entry/caring_for_the_environment_making
hello,
is it possible to change the module's Bluetooth address with AT commands, or is there another way to change it?
Hi, very useful 'ible.
I have one question. Is it possible to configure the HC-05 through AT commands to be seen as a keyboard/gamepad/mouse? According to the command reference PDF (appendix 2) you posted, there is such functionality. The question is whether this would suffice, or a firmware flashing would be needed for it.
Dear Hazim, thank you for this very useful application. I have an EN pin instead of a KEY pin on my HC-05 and it does not seem to be connected to PIO11 (Pin34); I have checked it with an AVO meter. Do you think short-circuiting the EN pin with Pin34 will solve the problem?
My second question is, how can I pair the HC05 connected to an Arduino uno with an HC06 connected to an ATTiny 85?
Cheers.
Dear friend,
My HC-05 has a tiny push button to put it in AT mode.
What I do to put it in configure mode:
1) disconnect the power pin.
2) press the tiny push button.
3) at the same time, connect the power.
Done.
EN pin should be the same as KEY pin. The default baud rate in AT command mode is different for different devices. My HC05 had it set at 9600. Try setting different baud rate and see if that solves the problem.
Same problem. Do you got the solution???
Why does my HC-05 Bluetooth not respond to anything when I type AT? Please help, I want to change the name and password.
Hey, this instructable is missing one thing. To communicate between the Arduino and the HC-05 you have to set the line-ending box in the Arduino Serial Monitor to "Both NL and CR".
Hope it helps.
One IMPORTANT thing, Newline feed (\n) and Carriage return (\r) are needed on every AT command entered. So if you're using Arduino's serial monitor make sure you select "Both NL and CR" on the dropdown.
Thank Youuuu!!!
WOW! THANK YOU for sharing this! I have been stuck on this tutorial for 3 days now, never being able to display any response in my serial monitor from the HC-05. I even bought another HC-05, thinking that my original was defective. You rock :)
Thanks for this!
THANKS!! THAT WAS NEEDED :) :)
Very useful set of instructions.
I have the same module but found that you could enter AT mode by disconnecting power to the HC-05, pressing the reset button on the module and reapplying power.
When initially connected the red LED would flash quickly. When in AT mode, the LED flashes much more slowly.
Thank you, this worked for me.
To clarify, you must:
1) disconnect power from module
2) press and hold the little button on the module
3) reconnect power and continue holding the button until you see the light blinking slowly
This is the module I'm using:
same module, i cant see any button
Hello,
I have a CZ-HC-05 Bluetooth module that I've wired to an Arduino UNO Rev3 per the instructions on this website. I have successfully gotten the LED to blink slowly (2 seconds on, 2 seconds off) indicating to me that I'm in AT command mode. However, when I open the serial monitor and type AT I don't get an OK response from the HC-05. I copied and pasted the code from this website into the Arduino IDE so I'm certain it's correct. My serial monitor is set to 9600 to match the serial port baud rate. I also set the serial port to Both NL & CR. I have tried typing AT\n\r as well and still no response from the HC-05. Can anyone help me?
Same Problem, no response, have you got it?
The default rate for the BT module is 38400.
hello, thanks for everything you done.
my problem is the HC-05 is not responding, after verifying the wiring and the code many times, so could you tell me what could be the problem?
thanks again ..
I've been searching and searching and I have not been able to find your sketch for this tutorial. (PS: Im new to BT and Arduino)...one week ago I thought a "sketch" was just a basic drawing of something...but now it also means Arduino code too. :-)
Hello, I'm having a problem. I followed all the instruction and can enter AT command mode. I entered "AT" but it returns like this. The same goes "AT+VERSION?". I try other commands like "AT+NAME=MYBLUE" but HC-05 name doesn't change. What is wrong in this case?
I have already configured my HC-05 as a master. I would like it to communicate with an Android app, where the app will be used as the slave. I followed the instructions and configured it as master using AT commands. I would like to test if I can send data using the HC-05 to the Android app, so I ran the program with the same code, including BTSerial.write("test") at the last if statement, hoping this data will be sent to my app, but it fails to do so. What can I do to test it? Thanks
I tried this tutorial with the HC-05 and my Arduino MEGA R3, but when I sent AT, the monitor didn't show anything. My module has EN instead of KEY. I soldered a wire to Pin34 and gave it 5 V, but still the same problem.
Had the same problem today, look at for nice tutorial.
(keep in mind those options in serial monitor in arduino ide)-...
Also, it seems that my module communicates OK both ways while in AT mode, but when in normal mode it only sends data to the phone. The data I send from the phone gets lost somewhere, strange... same issues, anyone?
Hi. You need to finish the command with a terminator: AT\r\n, or AT followed by ENTER.
For some reason, it's not the same on the HC-06.
I've chimed in late. Does "finish the command..." deal with the differences needed for HC-06? Are you saying that 6 days asadsalehhayat is using a HC-06 and not a HC-05? Other than the terminator, is dealing with a HC-06 the same as a HC-05? Thanks...
I'm using an HC05
Hi, there's only one thing I don't understand. Let's say we want to put the HC-05 in master mode and we do that successfully by sending the appropriate command. How does it know which (slave) device to set up a connection with, and how will it pair with it without knowing its password?!
Thanks! :)
Ok, I noticed that there's a bind command. So I should provide the master module with the slave's address. What about the slave's password?
How do we successfully connect two Bluetooth modules with each other (till the level of data transmission and reception)?
I have been able to link the two Bluetooth modules and connect them. But having pulled out the config pin (making the devices in the working mode), data transmission is not happening.
Some expert, Please help.
1. On both Bluetooth devices, put them in AT mode and type this command in the serial monitor: "AT+BAUD?". This should give you the current baud rate of the devices. The default for mine said it was 38400, but it was actually set to 9600. If it's not set to 9600, do so. Make this change in the code as well: "BTSerial.begin(9600)". If you ever need to go back to the AT command, just do "BTSerial.begin(9600)".
2. Make sure one of the Bluetooth is a master, and the other is a slave via the AT commands.
3. If you want the Bluetooth devices to only find each other, find the address of both devices (write them down). Make sure that AT+CMODE is set to 0 for both devices. Finally, use the AT+BIND command to apply each other's addresses.
4. To send data from master to slave, use:
BTSerial.write(number or character);
To receive the data on your slave, use:
if (BTSerial.available())
{
Serial.println( BTSerial.read() );
}
Hope this helps and somewhat answers your question.
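To make steps 2 and 3 above concrete, the AT exchange on the master might look like the following (the address 98d3,31,fc1234 is a made-up example; query the slave's real address with AT+ADDR? first):

```
AT+RMAAD                  clear the list of previously paired devices
AT+ROLE=1                 1 = master (use AT+ROLE=0 on the slave)
AT+CMODE=0                connect only to the bound address
AT+BIND=98d3,31,fc1234    slave address, with colons replaced by commas
```

Each command should answer OK; the comma-separated address format is what the HC-05 firmware expects for AT+BIND.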
You are right about the baud rate. Normally the HC-05 baud rate is 9600, but in this code it was written as 38400, so I changed it to 9600, but the program didn't work. I tried for about 3 hours at 9600 with no success. Then, without touching any code, I just copied/pasted the original code and it worked. Now I can use AT commands at 38400.
There's no such thing as 'normally hc-05' - different manufacturers put different firmware with different commands and different capabilities.
I also made a typo. I say "If you ever need to go back to the AT command, just do "BTSerial.begin(9600)". That number should say 38400.
thanks.....nice information.
I have a question also: how many Bluetooth modules (HC-05) can be connected to a Bluetooth dongle?
7
I have a HC-05 question : Can I use the AT Command "AT+PIO=2,1" to set the pio port 2 to High state "over the air"? I can set the pio 2 port to HIGH via UART (Rx-Tx) and it works fine, but not when HC-05 is wireless connected to a remote device.
http://www.instructables.com/id/Modify-The-HC-05-Bluetooth-Module-Defaults-Using-A/
ffs - Finds first set bit
Standard C Library (libc.so, libc.a)
#include <strings.h>
int ffs(int pattern);
Interfaces documented on this reference page conform to industry standards as follows:
ffs(): XPG4-UNIX
Refer to the standards(5) reference page for more information about industry standards and associated tags.
Specifies the bit pattern searched.
The ffs() function finds the first bit set (beginning with the least significant bit) and returns the index of that bit. Bits are numbered starting at 1 (the least significant bit). If pattern is 0 (zero), ffs() returns 0.
Standards: standards(5)
http://backdrift.org/man/tru64/man3/ffs.3.html
|
NAME
time - overview of time and timers

DESCRIPTION
Real time and process time
Most computers have a (battery-powered) hardware clock which the kernel reads at boot time in order to initialize the software clock. For further details, see rtc(4) and hwclock(8).

The software clock, HZ, and jiffies

System and process clocks; time namespaces
The kernel supports a range of clocks that measure various kinds of elapsed and virtual (i.e., consumed CPU) time. These clocks are described in clock_gettime(2). A few of the clocks are settable using clock_settime(2). The values of certain clocks are virtualized by time namespaces; see time_namespaces(7).

High-resolution timers

The Epoch
UNIX systems represent time in seconds since the Epoch, 1970-01-01 00:00:00 +0000 (UTC). A program can determine the calendar time via the clock_gettime(2) CLOCK_REALTIME clock.

Broken-down time
https://man.archlinux.org/man/time.7.en
|
Hello fellow dog lovers! Does your dog have destructive behaviors like digging, or tearing up your house? Or even psychological issues like biting, endless pacing, or other seemingly absurd behavioral issues? No? Then prevent these problems and let your canine friend get out more and enjoy life. The best answer and prevention to these problems is to give your dog plenty of exercise and activity. Your animal also demands quality attention from you (the dog owner), so this instructable will not talk about building dog runs, or treadmills but instead it will focus on healthy activities one can do with the animals.
The key to good exercise is work up the heart a good amount so that the dog reaches a good pant. This happens to be the dog's reaction to body heat, so it is a good indication they are working hard. If the pant is too strong, you will hear a harsh wheezing breath that sounds very unpleasant. Be sure to give your dog rest, especially older dogs and provide plenty of water when finished. They will be very thirsty.
This guide is based on my own experiences with my dogs, but due to the nature of taking care of pets your mileage may vary, and suggestions are certainly welcome! Let me remind you that playing with your dog is not only beneficial for the mind and body of your pet, but also for you! Daily exercise is required, and ideally more than once a day.
Step 1: Choosing an Exercise
There are numerous ways to exercise/play with your dog, and I welcome you guys to contribute. I'm going to focus on the following:
Playing fetch (with different toys)
Misc games
Walking your dog
Hiking with your dog
Taking your dog to a dog park and doing all of the above and more
You can vary what you do with your dog, so if you take your dog on a long hike in the woods you might want to go easy the next day with a light amount of fetch or a shorter walk in the neighborhood. Other than that, choose something that you enjoy doing, because it's important to continue these activities regularly. And lastly, you may call it exercise, but your dog thinks of it as play/an adventure. Think like the dog.
Step 2: Fetch
Playing fetch is probably one of the more popular activities and only devoting a single step is a gross oversimplification. I encourage you to seek other instructables and consult other literature for much more detail on how to train your dog to fetch.
The key is to take the whole process in steps, as it will take time. Choose an object that won't hurt the animal should it accidentally hit him/her (it happens) like tennis balls, ropes, squeak toys, and small sticks. When you are introducing the dog to the object, try to associate the object with rewards and attention. Some dogs take to them naturally, others may need encouragement. Putting a treat inside a toy is an easy way to draw the dogs attention to it. In the beginning, just let the dog have fun with it like carrying it around in the mouth and generously reward/pet your dog so that he associates the toy with attention and joy.
Once the dog gets attached to the object, then start training your dog in steps. First run to chase after it. Then grab it with the mouth and finally to return (with object in mouth) to you. The last one is optional for smaller dogs and can be difficult to teach. However once your dog gets it, you can try to encourage other forms of fetch. Also teaching fun behaviors like jumping to catch a ball/frisbee, or swimming to get a stick are great additional steps.
Step 3: Other Games
Let your imagination run wild. Remember this is about having fun for both you and your dog, so pick something that you enjoy as well. With all that positive encouragement and treats, dogs love to be taught tricks. Just make sure you stop when you or your dog gets tired of the task at hand.
Though not strictly exercise, it will give your dog plenty of mental activity. Can easily be applied to more physical games. The typical boring tricks are great, but everyone has seen them so come up with something that is easy enough to teach a dog in steps. Some ideas:
The typical:
rolling over
crawling
shaking
Better ideas:
Giving hugs: the opposite of not jumping. Bad idea if dog is constantly muddy
Finding some object: not just useful for drugs/bombs
Jumping over objects: like low walls, yourself, or small children
Step 4: Dog Walks
Taking your dog on walks is a great way for him/her to enjoy different smells and discover new places. Especially in urban settings the necessary items include:
leash
collar/harness
plastic bag/device for poop
water bowl and water
some treats
Teaching your dog to walk on a leash is almost always necessary. Many dogs will just pull on the leash, making the activity difficult for both parties involved. It will be a slow process at first, and some harnesses have been designed with the idea to help solve this issue by putting pressure on the forward leg joints. Though, truthfully, it failed miserably with our dog and she would protest its use. Choke collars work, especially the ones with points that collapse on the neck, but don't use it if you can't stand hurting your dog even just temporarily.
Otherwise, the use of the leash in the city limits is generally recommended if not required. Encountering other dogs, or automobiles make the leash pretty useful. An extendable leash is a good buy, but it will require smarter use by the dog owner. Water is good to bring along, especially on longer walks and on hotter days. Treats come in handy for just the regular positive encouragement of your dog.
Step 5: Hiking With Your Dog
Hiking with your dog, is very similar to urban walks with dogs with a few notable exceptions. So pick a fun trail and plan a good morning/afternoon for the activity. Unfortunately many trails are off-limits to dogs due to varying reasons like wild animals, dangerous trails, or general bigotry against dogs. Despite such signs, many people bring their dogs anyway. Make sure you know the trails and the area.
If your dog is well trained, it is generally acceptable to allow your dog to walk without a leash (rules vary per location). Generally they stay on the trail and within view. Most dogs love to wander ahead and then check back to make sure the owner is within sight, then run ahead again. Others will run down the trail, then back up, then back down. Typically the outdoors hike (is there any other kind) is good exercise, but is also very tiring. Begin with shorter hikes like a few miles, and build up.
The key point on outdoor hikes is to bring more water than you'd think you would need. The last thing you want is to run out of water a few miles from your car, with your dog dehydrating and overheating. It's difficult to know how much water your dog needs, and since their internal cooling system is linked to water consumption it's best to err to your dog's benefit. My suggestion is to bring a lot in the beginning (at least a gallon) and let your dog drink all he/she can. Eventually you will get an idea of how much is enough. Debate is still out regarding dogs drinking creek or lake water. These sources can contain harmful organisms and bacteria, the worst of which are giardia and E. coli (think terrible diarrhea), but dogs have a much better system for all that (they eat other animals' poop!).
Step 6: The Dog Park
The dog park is a wonderful place for your animal to romp and socialize. You can play fetch and let your dog run around without a leash. Wait until your dog is at least 4 months before getting to the dog park so that he/she won't be dominated by the other animals. If you find that your dog does not play well with others, the dog park may not be for you.
Suggestions on finding dog parks nearby and tips for taking your dog to one can be found at:
Step 7: The Cool Down
Well, there is the cool down. After playing/exercising, make sure your dog gets water. They probably won't object to it, but just be sure they do drink afterward. Also, if your dog dislikes baths, this is a great time for a shower. They will be plenty warm, and might not object terribly to a cool spray with the hose. Rub some shampoo on them and spray again, and you have a cool, clean dog.
Make sure you pet/reward your dog for a fun play/exercise session with plenty of praise and/or a treat. Finally, you will need to set limits for your dog. He/she will gladly tag along and play fetch long after they should. Monitor their breathing and attitude, and if you notice a great decrease in energy it may be time to stop. If the dog is old, or hasn't exercised much, it may be sore the next day. Your animal might not get up, or may be limping. That's a sign to take it easier next time, and keep up the regular exercise.
Have fun! Playing with your dog is what makes owning one so much fun.
56 Discussions
9 years ago on Introduction
Digging is not destructive behaviour in dogs, especially not as regards hunting dogs (whether these actually go on formal hunts or not). Treadmills are considered a form of torture for dogs, but letting your dog run freely at his own pace is good exercise. Puppies and very small dogs should not be forced to run at the pace of an adult human, as this is too strenuous for them (whether on a leash or not, trying to catch up with their owner).
Reply 4 years ago on Introduction
A dog running on a treadmill is not a form of torture. I agree it should be no substitute for a structured daily walk, but it can supplement a dog's exercise programme if you have a high-energy dog. For example, huskies and sled dog breeds are bred to run what we would call a marathon in freezing conditions with a heavy sled tied to their backs; if you think you can walk these guys enough, you are mad. Sometimes walks just aren't enough. If you purchase or build a self-driven dog treadmill, they will only do what they want to do on it: they will not run if they don't want to and will only go at a speed they feel comfortable with. It will increase the dog's quality of life.
Reply 8 years ago on Introduction
It seems to me that if my dog was digging holes in my yard, tearing up my lawn and garden and risking exposing wiring, piping, etc., that it would be considered destructive, whether it is a hunting dog or not.
As a canine behavior it stems from a dog becoming bored with no outlet for excess energy.
10 years ago on Introduction
My mom and I are considering a dog, but we don't know what kind to get. She wants a Lab because they are easy to train and whatnot, and we definitely don't want a lapdog. The neighbors have 4 Shelties that seem really cool. What do you recommend?
Reply 9 years ago on Introduction
Labs are great, but they are not as easy to train as people think. The first year is hell (OK, we did get 2, making it much worse), and they can be very stubborn. And if you aren't very strict, you will get a lapdog... I haven't met a Lab yet that hasn't tried to climb on my lap at some point. A golden retriever might be a good plan too; they are less hyper than Labs.
Reply 9 years ago on Introduction
Goldens are a great dog but do best if you can let them go to water, be it a ditch, lake or stream. You can't make the weather too cold for Labs and Goldens. All dogs are teething the first year; give them plenty of big bones to chew on, and they love wood also. Goldens have only one bad trait for the first 3 years, and that is they will jump up on you, because they want to be the best friend you ever had. Goldens do have hair-shedding issues, but their coat can be cut back in the summer. Golden is not just a name or color; it's the nature of these dogs and their patience. Never let the soft looks fool you: they will defend you with their lives. Our Golden has already flipped one pitbull, put him on his back and commenced to kill him. Took 2 folks to drag the dog off.
Reply 4 years ago on Introduction
What a lovely story, so great to hear how your goldy was going to kill a pitbull... By the way, I don't think it matters what the breed of your first dog is. My first dog was a rescued pitbull terrier and she was amazing; she protected my pregnant girlfriend from 2 large shepherd-type dogs. One looked nervous-aggressive, or may have just been following the other dog's lead, but regardless of breed the owner was an idiot: he made no attempt to recall his dogs. His dogs changed their minds after seeing Bailey, who was on lead, luckily for them, and only managed to give them a verbal and a bit of a show. Choosing a good breeder is very important; they will be able to tell you what to expect exercise- and temperament-wise. The exercise requirements of your chosen breed should match your activity level, and good socialization is a must for any breed. If you get these basics right you'll be happy with your chosen dog.
Reply 9 years ago on Introduction
Get a Red Doberman; trained, they're cool dogs. I had mine for almost 14 years until he passed away. DO NOT CROP THE EARS!!!
Reply 7 years ago on Introduction
Oh, I so agree with you on this. Same with tails: a dog needs its tail for so many things. One is to show you how happy or worried it is. The tail goes under when worried or unhappy, and wags like mad when happy. Our dog won a waggiest-tail competition in a mutt contest, lol. She was lovely; we're now on our third. This one is a Staffie cross, a rescue, as all our dogs have been except the first. Deepha (D for Dog) was my special lady, and we had no idea what her parentage was, as her mother died giving birth to 4 puppies. But Deepha was so loyal and intelligent she learned STAY in a situation where I was across the road and she had jumped off our boat to follow me. She also was my best companion when I was so ill I could not talk; she would go fetch my husband when I needed help.
Thanks Yerboogieman for your comment.
Reply 10 years ago on Introduction
Hey'a Shino. If you are a first-time dog owner a Lab would be perfect, as they are pretty difficult to 'muck up'. Don't get me wrong: if you neglect them, don't give them daily exercise, and let them lord it over you, they will be a pain in the butt. However, they at least are normally well balanced mentally. But no matter what dog you get, the fastest way to get problems is to not exercise them every day. Very few dogs require less exercise than, at a bare minimum, 3/4 - 1 hour of good solid trotting with runs in between. Most dogs are under-exercised in our western society. I have a Rhodesian Ridgeback and she needs a good hour at least. In the winter I take her beside me on my bike up hill and down dale and she's happy as Larry. Some of that run is at near full speed for her (down the hill of course, as I can't keep up with her otherwise...lol). In the summer it's quite hot where I am, so I take her early morning or evening for walks with a few runs in between. Also, if you can afford it, feed a diet NATURAL to your dog....that's meat, meat and meat and a few bones, especially the edible ones, raw....not cereals and other human foods. Shelties are cool little dogs but can be very noisy. My RR never barks unless there's someone coming. All the best dog hunting. :o)
Reply 10 years ago on Introduction
Oh...just a note...you shouldn't exercise a dog under 6 months hardly at all....just playing in the backyard, or take him in your car to other places to meet people and other dogs. It's really important to protect their skeletal system when they are young...let them grow...then introduce the regular exercise I am talking about above.
Reply 9 years ago on Introduction
If you exercise dogs when they're young they will behave, because if they are hyper they are more likely to be bad!
Reply 9 years ago on Introduction
Actually, if you exercise your dog too much, and give them the wrong kind of exercise when they are too young, you have a higher risk of skeletal problems, (particularly hips) especially with the larger breeds. You must be careful when they are young to keep them from repetitive jumping off things, hard running, etc. Just playing in the backyard or a park is fine. If you want a young dog to behave, then be the pack leader! That is predominantly what keeps any dog well behaved; just ask my Rhodesian Ridgeback...lol.
Reply 10 years ago on Introduction
I recommend shetland sheepdogs, very smart, easy to train, medium size...
Reply 10 years ago on Introduction
Actually, our neighbors had 4-5 shelties. :P They were straight out awesome.
Reply 9 years ago on Introduction
I say just get a mutt from the pound; that's where I got my last two dogs.
Reply 9 years ago on Introduction
Being partial to AKC Golden Retrievers, you can't beat these dogs. Very few bad traits and an extra smart dog. For the hair problems, just have them trimmed close once a year. They are outside dogs and need a lot of exercise. They love to swim, go places and meet people; very social.
Reply 10 years ago on Introduction
That's a tough question. It depends on what you are looking for and what type of a place you have. We have a large field around our place, so our large dogs can get plenty of exercise. If you have a house with a small backyard or an apartment, then go with a cool small breed. Unfortunately I don't really know much about them. As for large breeds, I've had golden retrievers, which are always super friendly. But you cannot go wrong with a Lab. They are friendly, smart, and I have yet to meet someone who is unhappy with a Lab.
Reply 10 years ago on Introduction
Mm. We have large fields around the house, a big yard and stuff. We want something we can do stuff with, nothing lazy. We had a retriever once that was pretty cool. I used to like rottweilers, but my dad's cat got killed by one, so meh. XD
Reply 10 years ago on Introduction
*double post* We also go camping a lot during the summer, so we want a dog we can play with then.
Question:
Given a FILE*, is it possible to determine the underlying type? That is, is there a function that will tell me if the FILE* is a pipe or a socket or a regular on-disk file?
Solution:1
There's an fstat(2) function.
NAME
       stat, fstat, lstat - get file status

SYNOPSIS
       #include <sys/types.h>
       #include <sys/stat.h>
       #include <unistd.h>

       int fstat(int fd, struct stat *buf);
You can get the fd by calling fileno(3). Then you can call S_ISFIFO(buf.st_mode) to figure it out.
Solution:2
Use the fstat() function. However, you'll need to use the fileno() macro to get the file descriptor from the FILE struct.
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

FILE *fp = fopen(path, "r");
int fd = fileno(fp);
struct stat statbuf;
fstat(fd, &statbuf);
/* a decoding case statement would be good here;
   S_IFMT masks out the file-type bits of st_mode */
printf("%s is file type %08o\n", path, statbuf.st_mode & S_IFMT);
Scala code resides in the Java platform's global hierarchy of packages. The example code you've seen so far in this book has been in the unnamed package. You can place code into named packages in Scala in two ways. First, you can place the contents of an entire file into a package by putting a package clause at the top of the file, as shown in Listing 13.1.
package bobsrockets.navigation
class Navigator
The package clause of Listing 13.1 places class Navigator into the package named bobsrockets.navigation. Presumably, this is the navigation software developed by Bob's Rockets, Inc.
The other way you can place code into packages in Scala is more like C# namespaces. You follow a package clause by a section in curly braces that contains the definitions that go into the package. Among other things, this syntax lets you put different parts of a file into different packages. For example, you might include a class's tests in the same file as the original code, but put the tests in a different package, as shown in Listing 13.2:
package bobsrockets {
  package navigation {

    // In package bobsrockets.navigation
    class Navigator

    package tests {

      // In package bobsrockets.navigation.tests
      class NavigatorSuite
    }
  }
}
The Java-like syntax shown in Listing 13.1 is actually just syntactic sugar for the more general nested syntax shown in Listing 13.2. In fact, if you do nothing with a package except nest another package inside it, you can save a level of indentation using the approach shown in Listing 13.3:
package bobsrockets.navigation {

  // In package bobsrockets.navigation
  class Navigator

  package tests {

    // In package bobsrockets.navigation.tests
    class NavigatorSuite
  }
}
As this notation hints, Scala's packages truly nest. That is, package navigation is semantically inside of package bobsrockets. Java packages, despite being hierarchical, do not nest. In Java, whenever you name a package, you have to start at the root of the package hierarchy. Scala uses a more regular rule in order to simplify the language.
Take a look at Listing 13.4. Inside the Booster class, it's not necessary to reference Navigator as bobsrockets.navigation.Navigator, its fully qualified name. Since packages nest, it can be referred to simply as navigation.Navigator. This shorter name is possible because class Booster is contained in package bobsrockets, which has navigation as a member. Therefore, navigation can be referred to without a prefix, just like the code inside methods of a class can refer to other methods of that class without a prefix.
package bobsrockets {
  package navigation {
    class Navigator
  }
  package launch {
    class Booster {
      // No need to say bobsrockets.navigation.Navigator
      val nav = new navigation.Navigator
    }
  }
}
Another consequence of Scala's scoping rules is that packages in an inner scope hide packages of the same name that are defined in an outer scope. For instance, consider the code shown in Listing 13.5, which has three packages named launch. There's one launch in package bobsrockets.navigation, one in bobsrockets, and one at the top level (in a different file from the other two). Such repeated names work fine—after all they are a major reason to use packages—but they do mean you must use some care to access precisely the one you mean.
// In file launch.scala
package launch {
  class Booster3
}

// In file bobsrockets.scala
package bobsrockets {
  package navigation {
    package launch {
      class Booster1
    }
    class MissionControl {
      val booster1 = new launch.Booster1
      val booster2 = new bobsrockets.launch.Booster2
      val booster3 = new _root_.launch.Booster3
    }
  }
  package launch {
    class Booster2
  }
}
To see how to choose the one you mean, take a look at MissionControl in Listing 13.5. How would you reference each of Booster1, Booster2, and Booster3? Accessing the first one is easiest. A reference to launch by itself will get you to package bobsrockets.navigation.launch, because that is the launch package defined in the closest enclosing scope. Thus, you can refer to the first booster class as simply launch.Booster1. Referring to the second one also is not tricky. You can write bobsrockets.launch.Booster2 and be clear about which one you are referencing. That leaves the question of the third booster class, however. How can you access Booster3, considering that a nested launch package shadows the top-level one?
To help in this situation, Scala provides a package named _root_ that is outside any package a user can write. Put another way, every top-level package you can write is treated as a member of package _root_. For example, both launch and bobsrockets of Listing 13.5 are members of package _root_. As a result, _root_.launch gives you the top-level launch package, and _root_.launch.Booster3 designates the outermost booster class.
In Scala, packages and their members can be imported using import clauses. Imported items can then be accessed by a simple name like File, as opposed to requiring a qualified name like java.io.File. For example, consider the code shown in Listing 13.6:
package bobsdelights
abstract class Fruit(
  val name: String,
  val color: String
)
object Fruits {
  object Apple extends Fruit("apple", "red")
  object Orange extends Fruit("orange", "orange")
  object Pear extends Fruit("pear", "yellowish")
  val menu = List(Apple, Orange, Pear)
}
An import clause makes members of a package or object available by their names alone without needing to prefix them by the package or object name. Here are some simple examples:
// easy access to Fruit
import bobsdelights.Fruit

// easy access to all members of bobsdelights
import bobsdelights._

// easy access to all members of Fruits
import bobsdelights.Fruits._

The first of these corresponds to Java's single type import, the second to Java's on-demand import. The only difference is that Scala's on-demand imports are written with a trailing underscore (_) instead of an asterisk (*) (after all, * is a valid identifier in Scala!). The third import clause above corresponds to Java's import of static class fields.
These three imports give you a taste of what imports can do, but Scala imports are actually much more general. For one, imports in Scala can appear anywhere, not just at the beginning of a compilation unit. Also, they can refer to arbitrary values. For instance, the import shown in Listing 13.7 is possible:
def showFruit(fruit: Fruit) {
  import fruit._
  println(name + "s are " + color)
}
Method showFruit imports all members of its parameter fruit, which is of type Fruit. The subsequent println statement can refer to name and color directly. These two references are equivalent to fruit.name and fruit.color. This syntax is particularly useful when you use objects as modules, which will be described in Chapter 27.
Scala's import clauses are quite a bit more flexible than Java's. There are three principal differences. In Scala, imports:

- may appear anywhere
- may refer to objects (singleton or regular) in addition to packages
- let you rename and hide some of the imported members
Another way Scala's imports are flexible is that they can import packages themselves, not just their non-package members. This is only natural if you think of nested packages being contained in their surrounding package. For example, in Listing 13.8, the package java.util.regex is imported. This makes regex usable as a simple name. To access the Pattern singleton object from the java.util.regex package, you can just say, regex.Pattern, as shown in Listing 13.8:
import java.util.regex
class AStarB {
  // Accesses java.util.regex.Pattern
  val pat = regex.Pattern.compile("a*b")
}
Imports in Scala can also rename or hide members. This is done with an import selector clause enclosed in braces, which follows the object from which members are imported. Here are some examples:

import java.sql.{Date => SDate}
This imports the SQL date class as SDate, so that you can simultaneously import the normal Java date class as simply Date.
import java.{sql => S}
This imports the java.sql package as S, so that you can write things like S.Date.
import Fruits.{_}
This imports all members from object Fruits. It means the same thing as import Fruits._.
import Fruits.{Apple => McIntosh, _}
This imports all members from object Fruits but renames Apple to McIntosh.
import Fruits.{Pear => _, _}
This imports all members of Fruits except Pear. A clause of the form "<original-name> => _" excludes <original-name> from the names that are imported. In a sense, renaming something to `_' means hiding it altogether. This is useful to avoid ambiguities. Say you have two packages, Fruits and Notebooks, which both define a class Apple. If you want to get just the notebook named Apple, and not the fruit, you could still use two imports on demand like this:
import Notebooks._
import Fruits.{Apple => _, _}

This would import all Notebooks and all Fruits except for Apple.
These examples demonstrate the great flexibility Scala offers when it comes to importing members selectively and possibly under different names. In summary, an import selector can consist of the following:

- A simple name x. This includes x in the set of imported names.
- A renaming clause x => y. This makes the member named x visible under the name y.
- A hiding clause x => _. This excludes x from the set of imported names.
- A catch-all '_'. This imports all members except those mentioned in a preceding clause; if a catch-all is given, it must come last in the list of import selectors.
Scala adds some imports implicitly to every program. In essence, it is as if the following three import clauses had been added to the top of every source file with extension ".scala":
import java.lang._ // everything in the java.lang package
import scala._     // everything in the scala package
import Predef._    // everything in the Predef object
The java.lang package contains standard Java classes. It is always implicitly imported on the JVM implementation of Scala. The .NET implementation would import package system instead, which is the .NET analogue of java.lang. Because java.lang is imported implicitly, you can write Thread instead of java.lang.Thread, for instance.
As you have no doubt realized by now, the scala package contains the standard Scala library, with many common classes and objects. Because scala is imported implicitly, you can write List instead of scala.List, for instance.
The Predef object contains many definitions of types, methods, and implicit conversions that are commonly used in Scala programs. For example, because Predef is imported implicitly, you can write assert instead of Predef.assert.
The three import clauses above are treated a bit specially in that later imports overshadow earlier ones. For instance, the StringBuilder class is defined both in package scala and, from Java version 1.5 on, also in package java.lang. Because the scala import overshadows the java.lang import, the simple name StringBuilder will refer to scala.StringBuilder, not java.lang.StringBuilder.
Members of packages, classes, or objects can be labeled with the access modifiers private and protected. These modifiers restrict accesses to the members to certain regions of code. Scala's treatment of access modifiers roughly follows Java's but there are some important differences which are explained in this section.
Private members are treated similarly to Java. A member labeled private is visible only inside the class or object that contains the member definition. In Scala, this rule applies also for inner classes. This treatment is more consistent, but differs from Java. Consider the example shown in Listing 13.9:
class Outer {
  class Inner {
    private def f() { println("f") }
    class InnerMost {
      f() // OK
    }
  }
  (new Inner).f() // error: f is not accessible
}
In Scala, the access (new Inner).f() is illegal because f is declared private in Inner and the access is not from within class Inner. By contrast, the first access to f in class InnerMost is OK, because that access is contained in the body of class Inner. Java would permit both accesses because it lets an outer class access private members of its inner classes.
Access to protected members is also a bit more restrictive than in Java. In Scala, a protected member is only accessible from subclasses of the class in which the member is defined. In Java such accesses are also possible from other classes in the same package. In Scala, there is another way to achieve this effect, as described below, so protected is free to be left as is. The example shown in Listing 13.10 illustrates protected accesses:
package p {
  class Super {
    protected def f() { println("f") }
  }
  class Sub extends Super {
    f()
  }
  class Other {
    (new Super).f() // error: f is not accessible
  }
}
In Listing 13.10, the access to f in class Sub is OK because f is declared protected in Super and Sub is a subclass of Super. By contrast the access to f in Other is not permitted, because Other does not inherit from Super. In Java, the latter access would be still permitted because Other is in the same package as Sub.
Every member not labeled private or protected is public. There is no explicit modifier for public members. Such members can be accessed from anywhere.
package bobsrockets {
  package navigation {
    private[bobsrockets] class Navigator {
      protected[navigation] def useStarChart() {}
      class LegOfJourney {
        private[Navigator] val distance = 100
      }
      private[this] var speed = 200
    }
  }
  package launch {
    import navigation._
    object Vehicle {
      private[launch] val guide = new Navigator
    }
  }
}
Access modifiers in Scala can be augmented with qualifiers. A modifier of the form private[X] or protected[X] means that access is private or protected "up to" X, where X designates some enclosing package, class or singleton object.
Qualified access modifiers give you very fine-grained control over visibility. In particular they enable you to express Java's accessibility notions such as package private, package protected, or private up to outermost class, which are not directly expressible with simple modifiers in Scala. But they also let you express accessibility rules that cannot be expressed in Java. Listing 13.11 presents an example with many access qualifiers being used. In this listing, class Navigator is labeled private[bobsrockets]. This means that this class is visible in all classes and objects that are contained in package bobsrockets. In particular, the access to Navigator in object Vehicle is permitted, because Vehicle is contained in package launch, which is contained in bobsrockets. On the other hand, all code outside the package bobsrockets cannot access class Navigator.
This technique is quite useful in large projects that span several packages. It allows you to define things that are visible in several sub-packages of your project but that remain hidden from clients external to your project. The same technique is not possible in Java. There, once a definition escapes its immediate package boundary, it is visible to the world at large.
Of course, the qualifier of a private may also be the directly enclosing package. An example is the access modifier of guide in object Vehicle in Listing 13.11. Such an access modifier is equivalent to Java's package-private access.
All qualifiers can also be applied to protected, with the same meaning as private. That is, a modifier protected[X] in a class C allows access to the labeled definition in all subclasses of C and also within the enclosing package, class, or object X. For instance, the useStarChart method in Listing 13.11 is accessible in all subclasses of Navigator and also in all code contained in the enclosing package navigation. It thus corresponds exactly to the meaning of protected in Java.
The qualifiers of private can also refer to an enclosing class or object. For instance the distance variable in class LegOfJourney in Listing 13.11 is labeled private[Navigator], so it is visible from everywhere in class Navigator. This gives the same access capabilities as for private members of inner classes in Java. A private[C] where C is the outermost enclosing class is the same as just private in Java.
Finally, Scala also has an access modifier that is even more restrictive than private. A definition labeled private[this] is accessible only from within the same object that contains the definition. Such a definition is called object-private. For instance, the definition of speed in class Navigator in Listing 13.11 is object-private. This means that any access must not only be within class Navigator, but it must also be made from the very same instance of Navigator. Thus the accesses "speed" and "this.speed" would be legal from within Navigator. The following access, though, would not be allowed, even if it appeared inside class Navigator:
val other = new Navigator
other.speed // this line would not compile

Marking a member private[this] is a guarantee that it will not be seen from other objects of the same class. This can be useful for documentation. It also sometimes lets you write more general variance annotations (see Section 19.7 for details).
To summarize, Table 13.1 here lists the effects of private qualifiers. Each line shows a qualified private modifier and what it would mean if such a modifier were attached to the distance variable declared in class LegOfJourney in Listing 13.11.
In Java, static members and instance members belong to the same class, so access modifiers apply uniformly to them. You have already seen that in Scala there are no static members; instead you can have a companion object that contains members that exist only once. For instance, in Listing 13.12 object Rocket is a companion of class Rocket:
class Rocket {
  import Rocket.fuel
  private def canGoHomeAgain = fuel > 20
}
object Rocket {
  private def fuel = 10
  def chooseStrategy(rocket: Rocket) {
    if (rocket.canGoHomeAgain)
      goHome()
    else
      pickAStar()
  }
  def goHome() {}
  def pickAStar() {}
}
Scala's access rules privilege companion objects and classes when it comes to private or protected accesses. A class shares all its access rights with its companion object and vice versa. In particular, an object can access all private members of its companion class, just as a class can access all private members of its companion object.
For instance, the Rocket class above can access method fuel, which is declared private in object Rocket. Analogously, the Rocket object can access the private method canGoHomeAgain in class Rocket.
One exception where the similarity between Scala and Java breaks down concerns protected static members. A protected static member of a Java class C can be accessed in all subclasses of C. By contrast, a protected member in a companion object makes no sense, as singleton objects don't have any subclasses.
In this chapter, you saw the basic constructs for dividing a program into packages. This gives you a simple and useful kind of modularity, so that you can work with very large bodies of code without different parts of the code trampling on each other. This system is the same in spirit as Java's packages, but there are some differences where Scala chooses to be more consistent or more general.
Looking ahead, Chapter 27 describes a more flexible module system than division into packages. In addition to letting you separate code into several namespaces, that approach allows modules to be parameterized and to inherit from each other. In the next chapter, we'll turn our attention to assertions and unit testing.
Sometimes. So I put in a TAR on Metalink with the request whether there was a way to trace, e.g. via an event setting, when a datafile extended. This way I could cross-reference this timestamp with the timestamps at which I was registering my latch problems. The response took a long time. I was in need of an answer, so I created a workaround via a PL/SQL procedure. This procedure was executed via the database job scheduler. Every time a datafile extended, the timestamp, SCN, extend growth etc. were picked up by this procedure, and the data was stored in a table.
Lately I am in search of XML DB knowledge; more specifically, DBA/database-specific XML DB knowledge (object performance/sizing/XML Schema tuning, etc.). In the new Oracle 10g Release 2 manuals, I came across the event setting 31098: "Internal event to turn on XDB tracing". So I wondered if there were more of these settings. Apparently this setting was already applicable in Oracle 10g Release 1.
I have a small script called OERR, like the Oracle message utility under UNIX. It does the following:
SQL> @oerr 31098 Error 31098 is: ORA-31098: Internal event to turn on XDB tracing
The SQL code for this script:
prompt
set serveroutput on size 1000000
set feedback off
exec dbms_output.put_line('Error ' || &&1 || ' is: ' || sqlerrm(-1 * &&1));
prompt
undefine 1
set feedback on
Wondering if I couldn't do more with this when using it with Oracle collections, I came up with the following:
SQL> @search_oerr
Enter string to search: XDB

RESULT:

ORA-31000: Resource '' is not an XDB schema document
ORA-31004: Length of the BLOB in XDB$H_INDEX is below the minimum
ORA-31098: Internal event to turn on XDB tracing
ORA-31099: XDB Security Internal Error
ORA-31100: XDB Locking Internal Error
ORA-31112: fail to for port using xdb configuration
ORA-31113: XDB configuration may not be updated with non-schema compliant data
ORA-31114: XDB configuration has been deleted or is corrupted
ORA-31115: XDB configuration error:
ORA-31153: Cannot create schema URL with reserved prefix ""
ORA-31155: attribute not in XDB namespace
ORA-31179: internal XDB event for ftp test harness

12 rows selected of 59989 records

PL/SQL procedure successfully completed.
The SQL statements for this report are:
-- Find certain events or error numbers
set serveroutput on size 1000000
set array 1
set long 10000
set trimspool on
prompt
accept XXX char prompt "Enter string to search: "

DECLARE
  TYPE statement IS RECORD (r_statement varchar2(1000));
  TYPE statement_stack IS TABLE OF statement INDEX BY binary_integer;
  showstring statement_stack;
  t number := 0;
BEGIN
  dbms_output.put_line(chr(10));
  dbms_output.put_line('RESULT:' || chr(10));
  showstring.delete;
  for i in 0..60000 loop
    showstring(i).r_statement := (sqlerrm(-1 * i));
    if upper(showstring(i).r_statement) like upper('% &&XXX %') then
      dbms_output.put_line(showstring(i).r_statement);
      t := nvl(t,0) + 1;
    else
      null; --> you should replace this with some useful code
    end if;
  end loop;
  dbms_output.put_line(chr(10));
  if t = 0 then --> just for fun
    dbms_output.put_line('no rows selected');
  elsif t = 1 then
    dbms_output.put_line('1 row selected');
  else
    dbms_output.put_line(t || ' rows selected of ' || (showstring.COUNT - t) || ' records');
  end if;
END;
/
undefine XXX
I hope you can use it. I know it's not high-tech, but just like the old "Tales from the script" SQL scripts on Metalink (they're still there), it's a starting point for further improvement.
😉
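The trick the scripts rely on — asking the runtime itself for the text of an error number via SQLERRM — has analogues elsewhere. As a loose, non-Oracle illustration of the same idea, Python can do the equivalent lookup and search over OS error codes (oerr/search_oerr here are invented names echoing the scripts above, not a standard API):

```python
import errno
import os

def oerr(num: int) -> str:
    """Like the OERR script: map an error number to its message text."""
    return "Error %d is: %s" % (num, os.strerror(num))

def search_oerr(term: str) -> list:
    """Like the search_oerr block: scan all known error numbers for
    messages containing the search term (case-insensitive)."""
    return [oerr(n) for n in sorted(errno.errorcode)
            if term.lower() in os.strerror(n).lower()]

if __name__ == "__main__":
    print(oerr(errno.ENOENT))
    for line in search_oerr("directory"):
        print(line)
```

The structure is the same as the PL/SQL version: one function turns a number into its message, the other loops over the whole code range and filters by substring.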
Could you guys please change the font? It's so difficult to read.
Thanks!
|
https://technology.amis.nl/2005/08/23/oerr-in-search-of-error-messages/
|
CC-MAIN-2017-09
|
refinedweb
| 582
| 61.87
|
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
How to inherit computed field from old Api to New Api ?
Can anybody please explain how to inherit a computed field from the old API in the new API?
It's not possible to inherit a function from the old API in the new API.
If you want to change the method called for a computed field, first inherit from the model, then in that model override the function field with your own function.
Old api
class account_account_type(osv.osv):
    _name = "account.account.type"
    _description = "Account Type"
    ....
    def _get_current_report_type(self, cr, uid, ids, name, arg, context=None):
        res = {}
        financial_report_ref = self._get_financial_report_ref(cr, uid, context=context)
        for record in self.browse(cr, uid, ids, context=context):
            res[record.id] = 'none'
            for key, financial_report in financial_report_ref.items():
                list_ids = [x.id for x in financial_report.account_type_ids]
                if record.id in list_ids:
                    res[record.id] = key
        return res
    ....
    _columns = {
        'name': fields.char('Account Type', required=True, translate=True),
        ......
        'report_type': fields.function(_get_current_report_type, fnct_inv=_save_report_type, type='selection', string='P&L / BS Category', store=True,
            selection=[('none', '/'),
                       ('income', _('Profit & Loss (Income account)')),
                       ('expense', _('Profit & Loss (Expense account)')),
                       ('asset', _('Balance Sheet (Asset account)')),
                       ('liability', _('Balance Sheet (Liability account)'))],
            help="This field is used to generate legal reports: profit and loss, balance sheet.", required=True),
        'note': fields.text('Description'),
    }
New api
class account_account_type(models.Model):
    _inherit = "account.account.type"
    _description = "Account Type inherited"
    ....
    @api.depends('field1')
    def _your_function(self):
        # your code here
        pass
    ....
    report_type = fields.Selection(compute='_your_function', string='Report', ....)
Hi John, it's possible to inherit a function from the old API in the new API using the method decorator "@api.v7", but in the case of a computed field function, I think it's not possible.
Yes, that's right, it's possible to override a function. In my example, I was only speaking about function fields. My fault for being unclear.
Can you please tell what exactly you are trying to do? You want to make some changes in that function or you want to use the same function for your custom field?
Hi Akhil, thanks for replying. I need the result from the base method, and with that result I need to perform more operations and update it.
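The pattern being asked for — get the result from the base computation, then perform further operations on it — can be sketched in plain Python via super(); the Odoo model and field machinery is omitted here, and the class names are purely illustrative:

```python
# Plain-Python sketch; BaseType stands in for the model that defines the
# original compute method, ExtendedType for the inheriting model.
class BaseType(object):
    def _compute_report_type(self):
        # base computation, standing in for the inherited compute method
        return 'none'

class ExtendedType(BaseType):
    def _compute_report_type(self):
        # get the result from the base method...
        base_value = super(ExtendedType, self)._compute_report_type()
        # ...then perform further operations on it before returning/storing
        return base_value.upper()

print(ExtendedType()._compute_report_type())  # NONE
```

In an actual Odoo model the override would assign to the record's field inside the compute method rather than return a value, but the super() call is the same idea.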
https://www.odoo.com/forum/help-1/question/how-to-inherit-computed-field-from-old-api-to-new-api-91985
kb(7m) [centos man page]
NAME
    kb - keyboard STREAMS module

SYNOPSIS
    #include <sys/types.h>
    #include <sys/stream.h>
    #include <sys/stropts.h>
    #include <sys/vuid_event.h>
    #include <sys/kbio.h>
    #include <sys/kbd.h>

    ioctl(fd, I_PUSH, "kb");

DESCRIPTION
    When started, the kb STREAMS module is in the compatibility mode. When
    the keyboard is in the TR_EVENT translation mode, keystrokes are
    translated to ISO 8859/1 characters.

    The kio_station request specifies the keystation code for the entry to
    be modified; the value of kio_entry is stored in the entry in question.
    For KIOCABORT2, the value of kio_station is set to be the second
    keystation in the sequence. An attempt to change the "break to the PROM
    monitor" sequence without having superuser permission results in an
    EPERM error.

    KIOCTYPE        The argument is a pointer to an int. A code indicating
                    the type of the keyboard is stored in the int pointed
                    to by the argument:
                    KB_SUN3   Sun Type 3 keyboard
                    KB_SUN4   Sun Type 4 or 5 keyboard, or non-USB Sun
                              Type 6 keyboard
                    KB_USB    USB standard HID keyboard, including Sun
                              Type 6 USB keyboards

    KIOCSKABORTEN   The argument is a pointer to an int. Enables or
                    disables the keyboard abort sequence effect (typically
                    L1-A or Stop-A on the keyboard on SPARC systems, F1-A
                    on x86 systems). SLIP has no comparable capability, and
                    must not be used if the Alternate Break sequence is in
                    use.

    KIOCSDIRECT     Has no effect.

    KIOCGDIRECT     Always returns 1.

    The following ioctl() requests are used to set and get the keyboard
    autorepeat delay and rate.

    KIOCSRPTDELAY   The argument is a pointer to an int, which is the
                    keyboard autorepeat delay in milliseconds.

    KIOCGRPTDELAY   The argument is a pointer to an int. The current
                    autorepeat delay setting, in milliseconds, is stored in
                    the integer pointed to by the argument.

    KIOCSRPTRATE    The argument is a pointer to an int, which is the
                    keyboard autorepeat rate in milliseconds.

    KIOCGRPTRATE    The argument is a pointer to an int. The current
                    autorepeat rate setting, in milliseconds, is stored in
                    the integer pointed to by the argument.

ATTRIBUTES
    See attributes(5) for descriptions of the following attributes:

    +---------------------+-----------------+
    | ATTRIBUTE TYPE      | ATTRIBUTE VALUE |
    +---------------------+-----------------+
    | Interface Stability | Stable          |
    +---------------------+-----------------+

SEE ALSO
    kbd(1), loadkeys(1), kadb(1M), pppd(1M), keytables(4), attributes(5),
    zs(7D), se(7D), asy(7D), virtualkm(7D), termio(7I), usbkbm(7M)

NOTES
    Many keyboards released after the Sun Type 4 keyboard also report
    themselves as Sun Type 4 keyboards.

26 Feb 2004                                                          kb(7M)
https://www.unix.com/man-page/centos/7M/kb/
CherryPy PyLucene integration
Problem
There are some problems in calling some PyLucene APIs from CherryPy code. I got the following Java exception when I tried calling PyLucene.IndexWriter from CherryPy.
writer = PyLucene.IndexWriter("c:/index", PyLucene.StandardAnalyzer(), True)
JavaError: java.lang.NullPointerException
The same API works fine if called from python console.
>>> PyLucene.IndexWriter( "c:/index", PyLucene.StandardAnalyzer(), True) <IndexWriter: org.apache.lucene.index.IndexWriter@17f3fa0>
Reason
The reason for conflict lies in the threading mechanism used by CherryPy.
CherryPy page handlers never run in the main thread, a new thread is created for handling new request. These new threads are local threads created from threading.Thread.
Basically, in a Python program, only the main thread can call into PyLucene without any problems. For other threads to work with PyLucene, the boehm-gc component of the Java runtime (i.e., the garbage collector) should be informed about their creation. If these threads are created as instances of PyLucene.PythonThread, it will make sure that each thread is created via libgcj and that the libgcj garbage collector is aware of it.
Solution
The idea is to use PyLucene.PythonThread instead of normal threading.Thread in CherryPy source code.
I made the following modifications in the CherryPy source code to get CherryPy 2.2.1 and PyLucene 2.0.0 on Windows 2000 to work together.
The cherrypy code should be available in following directory inside python installation directory. python_installation_dir\Lib\site-packages\cherrypy
Files which have threading.Thread are:
- _cpserver.py
- _cpwsgiserver.py
You need to add the statement import PyLucene and replace threading.Thread with PyLucene.PythonThread in the above two files.
Also disable CherryPy's autoreload (by adding autoreload.on=False to configuration file) because autoreload uses low level threads.
These changes worked fine in my case.
More Information
For more details, see
For porting this solution to Fedora Core 5 Linux, visit
The above information may not be correct, as it's just taken from my experience. Let me know if there is any mistake.
Pravin Shinde [getpravin at gmail.com]
A solution to avoid changing the CherryPy source code (inspired by the dummy_threading module mentioned to me by my colleague Richard Philips).
Import the following code at the top of your application, before you import cherrypy. All subsequent import threading commands will import this module and use the PyLucene.PythonThread class. Be sure to replace the anet.explorator.exploratorthreading path with the name and path you give to the module.
It is necessary to fill the locals() table with the _names of the threading module, since CherryPy addresses some _classes directly (_Timer, for example).
from threading import *
from threading import __all__
import threading

all = dir(threading)
for name in all:
    if name[0] == '_' and name[1] != '_':
        locals()[name] = getattr(threading, name)

import sys
del sys.modules['threading']
try:
    sys.modules['threading'] = sys.modules['anet.explorator.exploratorthreading']
except:
    sys.modules['threading'] = sys.modules['exploratorthreading']

import PyLucene

class Thread(PyLucene.PythonThread):
    def __init__(self, *args, **kwds):
        PyLucene.PythonThread.__init__(self, *args, **kwds)
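The same sys.modules swap can be demonstrated in a minimal, self-contained form (no PyLucene required; PatchedThread stands in for PyLucene.PythonThread here):

```python
# Replace the 'threading' entry in sys.modules with a module object whose
# Thread attribute is a custom subclass; any later "import threading"
# then silently picks up the patched class.
import sys
import types
import threading as _real_threading

class PatchedThread(_real_threading.Thread):
    """Stand-in for PyLucene.PythonThread in this sketch."""
    pass

patched = types.ModuleType('threading')
patched.__dict__.update(_real_threading.__dict__)
patched.Thread = PatchedThread
sys.modules['threading'] = patched

import threading  # resolves to the patched module from here on
print(threading.Thread is PatchedThread)  # True
```

This is the whole trick: code that does import threading after this point, including CherryPy's server internals, gets the substituted Thread class without any source modification.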
Marc Jeurissen
University of Antwerp
marc.jeurissen@ua.ac.be
Update for PyLucene 2.3
PyLucene >= 2.3 lacks PythonThread. The authors say that the new version no longer needs it. You can instead use ordinary Python threads; you just have to call attachCurrentThread() before using PyLucene. See this thread and the readme for PyLucene.
Joe Barillari
cherrypy.wiki@barillari.org
Update for PyLucene 2.4
Most of the information on this page is obsolete. Some current integration issues:
- Autoreload is not compatible with the VM initialization. Set engine.autoreload.on to False.
- WorkerThreads must be attached to the VM (lucene.getVMEnv().attachCurrentThread()). Don't bother with detachCurrentThread; it's unnecessary and can cause crashes.
- It is also recommended that the VM ignore keyboard interrupts, for clean server shutdown. Pass vmargs='-Xrs' to lucene.initVM.
The LuPyne project provides a standalone search server based on CherryPy. It can be used as an example to be customized, or as a pythonic alternative to Solr.
Attachments
- CherrypyPylucene.txt (1.4 kB) - added by marc.jeurissen@ua.ac.be on 12/17/07 12:36:45.
http://tools.cherrypy.org/wiki/PyLucene
A Smooth Transition to ECMAScript 6: Using New Features
In part one of this miniseries, we talked about the timeline for ES6 rollout, feature compatibility in existing environments and transpilers, and how to get ES6 set up in your build process.
Today, we’ll continue the conversation, looking at some of the easiest places to start using ES6 in a typical front-end Backbone + React project. Even if that’s not your stack, read on! There’s something for everyone here.
If you want to try out the examples, you can use a sandboxed ES6 environment at ES6 Fiddle.
New Features
Classes, Shorthand Methods, and Shorthand Properties
A lot of client-side JS code is object-oriented. If you're using Backbone, just about every Model, Collection, View, or Router you ever write will be a subclass of a core library Class. With ES6, extending these objects is a breeze. We can just call class MySubclass extends MyClass and we get object inheritance. We get access to a constructor method, and from within any method we can call the parent class's method of the same name via super. This prevents us from having to write things like:
Backbone.Collection.prototype.initialize.apply(this, arguments)
We also get some handy shorthands for defining methods and properties. Note the pattern I'm using to define initialize instead of initialize: function(args) {}:
class UserView extends Backbone.View {
  initialize(options) {
    super.initialize(options);
  }
}
We can also define properties using a nice new shorthand. The code below sets an app property on the Injector that points to the instance of App we create on the second line. In other words, it's the same as doing this.app = app;.
let App = function() {}; // we'll look at 'let' in just a second
let app = new App();
let Injector = { app };
Let
The new let keyword is probably the easiest win that you can possibly get in using ES6. If you do nothing else, just start replacing var with let everywhere. What's the difference, you ask? Well, var is scoped to the closest enclosing function, while let is scoped to the closest enclosing block.
In essence, variables defined with let aren't visible outside of if blocks and for loops, so there's less likelihood of a naming collision. There are other benefits. See this Stack Overflow answer for more details.
You can use it pretty much everywhere, but here's a good example of somewhere that it actually makes a difference in preventing a naming collision. The userName inside of the map callback doesn't clash with the current user's userName defined just above it.
let UserList = React.createClass({
  ...
  render() {
    let userName = this.props.current; // See the Fat Arrow section below
    let userComponents = this.props.users.map(user => {
      let userName = user.get('userName');
      return <UserComponent displayName={userName} />;
    });
    return (
      <div className="user-list">
        <h1>Welcome back, {userName}!</h1>
        {userComponents}
      </div>
    );
  }
});
Const
As you might guess from the name, const defines a read-only (constant) variable. It should be pretty easy to guess where to use this. For example:
const DEFAULT_MAP_CENTER = [48.1667, -100.1667];

class MapView extends Backbone.View {
  centerMap() {
    map.panTo(DEFAULT_MAP_CENTER);
  }
}
The Fat Arrow
You've probably already heard about the fat arrow, or used it before if you've written any CoffeeScript. The fat arrow, =>, is a new way to define a function. It preserves the value of this from the surrounding context, so you don't have to use workarounds like var self = this; or bind. It comes in really handy when dealing with nested functions. Plus it looks really cool.
let Toggle = React.createClass({
  componentDidMount() {
    // iOS
    setTimeout(() => {
      var $el = $('#' + this.props.id + '_label');
      $el.on('touchstart', e => {
        let $checkbox = $el.find('input[type="checkbox"]');
        $checkbox.prop("checked", !$checkbox.prop("checked"));
      });
    }, 0);
  }
});
Template Strings
Do you ever get sick of doing string concatenation in JavaScript? I sure do! Well, good news! We can finally do string interpolation. This will come in very handy all over the place. I'm especially excited about using it in React render calls like this:
let ProductList = React.createClass({
  ...
  render() {
    let links = this.props.products.map(product => {
      return (
        <li>
          <a href={`/products/${product.id}`}>{product.get('name')}</a>
        </li>
      );
    });
    return (
      <div className="product-list">
        <ul>
          {links}
        </ul>
      </div>
    );
  }
});
String Sugar
It's always been kind of a pain to check for substrings in JavaScript. if (myString.indexOf(mySubstring) !== -1)? Give me a break! ES6 finally gives us some sugar to make this a little easier. We can call startsWith, endsWith, includes, and repeat.
// Clean up all the AngularJS elements
$('body *').each((i, node) => {
  let $node = $(node);
  if ($node.attr('class').startsWith('ng')) {
    $node.remove();
  }
});

// Pluralize
function Pluralize(word) {
  return word.endsWith('s') ? word : `${word}s`;
}

// Check for spam
let spamMessages = [];
$.get('/messages', function(messages) {
  spamMessages = messages.filter(message =>
    message.toLowerCase().includes('sweepstakes')
  );
});

// Sing the theme song
let sound = 'na';
sound.repeat(10); // 'nananananananananana'
Argument Defaults
Languages like Ruby and Python have long allowed you to define argument defaults in your method and function signatures. With the addition of this feature to ES6, writing Backbone views requires one less line of boilerplate.
By setting options to an argument default, we don't have to worry about cases where nothing is passed in. No more options = options || {}; statements!
class BaseView extends Backbone.View {
  initialize(options = {}) {
    this.options = options;
  }
}
Spread and Rest
Sometimes function calls that take multiple arguments can get really messy to deal with — like when you're calling them from a bind that's being triggered by an event listener.
For example, check out this event listener from a Backbone view in a recent project I was working on. Because of the method signature of _resizeProductBox, I have to pass all those null arguments into bind, and it gets kind of ugly.
class ProductView extends BaseView {
  initialize() {
    this.listenTo(options.breakpointEvents, 'all',
      _.bind(this._resizeProductBox, this, null, null, true));
  }
  _resizeProductBox(height, width, shouldRefresh) {
    ...
  }
}
In ES6, we can clean this up a bit with spread. We'll just prepend an array of default arguments with a ... to send them through as arguments to the method call.
Here's how you'd do it using spread:
const BREAKPOINT_RESIZE_ARGUMENTS = [null, null, true];

class ProductView extends BaseView {
  initialize() {
    this.listenTo(options.breakpointEvents, 'all',
      _.bind(this._resizeProductBox, this, ...BREAKPOINT_RESIZE_ARGUMENTS));
  }
  _resizeProductBox(height, width, shouldRefresh) {
    ...
  }
}
On the other side of the coin is rest, which lets us accept any number of arguments in a method signature instead of at invocation time, as you can do with the splat in Ruby. For example:
function cleanupViews(...views) {
  views.forEach(function(view) {
    view.remove();
  });
}
Array Destructuring
Sometimes I find myself having to access all of the elements of an array-like object with square brackets. It’s kind of a bummer. Luckily, ES6 lets me use array destructuring instead. It makes it easy to do things like splitting latitude and longitude from an array into two variables, as I do here. (Note that I also could have used spread.)
let markers = [];
listings.forEach(function (listing, index) {
  let [lat, lng] = listing.latlng; // looks like: [39.719121, -105.191969]
  let listingMarker = new google.maps.Marker({
    position: new google.maps.LatLng(lat, lng)
  });
  markers.push(listingMarker);
});
Promises
It seems like every project I work on these days uses promises. Native promises have landed in ES6, so we can all rely on the same API from here on out. Both promise instances and static methods like Promise.all are provided. Here's an example of a userService from an Angular app that returns a promise from $http if the user is online, and a native promise otherwise.
angular.module('myApp').factory('userService', function($http, offlineStorage) {
  return {
    updateSettings: function(user) {
      var promise;
      if (offlineStorage.isOffline()) {
        promise = new Promise(function (resolve, reject) {
          resolve(user.toJSON());
        });
      } else {
        promise = $http.put('/api/users/' + user.id, user.settings)
          .then(function(result) {
            return result.data;
          });
      }
      return promise.then(data => {
        return new User(data);
      });
    }
  };
});
A Note On Modules
You may have noticed that I didn’t cover modules, importing, or exporting in this miniseries. Although modules are one of the higher profile features in ES6, and they’re easy to get started with, they still have a lot of edge cases that need to be worked out as ES6 is rolled out.
Specifically, ES6 modules have a default export and named exports. CommonJS and AMD only support a single export, and traditionally handle named exports by exporting an object with the named exports as properties. The different ES6 module libraries have different ways of reconciling the differences, so you have to use only default exports or named exports if CommonJS might use the module. How these differences will be reconciled remains to be seen.
That said, the easiest way to start requiring modules is to switch from:
var _ = require('lodash');
To the new syntax:
import _ from 'lodash';
Similarly, you can import relative files using:
import Router from '../router';
When exporting, you can switch from:
module.exports = App;
To this:
export default App;
There’s a bit more to using modules (such as named exports), which we won’t cover today. If you want to learn more, take a look at this overview.
Conclusion
In this miniseries, we took a look at some real-world examples of how you would use ES6 in a client-side JavaScript app. We took a quick look at setting up an ES6 transpile step in your build process, and examined many of the easy-to-use features you can start using right away.
Before you know it, ES6 will be the standard language in use across the web and on servers and personal computers everywhere. Hopefully, this walkthrough will help you get started with a minimum of fuss.
If you’re feeling excited about ES6, and you want to learn more, I would suggest reading through this overview. Or if you’re feeling really enthusiastic, try this book.
Until next time, happy coding!
P.S. How do you plan to use these features in your app? Have you discovered a trick made possible by the new features that speed up your workflow? Leave us a comment!
http://blog.engineyard.com/2015/smooth-transition-ecmascript-6-new-features
I have this very simple beginner program for my Java class. I cannot seem to grasp why my if statement is not being executed when the correct input is entered. I know I could probably get the user to enter a number which I could convert and run through a switch, but I really have to know why this doesn't work. I've been reading through JOptionPane forums for the past hour and I guess I'm not seeing my solution.
I am probably not comprehending something so simple and fundamental. I rightfully deserve any tongue/keyboard lashing.
Thanks in advance,
Mike
Code:
import javax.swing.*;

public class Week04_InternetServiceProvider_michaelBrooks {
    public static void main(String[] args) {
        String input;
        String internetType;
        double hours = 0;
        double overages = 0;
        double internetPlanA = 9.95;
        //double internetPlanB = 13.95;
        //double internetPlanC = 19.95;

        input = JOptionPane.showInputDialog("What type of internet package do you you have? Enter A, B or C").toUpperCase();
        internetType = input;

        input = JOptionPane.showInputDialog("How many hours did you use?");
        hours = Double.parseDouble(input);

        if (internetType == "A") {
            if (hours > 10) {
                overages = hours - 10;
                overages *= 2;
                internetPlanA += overages;
            }
            JOptionPane.showMessageDialog(null, "Your total monthly bill is: $" + internetPlanA);
        }
        System.exit(0);
    }
}
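One likely cause, offered here as an aside: in Java, internetType == "A" compares String references rather than contents, so the branch can be skipped even when the user typed A; internetType.equals("A") compares the characters. The reference-vs-value distinction can be sketched in Python, where `is` plays the role Java's == plays on objects:

```python
# Java's == on objects checks identity, like Python's "is"; Java's
# .equals() checks value, like Python's "==".
a = "".join(["A", "B"])  # "AB" constructed at runtime: a distinct object
b = "AB"                 # compile-time literal

print(a == b)  # True  -- value comparison (Java: a.equals(b))
print(a is b)  # reference comparison (Java: a == b); False here in CPython
```

Comparing strings with the value comparison is what the program above needs; comparing references only succeeds when the two variables happen to point at the same object.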
http://forums.devshed.com/java-help/939187-joption-input-validation-last-post.html
Introduction
With its massive .NET framework, Microsoft® introduced a new data access technology called ADO.NET. In this article, we will examine how to use the ADO.NET driver for Informix that is included with the IBM Client SDK version 2.90. The sample code included is written in C#.
Laying the groundwork
Before using the ADO.NET driver, you should make sure it is installed and working properly. The current version of the driver is installed with the Informix Client Software Developer's Kit (SDK) 2.90. Unlike the previous 2.81 version, this SDK installation includes the ADO.NET driver by default. As you might expect, the client machine must have the .NET framework installed in order to use the driver. The SDK install program will warn you about this, too. It does not seem to actually look for the .NET framework, it just warns you that it needs to already be installed. If you have the 2.81 SDK already installed, it is best to uninstall it first. The two versions do not co-exist well. Be aware that the 2.90 ADO.NET driver incorrectly reports itself as version 2.81 when you add it into your Visual Studio Projects.
The 2.90 version is a significant upgrade over the 2.81 version. It includes a new IfxDataAdapter wizard, IPv6 support, and new classes for Informix data types (IfxDateTime, IfxDecimal, IfxBlob, and IfxClob). The documentation is more complete with twice the amount of material.
Important: The IBM Informix ADO.Net driver is not self-contained in the IBM.Data.Informix.dll file that gets installed in the /bin directory of your installation. Apparently, it uses some of the other client code installed by the SDK. This means that you must install the Informix Client SDK on any machines that will use the ADO.Net driver. You cannot just include the IBM.Data.Informix.dll in your distribution. This could be a serious limitation for some applications. You also need to go through the SDK setup (SetNet32) to define your Informix data sources.
Before using the ADO.NET driver to connect, you must also run a stored procedure called cdotnet.sql. It is located in the /etc directory of your SDK installation. This is similar to the process of setting up the OLEDB driver, though the procedure is much shorter. This process is documented in the User's Guide. (See the Resources section below.)
After installation, check your driver and make sure you get a connection. To use the ADO.NET driver in your Visual Studio project, make sure you add a reference to the IBM.Data.Informix.dll found in the /bin directory of your client SDK installation. The proper using statement is using IBM.Data.Informix;. Here is a simple method that demonstrates how to get a connection to the database:
Listing 1. Connecting to an Informix database
public void MakeConnection()
{
    string ConnectionString = "Host=" + HOST + "; " +
        "Service=" + SERVICENUM + "; " +
        "Server=" + SERVER + "; " +
        "Database=" + DATABASE + "; " +
        "User Id=" + USER + "; " +
        "Password=" + PASSWORD + "; ";
    // Can add other DB parameters here like DELIMIDENT, DB_LOCALE, etc.
    // Full list in Client SDK's .Net Provider Reference Guide p 3:13

    IfxConnection conn = new IfxConnection();
    conn.ConnectionString = ConnectionString;
    try
    {
        conn.Open();
        Console.WriteLine("Made connection!");
        Console.ReadLine();
    }
    catch (IfxException ex)
    {
        Console.WriteLine("Problem with connection attempt: " + ex.Message);
    }
}
The sample code includes a BasicConnection class for this functionality. As you can see, the ConnectionString is just a semicolon-separated list of parameters for the connection. The Open() method opens the connection to the database and throws an IfxException if the connection fails. The IfxException.Message property usually gives a reasonable amount of detail on the reason for the failure.
Basic commands
Once you have a connection, you can begin to execute commands against the database. To do this, use an IfxCommand object. The constructor for an IfxCommand takes a string (the SQL command text) and an IfxConnection. The IfxCommand object has a series of Execute methods to execute the command against the database. To clean up, use the IfxConnection.Close() method. Here is an example of executing a simple command that doesn't return a set of results. It could be an insert, update, or delete.
Listing 2. Executing an insert, update or delete
IfxCommand cmd;
cmd = new IfxCommand("insert into test values (1, 2, 'ABC')", conn);
cmd.CommandTimeout = 200; // seconds to wait for command to finish
try
{
    int rows = cmd.ExecuteNonQuery();
}
catch (IfxException ex)
{
    Console.WriteLine("Error " + ex.Message);
}
ExecuteNonQuery returns, as an integer, the number of rows affected by the command. You can also build parameterized statements and queries, which we will also examine below. Notice the CommandTimeout property of the IfxCommand. The default timeout is 30 seconds, although it is undocumented. Unless you change this property, a command that runs for over 30 seconds will time out and throw an exception.
The next example is executing a select statement and working with the set of results returned by the database server. For a fast, forward-only cursor through the results, use an IfxDataReader returned by the ExecuteReader method. However, you can only have one open IfxDataReader per IfxConnection. (This is an ADO.NET limitation, not a specific limitation of the Informix ADO.NET driver.)
Listing 3. Iterating through an IfxDataReader
IfxCommand cmd = new IfxCommand("select * from test", bconn.conn);
try
{
    IfxDataReader dr = cmd.ExecuteReader();
    while (dr.Read())
    {
        int a = dr.GetInt32(0);
        int b = Convert.ToInt32(dr["b"]);
        string c = (String)dr[2];
    }
    dr.Close();
}
catch (IfxException ex)
{
    Console.WriteLine("Error " + ex.Message);
}
Each column is retrieved as a generic Object type. As the code demonstrates, there are several ways to convert the column Objects into the correct datatypes. You can use the GetXxx methods of the IfxDataReader; there are methods for almost every datatype, and they take a column number as a parameter. You can use the indexers of the IfxDataReader to access the columns by their names. The .NET framework Convert functions will convert these Objects into the proper types, if possible. Finally, you can index into the columns by column number and cast the results directly (for some types).
This next example shows how to call a stored procedure that needs a parameter value.
Listing 4. Executing a stored procedure with a parameter
IfxCommand cmd = new IfxCommand("test_proc", conn);
cmd.CommandType = CommandType.StoredProcedure; // from System.Data
cmd.Parameters.Add("in_parameter", 2); // many ways to create these
try
{
    cmd.ExecuteScalar();
}
catch (IfxException ifxe)
{
    Console.WriteLine("Error " + ifxe.Message);
}
For this IfxCommand, you must set the CommandType to the StoredProcedure value from the CommandType enum in System.Data. To create the parameter, you can use the IfxCommand's Parameters.Add method. IfxCommand.Parameters is a collection, so you can add as many parameters as you need. You can create the parameters with any of the IfxParameter() constructors, or you can shortcut their creation as above. Note, however, that each IfxParameter is associated with a specific IfxCommand. You cannot create IfxParameters and then use them in multiple IfxCommand objects. The ExecuteScalar() method returns only 1 row. This example does not return anything back from the stored procedure.
To build a parameterized SQL statement that doesn't execute a stored procedure, insert question marks as place-holders in the CommandText. For example:
Listing 5. Parameterized query
IfxCommand insCmd = new IfxCommand("insert into clientstest " +
    "(clientcode, clientacctname, primarycontact, primaddrcode, " +
    "initialamt, createdate) values (0, ?, ?, ?, ?, TODAY)", conn);
Add IfxParameter objects to the IfxCommand's Parameters collection in the exact order as they occur in the command text. This technique is further demonstrated in the final strongly typed DataSets in the extended example below.
Strongly typed DataSets
ADO.NET includes a specialized database object called a DataSet. It is an in-memory database. The DataSet consists of one or more DataTable objects (made up of DataRow objects). The DataTables can be related by primary and foreign keys, and constraints can be placed on the data. The DataSet is also disconnected from the actual data store. It gets filled through one or more DataAdapters (one per DataTable), and then keeps that data and any changes in memory. At a later point, the DataAdapters can submit changes back to the data store.
The basic DataSet is not strongly typed. It does not know what the real columns and rows of the database are. Columns can be indexed by name: row["itemcode"]. But the compiler does not check these column names. Any mistake in the column name, for example, is not apparent until runtime. Also, the developer has no help remembering if the column is "itemcode" or "itemid."
A strongly typed DataSet addresses these problems. Instead of a generic DataRow, it would have, for example, an OrderDetailDataRow as part of an OrderDetailDataTable. And you could refer to columns as actual properties of the OrderDetailDataRow (row.ItemCode). This way you also get the productivity benefits of Intellisense. The table and column names also become available in the property editors to enhance designer-level tools like data binding.
So how can you build this productivity-enhancing, strongly typed DataSet? Will it take so much time or effort to build that you don't experience any net productivity gain? The Informix ADO.NET driver may not be as sophisticated as some other drivers. The Microsoft SQLDataAdapter (for SQL Server) includes a Generate DataSet wizard. The IfxDataAdapter doesn't have this wizard yet. However, you can build some tools to help you. You can also use some tools already built into the .NET framework. In the end, you will have a descendant of a strongly typed DataSet that encapsulates all of the database interaction.
The .NET framework includes an XSD compiler (xsd.exe) that can generate a strongly typed DataSet from a specially formatted .xsd file. But who wants to type in a bunch of XML? Fortunately, the DataSet object includes a method called WriteXmlSchema(). This method allows you to use a non-typed DataSet to create the XSD file for a strongly typed DataSet. Let's look at how this works. Here is a simple table:
Listing 6. Clientstest table
CREATE TABLE clientstest (
    clientcode     SERIAL NOT NULL,
    clientacctname CHAR(60) NOT NULL,
    primarycontact CHAR(30) NOT NULL,
    primaddrcode   CHAR(10),
    createdate     DATE,
    initialamt     DECIMAL(18,0)
);
Here's the single-table DataSet for that table:
Listing 7. Defining the DataSet
DS = new DataSet("dsClients");

//main table definition
DataTable mainTable = new DataTable("clients");
DataColumnCollection cols = mainTable.Columns;
DataColumn column = cols.Add("clientcode", typeof(Int32));
column.AllowDBNull = false;
cols.Add("clientacctname", typeof(String)).MaxLength = 60;
cols.Add("primarycontact", typeof(String)).MaxLength = 30;
cols.Add("primaddrcode", typeof(String)).MaxLength = 10;
cols.Add("initialamt", typeof(Decimal));
cols.Add("createdate", typeof(System.DateTime));

//primary key
mainTable.PrimaryKey = new DataColumn[] {cols["clientcode"]};

//add table to DataSet
DS.Tables.Add(mainTable);

//Write schema to file
DS.WriteXmlSchema("dsClients.xsd");
In this definition, you set the types and the constraints on the data. You also set the names for the columns. They do not have to match the database's column names. Look in the code files in the Download section of this article to see the resulting dsClients.xsd file.
To make it easier to generate the XSD file (and re-generate it after changes), build a framework for these DataSet Builders. (All the code required for this is included below.) Since you want the framework to determine which Builders to build, use reflection to dynamically determine whether something is a DataSetBuilder. Start by writing the IBuildable interface. It defines the properties and methods that our DataSetBuilders must implement.
Listing 8. IBuildable interface
public interface IBuildable
{
    string FileName {get; set;}
    string FilePath {get; set;}
    Logger Log {get; set;}
    DataSet DS {get; set;}
    void BuildXSD();
    void CompileXSD(string outputDirectory);
}
The code from Listing 7 (the definition of the DataSet) is basically the BuildXSD() method. Create an abstract parent class called DataSetBuilder. BuildXSD is the abstract method that must be overridden in each concrete descendant. The CompileXSD method is the same for each DataSetBuilder, so it resides in the DataSetBuilder. Here is the CompileXSD() method from that abstract class:
Listing 9. CompileXSD() method
log.LogStatus("Compiling "+filename);
ProcessStartInfo processinfo = new ProcessStartInfo();
processinfo.FileName = @"C:\Program Files\Microsoft Visual Studio .NET 2003\"
    +@"SDK\v1.1\Bin\xsd.exe";
processinfo.Arguments = FilePath+FileName+" /dataset /namespace:"
    +ds.Namespace+" /out:"+outputDirectory;
processinfo.UseShellExecute = false;
processinfo.RedirectStandardInput = true;
processinfo.RedirectStandardOutput = true;
processinfo.RedirectStandardError = true;
processinfo.CreateNoWindow = true; //doesn't work
processinfo.WindowStyle = ProcessWindowStyle.Hidden; //doesn't work
Process compiler = Process.Start(processinfo);
log.LogStatus("Output:\n"+compiler.StandardOutput.ReadToEnd());
log.LogStatus("Error:\n"+compiler.StandardError.ReadToEnd());
compiler.WaitForExit();
This method uses the Process and ProcessStartInfo classes from System.Diagnostics in the .NET framework to execute the XSD compiler. This example code uses the free and relatively simple .NET Logging Framework from the ObjectGuy (see Resources).
Because the DataSetBuilder classes all implement the IBuildable interface, we can use reflection to look through the assembly and build all the DataSet classes from the DataSetBuilders. This is what the DataLibraryBuilder class does. For example, the ClientsBuilder gets compiled to a dsClients class.
The generated dsClients class is the strongly typed DataSet. From the 42-line ClientsBuilder, we now have almost 500 lines of strongly typed code. Look at this generated code. It contains a clientsDataTable, which has properties for each column. There are also methods like NewclientsRow(), IsinitialamtNull(), and FindByclientcode(int clientcode). These will be quite useful when using this class.
To encapsulate the Informix database access into the strongly typed DataSet, inherit from dsClients. This is the Clients class that you can use in your application. Inheritance provides some protection from changes in the schema of the DataSet: if the schema changes, you can just regenerate the dsClients class, and the Clients class remains unchanged (though you may need to make changes there, too). In the Clients class, add an IfxDataAdapter for each DataTable (just one in this case). For each IfxDataAdapter, define the SQL text and parameters for the select, insert, update, and delete commands. You can then override the Fill and Update methods to initialize, fill, and update all the IfxDataAdapters. Look at the InsertCommand as an example:
Listing 10. InsertCommand for the IfxDataAdapter
IfxCommand insCmd = new IfxCommand("insert into clientstest "
    +"(clientcode, clientacctname, primarycontact, primaddrcode, "
    +"initialamt,createdate) values (0,?,?,?,?,TODAY)", conn);
insCmd.Parameters.Add("clientacctname", IfxType.Char, 60, "clientacctname");
insCmd.Parameters.Add("primarycontact", IfxType.Char, 30, "primarycontact");
insCmd.Parameters.Add("primaddrcode", IfxType.Char, 10, "primaddrcode");
insCmd.Parameters.Add("initialamt", IfxType.Decimal, 16, "initialamt");
daclients.InsertCommand = insCmd;
The IfxDataAdapter has the following command properties: SelectCommand, InsertCommand, DeleteCommand, and UpdateCommand. When the IfxDataAdapter executes the Fill() method, it uses the SelectCommand to query the database. When the Update() method is called, the IfxDataAdapter uses a combination of the Insert, Update, and Delete commands to conform the database to the in-memory version of the table. The IfxDataAdapter decides which rows and columns need to be changed; the developer does not have to write code to track the changes to the data.
Notice that zero is inserted into the serial value, as is usual for the Informix serial type. But how can you get the database-generated value back into your disconnected DataSet? You have to hook into the RowUpdated event of the dsclients IfxDataAdapter. In that event handler, the code looks for any inserts. For an insert, it executes a dbinfo command to retrieve the just-created serial value and puts that value into the clientcode column for that DataRow. Here's the event handler code for this Informix-specific trick:
Listing 11. Retrieving the generated serial value
private void daclients_RowUpdated(object sender, IfxRowUpdatedEventArgs e)
{
    //For INSERTs only, gets the serial id and
    //inserts into the clientcode
    if (e.StatementType == StatementType.Insert)
    {
        IfxCommand getSerial = new IfxCommand(
            "select dbinfo('sqlca.sqlerrd1') from systables "
            +"where tabid = 1",
            daclients.InsertCommand.Connection);
        e.Row["clientcode"] = (int)getSerial.ExecuteScalar();
    }
}
Results
You now have a fully encapsulated business object that handles its own database interaction. What can you do now? Since DataSet derives from System.ComponentModel.MarshalByValueComponent, you can add your strongly typed DataSet objects (for example, Clients) onto your Visual Studio Toolbox. Then you can drag one out onto any WinForm or WebForm design view. Set the property for the Connection. In your code, execute the object's Fill() method (perhaps in a FormLoad event). That fills the object with all the data for each DataTable. In the Designer view, you can also databind by setting the DataSource (and perhaps the DataMember) property of the visual control or grid.
The LibraryConsoleTest program in the sample solution demonstrates how the strongly typed DataSet works. You can now write something like this:

Console.WriteLine(client.clientcode + " " + client.clientacctname + " "
    + client.createdate);

instead of this:

Console.WriteLine(ds.Tables["clients"].Rows[0]["clientcode"] + " "
    + ds.Tables["clients"].Rows[0]["clientacctname"] + " "
    + ds.Tables["clients"].Rows[0]["createdate"]);

The LibraryConsoleTest adds a new client and retrieves the generated serial number. It deletes a client after using the FindByclientcode() method to select the proper row. It also updates one column in a particular row. Finally, it loops through the clients and prints the data to the Console. The sample solution also includes a quick Windows Forms application (WinFormsApp) that demonstrates databinding using a DataGrid.
Data access is a constant need for most business applications, yet the models and methods for doing data access are continually changing. The examples in this article should help you get started if you have chosen ADO.NET and Informix Dynamic Server as your tools.
Download
Resources
Learn
- Visit the developerWorks IDS corner for articles on Informix Dynamic Server.
- Learn the details of the Informix Client Software Developer's Kit 2.90 in the documentation library.
Get products and technologies
- Download the Informix Client SDK.
- Get the Informix Dynamic Server free trial download.
- Download the free .NET Logging Framework from the ObjectGuy. Plenty of power without the complexity of other frameworks.
Discuss
- Participate in the discussion forum.
- Participate in the Informix Dynamic Server discussion forum.
http://www.ibm.com/developerworks/data/library/techarticle/dm-0510durity/
an Open Source Utility that Automatically Create Data Transfer Objects based on LINQ to SQL Data Classes
A few weeks ago I posted in my Hebrew blog about using Data Transfer Objects to work with LINQ to SQL and the ADO.NET Entity Framework (currently, neither of them supports working with POCO).
One of the comments I got was that using DTOs takes twice the time of not using them. That's because you have to write a DTO class for each entity, and you also have to write a method in the DTO class that returns the DAL object (the object created by the ORM and mapped to a table in the DB) from the DTO, and vice versa.
Although I don't think it's too much work, and I think the advantages are significant enough to make the effort worthwhile, I wrote during my job a small code generator that creates DTOs for each entity in the LINQ to SQL Data Classes that exist in the application.
LINQ2SQLDTOCreator
The application I wrote, called LINQ2SQLDTOCreator, gets a path to an assembly (say, the application DLL) and a path to a folder where the application will save the generated files. The application looks in the assembly for classes that have the attribute System.Data.Linq.Mapping.TableAttribute. Classes with this attribute are the classes that are part of the LINQ to SQL model.
In the classes with this attribute, the application looks for properties with the System.Data.Linq.Mapping.ColumnAttribute attribute. These are the properties that are mapped to specific columns in a table.
Finally, for each object in the LINQ to SQL Data Classes, the application creates a new class (.cs file) in the path the user gave as a parameter to the application.
The generated class includes definitions for the properties that are in the original object created by LINQ to SQL.
In addition to the properties, the generated classes contain two important methods: a static method that gets an instance of the object created by LINQ to SQL (the data class) and returns an instance of the DTO, and a method that returns an instance of the LINQ to SQL object with all of the values stored in the DTO.
That means that if you put the generated DTOs in a separate project, you have to make sure that this project has a reference to the assembly that includes your LINQ to SQL Data Classes (so the methods that work with these data classes will compile).
For example, here is a DTO class created by the Application:
using System;
/*
------------------------DTO OBJECT-----------------------------
--------------Generated By LINQ2SQLDTOCreator------------------
----------------Developed by Shahar Gvirtz---------------------
---------------------------------------------------------------
*/

namespace DTO
{
    public class VideoDTO
    {
        public static VideoDTO GetDTOFromDALObject(DAL.Video src)
        {
            VideoDTO obj = new VideoDTO();
            obj.ID = src.ID;
            obj.Title = src.Title;
            obj.DateAdded = src.DateAdded;
            obj.Code = src.Code;
            obj.ForumLink = src.ForumLink;
            obj.CategoryID = src.CategoryID;
            obj.ThumbnailURL = src.ThumbnailURL;
            obj.Description = src.Description;

            return obj;
        }

        public DAL.Video GetDALObject()
        {
            DAL.Video obj = new DAL.Video();
            obj.ID = ID;
            obj.Title = Title;
            obj.DateAdded = DateAdded;
            obj.Code = Code;
            obj.ForumLink = ForumLink;
            obj.CategoryID = CategoryID;
            obj.ThumbnailURL = ThumbnailURL;
            obj.Description = Description;

            return obj;
        }

        public Int32 ID { get; set; }
        public String Title { get; set; }
        public DateTime DateAdded { get; set; }
        public String Code { get; set; }
        public String ForumLink { get; set; }
        public Int32 CategoryID { get; set; }
        public String ThumbnailURL { get; set; }
        public String Description { get; set; }
    }
}
How To Use The DTO Generator?
- Download the application
- The file you downloaded contains the full source. Inside the project directory, in \bin\Release, you'll find the executable files. Extract all of them to another directory.
- This is a console application. It gets two command-line parameters separated by a blank space. The first parameter is the path of the assembly that contains your LINQ to SQL Data Classes. The second parameter is the path to a directory where all of the .cs files generated by the application will be stored. For example:
c:\myApp\LINQ2SQLDTOCreator.exe "c:\Dev\App\bin\debug\logic.dll" "c:\outputfromapp"
(make sure the parameters are separated by a single blank space)
- Now the application will run, and in the folder you entered as the second parameter you will find the DTO classes. You can migrate these files into your project, compile, and use them.
- Remember to re-run the application every time you change the LINQ to SQL model in order to get updated DTOs.
Known Problems
- The application doesn’t support any relationships between objects. If there are relationships in the database, the DTO will contain a property for the FK but no object association will be created.
- The application can't work with .NET 4.0 assemblies for a simple reason: the project targets an earlier framework. If you want to work with .NET 4.0 assemblies, you have to modify the project settings and set the Target Framework to .NET 4.0, rebuild, and enjoy.
- The application doesn’t put the created files automatically in your project.
If you get any other problems, let me know and I'll try to fix them.
Summary
LINQ2SQLDTOCreator is a really simple application that uses reflection to create Data Transfer Objects based on a given assembly that includes a LINQ to SQL model. The application can be really useful if you use LINQ to SQL and want to create DTOs without writing them at all :-)
You can download the application, for free, here.
Shahar.
http://weblogs.asp.net/shahar/an-open-source-utility-that-automatically-create-data-transfer-objects-based-on-linq-to-sql-data-classes
| > | +#ifdef mblen
| > | + inline int (mblen)(const char *p, size_t l) { return mblen(p, l); }
| > | +#undef mblen
| > | +#else
| > | extern "C" int mblen(const char*, size_t);
| > | +#endif
|
| > "mblen" needs to have C linkage in all cases.
|
| Is it std::mblen that must have C linkage? I ask because ::mblen
| is declared and defined in the C library, and it does have C linkage.

We want all C functions, whether they are shadowed (i.e. brought into
scope through <cxxx> or <xxx.h> headers), to have C linkage -- this
ensures that those names ultimately refer to the same entities --
without exceptions, for the reasons I exposed in another message.

| Isn't this inline function enough?

No.

-- Gaby
CodeSourcery, LLC
http://gcc.gnu.org/ml/libstdc++/2000-12/msg00293.html
Practice React/TypeScript By Building A Chrome Extension
Milecia McG
Chrome is hands down one of the best browsers to work with. The debugging tools are great and you can add a lot of other functionality through extensions. These little programs other developers write and maintain can really make a difference in how you get work done. Although, there is a chance you won't find an extension that does exactly what you need it to.
The good news is that you can make your own! You don't even need to learn anything special. If you know how to write TypeScript, you can make your own Chrome extension. You'll learn exactly how to do that in this short tutorial. We'll cover some background, build the extension, and learn how to use it in Chrome.
Why you would make a custom extension
While you were testing your code, you might have thought about ways you could make it easier or ways you could automate it in the browser. A custom extension would let you do that. Making extensions is more about solving specific problems you have. The company you work for could implement a process for testing that you could write a quick extension for and give to the whole team.
Or you could write a few extensions just to practice your TypeScript skills in a meaningful way. It's important to not get caught in the hype of making the "best" extension or the most popular extension. Your custom code is for you and the problems you are trying to fix. Think of it as making your own little shortcut.
Writing the code for an extension
On a code level, a Chrome extension is just HTML, CSS, and JavaScript that lets you add functionality to the browser by using the APIs Chrome exposes. We're going to write our demo extension using React. The extension we're making won't do anything spectacular, but it will show you the basics of how you can start making extensions.
The first thing we'll do is make a new React project using create-react-app. In case you don't have create-react-app, install it in your directory first using this command.
npm install create-react-app
Now that you have a fresh app, let's edit one of the files to make this a Chrome extension. Go into the public folder and find the manifest.json file. It will already have some code in there for you, but here's how we will make it look.
{
  "manifest_version": 2,
  "short_name": "The Ultimate Help Tool",
  "name": "The Ultimate Help Tool",
  "description": "When you get stuck on a coding problem and you aren't sure what to do next, push this button",
  "version": "0.1",
  "browser_action": {
    "default_popup": "index.html"
  },
  "permissions": [
    "activeTab"
  ],
  "content_security_policy": "script-src 'self' 'sha256-5As******'; object-src 'self'",
  "author": "Milecia McG"
}
One thing to note is that your manifest_version should always be 2 because Google said so. Also, the content_security_policy has to be set similar to this so that you'll be able to use your extension locally. We use the browser_action property to show that little icon in the upper right corner and to show the body of the extension when you click it. The permissions value is set to activeTab so that we can do our browser_action in the current tab. Next we will write the code for the App.js file. It's going to be really simple and it'll just have a link and title.
import React, { Component } from 'react';
import './App.css';

class App extends Component {
  render() {
    return (
      <div className="App">
        <h1>Save Me Now</h1>
        <a href="" id="checkPage" target="_blank" rel="noopener noreferrer">
          Check this page now!
        </a>
      </div>
    );
  }
}

export default App;
Now that you have this little demo code finished, go ahead and build it with this command.
npm run build
Using it in Chrome
Making an extension isn't too bad, right? Now you can test it in Chrome. Go to the browser and type this in a new tab.
chrome://extensions
In the upper right corner, you'll see the Developer mode option. Go ahead and turn that on. You should see this.
Upload your build folder by clicking Load unpacked. Now you'll see your custom extension! It'll also show up as a puzzle piece in the top right corner of the Chrome browser.
Giving it to others
After you've tested your shiny new extension, you can share it with others easily. If you don't want to be bothered with the Chrome web store, you can always make a GitHub repo that people can clone from. Although, if you don't want people to have access to the source code, uploading an extension to the web store is a good option. It's a bit of a process, but they have some good documentation on how to get through the publishing process.
Making Chrome extensions is another way you can practice your JavaScript and learn more about the frameworks. Or you can write some plain old JavaScript, HTML, and CSS. Plus, you could make something useful that everyone loves. Have you ever made or published an extension? Or have you made an extension-like thing for another browser? I know Firefox has their add-ons, but I haven't made one.
Hey! You should follow me on Twitter because reasons:
It doesn't work for me, all I see is a tiny white square... What am I doing wrong!
Same problem!
Ope! Sorry it took a while to get back to y'all. But after you run the build, did you use the build folder when you click Load unpacked? It should look like this
Exactly. In my case, I used the build folder.
Hmmm... 🤔 If you went through all the steps and copied the code line for line, I'm not sure what the problem is.
Wait! Did you run create-react-app after you installed it? I know it sounds kind of simple, but I'm not sure what else it could be.
You can try to compare your code to mine here: github.com/flippedcoder/little-dem...
But where is the typescript?
The example code is small and there is overlap with ES6 syntax, but the class structure is a part of TypeScript.
https://dev.to/flippedcoding/practice-react-typescript-by-building-a-chrome-extension-1482
Opened 7 years ago
Closed 6 years ago
#8742 closed (invalid)
Windows support of FastCGI feature
Description
Django FastCGI cannot be used on Windows, and possibly other non-Unix environments.
Currently the FastCGI feature of Django (manage.py runfcgi) relies on the Python library 'flup', which relies on Unix-only socket functions (socket.socketpair(), socket.fromfd(), etc.) making it impossible to use the FastCGI feature of Django on non-Unix envs like Windows, and hence, making it impossible to serve Django with non-Apache (non-mod_python) httpds like lighttpd on non-Unix envs.
Possible solutions: The function socketpair() is unofficially implemented on Windows by a recipe. () Committing this patch to the flup project would not be hard work. However, the function fromfd() has nothing like that. There is a patch enabling its Windows use (), but the patch is not yet applied on the Windows release, even on Python 2.6b2. Rewriting the FastCGI feature with python-fastcgi () could be more work than flup, but then the feature could support Windows.
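The socketpair() emulation that such a recipe provides can be sketched roughly like this: two connected sockets are produced with a short-lived loopback TCP listener, which works on Windows as well. This is a sketch in the spirit of the recipe mentioned above, not its exact code, and the function name is invented here.

```python
import socket

def socketpair_compat():
    # Emulate socket.socketpair() with a loopback TCP connection,
    # for platforms (e.g. older Windows Pythons) that lack it.
    lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        lsock.bind(("127.0.0.1", 0))   # let the OS pick a free port
        lsock.listen(1)
        csock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        csock.connect(lsock.getsockname())
        ssock, _addr = lsock.accept()
    finally:
        lsock.close()                  # the listener is no longer needed
    return ssock, csock
```

The two returned sockets behave like the ends of a socketpair: bytes written to one can be read from the other.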
Unix environments, especially GNU/Linux, are of course better than Windows for use as a web server, but there are some situations where Windows and FastCGI have to be used. It would be good to have Django FastCGI available on Windows.
Change History (6)
comment:1 follow-up: ↓ 2 Changed 7 years ago by Daniel Pope <dan@…>
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
comment:2 in reply to: ↑ 1 Changed 7 years ago by anonymous
No, it doesn't work, resulting an AttributeError: 'module' object has no attribute 'fromfd'.
comment:3 Changed 7 years ago by anonymous
i mean, it 'still' doesn't work, even with method=threaded.
comment:4 Changed 7 years ago by Daniel Pope <dan@…>
The other thing I can suggest is to try AJP or SCGI. A quick grep shows no instances of fromfd or socketpair in the AJP server code.
comment:5 Changed 6 years ago by snaury
I can confirm that flup works on Windows and Apache, just not the way most people expect (due to principal differences between files and sockets on Windows). To make it work, you need to start your server like this in your dispatch.fcgi:
from django.core.servers.fastcgi import runfastcgi runfastcgi(protocol="fcgi", method="threaded", daemonize="false", host="127.0.0.1", port=port)
Then in your httpd.conf you will need to make apache aware of your server:
FastCgiExternalServer <path-to-your-wwwroot>/dispatch.fcgi -host 127.0.0.1:<port>
Of course with this setup apache won't start your fastcgi server for you (and frankly, with the way python (or flup, I'm not sure) handles signals on Windows you DON'T want apache to start python servers for you, as it won't be able to kill them afterwards), you'll need to do it yourself separately. I, for example, run my fastcgi server as a Windows service (using win32service), which is even better since with this setup apache and fastcgi servers can run as different users.
comment:6 Changed 6 years ago by jacob
- Resolution set to invalid
- Status changed from new to closed
flup (and by extension Django) does not officially support FastCGI on Windows.
However, according to the FAQ, threaded servers are reported to work under Windows; could you determine whether manage.py runfcgi method=threaded works?
We could add a note in the documentation drawing attention to this fact.
https://code.djangoproject.com/ticket/8742
NAME
alq, alq_open, alq_write, alq_flush, alq_close, alq_get, alq_post - Asynchronous Logging Queues
SYNOPSIS
#include <sys/alq.h>

int
alq_open(struct alq **app, const char *file, struct ucred *cred, int cmode,
    int size, int count);

int
alq_write(struct alq *alq, void *data, int waitok);

void
alq_flush(struct alq *alq);

void
alq_close(struct alq *alq);

struct ale *
alq_get(struct alq *alq, int waitok);

void
alq_post(struct alq *alq, struct ale *ale);
DESCRIPTION
The alq facility provides an asynchronous fixed kernel thread, which services all log entry requests. An "asynchronous log entry" is defined as struct ale, which has the following members:

struct ale {
        struct ale *ae_next;  /* Next Entry */
        char       *ae_data;  /* Entry buffer */
        int         ae_flags; /* Entry flags */
};

The ae_flags field is for internal use; clients of the alq interface should not modify this field. Behaviour is undefined if this field is modified.

The alq_open() function opens the log file named by file, using the credentials cred and the creation mode cmode. The size of each entry in the queue is determined by size. The count argument determines the number of items to be stored in the asynchronous queue over an approximate period of a disk write operation.

The alq_write() function writes data to the designated queue, alq. In the event that alq_write() could not write the entry immediately, and ALQ_WAITOK is passed to waitok, then alq_write() will be allowed to tsleep(9).

The alq_flush() function is used for flushing alq to the log medium that was passed to alq_open().

The alq_close() function will close the asynchronous logging queue, alq, and flush all pending write requests to the log medium. It will free all resources that were previously allocated.

The alq_get() function returns the next available asynchronous logging entry from the queue, alq. This function leaves the queue in a locked state until a subsequent alq_post() call is made. In the event that alq_get() could not retrieve an entry immediately, it will tsleep(9) if ALQ_WAITOK was passed in waitok.
RETURN VALUES

The alq_open() function returns one of the error codes listed in open(2) if it fails to open file; otherwise it returns 0. The alq_write() function returns EWOULDBLOCK if ALQ_NOWAIT was provided as a value to waitok and either the queue is full or the system is shutting down. The alq_get() function returns NULL if ALQ_NOWAIT was provided as a value to waitok and either the queue is full or the system is shutting down. NOTE: invalid arguments to non-void functions will result in undefined behaviour.
http://manpages.ubuntu.com/manpages/jaunty/man9/alq.9freebsd.html
A final class cannot have any subclasses. An abstract class cannot be instantiated unless it is extended by a subclass.
Some differences are: an abstract class can have code for one or more methods, but an interface cannot; all variables in an interface are public, static, and final, but in an abstract class they are not; and abstract classes are faster than interfaces.
No, a class cannot be defined as both final and abstract; the compiler rejects that combination.
Yes, you can declare a final method in an abstract class. However, that method cannot be abstract itself.
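A minimal sketch of that combination (the class names here are invented for illustration): the class stays abstract, so it must be subclassed, while the final method is inherited exactly as written and can never be overridden.

```java
abstract class Report {
    // final: inherited by every subclass exactly as written;
    // a subclass that tries to override it will not compile.
    public final String header() {
        return "REPORT: " + title();
    }

    // abstract: every concrete subclass must implement this.
    public abstract String title();
}

class SalesReport extends Report {
    @Override
    public String title() {
        return "Sales";
    }
}
```

Calling new SalesReport().header() runs the final method from Report, which in turn calls the subclass's title().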
Yes, you can make an instance of a final class. You can't have an instance of an abstract class.
Abstract classes are meant to be extended until a concrete class is reached. They can have both abstract and non-abstract methods. An abstract class cannot be instantiated. An abstract class can extend a non-abstract class. If at least one abstract method is present in a class, then that class must be declared abstract. The abstract and final modifiers can never be used together.
No. The abstract keyword means that you cannot instantiate the class unless you extend it with a subclass. The final keyword means that you cannot create subclasses of that class. Combining them would lead to an unusable class, so the compiler will not let this happen.
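To see the contradiction concretely (names invented for illustration): the commented-out declaration below is rejected by javac with an "illegal combination of modifiers" error, while each modifier on its own is fine.

```java
// abstract final class Impossible { }  // does not compile:
// abstract promises that a subclass will complete the class,
// while final forbids any subclass from ever existing.

abstract class Base {           // abstract alone: fine, must be extended
    abstract int value();
}

final class Leaf extends Base { // final alone: fine, cannot be extended
    @Override
    int value() {
        return 42;
    }
}
```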
The abstract keyword is used to denote abstract classes and abstract methods in Java. An abstract method is a method which is declared, but not implemented:

public abstract void doStuff();

An abstract class is a class which contains one or more abstract methods*, and which cannot be instantiated directly. All subclasses of an abstract class must implement the abstract methods.

* Note that abstract classes can include no abstract methods, but this rather defeats the purpose of using an abstract class.

The quintessential example of an abstract class is the Shape class:

// Definition of our abstract class
public abstract class Shape {
    // Notice how we can actually declare and implement variables and methods.
    // This is what differentiates between an abstract class and an interface.

    // The location of this shape
    private double x, y;

    // Change our location - all subclasses will have this by default
    public void moveTo(final double newX, final double newY) {
        x = newX;
        y = newY;
    }

    // Declarations of our abstract methods.
    // All classes which extend from Shape must implement these.
    public abstract double getArea();
    public abstract double getPerimeter();
}

// Definition of our concrete example class
public class Rectangle extends Shape {
    // Beyond the x,y location of Shape, Rectangle must have width and height
    private double width, height;

    // Implement abstract methods
    public double getArea() {
        return width * height;
    }

    public double getPerimeter() {
        return width + width + height + height;
    }
}
Differences:
- An abstract class can also contain method definitions, but an interface can contain only declarations.
- All variables in an interface are by default public, static, and final, whereas in an abstract class they are not.
- An interface can be considered a pure abstract class that contains no method implementations, only declarations.

Rules for interfaces:
- All interface methods are implicitly public and abstract. In other words, you do not need to actually type the public or abstract modifiers in the method declaration, but the method is still always public and abstract. (You can use other kinds of modifiers in an abstract class.)
- All variables defined in an interface must be public, static, and final; in other words, interfaces can declare only constants, not instance variables.
- Interface methods must not be static.
- Because interface methods are abstract, they cannot be marked final, strictfp, or native.
- An interface can extend one or more other interfaces.
- An interface cannot extend anything but another interface.
- An interface cannot implement another interface or class.
- An interface must be declared with the keyword interface.

You must remember that all interface methods are public and abstract regardless of what you see in the interface definition.
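Those implicit modifiers can be checked directly in a short sketch (the names here are made up): the constant below compiles without public static final because the interface adds them automatically, and the method is abstract even though the keyword is absent.

```java
interface Limits {
    int MAX = 100;          // implicitly public static final

    boolean within(int v);  // implicitly public abstract
}

class Range implements Limits {
    @Override
    public boolean within(int v) {  // must stay public in the implementation
        return v >= 0 && v <= MAX;
    }
}
```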
The general form of a class declaration is:

[modifiers] class ClassName [extends SuperClassName] [implements InterfaceName {, InterfaceName}] { ... }

where the class modifiers may be: public | abstract | final
The difference is: your motha. Final answer.
They are inversely related. That is: if you declare a method as final, it cannot be overridden in a child class; if you declare a class as final, it cannot be inherited by any other class.
Short answer: no. If the purpose of creating such a beast is to prevent the class from being instantiated, use a private no-arg constructor. abstract = means the class cannot be instantiated except through a sub-class. final = means the class cannot be sub-classed. Seems the only reason to want to do this is to create a "utility" class - only static methods that other classes will use, maybe as a way of collecting common code in one spot. This is a very common practice, and utility classes should be declared final with private constructors.
The difference between intermediate goods and services and final goods and services is that intermediate goods and services are used to make final goods and services.
There is no really big difference between them; InuYasha: The Final Act is basically the closing of the whole anime.
A private variable is one that is accessible only to the current class and cannot be accessed by any other class, including the ones that extend from it. A final variable is one that cannot be modified once it is initialized and assigned a value.
The final keyword can be used to modify a class, method, or variable.When used on a variable, it means that variable cannot be changed once set.When used on a method, it means that no subclasses may override that method.
Because of the following reasons:
• static - If a constructor were static, an object instance could not invoke it to initialize itself (because static members are not linked to an object).
• abstract - An abstract class cannot be instantiated, so an abstract constructor could never run; and if a concrete class's constructor were abstract, the class could not be instantiated either. Either way it makes no sense.
• final - A constructor cannot be final (that's the way Java is designed).
No. An abstract method must be overridden in a subclass, but if the method were final it could not be overridden.
https://www.answers.com/Q/Difference_between_abstract_and_final_class
Autotune.NET
- Posted: Jan 05, 2011 at 2:19 PM
One slightly flat note and you're out.
So what takes care of a bad-pitch day? Autotune—an effect that corrects the pitch of your voice so you'll never again sing out of tune. And now, with the power of modern microprocessors, autotune is possible in real-time, allowing singers to benefit from its almost magical powers during live concerts.
The company most famous for its autotune effect is Antares. Antares Auto-Tune currently retails for $249, and a stripped down version is available for $100. In addition to simply improving the pitch of a dodgy singer, autotune can be used to create unique robotic sounding vocal effects, a technique massively popular in recent years thanks to its use by artists such as T-Pain and the group behind the “Auto-Tune the News” YouTube videos. In 1998, when the effect was first used on Cher's “Believe” single, the producer used such extreme settings that instead of subtly adjusting the pitch, autotune “snapped” instantaneously to the nearest “correct” note.
Here is a nerdy example of what Autotune can do.
An autotune effect has two parts. The first is pitch detection, which calculates the dominant frequency of the incoming signal, and is the reason autotune is normally used on monophonic audio sources (i.e. playing one note at a time, not whole chords). So, if your guitar is out of tune, you're out of luck (Celemony's Melodyne product, however, features some incredible capabilities for pitch-shifting polyphonic audio).
The second stage is pitch shifting, or “correcting” a given note. However, the bigger the pitch shift required, the more artificial the end result will be, and it is worth noting that absolutely perfect pitch is not always desirable. Sometimes the blended notes resulting from vibrato, for example, are an important part of the performance, and eliminating them would be detrimental.
To get started, I searched to see if there were some pre-existing open source autotune implementations, which brought me to awesomebox, a project created by Ravi Parikh and Keegan Poppen while they were students at Stanford University. They kindly gave me permission to make use of their code, which uses an auto-correlator for pitch detection and an open source pitch-shifting algorithm from audio DSP expert Stephan M. Bernsee.
Although porting from C/C++ to C# is not exactly fun, there is enough similarity in the syntax that it is possible to complete without too many changes. You need to remember, however, that a long in C is an int in C# (i.e. 32 bits long not 64).
Additionally, the C# compiler is fussier than C/C++ when it comes to casting between floats, doubles, and ints. Putting the “f” suffix on numeric literals sorts out most of these compiler errors.
Pointers can be a pain. I tend to replace them with integer variables used to index into an array. You can of course use unsafe code, but that limits your options if you plan to port to Silverlight or Windows Phone 7 at a later date, neither of which allow unsafe code or interop into unmanaged code. The necessary mathematical functions are available in the System.Math class.
To see an example, compare this pitch shifting C++ source file with my C# conversion of it.
Interop wrappers for the NAudio Windows WaveIn APIs capture the audio. Here is the code used to start recording:
c#:
waveIn = new WaveIn();
waveIn.DeviceNumber = recordingDevice;
waveIn.DataAvailable += waveIn_DataAvailable;
waveIn.RecordingStopped += new EventHandler(waveIn_RecordingStopped);
waveIn.WaveFormat = recordingFormat;
waveIn.StartRecording();
VB.Net:
waveIn = New WaveIn
waveIn.DeviceNumber = recordingDevice
AddHandler waveIn.DataAvailable, AddressOf waveIn_DataAvailable
AddHandler waveIn.RecordingStopped, AddressOf waveIn_RecordingStopped
waveIn.WaveFormat = _recordingFormat
waveIn.StartRecording()
The steps are:
- Create a WaveIn and select the recording device.
- Subscribe to the DataAvailable and RecordingStopped events.
- Set the desired recording format.
- Call StartRecording to begin capturing audio.
Whenever the soundcard reports a new buffer of recorded audio, we receive it in the DataAvailable event handler:
c#:
void waveIn_DataAvailable(object sender, WaveInEventArgs e)
{
    byte[] buffer = e.Buffer;
    int bytesRecorded = e.BytesRecorded;
    WriteToFile(buffer, bytesRecorded);
    for (int index = 0; index < e.BytesRecorded; index += 2)
    {
        short sample = (short)((buffer[index + 1] << 8) | buffer[index + 0]);
        float sample32 = sample / 32768f;
        sampleAggregator.Add(sample32);
    }
}
VB.Net
Private Sub waveIn_DataAvailable(ByVal sender As Object, ByVal e As WaveInEventArgs)
    Dim buffer() = e.Buffer
    Dim bytesRecorded = e.BytesRecorded
    WriteToFile(buffer, bytesRecorded)
    For index = 0 To e.BytesRecorded - 1 Step 2
        Dim sample = CShort(buffer(index + 1)) << 8 Or CShort(buffer(index + 0))
        Dim sample32 = sample / 32768.0F
        _sampleAggregator.Add(sample32)
    Next index
End Sub
The WaveInEventArgs contains the number of bytes recorded (e.BytesRecorded) and a pointer to the buffer containing those bytes (e.Buffer). The handler does two things with the recorded data. First, it calls WriteToFile, which uses the WaveFileWriter class from NAudio to write the data to disk:
c#:
// before we start recording, set up a WaveFileWriter...
writer = new WaveFileWriter(waveFileName, recordingFormat);
// ... every block we receive we write it to the WaveFileWriter:
writer.WriteData(buffer, 0, bytesRecorded);
// ... and when recording stops we must call Dispose to finalize the
// .WAV file properly
writer.Dispose();
VB.Net:
writer = New WaveFileWriter(waveFileName, _recordingFormat)
writer.WriteData(buffer, 0, bytesRecorded)
writer.Dispose()
Once recording has completed, we have a WAV file on which to perform our autotune effect. However, our WAV file consists of 16 bit samples (i.e. System.Int16 aka short). In other words, we have a sequence of byte pairs, each of which represent a number in the range -32768 to 32767. For the digital signal processing we will be performing, it's best to have a sequence of floating point numbers (System.Single or float) in the range -1.0f to 1.0f. This is a common requirement, so NAudio provides a utility class to convert audio from short to float called Wave16ToFloatProvider. Here's the code that takes a WAV file and implements the autotune algorithm on it:
c#:
public static void ApplyAutoTune(string fileToProcess, string tempFile, AutoTuneSettings autotuneSettings)
{
    using (WaveFileReader reader = new WaveFileReader(fileToProcess))
    {
        IWaveProvider stream32 = new Wave16toFloatProvider(reader);
        IWaveProvider streamEffect = new AutoTuneWaveProvider(stream32, autotuneSettings);
        IWaveProvider stream16 = new WaveFloatTo16Provider(streamEffect);
        using (WaveFileWriter converted = new WaveFileWriter(tempFile, stream16.WaveFormat))
        {
            byte[] buffer = new byte[8192];
            int bytesRead;
            do
            {
                bytesRead = stream16.Read(buffer, 0, buffer.Length);
                converted.WriteData(buffer, 0, bytesRead);
            } while (bytesRead != 0 && converted.Length < reader.Length);
        }
    }
}
VB.Net
Public Shared Sub ApplyAutoTune(ByVal fileToProcess As String, ByVal tempFile As String, ByVal autotuneSettings As AutoTuneSettings)
    Using reader As New WaveFileReader(fileToProcess)
        Dim stream32 As IWaveProvider = New Wave16ToFloatProvider(reader)
        Dim streamEffect As IWaveProvider = New AutoTuneWaveProvider(stream32, autotuneSettings)
        Dim stream16 As IWaveProvider = New WaveFloatTo16Provider(streamEffect)
        Using converted As New WaveFileWriter(tempFile, stream16.WaveFormat)
            Dim buffer(8191) As Byte
            Dim bytesRead As Integer
            Do
                bytesRead = stream16.Read(buffer, 0, buffer.Length)
                converted.WriteData(buffer, 0, bytesRead)
            Loop While bytesRead <> 0 AndAlso converted.Length < reader.Length
        End Using
    End Using
End Sub
Here's how it works:
- A WaveFileReader opens the recorded WAV file.
- Wave16ToFloatProvider converts the 16 bit samples to floating point.
- AutoTuneWaveProvider applies the autotune effect.
- WaveFloatTo16Provider converts the result back to 16 bit samples.
- The processed audio is read in 8KB chunks and written out to a new WAV file.
As we saw in the last code snippet, the AutoTuneWaveProvider is the piece in our audio pipeline that actually performs the autotune effect. It implements the NAudio IWaveProvider interface, which allows it to be used in the pipeline for real-time playback if necessary, even though our example code is not doing this (see the section on performance later). Here's the AutoTuneWaveProvider constructor:
c#:
public AutoTuneWaveProvider(IWaveProvider source, AutoTuneSettings autoTuneSettings)
{
    this.autoTuneSettings = autoTuneSettings;
    if (source.WaveFormat.SampleRate != 44100)
        throw new ArgumentException("AutoTune only works at 44.1kHz");
    if (source.WaveFormat.Encoding != WaveFormatEncoding.IeeeFloat)
        throw new ArgumentException("AutoTune only works on IEEE floating point audio data");
    if (source.WaveFormat.Channels != 1)
        throw new ArgumentException("AutoTune only works on mono input sources");
    this.source = source;
    this.pitchDetector = new AutoCorrelator(source.WaveFormat.SampleRate);
    this.pitchShifter = new SmbPitchShifter(Settings);
    this.waveBuffer = new WaveBuffer(8192);
}
VB.Net
Public Sub New(ByVal source As IWaveProvider, ByVal autoTuneSettings As AutoTuneSettings)
    Me.autoTuneSettings = autoTuneSettings
    If source.WaveFormat.SampleRate <> 44100 Then
        Throw New ArgumentException("AutoTune only works at 44.1kHz")
    End If
    If source.WaveFormat.Encoding <> WaveFormatEncoding.IeeeFloat Then
        Throw New ArgumentException("AutoTune only works on IEEE floating point audio data")
    End If
    If source.WaveFormat.Channels <> 1 Then
        Throw New ArgumentException("AutoTune only works on mono input sources")
    End If
    Me.source = source
    Me.pitchDetector = New AutoCorrelator(source.WaveFormat.SampleRate)
    ' alternative pitch detector:
    ' Me.pitchDetector = New FftPitchDetector(source.WaveFormat.SampleRate)
    Me.pitchShifter = New SmbPitchShifter(Settings, source.WaveFormat.SampleRate)
    Me.waveBuffer = New WaveBuffer(8192)
End Sub
Some points to notice:
- The constructor validates its input: the source must be mono, 44.1kHz, IEEE floating point audio.
- It creates the pitch detector (an AutoCorrelator by default) and the pitch shifter that will do the actual processing, along with a working buffer.
The key method on any implementation of IWaveProvider is its Read method. This is where the audio consumer, usually the sound card or a WaveFileWriter, asks for data. The data must be supplied as a byte array, and if at all possible you should return exactly the number of bytes you were asked for (if you can't, an extra layer of buffering is usually required, or audio playback will be choppy). Here's our implementation of the Read method:
c#:
public int Read(byte[] buffer, int offset, int count)
{
    if (waveBuffer == null || waveBuffer.MaxSize < count)
    {
        waveBuffer = new WaveBuffer(count);
    }
    int bytesRead = source.Read(waveBuffer, 0, count);
    // the last bit sometimes needs to be rounded up:
    if (bytesRead > 0) bytesRead = count;
    int frames = bytesRead / sizeof(float);
    float pitch = pitchDetector.DetectPitch(waveBuffer.FloatBuffer, frames);
    // an attempt to make it less "warbly" by holding onto the pitch
    // for at least one more buffer
    if (pitch == 0f && release < maxHold)
    {
        pitch = previousPitch;
        release++;
    }
    else
    {
        this.previousPitch = pitch;
        release = 0;
    }
    WaveBuffer outBuffer = new WaveBuffer(buffer);
    pitchShifter.ShiftPitch(waveBuffer.FloatBuffer, pitch, 0.0f, outBuffer.FloatBuffer, frames);
    return frames * 4;
}
VB.Net:
Public Function Read(ByVal buffer() As Byte, ByVal offset As Integer, ByVal count As Integer) As Integer Implements NAudio.Wave.IWaveProvider.Read
    If waveBuffer Is Nothing OrElse waveBuffer.MaxSize < count Then
        waveBuffer = New WaveBuffer(count)
    End If
    Dim bytesRead = source.Read(waveBuffer, 0, count)
    ' the last bit sometimes needs to be rounded up:
    If bytesRead > 0 Then
        bytesRead = count
    End If
    Dim frames = bytesRead \ Len(New Single)
    Dim pitch = pitchDetector.DetectPitch(waveBuffer.FloatBuffer, frames)
    ' an attempt to make it less "warbly" by holding onto the pitch
    ' for at least one more buffer
    If pitch = 0.0F AndAlso release < maxHold Then
        pitch = previousPitch
        release += 1
    Else
        Me.previousPitch = pitch
        release = 0
    End If
    Dim midiNoteNumber = 40
    Dim targetPitch = CSng(8.175 * Math.Pow(1.05946309, midiNoteNumber))
    Dim outBuffer As New WaveBuffer(buffer)
    pitchShifter.ShiftPitch(waveBuffer.FloatBuffer, pitch, targetPitch, outBuffer.FloatBuffer, frames)
    Return frames * 4
End Function
Here's what's going on:
- We read a buffer of floating point samples from the source stream.
- The pitch detector works out the dominant pitch of that buffer.
- If no pitch was detected, we hold onto the previous pitch for a few buffers to reduce "warbling".
- The pitch shifter then writes the pitch-corrected samples into the output buffer.
- Finally, we return the number of bytes supplied (four bytes per floating point sample).
Now that we've seen the big picture of the AutotuneWaveProvider, let's drill down into its two main components—the pitch detector and pitch shifter.
The pitch detection part of autotune is vital to getting good results. If it can't accurately detect the input pitch, it will incorrectly calculate how much the pitch needs to be adjusted. However, high quality pitch detection is quite difficult to get right. First of all, the microphone may well pick up background noise. Second, when you sing a note into a microphone, the signal consists not only of a single frequency, but also “harmonics” at different frequencies.
The good news is that we need to detect only the primary pitch.
The awesomebox algorithm makes use of “autocorrelation” for its pitch detection, but I made a few small tweaks to how the algorithm is implemented in an attempt to improve its accuracy. Autocorrelation has the advantage of being a relatively quick process. The basic principle is that if a signal is periodic, it will “correlate” well with itself when shifted forward (or backwards) one cycle.
Let's say we are looking to see if the note “Middle C” is being sung. The frequency of Middle C is around 262Hz. If we are sampling at 44.1kHz (which is standard for CD quality audio), then we will expect the signal to repeat at approximately every 168 samples (44100/262). Accordingly, for every sample in the buffer, we calculate the sum of squares of that sample and the sample 168 samples previous. We do this for every possible offset that measures a frequency in the range we want to detect (I am using 85Hz to 300Hz, which is adequate for pitch detecting vocals). The offset with the highest score is the most likely frequency.
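To make the idea concrete, here is a minimal sketch of autocorrelation pitch detection, written in Python rather than the article's C# for brevity. The function and variable names are mine, and the synthetic Middle C test tone is illustrative only; real vocal input is far noisier.

```python
import math

SAMPLE_RATE = 44100

def detect_pitch_autocorr(samples, min_freq=85, max_freq=300):
    """Return the dominant frequency in Hz, found via autocorrelation."""
    max_offset = SAMPLE_RATE // min_freq  # longest period (lowest frequency) searched
    min_offset = SAMPLE_RATE // max_freq  # shortest period (highest frequency) searched
    best_corr, best_lag = 0.0, 0
    for lag in range(min_offset, max_offset + 1):
        # correlate the signal with itself shifted back by `lag` samples
        corr = sum(samples[i] * samples[i - lag] for i in range(lag, len(samples)))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return SAMPLE_RATE / best_lag if best_lag else 0.0

# A pure Middle C sine (~262 Hz) repeats roughly every 168 samples,
# so the winning lag should be close to 168
tone = [math.sin(2 * math.pi * 262 * n / SAMPLE_RATE) for n in range(4096)]
print(round(detect_pitch_autocorr(tone), 1))
```

Note how a lag of twice the true period also correlates strongly, which is why autocorrelators sometimes report the note an octave low, as the article discusses below.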
Let's have a look at the code for an autocorrelation algorithm, starting with the constructor for the AutoCorrelator class:
c#:
public AutoCorrelator(int sampleRate)
{
    this.sampleRate = (float)sampleRate;
    int minFreq = 85;
    int maxFreq = 255;
    this.maxOffset = sampleRate / minFreq;
    this.minOffset = sampleRate / maxFreq;
}
VB.Net
Public Sub New(ByVal sampleRate As Integer)
    Me.sampleRate = CSng(sampleRate)
    Dim minFreq = 85
    Dim maxFreq = 255
    Me.maxOffset = sampleRate \ minFreq
    Me.minOffset = sampleRate \ maxFreq
End Sub
First of all, we pre-calculate some values based on the minimum and maximum frequencies we are looking for. Remember that lower frequencies are harder to detect than higher frequencies, so don't set minFreq too low. MaxOffset and MinOffset are the maximum and minimum backwards distances we will be seeking while looking for a match.
c#:
public float DetectPitch(float[] buffer, int frames)
{
    if (prevBuffer == null)
    {
        prevBuffer = new float[frames];
    }
    float maxCorr = 0;
    int maxLag = 0;
    // starting with low frequencies, working to higher
    for (int lag = maxOffset; lag >= minOffset; lag--)
    {
        float corr = 0;
        for (int i = 0; i < frames; i++)
        {
            int oldIndex = i - lag;
            float sample = (oldIndex < 0) ? prevBuffer[frames + oldIndex] : buffer[oldIndex];
            corr += (sample * buffer[i]);
        }
        if (corr > maxCorr)
        {
            maxCorr = corr;
            maxLag = lag;
        }
    }
    for (int n = 0; n < frames; n++)
    {
        prevBuffer[n] = buffer[n];
    }
    float noiseThreshold = frames / 1000f;
    if (maxCorr < noiseThreshold || maxLag == 0)
        return 0.0f;
    return this.sampleRate / maxLag;
}
VB.Net
Public Function DetectPitch(ByVal buffer() As Single, ByVal frames As Integer) As Single Implements IPitchDetector.DetectPitch
    If prevBuffer Is Nothing Then
        prevBuffer = New Single(frames - 1) {}
    End If
    Dim secCor As Single = 0
    Dim secLag = 0
    Dim maxCorr As Single = 0
    Dim maxLag = 0
    ' starting with low frequencies, working to higher
    For lag = maxOffset To minOffset Step -1
        Dim corr As Single = 0
        For i = 0 To frames - 1
            Dim oldIndex = i - lag
            Dim sample = (If(oldIndex < 0, prevBuffer(frames + oldIndex), buffer(oldIndex)))
            corr += (sample * buffer(i))
        Next i
        If corr > maxCorr Then
            maxCorr = corr
            maxLag = lag
        End If
        If corr >= 0.9 * maxCorr Then
            secCor = corr
            secLag = lag
        End If
    Next lag
    For n = 0 To frames - 1
        prevBuffer(n) = buffer(n)
    Next n
    Dim noiseThreshold = frames / 1000.0F
    'Debug.WriteLine(String.Format("Max Corr: {0} ({1}), Sec Corr: {2} ({3})", Me.sampleRate / maxLag, maxCorr, Me.sampleRate / secLag, secCor))
    If maxCorr < noiseThreshold OrElse maxLag = 0 Then
        Return 0.0F
    End If
    'Return 44100.0f / secLag ' works better for singing
    Return Me.sampleRate / maxLag
End Function
A few things to notice:
- We keep a copy of the previous buffer, so a lag can reach back beyond the start of the current block.
- The correlation score for each candidate lag is a sum of products of the signal with its shifted self.
- A simple noise threshold lets the detector return 0 (meaning "no pitch detected") for near-silent input rather than reporting a spurious pitch.
I wrote some unit tests to measure the accuracy of detection with sine waves (which admittedly are the easiest to detect). Here are the results for audio sampled at 44.1kHz:
Notice that the detected frequencies from the final two tests are actually half the correct amount. This doesn't actually matter for our purposes, since this just means the frequency has been detected as one octave below the correct note.
To improve on the accuracy of the autocorrelator's results, there are a couple of things you can do:
I decided to implement an alternative pitch detection algorithm to see if I could get better results. A different approach is to use the Fast Fourier Transform, which converts signals from the “time domain” into the “frequency domain.”
The basic approach is to take a block of samples (which must be a power of 2 – e.g. 1024), and run the FFT on them. The FFT takes complex numbers as inputs, which for audio signals are entirely real. The implementation I am using expects real and complex parts interleaved for the input buffer. Here's our code setting up fftBuffer with interleaved samples:
c#:
private float[] fftBuffer;
private float[] prevBuffer;

public float DetectPitch(float[] buffer, int inFrames)
{
    Func<int, int, float> window = HammingWindow;
    if (prevBuffer == null)
    {
        prevBuffer = new float[inFrames];
    }
    // double frames since we are combining present and previous buffers
    int frames = inFrames * 2;
    if (fftBuffer == null)
    {
        fftBuffer = new float[frames * 2]; // times 2 because it is complex input
    }
    for (int n = 0; n < frames; n++)
    {
        // interleave windowed samples (previous buffer first) as the real
        // parts; the imaginary parts stay zero
        float sample = (n < inFrames) ? prevBuffer[n] : buffer[n - inFrames];
        fftBuffer[n * 2] = sample * window(n, frames);
        fftBuffer[n * 2 + 1] = 0f;
    }
VB.Net
Private fftBuffer() As Single
Private prevBuffer() As Single

Public Function DetectPitch(ByVal buffer() As Single, ByVal inFrames As Integer) As Single Implements IPitchDetector.DetectPitch
    Dim window As Func(Of Integer, Integer, Single) = AddressOf HammingWindow
    If prevBuffer Is Nothing Then
        prevBuffer = New Single(inFrames - 1) {}
    End If
    ' double frames since we are combining present and previous buffers
    Dim frames = inFrames * 2
    If fftBuffer Is Nothing Then
        fftBuffer = New Single(frames * 2 - 1) {} ' times 2 because it is complex input
    End If
    For n = 0 To frames - 1
        ' interleave windowed samples (previous buffer first) as the real
        ' parts; the imaginary parts stay zero
        Dim sample = If(n < inFrames, prevBuffer(n), buffer(n - inFrames))
        fftBuffer(n * 2) = sample * window(n, frames)
        fftBuffer(n * 2 + 1) = 0.0F
    Next n
Notice that we prepend the previous buffer we were passed. This is a common way of increasing the accuracy and resolution of an FFT by using overlapping windows, and can be further extended to store three previous buffers, allowing us to have 75% overlapping windows instead of just the 50% that we have in this example.
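The 50% overlap scheme can be sketched as follows. This is a simplified Python illustration, not code from the project; `hop` stands for the number of new samples delivered per analysis pass.

```python
def overlapped_blocks(samples, hop):
    """Yield analysis blocks of size 2*hop, each sharing half its
    samples with the previous block (50% overlap)."""
    prev = [0.0] * hop  # the first block is padded with silence
    for i in range(0, len(samples) - hop + 1, hop):
        cur = samples[i:i + hop]
        yield prev + cur  # previous hop prepended to the new samples
        prev = cur

# With hop=4, the second block begins with the last 4 samples of the first
blocks = list(overlapped_blocks(list(range(8)), hop=4))
print(blocks)
```

Storing three previous hops instead of one would give the 75% overlap mentioned above, at the cost of more latency and more FFT work per sample.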
For better peak frequency detection, the signal that is passed into the FFT is best pre-processed with a “windowing” function. There are several to choose from, each with its own strengths and weaknesses. I used the Hamming window, which is a fairly common choice:
c#:
private float HammingWindow(int n, int N)
{
    return 0.54f - 0.46f * (float)Math.Cos((2 * Math.PI * n) / (N - 1));
}
VB.Net
Private Function HammingWindow(ByVal n As Integer, ByVal _N As Integer) As Single
    Return 0.54F - 0.46F * CSng(Math.Cos((2 * Math.PI * n) / (_N - 1)))
End Function
The next step is to pass on our interleaved buffer to the FFT algorithm. I am using Stephan Bernsee's here, though there is an alternative implementation in NAudio that I could have used. Since the same function can be used for an inverse FFT, the -1 parameter means (rather counter-intuitively), do a forwards FFT. It processes the data in place, which is fine since we don't need to keep the contents of the input buffer:
c#:
// assuming frames is a power of 2 SmbPitchShift.smbFft(fftBuffer, frames, -1);
VB.Net
' assuming frames is a power of 2 SmbPitchShift.smbFft(fftBuffer, frames, -1)
Once we have completed the FFT, we are ready to interpret its output. The output of the FFT consists of complex numbers (again real followed by imaginary in our buffer), which represent frequency “bins.”
We start off by calculating the bin size and working out which bins correspond to the range of frequencies we are interested in detecting:
c#:
float binSize = sampleRate / frames;
int minBin = (int)(85 / binSize);
int maxBin = (int)(300 / binSize);
VB.Net
Dim binSize = sampleRate / frames
Dim minBin = CInt(Fix(85 / binSize))
Dim maxBin = CInt(Fix(300 / binSize))
For example, if our sample rate is 44.1kHz and we analyse a block of 1024 samples, then each bin represents 43Hz, which is hardly the granularity we are looking for. To increase resolution, our options are to either sample at a higher rate or analyse a bigger chunk. Our approach is to use overlapping blocks of 8192 samples, as we read 4096 samples each time. This means we have a resolution of around 5Hz, which is much more acceptable.
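The resolution arithmetic above is easy to check. A small Python sketch (function name mine; the figures match the 44.1kHz examples in the text):

```python
SAMPLE_RATE = 44100.0

def bin_size_hz(fft_frames):
    """Width in Hz of one FFT frequency bin."""
    return SAMPLE_RATE / fft_frames

# 1024-point analysis: ~43 Hz per bin - far too coarse for vocal pitch
print(round(bin_size_hz(1024), 1))
# 8192-point analysis (two overlapping 4096-sample reads): ~5.4 Hz per bin
print(round(bin_size_hz(8192), 2))
# which bins cover the 85-300 Hz range we care about
width = bin_size_hz(8192)
print(int(85 / width), int(300 / width))
```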
Now we can calculate the magnitude or “intensity” for each frequency by calculating the sum of squares (strictly we should then take the square root, but we don't need to since we are just looking for the largest value):
c#:
float maxIntensity = 0f;
int maxBinIndex = 0;
for (int bin = minBin; bin <= maxBin; bin++)
{
    float real = fftBuffer[bin * 2];
    float imaginary = fftBuffer[bin * 2 + 1];
    float intensity = real * real + imaginary * imaginary;
    if (intensity > maxIntensity)
    {
        maxIntensity = intensity;
        maxBinIndex = bin;
    }
}
VB.Net
Dim maxIntensity = 0.0F
Dim maxBinIndex = 0
For bin = minBin To maxBin
    Dim real = fftBuffer(bin * 2)
    Dim imaginary = fftBuffer(bin * 2 + 1)
    Dim intensity = real * real + imaginary * imaginary
    If intensity > maxIntensity Then
        maxIntensity = intensity
        maxBinIndex = bin
    End If
Next bin
Since we have identified the bin with the maximum intensity, we can calculate the detected frequency:
c#:
return binSize * maxBinIndex;
VB.Net
Return binSize * maxBinIndex
I don't currently specify a minimum threshold for maxIntensity, but perhaps if it were very low, the FFT pitch detector would return zero to indicate no pitch detected instead of returning an answer that is probably not accurate.
Let's have a look at how the FFT pitch detector does:
As can be seen, it correctly picks out the primary frequencies of the higher notes, but overall it doesn't get that much closer than the autocorrelator, so I've left that as the default algorithm. You, however, can swap in the FFT detector in the code if it works better for the material you are auto-tuning.
There are ways of using the phase information from the FFT output to increase the accuracy of pitch detection even further, but I have left that as an exercise for the reader!
The next step is to determine how much we will shift the pitch. The simplest way to do this is to look for the musical pitch that is closest to the detected pitch. Then, the amount to shift by is simply the ratio of those two notes.
There are, however, some additional considerations. First, we may want to select a subset of musical notes that are acceptable. For example, only notes in the key of C#, or maybe F# minor pentatonic. This may require a slightly more radical adjustment.
Second, depending on the effect we are after, we may not want to instantaneously jump to the new frequency. The code I am using utilizes a fairly rudimentary “attack” time parameter, allowing you to gradually move to the new frequency.
The actual DSP for the pitch-shifting effect is more or less untouched from Stephan Bernsee's code, because it works really well. Bernsee's code makes use of the Fast Fourier Transform, plus a bunch of clever mathematics, which I almost understand, but not quite well enough to try and explain here! You're better off reading an article in which the man himself explains how it works.
The class that manages the pitch-shifting algorithm is called SmbPitchShifter and inherits from a PitchShifter base class. It does the bulk of its work in the ShiftPitch function:
c#:
public void ShiftPitch(float[] inputBuff, float inputPitch, float targetPitch, float[] outputBuff, int nFrames)
{
    UpdateSettings();
    detectedPitch = inputPitch;
VB.Net
Public Sub ShiftPitch(ByVal inputBuff() As Single, ByVal inputPitch As Single, ByVal targetPitch As Single, ByVal outputBuff() As Single, ByVal nFrames As Integer)
    UpdateSettings()
    detectedPitch = inputPitch
The inputPitch parameter is set to the frequency detected by the PitchDetector. The targetPitch parameter is currently unused, but will be used to specify the target pitch in real-time when accepting input from, say, a MIDI keyboard. In any case, we call UpdateSettings in order to see if any of the autotune algorithm settings have changed since last time.
Next we calculate the amount by which we need to shift the pitch. A shift factor of 1 means no change. We don't allow the shift factor to go above 2 or below 0.5, since those figures represent a whole octave change:
c#:
float shiftFactor = 1.0f;
if (inputPitch > 0)
{
    shiftFactor = snapFactor(inputPitch);
}
if (shiftFactor > 2.0) shiftFactor = 2.0f;
if (shiftFactor < 0.5) shiftFactor = 0.5f;
VB.Net
Dim shiftFactor = 1.0F
If inputPitch > 0 Then
    shiftFactor = snapFactor(inputPitch)
    shiftFactor += addVibrato(nFrames)
End If
If shiftFactor > 2.0 Then
    shiftFactor = 2.0F
End If
If shiftFactor < 0.5 Then
    shiftFactor = 0.5F
End If
The decision of what the target note is takes place in the snapFactor function:
c#:
protected float snapFactor(float freq)
{
    float previousFrequency = 0.0f;
    float correctedFrequency = 0.0f;
    int previousNote = 0;
    int correctedNote = 0;
    for (int i = 1; i < 120; i++)
    {
        bool endLoop = false;
        foreach (int note in this.settings.AutoPitches)
        {
            if (i % 12 == note)
            {
                previousFrequency = correctedFrequency;
                previousNote = correctedNote;
                correctedFrequency = (float)(8.175 * Math.Pow(1.05946309, (float)i));
                correctedNote = i;
                if (correctedFrequency > freq)
                {
                    endLoop = true;
                }
                break;
            }
        }
        if (endLoop)
        {
            break;
        }
    }
    if (correctedFrequency == 0.0)
    {
        return 1.0f;
    }
    int destinationNote = 0;
    double destinationFrequency = 0.0;
    // decide whether we are shifting up or down
    if (correctedFrequency - freq > freq - previousFrequency)
    {
        destinationNote = previousNote;
        destinationFrequency = previousFrequency;
    }
    else
    {
        destinationNote = correctedNote;
        destinationFrequency = correctedFrequency;
    }
    if (destinationNote != currPitch)
    {
        numElapsed = 0;
        currPitch = destinationNote;
    }
    if (attack > numElapsed)
    {
        double n = (destinationFrequency - freq) / attack * numElapsed;
        destinationFrequency = freq + n;
    }
    numElapsed++;
    return (float)(destinationFrequency / freq);
}
VB.Net:
Protected Function snapFactor(ByVal freq As Single) As Single
    Dim previousFrequency = 0.0F
    Dim correctedFrequency = 0.0F
    Dim previousNote = 0
    Dim correctedNote = 0
    For i = 1 To 119
        Dim endLoop = False
        For Each note As Integer In Me.settings.AutoPitches
            If i Mod 12 = note Then
                previousFrequency = correctedFrequency
                previousNote = correctedNote
                correctedFrequency = CSng(8.175 * Math.Pow(1.05946309, CSng(i)))
                correctedNote = i
                If correctedFrequency > freq Then
                    endLoop = True
                End If
                Exit For
            End If
        Next note
        If endLoop Then
            Exit For
        End If
    Next i
    If correctedFrequency = 0.0 Then
        Return 1.0F
    End If
    Dim destinationNote = 0
    Dim destinationFrequency = 0.0
    ' decide whether we are shifting up or down
    If correctedFrequency - freq > freq - previousFrequency Then
        destinationNote = previousNote
        destinationFrequency = previousFrequency
    Else
        destinationNote = correctedNote
        destinationFrequency = correctedFrequency
    End If
    If destinationNote <> currPitch Then
        numElapsed = 0
        currPitch = destinationNote
    End If
    If attack > numElapsed Then
        Dim n = (destinationFrequency - freq) / attack * numElapsed
        destinationFrequency = freq + n
    End If
    numElapsed += 1
    Return CSng(destinationFrequency / freq)
End Function
The way this function works is that it runs through the MIDI notes 0-120 and, if that note is selected as one of the valid pitches we support, we remember the “corrected frequency,” which can be calculated from the MIDI note number with the following formula:
c#:
correctedFrequency = (float)(8.175 * Math.Pow(1.05946309, (float)midiNoteNumber));
VB.Net
correctedFrequency = CSng(8.175 * Math.Pow(1.05946309, CSng(i)))
Obviously, a pitch is likely to fall somewhere in between two valid notes, so we choose which pitch to correct to by determining which one is closest to the detected frequency.
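As a sanity check on the note-snapping logic, here is a hedged Python sketch. The function names are mine, not the project's, and it snaps to any chromatic note, whereas the real snapFactor only considers the pitches enabled in the settings:

```python
def note_to_freq(midi_note):
    # the article's formula: MIDI note 0 is 8.175 Hz, with one
    # equal-tempered semitone ratio (~1.0595) per step
    return 8.175 * (1.05946309 ** midi_note)

def snap_shift_factor(freq):
    """Multiplier that moves `freq` onto the nearest chromatic MIDI note."""
    nearest = min(range(128), key=lambda n: abs(note_to_freq(n) - freq))
    return note_to_freq(nearest) / freq

print(round(note_to_freq(69)))           # MIDI note 69 is concert A
print(round(snap_shift_factor(450), 3))  # a sharp A is pulled back down
```

Multiplying the input frequency by the returned factor lands exactly on the chosen note, which is what the pitch shifter then does to the audio.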
The snapFactor function is also responsible for implementing the attack time parameter. This allows the destinationFrequency to be slowly moved to the target note over the duration of the attack period. Having calculated our shift factor, we are now ready to pass our data on to the actual pitch-shifting algorithm:
c#:
int fftFrameSize = 2048;
int osamp = 8; // 32 is best quality
SmbPitchShift.smbPitchShift(shiftFactor, nFrames, fftFrameSize, osamp, this.sampleRate, inputBuff, outputBuff);
VB.Net
Dim fftFrameSize = 2048
Dim osamp = 8 ' 32 is best quality
SmbPitchShift.smbPitchShift(shiftFactor, nFrames, fftFrameSize, osamp, Me.sampleRate, inputBuff, outputBuff)
The final thing we do in the ShiftPitch function is keep a record of the pitch shifts we have made. These are stored in a queue (maximum of 5000 entries) and are very useful for diagnosing what is going on if you are not getting the results you wanted from the algorithm:
c#:
shiftedPitch = inputPitch * shiftFactor; updateShifts(detectedPitch, shiftedPitch, this.currPitch);
VB.Net
shiftedPitch = inputPitch * shiftFactor
updateShifts(detectedPitch, shiftedPitch, Me.currPitch)
Performance, as you might expect in a managed application that has not been extensively optimized, was not good. Using my laptop, I could autotune one minute of audio in about 90 seconds. Obviously, that rules out real-time autotuning. I decided to profile the application to see if there were any quick ways I could improve things.
The profiling tools in Visual Studio revealed that 20% of the time was spent on pitch detection and 80% pitch shifting. Unfortunately, there were not too many options available for optimisation, since further investigation pointed to calls to Math.Sin taking the bulk of the time. Possibly creating lookup tables could save a bit more time.
Fortunately, we have another option for speeding things up. The pitch-shifting algorithm takes an “oversampling” parameter, which by default is set to 32, the highest value. However, we can trade off quality for speed. Setting it to 16 meant that I could autotune a minute of audio in 55 seconds (on my 2.4GHz Core2Duo laptop) – realtime, but only just. Setting it to 8 reduced that down to 36 seconds. The results still sounded reasonable, so I have left it set at 8 in the code.
An alternative way of speeding it up though would be to swap in a different pitch-shifting algorithm. You could start by trying out one I created as part of the Skype Voice Changer project previously featured on Coding4Fun, which is also able to operate in real-time (although I haven't done any quality comparisons).
Rather than starting from scratch, I decided to build upon .NET Voice Recorder, a WPF application I created for a previous Coding4Fun article. It takes advantage of the NAudio .NET audio library for audio recording and playback. The GUI has three screens: on the first, you select the input device used for recording; on the second, you record a short voice clip; and on the third, you edit a small portion of the saved audio.
Here's a screenshot of the second screen showing a recording in progress:
And here's the screen that allows you to trim the recording, preview it, and save it as WAV:
As you can see, I have added a new button allowing access to the autotune effect settings. On this screen, you can select which notes are valid, and you can also adjust the “attack time” if you prefer to not go for the robotic effect. I've included a drop-down menu that automatically selects the appropriate notes from various keys.
When you click “Apply,” the autotune effect is applied (while you wait on a background thread) and then you are returned to the screen, allowing you to play back your recording and see how it sounds. If you'd like, you can then go back and change the autotune settings (or turn it off).
The original VoiceRecorder application used a MVVM (model-view-viewmodel) architecture for binding data to each view. I have updated it to make use of Laurent Bugnion's excellent MVVM Light library. This removes the need for my own RelayCommand and ViewModelBase classes, and also enables me to replace my ViewManager with a more extensible framework using the event aggregator (“Messenger”) that is included with MVVM light. This allows me to quickly navigate from one view to another by sending out a message on the event aggregator:
c#:
private void NavigateToSaveView()
{
    Messenger.Default.Send(new NavigateMessage(SaveViewModel.ViewName, this.voiceRecorderState));
}
VB.Net
Private Sub NavigateToSaveView()
    Messenger.Default.Send(New NavigateMessage(SaveViewModel.ViewName, Me.voiceRecorderState))
End Sub
Unfortunately, Autotune is an effect that doesn't always produce the desired result. Obviously, if you want great autotune, you're best off buying a commercial implementation, but here are a few tips for getting the most out of an autotune algorithm:
.NET Voice Recorder is open source and hosted on CodePlex in a Mercurial repository. So what are you waiting for? Make a fork and have a go at improving it:
Mark Heath is the author of several open source .NET applications and libraries, including NAudio and the Skype Voice Changer. He works for NICE Systems, developing applications that search, display, and play back vast amounts of multimedia data. He has a blog, Sound Code, and you can follow him on his sporadically updated Twitter account.

cool!
Great work Mark. Now I just have to find a use for it.
Nice work Mark.
Whoa awesome! Great to see Awesomebox being used in the wild
I have also implemented a pitch detection algorithm which is used to display a realtime pitch graph in a VST plugin (although VST uses an unmanaged API, my plugin is written in C# and uses reverse P/Invoke). This plugin is used as a visual guide to train a singer to sing in key (or for vocal exercises), and it can also display a "grade" based on previously entered notes (or a MIDI clip) that have to be hit throughout the song. My algorithm is loosely based on auto-correlation, but it is heavily modified to solve the following two problems with it:
Can you please give the code for pitch detection? I have implemented an FFT algorithm and just need to detect the pitches at a high frequency. How can I choose the amplitude threshold to get the correct peaks? If the amplitude threshold is too high I get 0 peaks; if it's too low I get incorrect peaks.
@BitFlipper: Do you have the pitch detection code? I implemented FFT and need a pitch detection algorithm. I need to detect all peaks from the spectrum, for example all peaks at an 18000 Hz frequency. I also work with an amplitude threshold, but if it's too high it doesn't show me the peaks, and if it's too low it doesn't show them correctly.
@mary:
Unfortunately right now my code isn't quite fit for public release. I think there are some dependencies on parts of other code that I don't want to post and which isn't really pitch-related. For instance I have a DSP class that the pitch-correction algorithm uses, but most of that code is unrelated to it.
If I have some time available I will clean it up and post it. Most likely before the end of the weekend.
OK I had some time to clean up my code. I am working on creating a CodePlex project in order to publish it. I also first want to create some sort of sample app in order to demonstrate the code in use (even though you need just three lines of code to instantiate and get your first pitch results back). Hopefully I will be done with it by this weekend.
@Christian Louboutin Bottes:
Wow is that spam or some weird C9 bug? I can't tell.
It's Spam that's been through an auto-tune like comment modifier to make it look like a C9 bug.
Hi Mark,
"Autotune.NET" is really very different topic. I am very excited to read this blog. Very interesting and useful.After reading your blog only i came to know about this technology in dotnet. Thanks for giving your knowledge to us. Your code also works well.
OK, I finally had some time to isolate my pitch tracker class and create a CodePlex project. I would be interested to find out whether you can use my pitch tracker in your project, and what the results are.
From my tests my algorithm has an error of less than 0.02% over the frequency range of 55Hz to 1.5kHz. Accuracy is unaffected by amplitude, frequency or complexity of the waveform.
Please let me know if you end up trying it out.
Hi BitFlipper, looks interesting. Fancy submitting a patch to the VoiceRecorder project () that uses your algorithm?
Hi Mark, thanks for the code, it is rather interesting, but I can not understand one thing - in the code
public float DetectPitch(float[] buffer, int frames)
{
    if (prevBuffer == null)
    {
        prevBuffer = new float[frames];
    }
    float maxCorr = 0;
    int maxLag = 0;
    for (int lag = maxOffset; lag >= minOffset; lag--)
    {
        float corr = 0; // sum of squares
        for (int i = 0; i < frames; i++)
        {
            int oldIndex = i - lag;
            float sample = ((oldIndex < 0) ? prevBuffer[frames + oldIndex] : prevBuffer[oldIndex]);
            corr += (sample * buffer[i]);
        }
        ..........................................
Here we first initialize the array - prevBuffer = new float[frames] - and evidently all its members have zero as their value.
And then we have "sample = prevBuffer[frames + oldIndex]" or "sample = prevBuffer[oldIndex]", but as prevBuffer has only zero values, sample will always be zero.
Could you explain this thing or maybe I am wrong?
Hey, it is a nice lib, but I have a question: how can I record voice for a longer time, say an hour or more?
@Wonde, to record for that length of time I would recommend storing the saved audio in a WAV file rather than the current implementation which keeps it in memory. Use the WaveFileWriter class.
@Eugene - look at the end of the function - the contents of the current buffer are copied across into prevBuffer
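To make the two-buffer idea concrete, here is a hedged Python sketch of lag-based autocorrelation that keeps the previous buffer between calls. This is illustrative only: `AutoCorrPitch` and its parameters are invented, and the lagged sample is read from the current buffer when the index is non-negative, a simplified variant of the C# snippet above.

```python
import math

class AutoCorrPitch:
    """Illustrative lag-based autocorrelation spanning buffer boundaries."""

    def __init__(self, sample_rate):
        self.sample_rate = sample_rate
        self.prev = None  # plays the role of prevBuffer in the C# snippet

    def detect(self, buf, min_lag=30, max_lag=60):
        n = len(buf)
        if self.prev is None:
            self.prev = [0.0] * n  # first call: correlate against silence
        best_corr, best_lag = 0.0, 0
        for lag in range(max_lag, min_lag - 1, -1):
            corr = 0.0
            for i in range(n):
                j = i - lag
                # a negative index reaches back into the previous buffer
                sample = self.prev[n + j] if j < 0 else buf[j]
                corr += sample * buf[i]
            if corr > best_corr:
                best_corr, best_lag = corr, lag
        # keep this buffer so the next call can reach across the boundary
        self.prev = list(buf)
        return self.sample_rate / best_lag if best_lag else 0.0

# 200 Hz sine at 8 kHz: the period is exactly 40 samples
sr, freq, n = 8000, 200.0, 512
wave = [math.sin(2 * math.pi * freq * t / sr) for t in range(2 * n)]
tracker = AutoCorrPitch(sr)
tracker.detect(wave[:n])          # warm-up call fills prev
pitch = tracker.detect(wave[n:])  # now lags can span the boundary
```

On the first call `prev` is all zeros, which is why the lagged samples are zero at that point; from the second call onward the lag reaches back into real audio.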
@Eric: Hi, this is a very nice introduction to recording voice in .NET from a microphone. I am looking for a streaming audio recorder to record voice or sound passing through my computer's audio device.
Hi there,
I want to know if it's possible (or if there is an algorithm I could use) to take a byte array of sound data and change the pitch, frequency... I did it with SoundEffectInstance in XNA, but there is no way I could save the stream that was passed in. I would greatly appreciate any help. This is some code that I wrote (I want to change the pitch of bStream). Thank you very much:
if (states == PlayerStates.Ready || states == PlayerStates.Stopped)
{
    InitTimer();
    byte[] bStream = stream.ToArray();
    sound = new SoundEffect(bStream, microphone.SampleRate, AudioChannels.Mono);
    SoundEffectInstance soundInstance = sound.CreateInstance();
    soundInstance.Pitch += 1;
    soundInstance.Play();
    textBlock1.Text = "Now Playing";
    states = PlayerStates.Playing;
}
http://channel9.msdn.com/coding4fun/articles/AutotuneNET
User:Ashley Y
From HaskellWiki
Revision as of 02:16, 20 September 2006
Ashley Yakeley
I hereby license all my contributions to this wiki under the simple permissive license on HaskellWiki:Copyrights. —Ashley Y 05:25, 14 January 2006 (UTC)
1 GeSHi Tests
1.1 C
for (int a=0;a<3;a++) printf ("%d\n",a);
1.2 Haskell
Inline:
{- My program -}
import Prelude

foo :: (Monad m) => m (Int, Int)
foo = return (x - 2, x - 1)
  where x = 3

-- The main function
main :: IO ()
main = do
  a <- foo
  putStr ("And the answer is: " ++ show (fst a) ++ "\n")
Inline:
import Prelude
https://wiki.haskell.org/index.php?title=User:Ashley_Y&diff=6156&oldid=6154
Jeff Epler wrote:
> When using pthread_atfork, os.system never triggers my code. However,
> reimplementing os.system in terms of os.fork+os.execv, it does. I don't
> know if this is right or wrong according to pthread, but since it doesn't
> work on my platform the question is academic for me.

Interesting. I'd be curious to find out why this fails - it may be a bug
in your system, in which case I'd say "tough luck, complain to the system
vendor" (for Redhat 9, I'm tempted to say that anyway ...)

Looking at what likely is the source of your system(3) implementation
(glibc 2.3.2, sysdeps/unix/sysv/linux/i386/system.c), I see that the fork
used inside system(3) is

# define FORK() \
  INLINE_SYSCALL (clone, 3, CLONE_PARENT_SETTID | SIGCHLD, 0, &pid)

At least, this is the fork being used if __ASSUME_CLONE_THREAD_FLAGS is
defined, which is the case for Linux >2.5.50. With this fork()
implementation, atfork handlers won't be invoked, which clearly looks
like a bug to me.

You might want to upgrade glibc to glibc-2.3.2-27.9.7.i686.rpm and
nptl-devel-2.3.2-27.9.7.i686.rpm. In this version, the definition of FORK
is changed to

#if defined __ASSUME_CLONE_THREAD_FLAGS && !defined FORK
# define FORK() \
  INLINE_SYSCALL (clone, 3, CLONE_PARENT_SETTID | SIGCHLD, 0, &pid)
#endif

which might actually do the right thing, assuming FORK is already defined
to one that calls the atfork handlers.

> Wouldn't the PyOS_AfterFork approach also require python to provide its
> own versions of any POSIX APIs that would typically be implemented in
> terms of fork (system(), popen(), and spawn*() come to mind)?

You are right. system(3) won't call our version of fork, so
PyOS_AfterFork won't be invoked. So forget about this approach.

Regards,
Martin
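For context, modern CPython exposes the fork-handler mechanism discussed here directly; a minimal illustrative sketch using os.register_at_fork (added in Python 3.7, long after this 2003 thread, and unrelated to the glibc bug above):

```python
import os

calls = []

# Register handlers around fork(); after_in_child would also be available
os.register_at_fork(before=lambda: calls.append("before"),
                    after_in_parent=lambda: calls.append("parent"))

pid = os.fork()
if pid == 0:
    os._exit(0)  # child exits immediately; its handlers run in its own process
os.waitpid(pid, 0)
```

In the parent, the "before" handler fires before the fork and the "parent" handler after it; note that, as in the thread above, these handlers are only guaranteed for forks that go through the interpreter's fork machinery.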
https://mail.python.org/pipermail/python-dev/2003-December/041309.html
In real-world applications, though, you are much more likely to implement the Runnable interface than to extend Thread. Extending the Thread class is easiest, but it is not good object-oriented practice.
In this post we will see the difference between "implements Runnable" and "extends Thread". This is one of the basic interview questions on the topic of threads.
Read Also : Life Cycle of.
On the other hand, implementing the Runnable interface gives you the choice to extend any class you like, while still defining behavior that will be run by a separate thread.
2. Reusability: In "implements Runnable", we create a separate Runnable class for a specific job. This gives us the freedom to reuse that job whenever required.
"extends Thread" contains both thread and job-specific behavior code. Hence, once the thread completes execution, it cannot be restarted.
"extends Thread" is not a good Object Oriented practice.
4. Loosely coupled: "implements Runnable" makes the code loosely coupled and easier to read.
The code is split into two classes: the Thread class for the thread-specific code, and your Runnable implementation class for the job that should be run by a thread.
"extends Thread" makes the code tightly coupled: a single class contains both the thread code and the job that needs to be done by the thread.
5. Function overhead: "extends Thread" means inheriting all the functions of the Thread class, which we may not need. The job can be done easily by Runnable without the overhead of the Thread class's functions.
Example of "implements Runnable" and "extends Thread"
public class RunnableExample implements Runnable {
    public void run() {
        System.out.println("Alive is awesome");
    }
}
public class ThreadExample extends Thread {
    public void run() {
        System.out.println("Love Yourself");
    }
}
When to use "extends Thread" over "implements Runnable"
The only time it makes sense to use "extends Thread" is when you have a more specialized version of the Thread class; in other words, when you have more specialized thread-specific behavior.
But if the work you want done is really just a job to be run by a thread, then you should use "implements Runnable", which also leaves your class free to extend some other class.
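As an aside, the same composition-versus-inheritance choice shows up in other languages too. For instance, Python's threading module supports both styles; this illustrative snippet mirrors the two Java examples above:

```python
import threading

results = []

# "implements Runnable" style: the job is a plain callable, reusable anywhere
def job():
    results.append("Alive is awesome")

runnable_style = threading.Thread(target=job)

# "extends Thread" style: thread and job are welded into a single class
class LoveYourselfThread(threading.Thread):
    def run(self):
        results.append("Love Yourself")

subclass_style = LoveYourselfThread()

runnable_style.start()
subclass_style.start()
runnable_style.join()
subclass_style.join()
```

In both languages, passing the job in as a separate object keeps it reusable and keeps your class hierarchy free.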
Recap: Difference between "implements Runnable" and "extends Thread"
If you have any doubts regarding the difference between "implements Runnable" and "extends Thread", please mention them in the comments.
http://javahungry.blogspot.com/2015/05/implements-runnable-vs-extends-thread-in-java-example.html
On Sun, Jul 29, 2001 at 06:22:17PM +0200, beevis libero it wrote:
> Unwisely I deleted the .journal file, since I never mounted / as ext3.

I assume the ext3 filesystem was created using old e2fsprogs tools; the
new versions will create it as a hidden file that doesn't appear in the
namespace, as long as the filesystem is unmounted --- and even if it is
mounted, the .journal file is created with the immutable flag set.

> Jul 29 13:11:36 uno fsck: /dev/hda4: Superblock has a bad ext3 journal
> (inode 20).
> Jul 29 13:12:37 uno xinetd[513]: Started working: 0 available services
> Jul 29 13:11:36 uno fsck: CLEARED.
> Jul 29 13:11:36 uno fsck: *** ext3 journal has been deleted - filesystem
> is now ext2 only ***
> Jul 29 13:11:36 uno fsck: /dev/hda4 was not cleanly unmounted, check
> forced.
> Jul 29 13:11:36 uno fsck: /dev/hda4: 236046/2265408 files (2.1%
> non-contiguous), 1415789/4521984 blocks
> ...
>
> I was certainly wrong deleting the .journal without changing the
> parameters of the ext3 filesystem. However IMHO the kernel should be
> able to correct the situation and revert to ext2 without intervention
> in this case. Rebuilding the journal file is a matter of 1" (at least
> on this mighty machine...).

The kernel wouldn't care if you just deleted the .journal file, as long
as you were mounting using ext2 and the NEEDS_RECOVERY flag wasn't set.
What happened here was that fsck noticed that the filesystem was
inconsistent, and therefore finished cleaning up the filesystem. No big
deal; the right thing happened.

- Ted
http://www.redhat.com/archives/ext3-users/2001-July/msg00223.html
Question
First,
1) I know this forum may not be the best choice, but I didn't find a better one on MSDN
2) I know this is not a way to build proper windows store apps
now question:
In most cases, adding a new library is simple. The programmer just chooses the proper winmd file or creates a new solution which can be added as a reference.
But imagine that you have only a DLL (unmanaged, registered in the system registry, which exposes DllGetActivationFactory and the rest of the important Dll* entry points), you know the class structure inside this library, and you also know the UUID. You do not have the code, or the idl, tlb, or winmd.
Using C++ you could use RoInitialize and related functions, and of course this could probably also be done in a "normal" .NET app. But how can such an object be created in a WinRT app? Logically it should be as simple as adding a UUID parameter to the class and "everything should work", but of course it is not so simple. And how is the namespace handled (in the winmd you already have it; when you declare the class in your file, should you add the proper namespace)? Do you have any ideas?
Another question: is it possible to browse such a COM object (a normal COM object which exposes DllGetClassObject)?
Best regards
Bartosz
Saturday, March 21, 2015 5:11 PM
All replies
Hi Bartosz,
>> Using C++ you could use RoInitialize etc. function and of course probably this also could be done in "normal" .NET app. But how could this object be created in Windows Runtime App?
I guess what you mean by "normal .NET App" is a desktop Windows .NET application, which could invoke an external C++ dynamic link library. But I'm afraid a Windows Runtime App is different from a desktop application.
According to your description, I think your first question is:
Is it possible to load a WRT class in C# programmatically in a Windows Runtime App?
The Windows Runtime C++ Template Library (WRL) is a template library that provides a low-level way to author and use Windows Runtime components, but I’m afraid we don’t have such library in C#.
To use the class library in your Windows Runtime App in C#, instead of loading it programmatically, I would suggest referencing the Windows Runtime Component in the Windows Runtime App project.
>> is there a possibility to browse such COM object
Based on my understanding, Windows Runtime is COM-based technology; all Windows Runtime classes implement the IInspectable interface, but I think it would be very complex to browse such an object.
(Please correct me if I have any misunderstandings)
Hope it will help.
Regards,
Jeffrey
Monday, March 23, 2015 10:59 AM
If I understand it correctly you are trying to use a Win32 based COM-object from a WinRT App? As far as I remember this is not possible. It is however possible in Non-Store WinRT Apps to use Brokered Windows Runtime components which basically launch a service in the Win32 world that provides data to the WinRT App. The topic is described here:
As is stated there this is only intended for .Net applications but the Desktop side of the .Net code can access regular COM components.
Monday, March 23, 2015 1:14 PM
Thank you for your answers.
Olivier,
No, Jeffrey's understanding of my problem is better. I would like to load a WinRT DLL library while having only this library. I know that I could reference it using a winmd, but as mentioned by Jeffrey, in C++ I could use dynamic loading (using the Ro* functions). So why can't I do that under C#/WinRT, where I also don't have DllImport (which is OK for this type of application)? It can't be a safety concern, because the DLL must be registered; from this point of view we are not doing anything wrong in the WinRT sense.
Overall my question could also be formulated as: "How can a winmd be generated for a WinRT COM DLL, for use from C#/WinRT?"
Jeffrey,
By "browse" I mean something like OleView. But this is a secondary problem. As I mentioned, I know the class structure, I know the UUID, and the library is registered.
What I see now is to build a C++/WinRT wrapper for loading the class and use it from a WinRT C# app, but this still doesn't solve some problems from my question (how the namespace is embedded in the winmd, because it cannot be embedded at the COM level; we only get a pointer to IInspectable/IUnknown).
@Jeffrey: "I will suggest you referencing the Windows Runtime Component in the Window Runtime App Project."
Yes, this is a solution, but I need the winmd or the source. What should I do if I don't have them, or have only the interfaces/view of the class? (With a normal COM object this situation was not a problem; by "normal COM" I mean that the base interface was not IInspectable derived from IUnknown, but IUnknown or IDispatch, with DllGetClassObject exposed.)

Monday, March 23, 2015 10:44 PM
https://social.msdn.microsoft.com/Forums/en-US/8025579d-7724-4e97-9011-1f45087d4865/loading-wrt-class-inside-c-also-wrt?forum=winappswithcsharp
My View of C# 4.0
I've known a bit about C# 4.0 for a while now and have had time to think about it. I've just re-read the New features in C# 4.0 paper published by Microsoft and would like to offer the following critique of the language's new features:
Dynamic Lookup
This feature just makes me cringe, just like anonymous methods made me cringe when they were introduced in C# 2.0. To this day, I hardly use them, as they always feel like a kludge to me (lambda expressions fixed that).
The dynamic keyword is as open to abuse as anything could be. It takes the principles of static typing and throws the baby out with the bathwater.
What is wrong with it
When looked at initially, the dynamic keyword is great, because it simplifies and speeds up what is usually done with Reflection and Primary Interop Assemblies, both in terms of development time and run time. Unfortunately, too much of a good thing is bad for you. Imagine the following:
public dynamic GetCustomer() { // mystery... }
What do we have here, then? I don't know, and neither does IntelliSense. I guess we'll have to go with trial and error.
I admit this is quite the dramatization, but you get my point: it's ripe for abusing an otherwise perfectly fine static syntax.
Moreover, the dynamic keyword's syntax does what no other feature of C# has ever done - it breaks existing syntax. Should I define in C# 3.0 a type named dynamic, the following piece of code will take on a whole different meaning in C# 4.0:
public dynamic GetCustomer()
{
    dynamic customer = GetCustomerCOMObject();
    return customer;
}
How it can be fixed
Using the dynamic keyword is actually a built-in form of Duck Typing. The idea is good and should be introduced into the language, but I'd like to suggest a different way of doing it:
public ICustomer GetCustomer()
{
    dynamic ICustomer customer = GetCustomerCOMObject();
    return customer;
}
Here, what I get back is a dynamic dispatch object that must adhere to a specific interface. This means that the object graph is checked for conformity against ICustomer the moment it is cast in the dynamic scope (i.e. returned from GetCustomerCOMObject) and is from this moment on a typed object with dynamic dispatch under the hood. From this moment on, we couldn't care less about whether this object uses dynamic dispatch or not, since we now treat it as a POCO.
This, along with removing the ability to send dynamic dispatch objects through the call stack (as parameters and return types), bringing them down to the level of anonymous types, will help stop the deterioration of C# into a dynamic language.
Named and Optional Arguments
This is just silly. Really, it looks like some people cried "we don't like overloads" hard enough and got some VB into the C# that the rest of us liked the way it was. If you want to initialize your method with some of the parameters, use a builder pattern with an object initializer instead.
Here, I'll take the sample at the bottom of page 6 and fix it, C# 3.0 style:
public void M(int x, MBuilder builder);

public void M(int x)
{
    this.M(x, new MBuilder());
}

public class MBuilder
{
    public MBuilder()
    {
        this.Y = 5;
        this.Z = 7;
    }

    public int Y { get; set; }
    public int Z { get; set; }
}

M(1, new MBuilder { Y = 2, Z = 3 }); // ordinary call of M
M(1, new MBuilder { Y = 2 });        // omitting z – equivalent to M(1, 2, 7)
M(1);                                // omitting both y and z – equivalent to M(1, 5, 7)
Yes, I do realize it's mainly for COM interop, but most people will either get confused by all the syntax, abuse it, or simply forget it ever existed.
What is wrong with it
It exists.
How it can be fixed
Remove it from C#. There - fixed.
If you want optional parameters in your COM interop calls, just implement the correct overloads in the interface you create for use with the dynamic keyword (see my suggestion for dynamic lookups) and the binding will be done at run time by the parameter names.
Variance, Covariance and Contravariance
These three features are long overdue and finally make an appearance in the language. It's a great feature and I would love to integrate it into my code as soon as I possibly can.
I would love to know if there are plans to not only include reference conversions, but also the implicit and explicit conversion operators as qualifiers for VC&C.
What is wrong with it
Although Variance is implicit, the others are explicit. Using the Type&lt;in T&gt; / Type&lt;out T&gt; notation is good for being explicit (for instance when you expect your interface to be expanded in the future), but it doesn't have to be and can become a bit annoying over time.
How it can be fixed
The compiler can very easily infer the fact that your interface is either input-only or output-only and mark it as such for you. Language-wise, the explicit version should be kept available, for when you want to prevent someone (or yourself) from mistakenly adding a new method that breaks your input/output-only design.
Summary
It looks to me like the team behind C# is going in the wrong direction (DLR) instead of the right direction (Spec#), slowly turning C# into a dynamic language. It looks like all of this is done for the sake of easy interop with dynamic languages and COM objects. It looks as though the designers have succumbed to peer pressure. There are so many features missing from C# and the above are nowhere near the top of my list.
I can only hope someone is listening.
http://weblogs.asp.net/okloeten/6708812
Published by Gannon Loomer
1
CO1301: Games Concepts Dr Nick Mitchell (Room CM 226) email: npmitchell@uclan.ac.uk Material originally prepared by Gareth Bellaby Lecture 8 Basic Trigonometry Hipparchos the “father” of trigonometry (image from Wikipedia)
2
References Rabin, Introduction to Game Development, Chapter 4.1 Van Verth & Bishop, Essential Mathematics for Games, Appendix A and Chapter 1 Eric Lengyel, Mathematics for 3D Game Programming & Computer Graphics Frank Luna, Introduction to 3D Game Programming with Direct 9.0c: A Shader Approach, Chapter 1
3
Lecture Structure Introduction Trigonometric functions: sine, cosine, tangent Circles Useful trigonometric laws
4
Why study Trigonometry? Why is trigonometry relevant to your course? Games involve lots of geometrical calculations: Rotation of models; Line of sight calculations; Collision detection; Lighting. For example, the intensity of directed light changes according to the angle at which it strikes a surface. You require a working knowledge of geometry.
5
Mathematical Functions A mathematical function defines a relationship between one variable and another. A function takes an input (argument) and relates it to an output (value) according to some rule or formula. For instance, the sine function maps an angle (as input) onto a number (as output). The set of possible input values is the function's domain. The set of possible output values is the function's range. For any given input, there is exactly one output: 3² cannot be 9 today and 8 tomorrow! Mathematical Laws I'll introduce some laws. I'm not going to prove or derive them. I will ask you to accept them as being true.
6
Greek letters It is a convention to use Greek letters to represent angles and some other mathematical terms: α alpha β beta γ gamma θ theta λ lambda π pi Δ (capital) Delta
7
Trigonometry Trigonometry arises out of an observation about right angled triangles... Take a right angled triangle and consider one of its angles (but NOT the right angle itself). We'll call this angle α. The opposite side to α is y. The shorter side adjacent to (next to) α is x. The longest side of the triangle (the hypotenuse) is h.
8
Trigonometry There is a relationship between the angle and the lengths of the sides. This relationship is expressed through one of the trigonometric functions, e.g. sine (abbreviated to sin). sin(α) = o / h
9
Values of sine

degrees  sin(degrees)      degrees  sin(degrees)
0        0                 180      0
15       0.26              195      -0.26
30       0.5               210      -0.5
45       0.71              225      -0.71
60       0.87              240      -0.87
75       0.97              255      -0.97
90       1                 270      -1
105      0.97              285      -0.97
120      0.87              300      -0.87
135      0.71              315      -0.71
150      0.5               330      -0.5
165      0.26              345      -0.26
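These table values are easy to spot-check; for instance, this small Python snippet (an illustrative check, not part of the lecture) rounds sin to the two decimal places used in the table:

```python
import math

# spot-check a few entries from the table of sine values
checks = [(0, 0.0), (30, 0.5), (45, 0.71), (90, 1.0),
          (135, 0.71), (210, -0.5), (270, -1.0), (345, -0.26)]
for deg, expected in checks:
    assert round(math.sin(math.radians(deg)), 2) == expected
```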
10
Trigonometry You need to be aware of three trigonometric functions: sine, cosine and tangent.

Function   Symbol   Definition
sine       sin      sin(α) = o / h
cosine     cos      cos(α) = a / h
tangent    tan      tan(α) = o / a = sin(α) / cos(α)
11
Radians You will often come across angles measured in radians (rad), instead of degrees (deg)... A radian is the angle formed by measuring one radius length along the circumference of a circle. There are 2π radians in a complete circle (2π rad = 360°). deg = rad * 180° / π and rad = deg * π / 180°
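The two conversion formulas can be written directly as code; a minimal illustrative Python sketch (the lecture's own examples use C++):

```python
import math

# deg = rad * 180 / pi ; rad = deg * pi / 180
def deg_to_rad(deg):
    return deg * math.pi / 180.0

def rad_to_deg(rad):
    return rad * 180.0 / math.pi

right_angle = deg_to_rad(90)   # pi/2
full_circle = deg_to_rad(360)  # 2*pi
```

Python's standard library already provides these as math.radians and math.degrees.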
13
Trigonometric Functions Sine, cosine and tangent are mathematical functions. There are other trigonometric functions, but they are rarely used in computer programming. Angles can be greater than 2π or less than -2π. Simply continue the rotation around the circle. You can draw a graph of the functions. The x-axis is the angle and the y-axis is (for example) sin(x). If you graph out the sine function then you create a sine wave.
14
Sine Wave and Cosine Wave Image taken from Wikipedia
15
Tangent Wave Image taken from Wikipedia
16
C++ C++ has functions for sine, cosine and tangent within its libraries. Use the maths or complex libraries. The standard C++ functions use radians, not degrees.

#include <cmath>
using namespace std;

float rad;
float result;
result = sin(rad);
result = cos(rad);
result = tan(rad);
17
PI Written using the Greek letter π. Otherwise use the English transliteration "Pi". π is a mathematical constant, 3.14159 (approximately). π is the ratio of the circumference of a circle to its diameter. This value holds true for any circle, no matter what its size. It is therefore a constant.
18
Circles The constant π is derived from circles, so it is useful to look at these. Circles are a basic shape. Circumference is the length around the circle. Diameter is the width of a circle at its largest extent, i.e. the diameter must go through the centre of the circle. Radius is a line from the centre of the circle to the edge (in any direction).
19
Circles

A tangent is a line drawn perpendicular to (at right angles to) the end point of a radius. You may know these from drawing splines (curves) in 3ds Max. You'll see them when you generate splines in graphics and AI. A chord is a line connecting two points on a circle.
Circles

A segment is the part of a circle cut off by a chord, i.e. a line connecting two points on a circle. A sector is a part of a circle in which the two straight edges are radii.
Circle

Using Cartesian coordinates, with the centre of the circle at (a, b), the length of the radius r and the length of the diameter d = 2r, the circle is the set of points (x, y) satisfying (x - a)² + (y - b)² = r².
Points on a Circle

Imagine a line from the centre of the circle to (x, y); θ is the angle between this line and the x-axis. Then x = a + r cos(θ) and y = b + r sin(θ).
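Using the same names as the slide (centre (a, b), radius r, angle θ between the radius and the x-axis), the point's coordinates follow directly from the sine and cosine definitions. A small Python sketch:

```python
import math

def point_on_circle(a, b, r, theta):
    """Point (x, y) on the circle centred at (a, b) with radius r, at angle theta (radians)."""
    return (a + r * math.cos(theta), b + r * math.sin(theta))

# Angle pi/2 (90 degrees) is straight up from the centre:
x, y = point_on_circle(0.0, 0.0, 2.0, math.pi / 2)
```

Sweeping theta from 0 to 2π with this function traces out the full circle, which is exactly how circles are usually rasterised or sampled in code.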
Trigonometric Relationships

This relationship is for right-angled triangles only: h² = o² + a² (Pythagoras' theorem), where h is the hypotenuse, o the opposite side and a the adjacent side.
Trigonometric Relationships

These relationships are for right-angled triangles only: sin²(α) + cos²(α) = 1 and tan(α) = sin(α) / cos(α).
Properties of triangles This property holds for all triangles and not just right- angled ones. The angles in a triangle can be related to the sides of a triangle.
Properties of triangles

These hold for all triangles: a / sin(A) = b / sin(B) = c / sin(C) (the sine rule) and a² = b² + c² - 2bc cos(A) (the cosine rule).
Inverses

Another bit of terminology and convention you need to be familiar with. An inverse function is a function which goes in the opposite direction. An inverse trigonometric function reverses the original trigonometric function, so that if x = sin(y) then y = arcsin(x). The inverse trigonometric functions are all prefixed with the term "arc": arcsine, arccosine and arctangent. In C++: asin(), acos(), atan().
Inverses

The notation sin⁻¹, cos⁻¹ and tan⁻¹ is common. We know that trigonometric functions can produce the same result with different input values, e.g. sin(75°) and sin(105°) are both 0.97. Therefore an inverse trigonometric function typically has a restricted range so only one value can be generated.
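Python's math module exposes the inverse functions under the same asin/acos/atan names as C++. This sketch shows the restricted range in action:

```python
import math

# sin(75 deg) and sin(105 deg) give the same value...
s75 = math.sin(math.radians(75))
s105 = math.sin(math.radians(105))

# ...so arcsine cannot tell them apart: it always answers in [-90 deg, 90 deg].
a = math.degrees(math.asin(s75))
b = math.degrees(math.asin(s105))
print(round(a, 6), round(b, 6))  # 75.0 75.0
```

Both calls return 75°, because arcsine's range is [-90°, 90°] and 105° lies outside it.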
Inverses

Function   Domain            Range
arcsin     [-1, 1]           [-90°, 90°]
arccos     [-1, 1]           [0°, 180°]
arctan     all real numbers  (-90°, 90°)
http://slideplayer.com/slide/4213159/
Mock exam questions section 3
Tom Tolman
Ranch Hand
Joined: Sep 02, 2004
Posts: 83
posted
Sep 20, 2004 21:22:00
Slowly getting there.. please ask questions
1. Which three of the following statements are true?
1 finalize() is called only once for each object instance
2 The garbage collector frees memory in the heap
3 You can force the garbage collector to operate
4 An object only used by a blocked thread can be cleaned up
5 An object with a reference to it will never be cleaned up by the garbage
collector
6 finalize() may never run
7 A Java application will not run out of memory because of the garbage collector
2. What is the output of the following code?
class cleanMe {
    int i = 0;
    cleanMe() {
        System.out.print("memory");
    }
    void finalize() {
        System.out.print("clean");
    }
}

class Test3_2 {
    public static void main (String [] args) {
        cleanMe c = new cleanMe();
        c = null;
        System.out.print("empty");
    }
}
1 Compile error
2 Run time exception thrown
3 memoryemptyclean
4 memoryclean
5 memory empty
6 The output can not be known for certain beyond that memory will be output
before empty, with clean possibly printed between them or after empty.
3 The correct syntax for requesting garbage collection is:
1 class garbage extends garbageCollector { public void collect(); }
2 system.
3 GarbageCollection();
4 system.deleteUnusedObjects();
5 system.GarbageCollection();
6 system.gc();
7 None of the above
4 Which line of code inserted //Here will free up a single object for garbage
collection?
class C {
    public C i;
}

class MixedUp {
    public static void main (String [] args) {
        C a = new C();
        C b = new C();
        C d = new C();
        C e = null;
        d.i = a;
        b.i = e;
        a.i = b;
        a = null;
        // Here
    }
}
1 a.i = null;
2 b.i = a.i;
3 b = null;
4 a = d.i;
5 d.i = b.i;
6 e = null;
5 At the line // Here, how many objects are eligible for garbage collection?
class C {
    public C i;
}

class Q5 {
    public static void main (String [] args) {
        C a = new C();
        C b = new C();
        C d = new C();
        C e = null;
        a.i = b;
        b.i = d;
        b = d.i;
        d = a;
        a = b;
        // Here
    }
}
1 Compile error
2 Run time exception
3 0 objects available for collection
4 1 object available for collection
5 2 objects available for collection
6 3 objects available for collection
6 At the line // Here, how many objects are eligible for garbage collection?
class C {
    public C i;
}

class Q6 {
    public static void main (String [] args) {
        C a = new C();
        C b = new C();
        C d = new C();
        a.i = b;
        b.i = a;
        a = b = d.i;
        // Here
    }
}
1 Compile error
2 Run time exception
3 0 objects available for collection
4 1 object available for collection
5 2 objects available for collection
6 3 objects available for collection
1 Objective 3.1
1,2,6 are true
1 - True. finalize() is called once for each object instance
2- True. The garbage collector operates on the heap
3 - False. You can request the garbage collector be run, but it may not be.
4 - False. An object used by a blocked thread can not be cleaned up.
5 - False. An object with a reference to it may be referenced by an object
which itself is not referenced, and both can be cleaned up.
6 - True. Finalize may never run
7 - False. A Java application can still run out of memory
2 Objective 3.1
Answer - 1
The method finalize() is declared protected void finalize() in Object; the override here has default (package) access, which reduces visibility, so it can not be compiled. Even if it did compile, the garbage collector does not guarantee when or if it will call finalize, so one can not be sure what the output is. It may or may not print "clean" at any point. (In my tests, it never does, but that is implementation dependent on the JVM.)
3 Objective 3.2
Answer 7 - none of the above. System.gc() is the correct syntax (capitalized System)
4 Objective 3.2
Answer 5 d.i = b.i
At the line // Here the only object which has a single reference to it
(available to be erased) is the object formerly pointed to by a. This is where d.i points. So changing d.i causes nothing to refer to it, so it can be cleaned up.
5 Objective 3.3
Answer 3- 0 objects available for collection. At // Here, every object still has a reference to it.
6 Objective 3.3
Answer - 5 two objects available for collection. The object originally pointed to by a and the object originally pointed to by b are pointing to each other but no other reference exists to them.
Section 3: Garbage Collection
1 State the behavior that is guaranteed by the garbage collection system.
2 Write code that explicitly makes objects eligible for garbage collection.
3 Recognize the point in a piece of source code at which an object becomes
eligible for garbage collection.
Purushoth Thambu
Ranch Hand
Joined: May 24, 2003
Posts: 425
posted
Sep 20, 2004 21:40:00
Without a public class, how can one run the programs? The questions listed above don't contain any public class, so it won't be possible to execute them, let alone have objects gc'ed. Let me know if I am wrong.
Tom Tolman
Ranch Hand
Joined: Sep 02, 2004
Posts: 83
posted
Sep 20, 2004 21:54:00
What makes you think you need a public class to run the programs? I compiled and ran them all. Here is another one for you:
class boo {}

class foo {
    static public void main (String [] args) {
        System.out.println("hello");
    }
}
If you save this as boo.java and compile it, it will compile, but you can't run boo because it does not have a main method.
If you save this as foo.java and compile it, it also will compile, and you CAN run it.
However, I think you are correct in that it is poor programming practice not to declare one of the classes public. The only rules I can find here in Sierra and Bates book is:
There can only be one public class per source file
The name of the file must match the name of the public class
Then they give an example of a class which is not declared public and show it compiles fine.
[ September 20, 2004: Message edited by: Tom Tolman ]
Purushoth Thambu
Ranch Hand
Joined: May 24, 2003
Posts: 425
posted
Sep 20, 2004 22:04:00
It's interesting. I executed the program and it works fine. Here is my question.
When you don't declare a package, the JVM places the classes in an unnamed package. Since the class has default access, the class can be accessed only from within the current package.
I am not sure how the JVM invokes a class with default access. Please let me know how this happens, Tom.
Tom Tolman
Ranch Hand
Joined: Sep 02, 2004
Posts: 83
posted
Sep 20, 2004 22:32:00
As I understand it - and I may be wrong - the JVM looks for the static method
static public void main (String [] args)
as the initiation point of the program. The JVM, again a guess, is using the name of the class given as the name of the file to search for this static method.
You can declare multiple static public void main (String [] values) in many (non public) classes and it doesn't care- it will always invoke the one associated with the class which the file is compiled into.
Of course you can only have one public class. I can't imagine how you would invoke the static public void main on a non public class if another class were public- it wouldn't allow you to compile it with anything but the public class as the name of the file.
I agree. Here's the link:
http://www.coderanch.com/t/246381/java-programmer-SCJP/certification/Mock-exam-questions-section
Hi,
With lots of help I have a script that searches my folders and gdbs for feature classes and renames them. That script works great.
The next step in my workflow will be clipping. I want the clip feature to clip all my feature classes in all folders and gdbs. I am using the rename script as a kind of template.
The problem now is getting a unique output name (the names of the feature classes in the gdbs are the same).
The script I have so far is wrong, but I was hoping it could do the trick if I set it right.
So far the error is in line 10: the join is not working. I do not know if the unique_name will work, but that is my solution for the unique-name problem.
BTW, I was thinking about a script that looks at the extent (clip feature) first, and if it matches then clips, and if not does nothing. I am in over my head already, and I have a script that cleans my gdbs of empty feature classes. So for now this is OK for me.
import os
import arcpy
from arcpy import env
workspace = "D:\\GIS\\Zone1\\Zone1A"
feature_classes = []
clipfeature = "D:\\GIS\\Temp\\clip.gdb\\BorderClip"
outputdatabase = "D:\\GIS\\Output\\clip.gdb"
walk = arcpy.da.Walk(workspace, datatype="FeatureClass", type="All")
fc_output = os.path.join(outputdatabase, filenames)
unique_name = arcpy.CreateUniqueName(fc_output)
for dirpath, dirnames, filenames in walk:
    for filename in filenames:
        feature_classes.append(os.path.join(dirpath, filename))
        # arcpy.AddMessage("clipping: " + feature_classes)
        arcpy.Clip_analysis(feature_classes, clipfeature, ounique_name)
Greetings, Peter
Hi Peter,
It looks like the code will fail at line 11 above in the 'fc_output' variable since 'filenames' is not yet defined.
Try the following:
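The reply's code block was lost in extraction, but the reported error points at the shape of the fix: the output-name lines use `filenames` before the loop defines it, so they belong inside the loop. A pure-Python sketch of that structure (`create_unique_name` is a stand-in I wrote to mimic `arcpy.CreateUniqueName`, and the clip call is replaced by list bookkeeping so the sketch runs without ArcGIS):

```python
import os

def create_unique_name(path, existing):
    # Stand-in for arcpy.CreateUniqueName: append a counter until the name is unused.
    if path not in existing:
        return path
    n = 1
    while path + str(n) in existing:
        n += 1
    return path + str(n)

outputdatabase = "out.gdb"
existing = set()
clipped = []

# Two source geodatabases that contain feature classes with the same names:
walk = [("zone1.gdb", ["roads", "rivers"]), ("zone2.gdb", ["roads"])]
for dirpath, filenames in walk:
    for filename in filenames:
        in_fc = os.path.join(dirpath, filename)
        # Build the output name PER feature class, inside the loop:
        out_fc = create_unique_name(os.path.join(outputdatabase, filename), existing)
        existing.add(out_fc)
        # The real script would call arcpy.Clip_analysis(in_fc, clipfeature, out_fc) here.
        clipped.append(out_fc)
```

The second "roads" gets a suffixed name instead of colliding with the first, which is the unique-name behaviour the question was after.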
https://community.esri.com/thread/120268-clip-from-multiple-folders-and-gdb
|
Opened 4 years ago
Closed 4 years ago
Last modified 4 years ago
#28265 closed Bug (fixed)
Template widget rendering: "Add the renderer argument ..." false positive when using **kwargs
Description
I have a custom widget with the following method:
def render(self, *args, **kwargs):
    self.pre_render_setup(*args, **kwargs)
    try:
        super().render(*args, **kwargs)
    finally:
        self.post_render_cleanup(*args, **kwargs)
Starting with Django 1.11 every use of this widget spews the following warning:
.../venv/lib64/python3.5/site-packages/django/forms/boundfield.py:41: RemovedInDjango21Warning: Add the `renderer` argument to the render() method of <class '...'>. It will be mandatory in Django 2.1.
As I'm using **kwargs, the renderer value is correctly passed on to the render() function. This warning should be silenced.
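The false positive comes from signature inspection: a render() that only declares **kwargs does accept renderer at runtime, but it has no explicit parameter of that name for a check to find. A standalone sketch of the mismatch (this is illustrative, not Django's actual check):

```python
import inspect

def render(self, *args, **kwargs):
    # 'renderer' arrives via **kwargs, so the call works at runtime.
    return kwargs.get("renderer")

params = inspect.signature(render).parameters
has_explicit_renderer = "renderer" in params
accepts_var_kwargs = any(
    p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()
)
print(has_explicit_renderer, accepts_var_kwargs)  # False True
```

A check that only looks for an explicit parameter name reports the method as unsupporting, even though passing renderer=... succeeds; accounting for VAR_KEYWORD parameters avoids the false positive.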
https://code.djangoproject.com/ticket/28265
|
#include "ace/ACE_export.h"
#include "ace/Malloc_Allocator.h"
#include "ace/OS_NS_time.h"
#include "ace/OS_NS_Thread.h"
#include "ace/Timeprobe.inl"
#include "ace/Synch_Traits.h"
#include "ace/Singleton.h"
#include "ace/Timeprobe_T.h"
. Use make probe=1 if you are using the make utility.
. Define ACE_COMPILE_TIMEPROBES in config.h.
. Define ACE_COMPILE_TIMEPROBES in the VC project file.
. Other regular methods will also work.
It is not necessary to define ACE_COMPILE_TIMEPROBES when using time probes, you simply need ACE_ENABLE_TIMEPROBES. You can use the ACE_TIMEPROBE_* macros to program the time probes, and use the ACE_ENABLE_TIMEPROBE to enable the time probes. If you define ACE_ENABLE_TIMEPROBE in your code, but forget to compile ACE with ACE_COMPILE_TIMEPROBES, you will end up with linker errors.
Remember that ACE_COMPILE_TIMEPROBES means that the ACE library will contain code for time probes. This is only useful when compiling ACE. ACE_ENABLE_TIMEPROBES means that the ACE_TIMEPROBE_* macros should spring to life.
http://www.theaceorb.com/1.4a/doxygen/ace/Timeprobe_8h.html
@@ -0,0 +1,148 @@
* Python deployment is a solved problem
It is time I wrote about using Python deployment in [[][scenarios]] of
development, testing, staging and production, something we do on a
grand scale. In 2018 Python is the [[][4th]] largest programming language in
the world and may soon reach the top 3. Dealing with Python
deployment, however, is less than straightforward. If you got bitten
by Python dependencies in non-trivial scenarios, or if you find that
Docker deployment is terribly slow, or if you need to reproduce old
installations faithfully, or if you need to support multiple languages
and compilers or versions thereof: you may want to continue reading
because GNU Guix solves it all elegantly.
In this writeup I focus on isolation of components which is the basis
of a sane environment.
** Let's install some software for development
We are going to install software in Guix using python, python-numpy and python-sqlalchemy
: guix package -p ~/opt/python-dev -i python python-numpy python-sqlalchemy
Here we use a special directory ~/opt/python-dev that contains symlinks
to all Guix references. This is a convenience so we don't have to deal
with these funny looking HASH values ever (see below).
In a shell you can pull in a profile by running
: source ~/opt/python-dev/etc/profile
Now python3 should be in the PATH and modules are available
#+BEGIN_SRC python
python
import sqlalchemy
import numpy
--- do something
#+END_SRC
It just works. Now you can create a second profile
: guix package -p ~/opt/python2-dev -i python2 python2-numpy python2-sqlalchemy
Load that profile and you are set to run Python2, with no interference
with the previous Python3 profile.
You can also use pip or pip3 to install software in your home directory:
: pip3 install xxx
Similarly you can use virtualenv and conda to install Python software
(and more). Guix does not stop you from doing that, though there is a
chance of deployments going pear shape because these tools do not give
you full control over the dependency graph and therefore are not
truely reproducible.
The Guix way is to install software with Guix and manage all versions
through profiles.
** Guix environment
** Multiple Python versions
First of all GNU Guix supports multiple Python interpreters out of the box:
'guix package -A python' lists 1,400 Python related packages including
the interpreters
#+BEGIN_SRC
python 2.7.14 out,tk gnu/packages/python.scm:143:2
python 3.6.3 out,tk gnu/packages/python.scm:343:2
This number reflects a module for each interpreter, so 'guix package -A numpy' lists
python-numpy 1.14.0 out gnu/packages/python.scm:2783:2
python2-numpy 1.8.2 out gnu/packages/python.scm:2865:2
python2-numpy 1.14.0 out gnu/packages/python.scm:2783:2
interestingly there is an older version for python2-numpy in there
because some other package (nmoldyn) requires it. This is already a
hint of how versions can be mixed in. To see the nmoldyn dependencies
you can run
: guix graph nmoldyn|grep label
and see that it has over 400 dependencies! You can draw the graph as an SVG using
: ???
Now we want to see the (computed) store paths with
: guix graph -t bag nmoldyn
which renders over 9,000 package links that this package depends on,
including
: /gnu/store/m3qrcmlrkx6ms2b5f4afidd7h2qrv994-python2-numpy-1.8.2.drv
You can also check that there is a dependency on
python2-cython.
: /gnu/store/9dqb0rfj0ammn321l9bai7bp6pq12xgy-python2-cython-0.27.drv
I mean, this is a really complex deployment! But if you check this way
you can see that other packages contain many many dependencies
too. Both python-scipy and python-matplotlib also have close to 10,000
nodes in the dependency graph. Note that all dependencies are uniquely
identified and tractable.
In other words: GNU Guix gives you full control over the dependency graph.
** These funny HASH values
So, what is this funny hash value you see in the path. When I do a
'ls /gnu/store/*python-3*/bin/python3' on my laptop I get something like:
/gnu/store/isc3rwn2picqha5m4yiqj4rj51la50pm-python-3.4.3/bin/python3
/gnu/store/3rpnwnzfyskkmp6yqdxfxz20gm8d62ki-python-3.5.2/bin/python3
/gnu/store/alk9r3rir93pjmv8im20f8xrvv90219z-python-3.5.2/bin/python3
/gnu/store/3aw9x28la9nh8fzkm665d7fywxzbl15j-python-3.5.3/bin/python3
/gnu/store/3lkypf5wnsnvkaidhw0pv7k3yjfh1r9g-python-3.6.3/bin/python3
/gnu/store/kw5v2fvsnvdfaihrrq73fddv0fwbj7fd-python-3.6.5/bin/python3
/gnu/store/gx7gjwr05gzbh7f2kwbwhbxrh27hvgk8-python-3.6.5/bin/python3
which implies I have multiple versions of python3 on my system and
they do not interfere with each other. The HASH value reflects the
unique source package that was used to compile this version (if the
source changes, the HASH changes). Not only that: if dependencies
change, the HASH changes (so the last two versions may depend on
different versions of libraries, for example), and when the build
changes, the HASH changes too (e.g., with and without SSL support, or
maybe an optimization switch).
If I install a module that depends on python-3.6.5 it will hard
reference, for example,
: /gnu/store/gx7gjwr05gzbh7f2kwbwhbxrh27hvgk8-python-3.6.5/bin/python3.
This way the exact Python gets called with the exact modules that you
specified when creating the dependency graph.
@@ -0,0 +1,9 @@
** A complex deployment
In the context of a web service we deploy a large stack of tools that
invoke other tools. The web servers require multiple versions of
Python (2.4, 2.7 and 3.x series) mostly because of incompatibilities
between Python modules and their bindings. We also have tools in there
that are written in R (some with Python RPy2 bridges), C, C++, D,
Elixir, Fortran - you name it.
https://git.genenetwork.org/pjotrp/guix-notes/commit/c57baaa178f8b46bd983efcb77b18dca4f22fbed
Typedef-equivalent in C#?
- I pass integer IDs to a variety of functions. I want to create a type for each ID so the compiler can check I am passing the right type. In C I would do this with typedefs. E.g.:
typedef int personID;
typedef int addressID;
void DoSomething(personID p, addressID a)
{ ... }
How can I do this in C#? Creating an entire class seems like a lot of overhead.
Wednesday, April 05, 2006 1:27 AM
Apparently, it's the using statement:
using personID = int;
I didn't know that, so thanks :-) I just found it online.
Wednesday, April 05, 2006 1:31 AM
- With the drawback that it is only file scope. I searched further, but didn't find anything new either.
Wednesday, April 05, 2006 1:59 AM
using personID = int;
This gives me a syntax error: Invalid token 'using' in class, struct, or interface member declaration.
Wednesday, April 05, 2006 7:29 PM
Short of defining a struct, why not just name the parameters appropriately (e.g., 'int personID') and let the IDE intellisense be your guide?
Wednesday, April 05, 2006 8:52 PM
- It's not enough to "guide" the programmer to pass the right type, I want the compiler to enforce it.
I just discovered that we need to support null IDs as well. I guess a class is the only way to go.
Is there sample code somewhere for implementing wrappers to CLR types without having to implement every method/property of the base type?
Wednesday, April 05, 2006 9:23 PM
You don't need a class to do this, use nullable value types:
Wednesday, April 05, 2006 10:30 PM
- nullable wha? (google google...)
That is cool, but this has to work in .Net 1.1.
Also, can the compiler enforce these types? The reason for this exercise is so that I can define things like:
AddressID GetAddress( PersonID id ) ...
string GetCity( AddressID id ) ...
And call:
string c = GetCity( GetAddress( new PersonID(...) ) );
Whereas the following will be compiler errors:
string GetCity( new PersonID( ... ) );
int aID = GetAddress( 1024 );
Wednesday, April 05, 2006 11:48 PM
“Forcing” + nullability will require classes as far as I know. If you have “a lot” of these, a small program to generate them is probably in order.
Thursday, April 06, 2006 2:00 PM
First, it's worth pointing out that typedefs in C don't give you type safety. All typedef does is allow you to add a name to an existing type. So in your original example:
typedef int personID;
typedef int addressID;
void DoSomething(personID, addressID a) { ... }
int, personID, and addressID are all the same type and completely interchangeable. Your C compiler won't stop you from passing two personIDs to DoSomething instead of the required personID and addressID, for example. The typedefs give you clarity, but not type safety.
If you're using C# and want type safety, you're going to have to declare new types for PersonID or AddressID. As for concerns about the "overhead" of using classes/structs in this case, the only way you're going to know for certain if the overhead is too high is to implement your program and then profile it.
-Tom Meschter
Software Dev, Visual C# IDE
Thursday, April 06, 2006 5:06 PM
Personally, I have never found a need for
typedef int personID;
typedef int addressID;
People should be writing: DoSomething( int personID). The name of the variable is sufficient; i.e. use the name to describe what it does, and use the type to describe what it is. If one actually does typedef int personID, they are writing code no better than code that casts things to int. Both are type-unsafe, and both represent poor C# code.
I think the demand for typedefs in C# is to hide the complexities of a long type:
typedef Dictionary<...,...> MyMap;
MyMap can be interchanged with Dictionary<...,...> in a function call: no type safety is lost by the use of a typedef in this case.
Brian
Thursday, April 06, 2006 5:14 PM
- Good points Tom. You've convinced me that a class is the way to go, but am not sure what needs to be implemented to allow using the new class. For example in a class like this:
public class LongWrapper
{
private long id;
protected LongWrapper( long id )
{
this.id = id;
}
}
The following code:
LongWrapper w1 = new LongWrapper(1024);
LongWrapper w2 = new LongWrapper(1024);
string doesItWork = (w1 == w2) ? "yes" : "no";
Sets doesItWork to "no". I want two classes with the same internal IDs to be treated as identical. I suppose this is because == compares references, not values, and I need to override ==. But I only found this out after I tried it. If I implement it as a class, what other things do I need to override?
Thursday, April 06, 2006 5:23 PM
I'll kick off the list with:
- ==
- <=
- >=
- >
- <
- !=
- Equals()
- GetHashCode()
I can't think of anything else at the moment.
Thursday, April 06, 2006 6:38 PM
- I agree with making it a value type.
Thursday, April 06, 2006 6:51 PM
Richard - you’re talking about passing an ‘ID’… how about passing the object that contains the ID? Rather than wrapping the fundamental types, can you write your functions like
void Func ( Person p ) { // … }
?
Thursday, April 06, 2006 8:25 PM
- I understand the suggestions to just use value types, not classes, but that is what got us in the situation we now have.
We have a large number of SQL database tables with unique integer primary keys; account, user, staff, provider, supplier, retailer, guest, and on and on. We have an API that wraps and caches database calls on all these tables. You get, for example, account info by passing an account ID, user info by passing a user ID, and so on.
The problem is that all these IDs are integers, and developers are constantly passing the wrong type of ID to the API. I am working on a project to introduce type-safe wrappers to the API so one can only pass (for example) a UserID to functions that require user IDs; other types must cause compiler errors.
Thursday, April 06, 2006 9:17 PM
- There is still no reason why you can't make the wrappers themselves value types instead of classes.
Thursday, April 06, 2006 9:22 PM
- Value types in .Net 1.1 can be null, and can't be assigned to each other?
Thursday, April 06, 2006 9:39 PM
Value Types can't be null, not sure what you mean by 'can't be assigned to each other?'.
Thursday, April 06, 2006 9:55 PM
- We need to be able to set IDs to null. By "can't be assigned to each other" I mean given:
PersonID pID = FunctionThatReturnsAPersonID();
AddressID aID = FunctionThatReturnsAPersonsAddressID( pID );
The following are all compile-time errors:
pID = 1024;
pID = aID;
int pID2 = FunctionThatReturnsAPersonID();
AddressID aID2 = FunctionThatReturnsAPersonsAddressID( aID );
Thursday, April 06, 2006 10:05
- Yes, I agree with that as well. The only time I bother with typedef in C++ or using newtype=existingtype in C# is to save typing lengthy definitions, i.e. if I'm using a lot of System.Collections.Generic.Dictionary<string, string> and a bunch of System.Collections.Generic.KeyValuePair<string, string>. It would help if I were not obsessed with always using complete namespaces in declarations, though.
Tuesday, July 03, 2007 6:14 PM
Personally I don't agree with wrapping a value type inside a class/struct to emulate the typedef.
Compare this:
Dictionary<int, int> personAddress;
Dictionary<personId, addressId> personAddress;
There is no question that the 2nd declaration is much clearer.
Wrapping the int as a struct/class would cripple this function:
personAddress.ContainsKey(new Person(personId)) ---> doesn't work
And lots of other operations, which is simply not worth the effort.
Even though the compiler doesn't enforce typedef safety (which I believe is a shortcoming), if we agree that reading code is actually harder than writing it, having clear code helps a lot in the long run.
Monday, July 28, 2008 3:29 AM
- I also take exception to the lack of a good typedef facility in C# (pun fully intended).
Typedefs in C++ help with producing clear code in an efficient manner by providing a functionality-to-name remapping facility. Implementing them as separate classes / structs carries both a lot of dev time overhead and some potential performance overhead, in that something that was essentially an int suddenly becomes a class with overloaded operators which will no longer map directly to processor instructions.
But perhaps more importantly, typedefs are a key part of forward-looking architecture in C++. Using well-chosen typedefs gives you the opportunity to later on replace certain types throughout your code with your own classes at no cost, if the need arises. Not only can that insulate you from changing requirements, but also changing implementations in third-party code. This is something that refactoring can't do for you - how would you select 100 different uints out of 500 uses of that type? That is at least as good a reason for using a PersonID typedef as semantic clarity ;)
The using keyword only provides a local mimicking of the C++ keyword because of the file scope, which isn't really enough. Ideally the typedef keyword should be added to C#, but with the compiler treating typedef'ed types as first-class citizens, and at least attempting to enforce type correctness appropriately.
Friday, August 15, 2008 10:01 PM
- Being fairly new to some of the Win32 functions, when I came across them I was overwhelmed by some of the types and how to implement them in C#. My solution was to use 'typedef' which of course wasn't initially possible...
Then, I can use them like this:
Of course, this is just the tip of the iceberg, and unfortunately you kinda have to implement your own other functions to allow the use of these types... Still though, handy when you're learning and you can't remember what each type is.
Thursday, September 25, 2008 10:27 AM
You can't assign a value type of one type to that of another type, if that's what you mean.
If you want to support a nullable value type in 1.1, you can always make the value type's default state indicate 'null'. The value type would have a property, such as "IsNull" to test for null. You could put a flag in the value type named "_isNotNull". If this flag is false, which it will be by default, then that means the struct is effectively null. In fact, that's pretty much how nullable value types work, except for some syntactic sugar added in. If 0 is not a valid ID, then you could do without the flag and just test whether the value type's internal integer is 0.
As far as what methods to implement, I would go with ==, !=, Equals, and GetHashCode. Relational operators don't make sense for most IDs. If you implement these types as value types, you won't need to implement ==, !=, Equals, and GetHashCode, as these are implemented with value semantics automatically. Also, if you implement IConvertible, you can use the object as a parameter value in ADO.NET. ADO.NET will use the IConvertible interface to convert it to the type it needs based upon the database type (probably an int in this case).
Also, I don't think implementing ID types like this is a waste of time. I often do similar things in my code. For one thing, even if the parameters are descriptively named, there are many ways for errors that the compiler otherwise would not catch to be introduced. For instance, if the method was refactored to reorder its arguments, it's very possible that one or more calls to that method will not be updated and the error won't be revealed until the program is run.
In reality, the IDs aren't really integers anyway, as one can't meaningfully perform arithmetic operations on them. These values are IDs that are being represented in the database as integers.
Another benefit is that you can add additional validation into the type itself, if necessary. For instance, an SSN can be easily represented as an int or a string. But if you make an SSN type that validates its value when it is created, then when you are passed a value of that type you know you've been given a valid value. You don't have to introduce validation code for every method that accepts an SSN.
Thursday, September 25, 2008 2:50 PM
- Nice post...I have used operator overloading since moving from c to c++ and now c#.
implicit and explicit casts do exactly what the original poster wanted
Thursday, October 02, 2008 3:44 PM
- I appreciate the value of typedefs. I think that the conversion of Windows to 32-bit and then to 64-bit is probably a good example, but I am not sure. This is more of a guess of what was needed for the conversions, but hopefully it is a good example.
When Windows was converted from 16 bits to 32 bits, there were obviously substantial changes needed. I think that one technique used was to create many typedefs. I suspect that the typedefs made the conversion from 32 bits to 64 bits easier. There are many typedefs in the Windows SDK for use in C++ programs that make it easier (for us) to write code that can be compiled for either. For C# and .Net, there are other solutions for 32-bit and 64-bit, but there are many other ways that a C# equivalent to the C++ typedef would likely make programming easier.
Sam Hobbs; see my SimpleSamples.InfoTuesday, November 11, 2008 4:54 PM
- I fully agree.
I've got a similar problem using enums.
We implement a REST based interface and provide a client class library.
So, there are resources and resource collections, one collection for each resource.
We've got a
I'd love to be able to define a
so that I can overload methods to react differently, depending on which type the parameter has.
Ditto for integers:
Lots of Greetings!
VolkerThursday, January 29, 2009 3:16 PM
- After the original post until the last two posts, this thread is a hallmark of coding badness.

Your C/C++ compiler can enforce typedefs. But if you were just assuming it would, it probably doesn't. You'd actually need to define a distinct type to make it do that in all cases by default, which would probably mean defining a base class that implements an integer class, and then subclassing it to your specific indexes with a trait.

template<const char* const SourceTable, const char* const SourceColumn>
class ColumnIndex {
    unsigned int value;
    ...
};

class AddressID : ColumnIndex<"t_address", "address_id"> {
public:
    AddressID(int i) : ColumnIndex(i) {}
    AddressID(const AddressID& rhs) : ColumnIndex(rhs.value) {}
private:
    AddressID(const ColumnIndex& rhs) { /* Not allowed */ }
};

You can ditch the template part: my implementation has an additional template parameter that defines whether or not a particular column is allowed null values, making it a component of the type definition - and I thought that might be a useful hint here.

Either way, with this methodology, the path to C# becomes very natural. The closest thing I've found to typedef in C# is

public class MyTypedef : Dictionary<string, Dictionary<int, List<string>>> {}

Monday, March 15, 2010 8:32 PM
To clarify, I believe you're asking about using typedefs as one type that you may need to change later. Say you need "personID" to be a string later but don't want to go through and change your whole program; just edit one little line. It would cause much overhead to make whole new classes and limit your freedom to use them interchangeably while still being able to edit just one line. Also, it's not about being able to null them either. Maybe C# programmers just don't get this. It's a C thing... I've found that you can just use a "using" statement to define it within your namespace but not inside the classes.
Example:
using System;

namespace myname
{
    using mytype = System.String;

    class program
    {
        static void Main(string[] args)
        {
            mytype a = Console.ReadLine();
            mytype b = Console.ReadLine();
            mytype c = a + b;
            Console.WriteLine(c);
        }
    }
}
Later you may consider changing the mytype to an int instead of a string, or perhaps an abstract data type which you've made. It's good for testing and interchangeability.Wednesday, July 07, 2010 2:32 PM
IMO renaming an "int" via a typedef is pretty silly, be it in C++ or C#. Typically typedefs are used to rename complicated data types into something more friendly to read:
// Imagine you need to return this data type:
//
//    Dictionary<Foo, Dictionary<Baz, IEnumerable<Qux>>>

// You can do a typedef in C# using inheritance:
class Bar : Dictionary<Baz, IEnumerable<Qux>> {}

// And now have a much friendlier name to work with:
//
//    Dictionary<Foo, Bar>

Tuesday, January 04, 2011 6:32 PM
IMO renaming an "int" via a typedef is pretty silly, be it in C++ or C#.
Obviously you don't have as much experience as Microsoft.
Sam Hobbs; see my SimpleSamples.InfoTuesday, January 11, 2011 8:38 AM
// You can do a typedef in C# using inheritance:

That's not a typedef; it's a new type that derives from another.

class Bar : Dictionary<Baz, IEnumerable<Qux>> {}
// And now have a much friendlier name to work with:
// Dictionary<Foo, Bar>

Downcasts may fail. The only means to create an alias name for a type in C#
is afaics the using directive.
using Bar = System.Collections.Generic.Dictionary<Baz, System.Collections.Generic.IEnumerable<Qux>>;
ChrisTuesday, January 11, 2011 2:29 PM
I'm surprised at how many people seem to be against C# supporting a standard C++-style typedef, they can be incredibly useful.
For example:
typedef int MyType;

// Define a 2D array of 'MyType'
MyType myArray[10][10];
A month down the road I decide I need my arrays to hold __int64s. Well that's easy enough, I just change my typedef, and every place using my array that was referring to the data within it by MyType should continue to work normally. typedefs also make it very clear to the programmer what type of data they're dealing with.
Having to create my own class to recreate the comparison, equality, etc... operators for an int is a bit much.
Now, all this being said, I also know that things like macros and typedefs make the compilers job a nightmare, and it's part of the reason why C# is a dream to work with in Visual Studio compared to C++. Still, it is a bit frustrating not having the typedef capability in C# without a lot of extra hassle.Thursday, July 07, 2011 5:52 AM
C/C++ typedefs are actually too weak, and I don't think C# needs to replicate that particular nastiness; that is:
typedef unsigned int _uint32;
typedef _uint32 UINT32;

void takes_uint32(const _uint32& value_) { /* stuff */ }

int main(int argc, const char* const argv[])
{
    int a = 0;           // Note: not unsigned.
    UINT32 b = 1;        // Note: "1" is an unsigned int, and b is a UINT32
    unsigned int c = 2;

    takes_uint32(a);     // Valid :(
    takes_uint32(b);     // Also valid :(
    takes_uint32(c);     // ALSO valid :(
}
During parsing, typedefs are always rolled back to their lowest definition, almost like a macro.
That causes typedef to fail for type safety. But if it was actually strict, it would be really useful for distinguishing between all these other numbers we otherwise just routinely stuff into int, long, etc, fields.
All the attempts at type safety and strictness in C# are blown out the window without this feature, because nobody can be bothered to create micro numeric classes for every single instance of a numeric field with a discrete intention.
A strict typedef mechanism shouldn't put that much of a burden on the IDE; in fact, it could perhaps just be implemented as a shorthand for a class that wraps your aliased entity with appropriate access.
class _uint32 : unsigned int ;
class UINT32 : _uint32 ;
unsigned int a ;
_uint32 b ;
UINT32 c ;
a = 1 ; // valid
b = a ; // invalid.
b = 1 ; // invalid.
b = (_uint32)1 ; // valid
c = 1 ; // invalid.
c = a ; // invalid.
c = b ; // invalid.
c = (UINT32)c ; // valid
Friday, July 08, 2011 6:53 AM
https://social.msdn.microsoft.com/Forums/en-US/019a258e-8d50-4a9f-b0ef-8311208ebb6a/typedefequivalent-in-c?forum=csharplanguage
perlquestion liz

A client of mine has a database with a great number of (MySQL) tables which may contain invalid UTF-8 data. Since this database is filled from very many different sources, it is not (yet) feasible to reliably stop invalidly coded data from entering the database.

Since some XML feeds are generated from this data, it is imperative that UTF-8 is obtained from the database. So, a small script was developed that regularly reads all tables and all text fields to check for correct UTF8ness and remove illegal characters when needed.

It would seem that such a script would be handy to have as a module. Would it make sense to generalize this into a module? And if so, what would be a good namespace for it? Or is there such a beast already and have we reinvented a wheel?

Liz
http://www.perlmonks.org/index.pl?displaytype=xml;node_id=333210
Created on 2008-09-21 18:30 by DenNukem, last changed 2010-05-22 11:33 by georg.brandl. This issue is now closed.
PROBLEM:
Some sites (e.g.) send cookies where the
version is "1" instead of 1. cookielib chokes on it, so none of the
cookies work after that.
PROBLEM CODE:
def _cookie_from_cookie_tuple(self, tup, request):
...
name, value, standard, rest = tup
...
version = standard.get("version", None)
if version is not None: version = int(version) << CRASH HERE!!!
WORKAROUND:
use my own cookie jar, e.g.:
class MyCookieJar(CookieJar):
def _cookie_from_cookie_tuple(self, tup, request):
name, value, standard, rest = tup
standard["version"]= None
CookieJar._cookie_from_cookie_tuple(self, tup, request)
REAL FIX:
do not assume that version is int, keep it as string if it does not
parse as int:
CRASH STACK:
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/cookielib.py:1577:
UserWarning: cookielib bug!
Traceback (most recent call last):
File
"/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/cookielib.py",
line 1575, in make_cookies
parse_ns_headers(ns_hdrs), request)
File
"/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/cookielib.py",
line 1532, in _cookies_from_attrs_set
cookie = self._cookie_from_cookie_tuple(tup, request)
File
"/Users/denis/Documents/svn2/tson/main/sales/src/download_sales.py",
line 28, in _cookie_from_cookie_tuple
CookieJar._cookie_from_cookie_tuple(self, tup, request)
File
"/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/cookielib.py",
line 1451, in _cookie_from_cookie_tuple
if version is not None: version = int(version)
ValueError: invalid literal for int() with base 10: '"1"'
_warn_unhandled_exception()
The sensible fix for this is to strip the quotes off, defaulting to
version 0 on failure to parse the version cookie-attribute. It's not
necessary to retain the original version string.
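That approach can be sketched as a small helper (the name `parse_version` is hypothetical; the real patch modifies `_cookie_from_cookie_tuple` inside cookielib):

```python
def parse_version(raw):
    """Parse a Version cookie-attribute, tolerating surrounding quotes.

    Returns an int, or None when the value does not parse as an integer
    (the caller can then ignore the cookie or default to version 0).
    """
    if raw is None:
        return None
    raw = raw.strip('"')  # some servers send Version="1" instead of Version=1
    try:
        return int(raw)
    except ValueError:
        return None
```

With this, a header value of '"1"' yields 1 instead of raising ValueError.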
By the way, what you posted is a warning rather than a strictly unhandled
exception or "crash" -- it's a bug, but won't cause the program to stop.
And by "none of the cookies work after that", you mean that no cookies
in headers containing the quoted version cookie-attribute are accepted
by the cookiejar.
FWIW, this bug only affects RFC 2109 cookies, not RFC 2965 cookies.
Patch with tests attached. The patch is slightly different to my first
suggestion: in the patch, invalid version values cause the cookie to be
ignored (but double quotes around valid versions are fine).
The bug is present on trunk and on the py3k branch, so I've selected
versions "Python 2.7" and "Python 3.0"
This is a straightforward bug, so I selected 2.5.3 and 2.6 also, to
indicate this is a candidate for backport.
As the patch hasn't been applied to the trunk yet, I'm rejecting it for
2.5.3.
The cookiejar workaround in the first comment did not work for me. The
cookies didn't stick in it. I guess version needs to be set.. this
worked for me:
class ForgivingCookieJar(cookielib.CookieJar):
def _cookie_from_cookie_tuple(self, tup, request):
name, value, standard, rest = tup
version = standard.get("version", None)
if version is not None:
# Some servers add " around the version number; this module
# expects a pure int.
standard["version"] = version.strip('"')
return cookielib.CookieJar._cookie_from_cookie_tuple(self, tup,
request)
Thank you Henrik. The workaround in the first comment caused some
cookies to be handled incorrectly due to ignoring version on all
cookies, but your workaround is nice.
It seems that the patch jjlee supplied should really be applied,
however, to save others from having this problem.
Thanks for the patch! Applied in r81465 f. Merged to 2.x in r81467, will merge to 3k later.
http://bugs.python.org/issue3924
Red Hat Bugzilla – Bug 141000
strtold uses uninitialized array element
Last modified: 2007-11-30 17:10:55 EST
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.5)
Gecko/20041111 Firefox/1.0
Description of problem:
strtold("42.00000000000000000001", NULL)
uses an uninitialized array element. [There are 19 zeroes between the
decimal point and the '1' in the first argument.] The uninit is
fetched at
-----stdlib/strtod_l.c:1505
n0 = num[densize];
-----
with 3==densize. The 'while' loop at line 1507 is skipped (the
condition is false the first and only time), then
-----strtod_l.c:1550
for (i = densize; num[i] == 0 && i >= 0; --i)
;
return round_and_return (retval, exponent - 1, negative,
quot, BITS_PER_MP_LIMB - 1 - used,
more_bits || i >= 0);
-----
The condition "num[i] == 0" uses the uninit value num[densize]. The
result is an unpredictable value in "i", so the round_and_return can
perform a bad computation.
Version-Release number of selected component (if applicable):
glibc-2.3.3-74
How reproducible:
Always
Steps to Reproduce:
1.gcc -std=c99 -g bug.c; valgrind --tool=memcheck ./a.out
-----bug.c
#include <stdlib.h>
#include <stdio.h>
int
main()
{
printf("%Lg\n", strtold("42.00000000000000000001", NULL));
return 0;
}
-----
Actual Results: ==17999== Memcheck, a memory error detector for
x86-linux.
==17999== Copyright (C) 2002-2004, and GNU GPL'd, by Julian Seward et al.
==17999== Using valgrind-2.2.0, a program supervision framework for
x86-linux.
==17999== Copyright (C) 2000-2004, and GNU GPL'd, by Julian Seward et al.
==17999== For more details, rerun with: -v
==17999==
==17999== Conditional jump or move depends on uninitialised value(s)
==17999== at 0x7B568F: __GI_____strtold_l_internal (in
/lib/tls/libc-2.3.3.so)
==17999== by 0x7AF0D6: strtold (in /lib/tls/libc-2.3.3.so)
==17999== by 0x80483CA: main (bug.c:7)
42
Expected Results: No complaints from valgrind.
Additional info:
This testcase appears in stdlib/tst-strtod.c. Please consider running
valgrind on the glibc testcases. (Have someone in the EU run them, if
necessary. [I used my own memory checker, then verified that valgrind
would reproduce the complaint.])
Fedora Core 3 includes valgrind and you can run it anywhere, not just in EU.
I have ran valgrind on glibc testsuite a few months ago and fixed what I saw,
am doing it now again. Hope I won't spend too much time on false positives.
"I have ran valgrind on glibc testsuite a few months ago ..." Thank you! This
bug #141000 and bug #141137 are the ones I found this time. After the batch of
several such bugs in spring 2003 (bug #88052, etc.) I noticed that the following
major OS releases had very few, which suggested the possibility that glibc had
begun to use a memory access checker. So this strtold case was a surprise.
It seems that valgrind is a new package in FC3, and had been barred from
previous OS releases because of US patent issues (or FUD). Did these get
resolved definitively?
This should be fixed in glibc-2.3.3-87 in rawhide.
https://bugzilla.redhat.com/show_bug.cgi?id=141000
Ticket #783 (defect)
Opened 3 months ago
Last modified 2 months ago
File uploads corrupt when using built in SSL
Status: closed (fixed)
Here is POC code:
import shutil
import os

localDir = os.path.dirname(__file__)
absDir = os.path.join(os.getcwd(), localDir)

import cherrypy
from cherrypy.lib import cptools

cherrypy.config.update({
    'global': {
        # 'server.ssl_certificate': 'server.pem',
        # 'server.ssl_private_key': 'server.pem',
    }
})

class PaperMill(object):
    def index(self):
        return """
        <html><head></head><body>
        <form id='upload_form' method='post' action='upload'
              enctype='multipart/form-data'>
        Filename: <input id="filename_input" type="file" name="myFile"/>
        <input type="submit" value="Upload!">
        </form>
        </body></html>"""
    index.exposed = True

    def upload(self, myFile):
        out = """<html>
        <body>
        myFile length: %s<br>
        myFile filename: %s<br>
        myFile mime-type: %s<br>
        </body>
        </html>"""
        size = 0
        f = open("/tmp/fileupload", "wb")
        while True:
            data = myFile.file.read(1024 * 8)  # Read blocks of 8KB at a time
            if not data:
                break
            f.write(data)
            size += len(data)
        f.close()
        shutil.move("/tmp/fileupload", absDir + myFile.filename)
        return out % (size, myFile.filename, myFile.type)
    upload.exposed = True

if __name__ == "__main__":
    cherrypy.quickstart(PaperMill(), "/")
Uncomment the ssl lines and uploads will become corrupt.
Attachments
Change History
02/05/08 04:07:07: Modified by Stonekeeper
02/20/08 14:02:19: Modified by nzoschke@gmail.com
- attachment test_http_post_multipart.2.patch added.
patch for test_http.py that adds a test that exposes the ssl bug
02/20/08 14:10:50: Modified by nzoschke@gmail.com
I've been hit by this bug too. Very small posted files do not exhibit this behavior, but large ones do. The resulting file buffers are corrupted differently every time.
I attached a patch for test_http.py that exposes this error by posting a large amount of data (26 megs). I didn't do much testing to find the lower limit yet.
Occasionally the test does pass with SSL, which is curious and probably means a better test case is needed.
To see the behavior, compare:
python test.py --test_http
python test.py --ssl --test_http
02/20/08 14:11:47: Modified by nzoschke@gmail.com
Should also add that test_sockets fails under ssl too...
02/20/08 14:36:03: Modified by fumanchu
- owner changed from rdelon to fumanchu.
- status changed from new to assigned.
- milestone set to 3.1.
03/09/08 00:42:04: Modified by fumanchu
03/12/08 01:47:20: Modified by fumanchu
- status changed from assigned to closed.
- resolution set to fixed.
I've done some more research on this problem. It seems that the uploaded files have about 20-ish% correct data then they become corrupt. I tried 2 different 7zip files and checked their size, difference between sizes in bytes and percentages and differences in where the file becomes corrupt. I can find no correlation. However, corruption always seems to occur on a 4 byte boundary.
http://cherrypy.org/ticket/783
#include <HeadPointerMC.h>
List of all members.
Definition at line 12 of file HeadPointerMC.h.
Constructor, defaults to all joints to current value in state (i.e. calls takeSnapshot() automatically).
Definition at line 12 of file HeadPointerMC.cc.
[inline, virtual]
Destructor.
Definition at line 18 of file HeadPointerMC.h.
true
Sets hold - if this is set to false, it will allow a persistent motion to behave the same as a pruned motion, without being pruned.
Definition at line 21 of file HeadPointerMC.h.
return hold
Definition at line 22 of file HeadPointerMC.h.
sets tolerance
Definition at line 24 of file HeadPointerMC.h.
returns tolerance
Definition at line 25 of file HeadPointerMC.h.
sets timeout
Definition at line 26 of file HeadPointerMC.h.
returns timeout
Definition at line 27 of file HeadPointerMC.h.
[virtual]
sets the target to last sent commands, and dirty to false; essentially freezes motion in place
This is very similar to takeSnapshot(), but will do the "right thing" (retain current position) when motion blending is involved. A status event will be generated if/when the joints reach the currently commanded position. Probably should use freezeMotion() if you want to stop a motion underway, but takeSnapshot() if you want to reset/initialize to the current joint positions.
Definition at line 22 of file HeadPointerMC.cc.
sets the target joint positions to current sensor values
Similar to freezeMotion() when a motion is underway, but only if no other MotionCommands are using neck joints. A status event will not be generated unless a motion was already underway. Probably should use freezeMotion() if you want to stop a motion underway, but takeSnapshot() if you want to reset/initialize to the current joint positions.
Definition at line 30 of file HeadPointerMC.cc.
Referenced by HeadPointerMC().
[inline]
Sets maxSpeed to 0 (no maximum).
Definition at line 43 of file HeadPointerMC.h.
1
Restores maxSpeed to default settings from Config::Motion_Config.
Definition at line 38 of file HeadPointerMC.cc.
Sets maxSpeed in rad/sec.
Definition at line 52 of file HeadPointerMC.h.
Returns maxSpeed in rad/sec.
Definition at line 57 of file HeadPointerMC.h.
Sets the weight values for all the neck joints.
Definition at line 59 of file HeadPointerMC.cc.
Request a set of neck joint values.
Originally this corresponded directly to the neck joints of the Aibo; however, on other platforms it will use capabilities mapping to try to set corresponding joints if available. (We're not doing kinematics here, just trying to set joint values directly.) If you want a more generic/abstract interface, use lookAtPoint()/lookInDirection().
Note that for a "pan-tilt" camera, you actually want to set the last two parameters, not the first two!
Definition at line 67 of file HeadPointerMC.cc.
Directly set a single neck joint value.
Definition at line 85 of file HeadPointerMC.h.
Referenced by lookAtPoint().
Returns the target value of joint i. Use this if you want to know the current commanded joint value; To get the current joint position, look in WorldState::outputs.
Definition at line 96 of file HeadPointerMC.h.
Centers the camera on a point in space, attempting to keep the camera as far away from the point as possible.
Point should be relative to the body reference frame (see BaseFrameOffset). Returns true if the target is reachable.
Definition at line 91 of file HeadPointerMC.cc.
Centers the camera on a point in space, attempting to move the camera d millimeters away from the point.
Definition at line 127 of file HeadPointerMC.cc.
Points the camera in a given direction.
Vector should be relative to the body reference frame (see BaseFrameOffset). Returns true if the target is reachable.
Definition at line 142 of file HeadPointerMC.cc.
Updates where the head is looking.
Implements MotionCommand.
Definition at line 158 of file HeadPointerMC.cc.
true if a change has been made since the last updateJointCmds() and we're active
Definition at line 134 of file HeadPointerMC.h.
Alive while target is not reached.
Definition at line 197 of file HeadPointerMC.cc.
marks this as dirty each time it is added
Reimplemented from MotionCommand.
Definition at line 136 of file HeadPointerMC.h.
[inline, protected]
checks if target point or direction is actually reachable
Definition at line 141 of file HeadPointerMC.h.
[inline, static, protected]
puts x in the range (-pi,pi)
Definition at line 149 of file HeadPointerMC.h.
Referenced by clipAngularRange().
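The member names are stripped from this extract, but a range-reduction helper matching that description can be sketched as follows (an illustrative assumption, not Tekkotsu's actual implementation):

```cpp
#include <cmath>

// Wrap an angle in radians into the range (-pi, pi].
double normalizeAngle(double x) {
    x = std::fmod(x + M_PI, 2.0 * M_PI);  // now in (-2*pi, 2*pi)
    if (x < 0)
        x += 2.0 * M_PI;                  // shift negatives into [0, 2*pi)
    return x - M_PI;                      // back to (-pi, pi]
}
```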
if x is outside of the range of joint i, it is set to either the min or the max, whichever is closer
Definition at line 152 of file HeadPointerMC.h.
Referenced by setJoints(), and setJointValue().
[protected]
if targetReached, reassigns headCmds from MotionManager::getOutputCmd(), then sets dirty to true and targetReached to false
should be called each time a joint value gets modified in case the head isn't where it's supposed to be, it won't jerk around
MotionManager::getOutputCmd() is called instead of WorldState::outputs[] because if this is being called rapidly (i.e. after every sensor reading) using the sensor values will cause problems with very slow acceleration due to sensor lag continually resetting the current position. Using the last value sent by the MotionManager fixes this.
Definition at line 217 of file HeadPointerMC.cc.
Referenced by DoStart(), setJoints(), setJointValue(), and setWeight().
[static, protected]
Makes sure i is in the range (0,NumHeadJoints). If it is instead in the range (HeadOffset,HeadOffset+NumHeadJoints), output a warning and reset i to the obviously intended value.
Definition at line 226 of file HeadPointerMC.cc.
Referenced by getJointValue(), and setJointValue().
true if a change has been made since last call to updateJointCmds()
Definition at line 183 of file HeadPointerMC.h.
Referenced by freezeMotion(), isDirty(), and takeSnapshot().
if set to true, the posture will be kept active; otherwise joints will be marked unused after each posture is achieved (as if the posture was pruned); set through setHold()
Definition at line 184 of file HeadPointerMC.h.
Referenced by getHold(), and setHold().
when autopruning, if the maxdiff() of this posture and the robot's current position is below this value, isAlive() will be false, defaults to 0.05 radian (2.86 degree error)
Definition at line 185 of file HeadPointerMC.h.
Referenced by getTolerance(), and setTolerance().
false if the head is still moving towards its target
Definition at line 186 of file HeadPointerMC.h.
Referenced by isDirty().
time at which the targetReached flag was set
Definition at line 187 of file HeadPointerMC.h.
number of milliseconds to wait before giving up on a target that should have already been reached, a value of -1U will try forever
Definition at line 188 of file HeadPointerMC.h.
Referenced by getTimeout(), and setTimeout().
stores the target value of each joint
Definition at line 189 of file HeadPointerMC.h.
Referenced by freezeMotion(), getJointValue(), setJoints(), setJointValue(), and takeSnapshot().
stores the last values we sent from updateOutputs
Definition at line 190 of file HeadPointerMC.h.
Referenced by freezeMotion(), setWeight(), and takeSnapshot().
initialized from Config::motion_config, but can be overridden by setMaxSpeed(); rad per frame
Definition at line 191 of file HeadPointerMC.h.
Referenced by defaultMaxSpeed(), getMaxSpeed(), noMaxSpeed(), and setMaxSpeed().
provides kinematics computations, there's a small leak and safety issue here because ROBOOP::Robot contains pointers, and those pointers typically aren't freed because MotionCommand destructor isn't called when detaching shared region
Definition at line 192 of file HeadPointerMC.h.
Referenced by isReachable(), and lookAtPoint().
http://www.tekkotsu.org/dox/classHeadPointerMC.html
On 10/29/14, 9:58 PM, Marshall Giguere wrote:

> Bash Version: 4.2
> Patch Level: 25
> Release Status: release
>
> Description:
> Submitting a shell script job to "at" either via pipe, or
> command line fails.

It fails if you have exported functions in your environment. This is a bug in `at': specifically, its assumption that every string that appears in the environment is a valid shell identifier. This was never a valid assumption to make, but now that exported functions use characters outside the identifier namespace it's become more apparent. The problem is amplified by the fact that dash is reacting poorly to whatever `at' is trying to do to format the environment strings as assignment statements. I believe that the upstream `at' developer is working on a fix.
https://lists.gnu.org/archive/html/bug-bash/2014-10/msg00232.html
A bytearray work-alike using a gap buffer for storage
Project Description
A Python bytearray work alike which uses a gap buffer as underlying storage. It is a data structure optimised for locally coherent insertions and deletions. It is the usual data structure in text editors.
A utility class, codedstring, is provided which gives a string-like view on a bytegapbuffer and transparently encodes and decodes Unicode strings. It provides efficient common-case indexing.
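For readers unfamiliar with the data structure, here is a toy sketch of the gap-buffer idea (illustrative only; `ToyGapBuffer` is not this package's implementation):

```python
class ToyGapBuffer:
    """Minimal gap buffer over bytes: text is stored with a movable gap
    at the edit point, so insertions at or near the gap are cheap."""

    def __init__(self, data=b'', gap_size=16):
        self._buf = bytearray(data) + bytearray(gap_size)
        self._gap_start = len(data)
        self._gap_end = len(self._buf)

    def _move_gap(self, pos):
        # Slide the gap so it begins at `pos`; cost is proportional to the
        # distance moved, which is why locally coherent edits stay cheap.
        if pos < self._gap_start:
            n = self._gap_start - pos
            self._buf[self._gap_end - n:self._gap_end] = self._buf[pos:self._gap_start]
            self._gap_start, self._gap_end = pos, self._gap_end - n
        elif pos > self._gap_start:
            n = pos - self._gap_start
            self._buf[self._gap_start:self._gap_start + n] = \
                self._buf[self._gap_end:self._gap_end + n]
            self._gap_start += n
            self._gap_end += n

    def insert(self, pos, byte):
        self._move_gap(pos)
        if self._gap_start == self._gap_end:  # gap exhausted: grow it
            self._buf[self._gap_end:self._gap_end] = bytearray(16)
            self._gap_end += 16
        self._buf[self._gap_start] = byte
        self._gap_start += 1

    def __bytes__(self):
        # The logical contents are everything outside the gap.
        return bytes(self._buf[:self._gap_start] + self._buf[self._gap_end:])
```

This mirrors the usage example below: inserting at adjacent positions never has to shift the whole buffer, only the bytes between the old and new gap positions.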
Installation
Installation is via pip. To install the latest release version:
$ pip install bytegapbuffer
To install the current development version from git:
$ pip install git+
Usage
The bytegapbuffer collection aims to behave just like a bytearray. For example:
from bytegapbuffer import bytegapbuffer a = bytegapbuffer(b'hello') a.insert(3, 65) a.insert(4, 66) assert a == b'helABlo'
Status
This project is used as part of a personal project of mine and, as such, implements just enough of the sequence, mutable sequence and bytearray interface for my needs. Pull requests adding missing functionality are welcome. Please also add a test for the functionality.
Current features:
- Retrieving element(s) via [i], [i:j] and [i:j:k] style slicing.
- Deletion of element(s) via [i], [i:j] style slicing.
- Insertion/replacement of element(s) via [i], [i:j] style slicing.
- Insertion of element via insert().
- Length query via len().
- Sub-sequence search via index() and find() methods.
- Equality (and inequality) testing.
- Iteration over contents.
- Efficient codedstring wrapper allowing bytegapbuffer to be used as underlying storage in a text editor.
All of the above should work exactly as the bytearray object does. (This is tested in the test suite.) Additional non-bytearray features:
- Deep copying via copy() method.
Test suite
The test suite may be run via the tox utility. The Travis builds are set up to run the test suite on the latest released Python 2 and Python 3 versions.
https://pypi.org/project/bytegapbuffer/
Given a binary tree, print nodes of extreme corners of each level but in alternate order.
Example:
For above tree, the output can be
1 2 7 8 31
– print rightmost node of 1st level
– print leftmost node of 2nd level
– print rightmost node of 3rd level
– print leftmost node of 4th level
– print rightmost node of 5th level
OR
1 3 4 15 16
– print leftmost node of 1st level
– print rightmost node of 2nd level
– print leftmost node of 3rd level
– print rightmost node of 4th level
– print leftmost node of 5th level
The idea is to traverse tree level by level. For each level, we count number of nodes in it and print its leftmost or the rightmost node based on value of a Boolean flag. We dequeue all nodes of current level and enqueue all nodes of next level and invert value of Boolean flag when switching levels.
Below is a C++ implementation of the above idea:

/* C++ program to print nodes of extreme corners
   of each level in alternate order */
#include <bits/stdc++.h>
using namespace std;

/* A binary tree node has data, pointer to left child
   and a pointer to right child */
struct Node
{
    int data;
    Node *left, *right;
};

/* Helper function that allocates a new node with the
   given data and NULL left and right pointers. */
Node* newNode(int data)
{
    Node* node = new Node;
    node->data = data;
    node->right = node->left = NULL;
    return node;
}

/* Function to print nodes of extreme corners
   of each level in alternate order */
void printExtremeNodes(Node* root)
{
    if (root == NULL)
        return;

    // Create a queue and enqueue left and right
    // children of root
    queue<Node*> q;
    q.push(root);

    // flag to indicate whether leftmost node or
    // the rightmost node has to be printed
    bool flag = false;

    while (!q.empty())
    {
        // nodeCount indicates number of nodes
        // at current level.
        int nodeCount = q.size();
        int n = nodeCount;

        // Dequeue all nodes of current level
        // and enqueue all nodes of next level
        while (n--)
        {
            Node* curr = q.front();

            // Enqueue left child
            if (curr->left)
                q.push(curr->left);

            // Enqueue right child
            if (curr->right)
                q.push(curr->right);

            // Dequeue node
            q.pop();

            // if flag is true, print leftmost node
            if (flag && n == nodeCount - 1)
                cout << curr->data << " ";

            // if flag is false, print rightmost node
            if (!flag && n == 0)
                cout << curr->data << " ";
        }

        // invert flag for next level
        flag = !flag;
    }
}

/* Driver program to test above functions */
int main()
{
    // Binary Tree of Height 4
    Node* root = newNode(1);
    root->left = newNode(2);
    root->right = newNode(3);
    root->left->left = newNode(4);
    root->left->right = newNode(5);
    root->right->right = newNode(7);
    root->left->left->left = newNode(8);
    root->left->left->right = newNode(9);
    root->left->right->left = newNode(10);
    root->left->right->right = newNode(11);
    root->right->right->left = newNode(14);
    root->right->right->right = newNode(15);
    root->left->left->left->left = newNode(16);
    root->left->left->left->right = newNode(17);
    root->right->right->right->right = newNode(31);

    printExtremeNodes(root);
    return 0;
}
Output:
1 2 7 8 31
Time complexity of the above solution is O(n), where n is the total number of nodes in the given binary tree.
Exercise: Print nodes of extreme corners of each level from bottom to top in alternate order.
|
http://126kr.com/article/49daj2zu2y1
|
CC-MAIN-2017-09
|
refinedweb
| 526
| 63.22
|
RT73 Wireless
Kernel 2.6.18
Kernel 2.6.19
The latest arch kernel (2.6.19) broke the Ralink (and the rt2x00) drivers. The drivers won't compile without modifying the source (already done in rt2x00, but we must do it to the Ralink source ourselves). In addition, iwpriv must be used when manually setting up the connection.
Changes to rtmp_main.c:
This section occurs twice in the file (one says >= 11, the other >= 12, change them both to >= 12):
#if WIRELESS_EXT >= 11
    netdev->get_wireless_stats = rt73_get_wireless_stats;
    netdev->wireless_handlers = (struct iw_handler_def *) &rt73_iw_handler_def;
#endif
Change both occurrences to match this:
#if WIRELESS_EXT >= 12
#if WIRELESS_EXT < 17
    netdev->get_wireless_stats = rt73_get_wireless_stats;
#endif
    netdev->wireless_handlers = (struct iw_handler_def *) &rt73_iw_handler_def;
#endif
After the changes to rtmp_main.c, the module should compile normally:
make
After the build is finished, copy new module to kernel tree
cp rt73.ko /lib/modules/2.6.19-ARCH/kernel/drivers/net/wireless/
Install module:
insmod /lib/modules/2.6.19-ARCH/kernel/drivers/net/wireless/rt73.ko
The steps above apply to kernel 2.6.19. Once the driver has been built and installed successfully, the remaining configuration is the same whether you have kernel 2.6.18 or 2.6.19.
Edit the rt73sta.dat file:
nano /etc/Wireless/RT73STA/rt73sta.dat
Replace placeholders in the file with your wireless network settings, using the README in the tarball as a guide, or using the Ralink RaConfig utility (TODO: describe RaConfig setup).
Edit /etc/rc.conf to initiate dhcp on rausb0 at bootup:
# /etc/rc.conf
rausb0="dhcp"
INTERFACES=(lo rausb0)
This is all you have to put in /etc/rc.conf. The rest of the settings will automatically be read from rt73sta.dat on bootup.
|
https://wiki.archlinux.org/index.php?title=RT73_Wireless&oldid=20730
|
CC-MAIN-2018-26
|
refinedweb
| 274
| 61.02
|
February 12, 2019 · 2 min read
In a previous post, we looked at creating a Resolver. The resolver will route the user if the data “resolves” but not if an error occurs and the data is not available. This is perfectly fine for some use cases but what if you want the error to be handled by the component that’s associated with the route the user requested?
I recently ran into this exact scenario. The API returned a 409 Conflict if some upstream event hadn’t occurred yet. This error needed to be handled in the component so that we could display a message to the user letting them know what was wrong. Any errors that were not a 409 should be thrown and handled elsewhere.
If we have the following resolver, we can catch any errors that occur using the RxJs catchError operator:
import { Resolve, ActivatedRouteSnapshot, RouterStateSnapshot } from '@angular/router'
import { Injectable } from '@angular/core'
import { Observable } from 'rxjs'
import { map, catchError } from 'rxjs/operators'

@Injectable()
export class ProductsResolver implements Resolve<any> {
  // the service and method names here are illustrative
  constructor(private productsService: ProductsService) {}

  resolve(route: ActivatedRouteSnapshot, state: RouterStateSnapshot): Observable<any> {
    return this.productsService.getProducts().pipe(
      map(res => res),
      catchError(error => {
        // do something with the error
      })
    )
  }
}
If we want to pass through any 409 errors to the component we can check the error status:
import { of } from 'rxjs'
...
catchError(error => {
  if (error.status === 409) {
    return of({ error: error })
  }
})
We use the RxJS function of which converts our error object to an observable.
We can now handle the error in our component but it’s no longer an error, it is a property of the ActivatedRoute data:
...
displayError: boolean = false;
errorMessage: string;

ngOnInit() {
  this.route.data.subscribe(data => {
    if (data.error) {
      this.displayError = true;
      this.errorMessage = data.error.message;
    } else {
      this.products = data.products;
    }
  });
}
If we have other types of errors that we don’t want to pass to the component we can throw those errors in the resolver using the throwError function from RxJS:
import { of, throwError } from 'rxjs'
...
catchError(error => {
  if (error.status === 409) {
    return of({ error: error })
  } else {
    return throwError(error)
  }
})
Joshua Colvin is a software developer specializing in JavaScript. He lives with his wife and two kids in Michigan.
|
https://www.joshuacolvin.net/angular-resolver-error-handling/
|
CC-MAIN-2020-05
|
refinedweb
| 335
| 55.64
|
[Today's Post has been contributed by Brian Huneycutt].
I've spent many years investigating / troubleshooting WMI related issues – especially as they relate to SMS / Configuration Manager. Based on that experience I've compiled a few tips and general observations for the community. This list is by no means comprehensive.
Assumptions are made regarding a basic understanding of WMI, such as its general structure, terms, and usage of the WBEMTest tool.
Can lost WMI data be recovered?
Probably, but that's never a good state to be in. Therefore, I say avoid this operation whenever possible.
On the flip side, I certainly recognize there is a tradeoff between operational needs and individual investigations.
Over time some customers have seen that rebuilding the repository makes a problem seem to go away quickly. Typically this also comes with a loss of ability to find root cause, could mask other problems, and may not actually solve anything long term. On the whole I strongly recommend against deleting the repository folder as a means to resolving WMI issues.
What can I do other than rebuild the repository?
One low risk, potentially high gain operation that can be performed is to recompile MOF files, and register component DLL's associated with WMI operations. If an important class or component registration needed for WMI operation was somehow removed you can put the needed structure back.
These steps can be automated easily, but aren't generally recommended on a large scale as they too can mask issues. This is just one more option to try short of rebuilding the repository. There are variations of the steps below available between XP and Vista, but this most basic version should work for either.
Starting in Windows 7, there may be MOF files that contain uninstall information, such as the Mgmtprovideruninstall.mof file that ships with Windows Management Framework 3.0.
If such a file is included in the list and compiled it will have the unintentional effect of unregistering a provider or deleting a class. Check the contents of MOF files for pragma delete entries such as “#pragma deleteclass” or “#pragma deleteinstance”. If needed, temporarily move those files to another location before recompiling all other MOFs.
For the latest on troubleshooting WMI, we recommend you contact Microsoft Support.
Where can I find the log files and error codes?
Start here: the WMI Troubleshooting page on MSDN. This page serves as a jumping off point for many important details such as logging and tracing information, WMI Error constants, and more.
Common Errors
These errors are referenced in greater detail on the WMI Troubleshooting page and subsequent links but I still wanted to mention them here.
WBEM_E_NOT_FOUND – 0x80041002
The Not Found message was very common in XP log files, a little less so in Vista and up. Without context this one isn't very helpful, as you have no way of knowing if the requested data is supposed to be present. Simply put, it may not always be a bad thing.
Access Denied
Echoing the troubleshooting page, if you're seeing 0x80070005 (E_ACCESS_DENIED) when connecting you're being turned away by DCOM, not WMI. Similarly the 0x800706BA (RPC_S_SERVER_UNAVAILABLE) means you're being turned away before you've talked to DCOM or WMI. A Network capture is often the quickest way to get to make progress for the RPC error.
There's also a bit more info in the Remoting and Security blog entry from the WMI team.
WBEM_E_PROVIDER_LOAD_FAILURE – 0x80041013
The Provider Event Troubleshooting Classes are a great resource, but may be a little overwhelming. The MSFT_WmiProvider_LoadOperationFailureEvent class is one that I've found useful quite often. Most Provider Load Failures I've encountered have been the result of bad component registration (either in the registry or WMI), or permissions related.
WBEM_E_INVALID_CLASS – 0x80041010 / WBEM_E_INVALID_NAMESPACE – 0x8004100E
Similar to the Not Found error, context is important here. Some operation was being performed against a class / namespace that isn't present on the target machine.
Is that bad? It depends on the situation. It may be perfectly normal. If investigation tells you it's not, the class or namespace can usually be recovered by recompiling the appropriate MOF file.
Generic Failure – 0x80004005
Among the least helpful errors, and not WMI specific. I only bring it up here as many people see this and mistakenly think it's an Access Denied message given the 5 at the end. Remember access denied is 0x80070005
WMIDiag
An invaluable tool for diagnosing WMI issues, even if it's a little dated.
It has many configuration options available and can be deployed via Configuration Manager. One of the more helpful features is the report that is generated at the end. It contains details on how to correct many common issues that are found when running the tool.
Tracking resource usage of WMI
By default the core WMI service lives in the shared Network Services instance of svchost.exe. This can make debugging or identifying resource issues a little challenging. As a general rule of thumb, I recommend to customers that they keep WMI separated into its own instance of svchost.
On XP/Server 2003 this can be accomplished automatically via the following case sensitive command:
RUNDLL32.EXE %Systemroot%\SYSTEM32\WBEM\WMISVC.DLL,MoveToAlone
For Vista and up this is done with
Where is the provider?
The WMI Provider host process (wmiprvse.exe) will create one instance for each different hosting (security) model defined. To find out which instance by PID a given provider resides in (such as smsprov.dll) you can simply run
Tasklist /m smsprov.dll
It is possible to isolate a provider into its own instance by changing the hosting model.
This is fairly rare and not necessarily a best practice, but if you're running into resource or performance problems that could be traced back to multiple providers running in the same instance, it may be worth investigating a split – at least for the purpose of issue isolation. The Provider Hosting and Security page has more information.
WMI configuration
There are quite a few options available for tuning WMI performance. Two that I'll cover here are important for Configuration Manager Site (provider) servers – the MemoryPerHost and HandlesPerHost values that can be found in the __ProviderHostQuotaConfiguration class in the root namespace.
First a little background:
For each instance of WmiPrvSE.exe that is running, the quota values above dictate how much virtual memory and how many handles that instance may consume. When exceeding that limit the process may terminate, or in some rare cases may hang.
As more providers for various applications are being used on server machines, and Configuration Manager environments get larger, it's expected to see increased resource usage with our provider.
Prior to Vista the limits were 128MB (134217728 bytes) and 4096 handles.
In a large Configuration Manager environment (in terms of number of objects that exist, such as collections, advertisements, AdminUI connections, as well as clients) you could definitely exceed those limits.
I recommend to all my customers quadrupling the MemoryPerHost value to 512MB (536870912 is the value to enter).
512 is even the default value now on Vista and above, further indication that a larger limit was needed.
If performance monitoring tools indicate that you're hitting or exceeding the 4096 handle limit, you can increase that as well but be a little more conservative since handles are a shared resource. It could likely be doubled but I usually recommend 5120, again if monitoring indicates an increase is needed.
It's important to remember that increased memory usage alone is not an indication of a problem state, or a leak – it's quite likely just normal behavior. In other words, many objects (and perhaps many objects from multiple remote connections) mean more resources required to handle everything.
If you see the Process ID (PID) of wmiprvse.exe that hosts smsprov.dll changing frequently, or multiple instances of smsprov.dll loaded you should definitely increase this value. Some customers have reported an increase helping with Administrator Console performance as well.
WMI repository stability fix
Lastly, if you're still on XP SP2 or Server 2003 SP1 or SP2 you should apply this fix to help further stabilize the repository files. Note it won't correct a system already having problems, but it makes for good preventative maintenance. Separate versions of the fix are available for XP and for Server 2003.
Thank you,
This posting is provided "AS IS" with no warranties and confers no rights.
|
https://blogs.technet.microsoft.com/enterprisemobility/2009/05/08/wmi-troubleshooting-tips/
|
CC-MAIN-2017-34
|
refinedweb
| 1,406
| 54.93
|
sub Win_OS_Type {
    return unless $^O =~ /win32|dos/i;    # is it a MS box?
    # It _should_ have Win32 unless something is really weird
    return unless eval('require Win32');
    # Use the standard API call to determine the version
    my ( undef, $major, $minor, $build, $id ) = Win32::GetOSVersion;
    return "win32s" unless $id;    # If id==0 then its a win32s box.
    my $os = {    # Magic numbers from MSDN documentation of GetOSVersion
        1 => {
            0  => "95",
            10 => "98",
            90 => "Me"
        },
        2 => {
            0  => "2000",
            1  => "XP/.Net",
            51 => "NT3.51"
        }
    }->{$id}->{$minor};
    # This _really_ shouldnt happen. At least not for quite a while
    die "$id:$major:$minor Has no name of record!"
        unless defined $os;
    # Unfortunately the logic used for the various versions isnt so clever..
    # so we have to handle an outside case.
    return ( $os eq "2000" && $major != 5 ) ? "NT4" : $os;
}
Please install ActiveState Build 633, it comes with a function called Win32::GetOSName().
|
http://www.perlmonks.org/?node_id=144660
|
CC-MAIN-2017-09
|
refinedweb
| 148
| 74.49
|
Introduction :
We can find the factorial of a number using a loop. The factorial of a number is the product of all numbers from 1 to that number. Finding out the factorial using a loop like for or while loop is easy. In this post, I will show you how to find the factorial of a user given number in C++ using a loop.
Example 1 : C++ program to find factorial using a for loop :
To find the factorial, we will run one for loop from 1 to that number. On each iteration, we will multiply the numbers to get the final factorial. Below is the complete program :
#include <iostream>
using namespace std;

int main()
{
    int number, factorial = 1;

    cout << "Enter a number :" << endl;
    cin >> number;

    for (int i = 1; i <= number; i++) {
        factorial *= i;
    }

    cout << "Factorial : " << factorial << endl;
    return 0;
}
Here, the program asks the user to enter a number, reads it, and stores it in the variable number. We run one for loop from i = 1 to i = number. We have also initialized another variable, factorial, as 1; this variable holds the final factorial. Inside the loop, on each iteration, we multiply the current value of i into factorial. Once the loop ends, factorial holds the factorial of the given number. We could also start the loop from i = 2, i.e. multiply all numbers from i = 2 to i = number, since multiplying a number by 1 gives the same result.
Sample outputs of the above program :
Enter a number :
4
Factorial : 24

Enter a number :
5
Factorial : 120
Example 2 : C++ program to find factorial using a while loop :
Using a while loop is similar to for loop. The only difference is that, while loop checks one condition and execute its body. It will keep executing until the condition is true. In our case, we will keep decrementing the user given number by 1 and on each step, we will keep the current value multiplying to the final factorial. Below is the complete program :
#include <iostream>
using namespace std;

int main()
{
    int number, factorial = 1;

    cout << "Enter a number :" << endl;
    cin >> number;

    while (number > 1) {
        factorial *= number;
        number--;
    }

    cout << "Factorial : " << factorial << endl;
    return 0;
}
The while loop runs while the value of number is more than 1. On each step, we multiply its value into the variable factorial and decrement it by 1. For example, if the value of number is 5, the loop runs for 5, 4, 3, 2 and the value of factorial becomes 5 * 4 * 3 * 2, i.e. the factorial of 5. We don't have to multiply by 1, as that leaves the result unchanged. The benefit of this approach is that we don't need the extra loop variable i that the for loop used, so it is slightly more economical in space. It produces the same output. You can also use any other loop, such as do while, to find the factorial of a number in C++ in a similar way. You can try that and drop one comment below :).
Conclusion :
I hope that you have learned how to find factorial in C++. Both ways are equally useful. You can use any of these methods. If you have any queries, don’t hesitate to drop one comment below.
|
https://www.codevscolor.com/c-plus-plus-factorial-for-loop
|
CC-MAIN-2020-40
|
refinedweb
| 573
| 63.49
|
Question:
In a C++ program I write:
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    vector<int> a;
    a.resize(1);
    for (int i = 0; i < 10; i++) {
        cout << a[i] << " ";
    }
    return 0;
}
this program prints the correct value of a[0] (because it is allocated) but also prints values at the rest of the 10 locations instead of giving a segmentation fault.
How to overcome this while writing your code? This causes problems when your code computes something and happily accesses memory not meant to be accessed.
Solution:1
Use this:

a.at(i)

at() will throw an out_of_range exception if the index is out of bounds. The reason that operator[] doesn't do a bounds check is efficiency. You probably want to be in the habit of using at() to index into a vector unless you have good reason not to in a particular case.
Solution:2
When you call resize() the vector implementation reallocates the buffer to the size enough to store the requested number of elements - it can't be less than you require but the implementation is free to make it larger to reduce memory fragmentation and make reallocations less often.
The only way to avoid such errors is to only loop within a valid range of indices in your code. For the code you provided the following would do:
for (int i = 0; i < a.size(); i++) {
    cout << a[i] << " ";
}
Solution:3
You can avoid the problem completely by using iterators. The for loop would then look something like
for (vector<int>::iterator i = a.begin(); i != a.end(); ++i)
    cout << *i << " ";
Solution:4
This isn't a memory allocation question per se, it is a bounds checking question. If the memory you overrun (either reading or writing) is still within the legal bounds of the program, you will not segfault.
In the past I've seen an overloaded [] operator that does bounds checking. It was a lot of work to turn C++ into Fortran (one of the better features of Fortran, I might add).
Besides using vectors and iterators, the best answer is to use good programming technique.
Solution:5
Check the size you are allocating for the vector
Solution:6
Over come this problem by being more conscientious while accessing memory: Check for bounds!
Solution:7
I cannot comment yet, but resize() is not a hint for memory allocation. According to the STL documentation, resize(n) inserts or removes elements at the end of the vector. So after calling resize(1), the vector contains exactly 1 element. To allocate memory in advance, you have to call reserve(n).
Solution:8
The segmentation fault happens because the hardware (Memory Management Unit) recognizes that you do not have access to the region, and so it throws an exception. The OS gets that exception and decides what to do about it; in those cases, it realizes you are making an illegal access and kills your application with a segmentation fault.
The same mechanism is how swap is implemented; the OS might recognize that you do have access to the memory, but that it's on the disk right now. Then it brings in the memory from disk and allows your program to continue.
However, this whole memory protection scheme only has enough resolution for pages of memory, e.g., 4k at a time. So the MMU can't protect you from every little overrun you might do. There are tools like ElectricFences that replace malloc and free and take advantage of the MMU, but those are intended only for "spot-checks"... they are good for debugging, but you wouldn't want to run that way forever.
Solution:9
Access to elements outside the bounds of the allocated object results in undefined behavior. That means that the implementation is free to do anything that occurs to it. It might throw an exception if you are very lucky. If you are very unlucky, it will simply appear to work.
It is permitted, in principle, for it to cause demons to fly out of your nose.
Its behavior is undefined.
|
http://www.toontricks.com/2018/06/tutorial-overcoming-wrong-memory.html
|
CC-MAIN-2018-43
|
refinedweb
| 703
| 62.68
|
Our next iteration on the contacts control (nee gadget) is now live and ready for you to experiment with!
The snapshot of the contacts control in the online documentation is a static image. The one below is not! This is a live (as in real) contacts control sitting in an iframe on this blog page. Go ahead, click on it. Login and check out your Live contacts.
New and Improved in Beta v0.2:
- Performance – We’ve moved the control to a server closer to the contacts storage server, reducing the number of domain hops by half. If you have a lot of contacts, you’ll notice a huge improvement (order of magnitude?) in the load time with this new release compared to the first release.
- Clicks – The click noises in IE have been significantly reduced
- History litter – The cross-domain mechanism no longer leaves “droppings” in the browser history
- No ActiveX control prompts! – Proper use of the IE7 “native” XMLHttpRequest object. We figured out why the IE7 XMLHttpRequest wasn’t cooperating with our code and now it’s humming along nicely. More on that snag later.
- Support for https – The cross-domain communication channel now conforms to the protocol of the host page URL. You can now use the contacts control on your https pages (such as shopping cart checkout – hint hint). This was a strong beta feedback request. In hindsight, it seems pretty obvious, but we overlooked it the first time around.
- Contact details in tooltips – Hover the mouse over a contact in the list and the popup tooltip will display their name and address infos. This helps show a lot more detail without consuming more screen space.
- Refactored for modularity and versioning – Ok, so this isn’t an end-user feature, but it is a big deal for enabling us to move forward quickly with future revision cycles for this control and others in the pipeline. Modularity (encapsulation and isolation) are virtually non-existent in JavaScript. I’m changing that.
The iframe on this page was necessary to get around the html rewriting done by the blogs.msdn.com host server. (grr…) It’s also remarkably difficult to upload a file to blogs.msdn.com, which is a critical requirement for getting the cross-domain communication channel to work. Then again, blogs.msn.com wasn’t intended to host files, just blog articles.
So, I put the contacts control client demo on my own server and just placed an iframe here to point over there.
What does this little demo do?
It displays the contact info received from the control when the user selects and ok’s the transfer. Sign into the control with your LiveID identity, select a few of your contacts, and press Send. The control will display the data and ask for confirmation. If you approve the transfer, the data is sent to the host page via the cross-domain channel.
How do I set it up?
We’ve streamlined the control setup, switching from a programmatic constructor call model to a declarative HTML element model. The only JavaScript you write is the function to do something with the returned data. To put the control on your page, you add two script references to your page (live.js and control.js, on versioned URLs), declare an XML namespace (xmlns:devlive=), and insert an HTML element tag named “contactscontrol”. The long list of parameters required by the previous constructor call are now attributes of the contactscontrol element. Like this:
<devlive:contactscontrol
    devlive:privacyStatementURL=""
    devlive:channelEndpointURL=""
    devlive:dataDesired="name, email"
    devlive:onData="myDataProc"
    devlive:onSignin="mySigninProc"
    devlive:onSignout="mySignoutProc"
    devlive:onError="myErrorProc" />
You’ll probably also want to set the width and height of the box as well, by adding a style="width:250;height:400;float:right" attribute in there. The devlive namespace is optional, but a good idea to avoid colliding with somebody else’s contactscontrol tag or channelEndpointURL attribute.
Put that on your page, along with the two required includes and the callback functions you’ve indicated, and when the page loads, the contacts control code will discover the contactscontrol HTML tag, grab the info it needs from the attributes, and do all the behind the scenes stuff necessary to display its UI and talk to the Windows Live Contacts database server. Full step by step details are on the Getting Started page.
There are two critical pieces you need to get right: placing the channel.htm on your site, and writing your onData callback function. Without those, everything can look fine but you’ll get no data.
Channel.htm
Channel.htm is easy: just copy the file from to some place on your web site. Don’t bother clicking on that link in your browser – channel.htm autonavigates to about:blank, so by the time you can click View: Source the source you want is already gone. This isn’t an obfuscation measure, it’s an integral part of the channel mechanism – immediately navigating to a different URL allows the channel to detect consecutive navigations to the same URL. Otherwise, the browser would ignore the second request because the page is already on the requested URL.
To copy the channel.htm file to your machine, right click on the URL and select Save Target As, or open the URL in Notepad or Visual Studio, then Save As to a local file. It doesn’t matter what the file name or path is, it just needs to be in the same domain as the page you want to use the contacts control on.
Do not use MS Word or Write to open channel.htm – word processors are prone to reformatting HTML in ways that make sense for interoffice memos but wreak havoc on HTML and JavaScript syntax. Word is particularly “helpful” about replacing simple double quotes with fancy back slanted and forward slanted quote chars. Very pretty on paper, but nasty, hard to spot syntax errors in JavaScript and HTML. “What do you mean ‘Unterminated string’? The friggin quote is right there!!”
Receiving the Data
Writing the code to receive the contacts is pretty easy, too. The code on the demo page looks like this:
function receiveData(p_contacts) {
    var s = "Done! " + p_contacts.length + " records received.";
    for (var i = 0; i < p_contacts.length; i++) {
        s += "<p>";
        s += "name: " + p_contacts[i].name + "<br/>";
        s += "email: " + p_contacts[i].email;
        s += "</p>";
    }
    document.getElementById("ContactsDisplay").innerHTML = s;
}
The p_contacts param is an array of JavaScript objects, one object per contact. The data for each contact is stored in properties / attributes on the object. p_contacts[0].name returns the name field of the first contact in the array.
Keep in mind that some of the contact records you receive may not have data for all the fields you asked for. Currently, that means the field values you receive will be undefined, and will display “undefined” when you convert them to strings. We’ll fix that to return empty strings in a future release.
But How Does It Work, Inside?
That’s tomorrow’s post.
|
https://blogs.msdn.microsoft.com/dthorpe/2006/10/05/windows-live-contacts-control-beta-0-2-released/
|
CC-MAIN-2017-22
|
refinedweb
| 1,212
| 65.32
|
I was trying to compare a pair of histograms by drawing on the same canvas, but I needed to resize. When resizing, I found that none of the TH1 methods were giving me accurate values. For instance, all of the following methods returned 0:
hist->GetXaxis()->GetXmin();
hist->GetXaxis()->GetXmax();
hist->GetMinimum();
hist->GetMaximum();
and the FindFirstBinAbove and FindLastBinAbove methods from this thread return -1, as though it were an empty histogram. I checked on a TBrowser, and the histogram is filled and has 1000 entries. Is there something wrong here?
I’ve loaded my testing code and a root file containing the histogram:
#include <iostream>
#include "TFile.h"
#include "TH1.h"
using namespace std;

int main()
{
    TFile *f = new TFile("temp.root");
    if (f == NULL)
        return 0;

    TH1F *hist = (TH1F*)f->Get("EBRecHits");

    cout << "Entries : " << hist->GetEntries() << endl;
    cout << "GetXmin() : " << hist->GetXaxis()->GetXmin() << endl;
    cout << "GetXmax() : " << hist->GetXaxis()->GetXmax() << endl;
    cout << "FindFirstBinAbove() : " << hist->FindFirstBinAbove() << endl;
    cout << "FindLastBinAbove() : " << hist->FindLastBinAbove() << endl;
    cout << "GetMinimum() : " << hist->GetMinimum() << endl;
    cout << "GetMaximum() : " << hist->GetMaximum() << endl;
}
(It should be mentioned that I created these histograms originally with my limits as 0 so that it would automatically resize as it was filled. Could this be affecting the results of the methods?)
Edit: I should also mention that I know that GetMinimum() and GetMaximum() will not be returning the same value as GetXmin and GetXmax, but I was still shocked at getting zero values for those as well.
ROOT Version: ROOT 6.02/05
Compiler: GCC 4.9.1
|
https://root-forum.cern.ch/t/th1-methods-cant-find-correct-minimum-and-maximum-values-for-a-filled-histogram/42894
|
CC-MAIN-2022-33
|
refinedweb
| 252
| 53.92
|
Name
cyg_mdns_getservicelabel — Get current service label value
Synopsis
#include <mdns.h>
cyg_bool cyg_mdns_getservicelabel(const cyg_mdns_service_identity *id,
                                  cyg_uint8 *dstbuf,
                                  cyg_uint8 *len);
Description
This function provides access to the currently configured mDNS UTF-8 label for the specified id service descriptor. This function is intended for debug or non-time-critical UI usage. The active label is copied into the supplied buffer, where the passed *len specifies the valid buffer length of dstbuf. Normally the referenced dstbuf should have at least MDNS_MAX_LABEL available space, and *len set accordingly. The return boolean state indicates success or failure, with *len updated with the number of bytes written/required. On failure the contents of dstbuf are undefined. If dstbuf is NULL then the call can be used with a valid len pointer to ascertain the amount of storage required to hold the name.
Return value
If a non-null len is supplied then the referenced location is updated with the label length at the time of the function call. Boolean true is returned if dstbuf is NULL, or if it is a pointer to a buffer of *len bytes large enough to hold the service label at the time of the call, in which case it is filled with the service label. On error boolean false is returned.
|
https://doc.ecoscentric.com/ref/mdns-api-cyg-mdns-getservicelabel.html
|
CC-MAIN-2022-21
|
refinedweb
| 206
| 53
|
#ifndef DWARF2READ_H
#define DWARF2READ_H 1

#include "bfd.h"

extern asection *dwarf_frame_section;
extern asection *dwarf_eh_frame_section;

/* APPLE LOCAL debug map take a bfd parameter */
char *dwarf2_read_section (struct objfile *, bfd *, asection *);

/* When expanding a psymtab to a symtab we get the addresses of all the
   symbols in the executable (the "final" addresses) and the minimal
   symbols (linker symbols, etc) from the .o file and create a table of
   these address tuples (plus the symbol name) to allow for fixing up
   all the addresses in the .o file's DWARF.  NB: I don't think I
   actually use the symbol name once this array is created, just the
   address tuples.  But for now I'll keep it around to aid in
   debugging.  */

struct oso_final_addr_tuple {
  /* Linker symbol name aka minsym aka physname */
  char *name;
  /* Start address in the .o file */
  CORE_ADDR oso_low_addr;
  /* End address in the .o file (the same as the start address of the
     next highest oso_final_addr_tuple).  */
  CORE_ADDR oso_high_addr;
  /* Low address in the final, linked image */
  CORE_ADDR final_addr;
  /* Whether this function is present in the final executable or not.  */
  int present_in_final;
};

/* This array is sorted by OSO_ADDR so that we can do quick lookups of
   addresses we find in the .o file DWARF entries.  */

struct oso_to_final_addr_map {
  struct partial_symtab *pst;
  int entries;
  struct oso_final_addr_tuple *tuples;

  /* PowerPC has a "-mlong-branch" option that generates trampoline
     code for long branches.  This trampoline code follows the code for
     a function.  The address range for the function in the .o DWARF
     currently spans this trampoline code.  The address range for the
     function in the linked executable can be smaller than the address
     range in the .o DWARF when our newest linker determines that the
     long branch trampoline code is not needed.  In such cases, the
     trampoline code gets stripped, and the branch to the trampoline
     code gets fixed to branch directly to its destination.

     This optimization can cause the size of the function to be reduced
     in the final executable and in the debug map.  There didn't seem
     to be any easy way to propagate the function size (N_FUN end stab
     with no name) found in the debug map up to the debug map address
     translation functions whilst using only minimal and partial
     symbols.  Minimal symbols are made from the normal nlist entries
     (non-STAB) and these nlist entries have no size (even though the
     min symbols in gdb have a size member that we could use).  Partial
     symbols for functions get made from debug map N_FUN entries, and
     the ending N_FUN entry that contains the new, and possibly
     smaller, function size gets used only to set the max partial
     symtab address.  Partial symbols in gdb also don't have a size
     member.

     The oso_to_final_addr_map.tuples array is sorted by OSO_LOW_ADDR.
     The FINAL_ADDR_INDEX member was added so we can quickly search the
     oso_to_final_addr_map.tuples array by FINAL_ADDR.  The
     FINAL_ADDR_INDEX contains zero based indexes into the
     oso_to_final_addr_map.tuples array and gets created in
     CONVERT_OSO_MAP_TO_FINAL_MAP.  When translating a high pc address
     in TRANSLATE_DEBUG_MAP_ADDRESS, we can shorten a function's
     address range by making sure the next FINAL_ADDR is not less than
     our current value for our translated high pc.  */

  /* In short: An array of element index values sorted by final
     address.  */
  int *final_addr_index;

  /* Reuse the above for the common symbols where the .o file has no
     address (just 0x0) -- so for COMMON_PAIRS we're storing the symbol
     name and the final address.  This array is sorted by the symbol
     name.  */
  int common_entries;
  struct oso_final_addr_tuple *common_pairs;
};

int translate_debug_map_address (struct oso_to_final_addr_map *,
                                 CORE_ADDR, CORE_ADDR *, int);

#endif /* DWARF2READ_H */
|
http://opensource.apple.com/source/gdb/gdb-1344/src/gdb/dwarf2read.h
|
CC-MAIN-2015-14
|
refinedweb
| 586
| 54.83
|
Other Alias
SbFifo
SYNOPSIS
#include <Inventor/threads/SbFifo.h>
Public Member Functions
void assign (void *ptr, uint32_t type)
void retrieve (void *&ptr, uint32_t &type)
SbBool tryRetrieve (void *&ptr, uint32_t &type)
unsigned int size (void) const
void lock (void) const
void unlock (void) const
SbBool peek (void *&item, uint32_t &type) const
SbBool contains (void *item) const
SbBool reclaim (void *item)
Detailed Description
A class for managing a pointer first-in, first-out queue.
Member Function Documentation
void SbFifo::assign (void *ptr, uint32_t type) [inline]
Puts pointer ptr of type type into the fifo.
The type argument is just meant as a user data tag, and a 0 value can be given as the type argument if type data is uninteresting.
void SbFifo::retrieve (void *&ptr, uint32_t &type) [inline]
Reads a pointer from the queue. Blocks until a pointer is available for reading.
SbBool SbFifo::tryRetrieve (void *&ptr, uint32_t &type) [inline]
Tries to read a pointer from the queue. If no data can be read, FALSE is returned, and TRUE otherwise. The function does not block.
unsigned int SbFifo::size (void) const [inline]
Returns number of pointers currently in the queue.
void SbFifo::lock (void) const [inline]
Blocks until the queue can be locked.
void SbFifo::unlock (void) const [inline]
Unlocks the queue.
SbBool SbFifo::peek (void *&item, uint32_t &type) const [inline]
Peeks at the head item of the queue without removing it. In the case where the fifo is empty, FALSE is returned.
The queue must be locked with SbFifo::lock() before using this function, then unlocked.
SbBool SbFifo::contains (void *item) const [inline]
Returns TRUE or FALSE depending on whether the item is in the queue.
The queue must be locked with SbFifo::lock() before using this function, then unlocked.
SbBool SbFifo::reclaim (void *item) [inline]
This function removes the given item from the queue. Returns TRUE or FALSE depending on whether the item was in the queue in the first place.
The queue must be locked with SbFifo::lock() before using this function, then unlocked.
Author
Generated automatically by Doxygen for Coin from the source code.
|
http://manpages.org/a-class-for-managing-a-pointer-first-in/3
|
CC-MAIN-2017-09
|
refinedweb
| 345
| 56.05
|
08 October 2008 19:53 [Source: ICIS news]
TORONTO (ICIS news)--German photovoltaics firm Schott Solar has cancelled plans for an initial public offer (IPO) due to the drastic deterioration and turmoil on capital markets, it said on Wednesday.
“The renewed drastic deterioration of conditions on international capital markets in past days has prompted us to make this decision,” the company said.
The sudden decline had been unforeseeable, it said. Schott Solar did not say if or when it may go ahead with an offer.
Meanwhile, the company had backing from its parent, glass maker Schott AG, to fund plans to expand capacities in the
In particular the
Photovoltaics are an important end market for polyvinyl butyral (PVB) and other chemicals.
With targeted gross proceeds of up to €546m ($748m) Schott Solar’s IPO would have been
Apart from Schott, only rail carrier Deutsche Bahn had shown serious interest in raising money on equities markets, they added.
Germany's DAX stock index lost another 5.88% on Wednesday, closing at a 52-week low of 5,013.62. This compares with the index's 52-week high of 8,117.70 in December
|
http://www.icis.com/Articles/2008/10/08/9162423/german-photovoltaics-firm-schott-cancels-ipo.html
|
CC-MAIN-2014-10
|
refinedweb
| 197
| 54.63
|
Hide Forgot
klogd uses lseek on /dev/kmem. On error this returns (off_t)-1, but the code
treats the return value as an error whenever it is < 0, which misfires when
large offsets end up treated as a signed int.
try upgrading to a later syslogd (such as 1.3.31-*, from
the errata). Does this solve the problem?
Upgrading to the latest available sysklogd solved this problem.
Unfortunately this update was not officially announced and is not in
the "new-kernel" updates directory.
The sysklogd update is actually in the regular 5.2 errata,
IIRC. We'll probably throw a link from one to the other.
|
https://partner-bugzilla.redhat.com/show_bug.cgi?id=2113
|
CC-MAIN-2020-24
|
refinedweb
| 107
| 61.33
|
If you’re just getting started with testing (and general test-first or test-driven development) in your Ruby and Ruby on Rails applications, you have a couple of choices.
You can go with Ruby's Test::Unit, built right into Ruby, and built into Rails with unit, functional, and integration test suites set up for you.
Or, you could setup Rspec with Mocha, and implement a form of testing called Behavior Driven Development or BDD.
Both approaches serve the same goal: better, tested code, easier code to maintain, and in general, just better practices.
My advice is to start with unit tests, and then move to Rspec later.
Once you get the hang of the built-in unit testing framework, you’ll start to write tests named like this:
def test_login
  #
end

def test_create
  #
end
which is fine, and gets the job done (I do it like this!).
Eventually, after getting the hang of it, you’ll start to write
shorter and shorter tests named like this:
def test_should_allow_login
  # ...
end

def test_should_not_let_user_view_admin_page
  # ...
end
etc. etc.
And next thing you know, you’re doing a form of behavior driven development, but without Rspec (I do this too – and yes I know, it’s not completely rspec – it lacks the mocking/stubbing
non-boundary-crossing unit-test tools). But, there’s a lot more documentation and examples out there as of right now using good ole unit testing.
I forget where I read somewhere that David Heinemeier Hansson (DHH) doesn’t use Rspec (yet), but writes his test with the Rspec style should and should_not methods, and tries to adhere to good, single units of functionality and behavior in tests.
If you install plugins in your Rails application, they will have their own tests, but they don’t actually run in conjunction with your code. So, while you should run a plugin’s tests once you pull down the code – you don’t really need to run them again. You’ll be writing your own tests, that may execute parts of the plugins code, and that can be your own flavor/style.
It will feel at first like you are troubleshooting tests, along with code – which is irritating, I know, but stick with it. It takes longer in the beginning, but is well worth it, once you get going.
I recommend treating your tests like debugging statements, and using puts statements everywhere while you figure it out, so that when you run your tests (hopefully using Apple-R while inside Textmate, or using the Run-command in most other editors), you can see:
puts "widget should have errors: #{@widget.errors.inspect}"
assert !@widget.errors.empty?
Of course, be sure to remove or comment out those puts statements before you check in your tests. You can always uncomment them for debugging later.
A Big Nerd Ranch alumnus says:
A great tool to help migrate between Test::Unit and RSpec / BDD is the plugin Shoulda.
It looks a lot like RSpec / Test::Spec but it is all just decoration on top of Test::Unit. You get a lot of really nice tools like nested contexts, which help hierarchically structure your tests, and some excellent ActiveRecord association and validation tests.
|
https://www.bignerdranch.com/blog/getting-started-with-testing-unit-testing-or-bdd-with-rspec/
|
CC-MAIN-2017-43
|
refinedweb
| 538
| 65.05
|
I can't figure out how to follow these links - anyone have any ideas?
> I can't figure out how to follow these links - anyone have any ideas?
-- Philip Whole-site HTML validation, link checking and more
>
import nntplib

username = my username
password = my password
nntp_server = 'newsclip.ap.org'

n = nntplib.NNTP(nntp_server, 119, username, password)
n.group('ap.spanish.online.headlines')
m_id = n.next()[1]
n.article(m_id)
I'll get output like this headline and full story message link: (truncated for length)
> >
> I am, but I once I got into the article itself, I couldn't figure out > how to "call" a link inside the resulting message text:
> >>> ... 'Castro: Bush desea mi muerte, pero las ideas no se matan', > >>> 'news://newsclip.ap.org/D8PE2G6O0@news.ap.org', ...
> How can I take the message link 'news://newsclip.ap.org/ > D8PE2G@news.ap.org' and follow it?
If I were you I'd try handling news: URLs with nttplib. I bet it will work.
Sorry I couldn't provide more than guesses. Good luck!
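Following that guess, the news: URL can be split into the pieces nntplib wants. A minimal sketch using Python 3's urllib.parse (the urlparse module in Python 2); the server and message-id are the ones from the thread above, and the angle brackets are the usual NNTP message-id convention:

```python
from urllib.parse import urlparse

def parse_news_url(url):
    """Split news://host/message-id into ("host", "<message-id>")."""
    parsed = urlparse(url)
    return parsed.netloc, "<%s>" % parsed.path.lstrip("/")

host, msg_id = parse_news_url("news://newsclip.ap.org/D8PE2G6O0@news.ap.org")
print(host, msg_id)
# then, roughly: n = nntplib.NNTP(host); n.article(msg_id)
```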
|
http://www.megasolutions.net/python/How-to-parse-usenet-urls_-78659.aspx
|
CC-MAIN-2015-35
|
refinedweb
| 179
| 69.38
|
In case you're interested in writing portable Perl and Python scripts that can be deployed anywhere and use Growl with minimal hassle (i.e., without installing all the extra bridging crud), this bit of Python code works fine for me:
import os

APPLESCRIPT = "/usr/bin/osascript"

def notify(title, description, icon = "Finder"):
    # see if we're on a Mac
    if os.path.exists(APPLESCRIPT):
        # See if Growl is installed
        if os.path.exists("/Library/Frameworks/GrowlAppBridge.framework"):
            applescript = os.popen(APPLESCRIPT, 'w')
            applescript.write(
                'tell application "GrowlHelperApp"\n' +
                'notify with title "%s" description "%s" icon of application "%s"\n' % (title, description, icon) +
                'end tell')
            applescript.close()
        else:
            # use something else here, or edit the if clauses to fall straight down
            pass
    else:
        # use the age old UNIX way
        print "NOTIFICATION - %s: %s" % (title, description)

if __name__ == '__main__':
    notify( "Python", "Poor man's Growl bridge using piping" )
(The Perl version is left as an exercise to the readers - after all, it's trivial to port, and each Perl geek will want to do it their own way...)
Of course, I've barely scratched the surface - but if you're anything like me and hate depending on any one platform, this should be easy enough to graft into your own scripts.
Arrrrrrr!
That's my little contribution to Talk Like a Pirate Day. I've got a busy Sunday ahead, so if you want to see the whole front page in pirate talk, it be here.
|
http://taoofmac.com/space/blog/2004/09/19
|
CC-MAIN-2017-13
|
refinedweb
| 245
| 57.81
|
Created attachment 167772 [details]
py-numpy build log
Attempting to run something like osmocom_fft results in the issue below...
[kitsune@vixen42]/home/kitsune% osmocom_fft
Traceback (most recent call last):
File "/usr/local/bin/osmocom_fft", line 34, in <module>
from gnuradio import blocks
File "/usr/local/lib/python2.7/site-packages/gnuradio/blocks/__init__.py", line 34, in <module>
from stream_to_vector_decimator import *
File "/usr/local/lib/python2.7/site-packages/gnuradio/blocks/stream_to_vector_decimator.py", line 23, in <module>
from gnuradio import gr
File "/usr/local/lib/python2.7/site-packages/gnuradio/gr/__init__.py", line 44, in <module>
from top_block import *
File "/usr/local/lib/python2.7/site-packages/gnuradio/gr/top_block.py", line 30, in <module>
from hier_block2 import hier_block2
File "/usr/local/lib/python2.7/site-packages/gnuradio/gr/hier_block2.py", line 25, in <module>
import pmt
File "/usr/local/lib/python2.7/site-packages/pmt/__init__.py", line 58, in <module>
from pmt_to_python import pmt_to_python as to_python
File "/usr/local/lib/python2.7/site-packages/pmt/pmt_to_python.py", line 22, in <module>
import numpy
File "/usr/local/lib/python2.7/site-packages/numpy/__init__.py", line 180, in <module>
ImportError: ... /usr/local/lib/gcc48/libgfortran.so.3 not found
Exit 1
[kitsune@vixen42]/home/kitsune% uname -a
FreeBSD vixen42.vulpes.vvelox.net 10.3-BETA3 FreeBSD 10.3-BETA3 #1 r296090: Fri Feb 26 06:29:58 CST 2016 kitsune@vixen42.vulpes.vvelox.net:/usr/obj/usr/src/sys/GENERIC amd64
[root@vixen42]/arc/src/ports/math/py-numpy# make showconfig
===> The following configuration options are available for py27-numpy-1.10.4,1:
DOCS=off:
[root@vixen42]/arc/src/ports/math/py-numpy# cat /etc/src.conf
WITHOUT_LPR=yes
WITHOUT_SENDMAIL=yes
[root@vixen42]/arc/src/ports/math/py-numpy# cat /etc/make.conf
CUPS_OVERWRITE_BASE=yes
#DEFAULT_VERSIONS+=5.18
DEFAULT_VERSIONS+= perl5=5.22
I have tried replicating this issue as there was just another inquiry about that same failure in #freebsd@Freenode, but I couldn't. It wasn't related to osmocom_fft but merely importing numpy would trigger it.
I tried importing numpy and specifically, repeat that last import before the ImportError in this report, it didn't fail.
I built numpy with poudriere, and only had to change math/suitesparse config to use Netlib, because Openblas fails to build like that.
FreeBSD 10.3-p5 (amd64), ports from SVN head. Clean poudriere jail just for this test.
It's not Numpy's problem, but rather GNU Radio. I already encountered this error with several applications (especially Pitivi devel release, Flowblade).
Import of modules with Numpy is very "fragile", it doesn't support circular import (in fact relative import). GNU Radio must import each own modules globally (it's not a trivial process, especially when there are many).
Using sys.path() method is generally helpful.
(In reply to Vladimir Krstulja from comment #1)
Hi Vlad, that was me inquiring, btw: yes, the port will build in poudriere just fine with math/suitesparse set to the same.
However, the (single) bug here probably bears repeating: 'import numpy' while using the interactive shell throws the same error as seen and reported in bug #188114, and can be solved the same way. Calling the shell as
env LD_LIBRARY_PATH=/usr/local/lib/gcc48 python
removes the problem. However, this *only* affected me in the interactive shell, which indicates it may likely be my problem – I'm building a clean instance now to try this out on without any prior settings, and I'll leave another comment if and when I find out what the difference was.
This has come up a few times. Setting LD_LIBRARY_PATH is a known workaround. If you have several apps with this issue then you may want to set this in your shell startup file (.cshrc or .bashrc) or create a substitute script that is found in your path before the real app that sets this and then calls the real app.
bug #208120 is looking at a way to fix this properly.
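The "substitute script" workaround mentioned above can be sketched as a tiny wrapper placed earlier in PATH than the real binary; the last line here just echoes what a child process sees, where a real wrapper would `exec` the actual application (osmocom_fft, in this report):

```shell
#!/bin/sh
# Wrapper sketch: export the gcc runtime path, then run the real app.
LD_LIBRARY_PATH=/usr/local/lib/gcc48
export LD_LIBRARY_PATH
# real wrapper: exec /usr/local/bin/osmocom_fft "$@"
sh -c 'echo "child sees LD_LIBRARY_PATH=$LD_LIBRARY_PATH"'
```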
I'm seeing this, too. This is preventing me from running comms/gnuradio successfully.
I was able to solve this permanently by deleting gcc and everything that depends on it, and then reinstalling numpy.
If I can narrow it down more, I'll report back, but this appears to be a pkg upgrade failure of some sort.
FreeBSD unixdev.ceas bacon ~ 402: cat junk.py
#!/usr/bin/env python
import sys,os
import numpy
print 'Hello!\n'
FreeBSD unixdev.ceas bacon ~ 403: python2 junk.py
Traceback (most recent call last):
File "junk.py", line 4, in <module>
import numpy
<<<ROOT@unixdev.ceas>>> /home/bacon 1005 # pkg delete -y gcc\*
[snip irrelevant pkg output]
ImportError: /lib/libgcc_s.so.1: version GCC_4.6.0 required by /usr/local/lib/gcc49/libgfortran.so.3 not found
Proceed with deinstalling packages? [y/N]: y
[1/84] Deinstalling py27-qiime-1.9.1...
[1/84] Deleting files for py27-qiime-1.9.1: 100%
[2/84] Deinstalling py27-statsmodels-0.6.1_1...
[2/84] Deleting files for py27-statsmodels-0.6.1_1: 100%
[3/84] Deinstalling hs-bio-0.5.3_2...
[3/84] Deleting files for hs-bio-0.5.3_2: 100%
[4/84] Deinstalling ghemical-3.0.0_9...
[4/84] Deleting files for ghemical-3.0.0_9: 100%
[5/84] Deinstalling py27-biom-format-2.1.5_1...
[5/84] Deleting files for py27-biom-format-2.1.5_1: 100%
[6/84] Deinstalling brian-1.4.3...
[6/84] Deleting files for brian-1.4.3: 100%
[7/84] Deinstalling py27-pandas-0.19.2...
[7/84] Deleting files for py27-pandas-0.19.2: 100%
[8/84] Deinstalling hs-QuickCheck-2.8.1...
[8/84] Deleting files for hs-QuickCheck-2.8.1: 100%
[9/84] Deinstalling petsc-3.7.4...
[9/84] Deleting files for petsc-3.7.4: 100%
[10/84] Deinstalling libghemical-3.0.0_7...
[10/84] Deleting files for libghemical-3.0.0_7: 100%
[11/84] Deinstalling octave-4.0.3_2...
[11/84] Deleting files for octave-4.0.3_2: 100%
[12/84] Deinstalling py27-scipy-0.19.0...
[12/84] Deleting files for py27-scipy-0.19.0: 100%
[13/84] Deinstalling arpack++-1.2_7...
[13/84] Deleting files for arpack++-1.2_7: 100%
[14/84] Deinstalling py27-numexpr-2.6.2...
[14/84] Deleting files for py27-numexpr-2.6.2: 100%
[15/84] Deinstalling py27-bottleneck-1.0.0...
[15/84] Deleting files for py27-bottleneck-1.0.0: 100%
[16/84] Deinstalling py27-patsy-0.4.1...
[16/84] Deleting files for py27-patsy-0.4.1: 100%
[17/84] Deinstalling py27-matplotlib-1.5.3_1...
[17/84] Deleting files for py27-matplotlib-1.5.3_1: 100%
[18/84] Deinstalling py27-h5py-2.6.0...
[18/84] Deleting files for py27-h5py-2.6.0: 100%
[19/84] Deinstalling pyfasta-0.5.2_1...
[19/84] Deleting files for pyfasta-0.5.2_1: 100%
[20/84] Deinstalling pycogent-1.9...
[20/84] Deleting files for pycogent-1.9: 100%
[21/84] Deinstalling hs-tf-random-0.5_2...
[21/84] Deleting files for hs-tf-random-0.5_2: 100%
[22/84] Deinstalling hs-parsec-3.1.9...
[22/84] Deleting files for hs-parsec-3.1.9: 100%
[23/84] Deinstalling hs-old-time-1.1.0.3...
[23/84] Deleting files for hs-old-time-1.1.0.3: 100%
[24/84] Deinstalling hs-tagsoup-0.13.3...
[24/84] Deleting files for hs-tagsoup-0.13.3: 100%
[25/84] Deinstalling mpqc-2.3.1_28...
[25/84] Deleting files for mpqc-2.3.1_28: 100%
[26/84] Deinstalling qrupdate-1.1.2_4...
[26/84] Deleting files for qrupdate-1.1.2_4: 100%
[27/84] Deinstalling superlu-5.2.1_1...
[27/84] Deleting files for superlu-5.2.1_1: 100%
[28/84] Deinstalling scalapack-2.0.2_10...
[28/84] Deleting files for scalapack-2.0.2_10: 100%
[29/84] Deinstalling plink-1.07_5...
[29/84] Deleting files for plink-1.07_5: 100%
[30/84] Deinstalling libtsnnls-2.3.3_5...
[30/84] Deleting files for libtsnnls-2.3.3_5: 100%
[31/84] Deinstalling levmar-2.6_5...
[31/84] Deleting files for levmar-2.6_5: 100%
[32/84] Deinstalling lapacke-3.5.0_1...
[32/84] Deleting files for lapacke-3.5.0_1: 100%
[33/84] Deinstalling lapack95-1.0_11...
[33/84] Deleting files for lapack95-1.0_11: 100%
[34/84] Deinstalling kktdirect-0.5_5...
[34/84] Deleting files for kktdirect-0.5_5: 100%
[35/84] Deinstalling harminv-1.3.1_8...
[35/84] Deleting files for harminv-1.3.1_8: 100%
[36/84] Deinstalling gretl-1.9.13_7...
[36/84] Deleting files for gretl-1.9.13_7: 100%
Unknown media type in type 'chemical/x-alchemy'
Unknown media type in type 'chemical/x-cache'
Unknown media type in type 'chemical/x-cactvs-ascii'
Unknown media type in type 'chemical/x-cactvs-binary'
Unknown media type in type 'chemical/x-cactvs-table'
Unknown media type in type 'chemical/x-cdx'
Unknown media type in type 'chemical/x-cdxml'
Unknown media type in type 'chemical/x-chem3d'
Unknown media type in type 'chemical/x-cif'
Unknown media type in type 'chemical/x-cml'
Unknown media type in type 'chemical/x-daylight-smiles'
Unknown media type in type 'chemical/x-dmol'
Unknown media type in type 'chemical/x-gamess-input'
Unknown media type in type 'chemical/x-gamess-output'
Unknown media type in type 'chemical/x-gaussian-input'
Unknown media type in type 'chemical/x-gaussian-log'
Unknown media type in type 'chemical/x-genbank'
Unknown media type in type 'chemical/x-gulp'
Unknown media type in type 'chemical/x-hin'
Unknown media type in type 'chemical/x-inchi'
Unknown media type in type 'chemical/x-inchi-xml'
Unknown media type in type 'chemical/x-jcamp-dx'
Unknown media type in type 'chemical/x-macromodel-input'
Unknown media type in type 'chemical/x-mdl-molfile'
Unknown media type in type 'chemical/x-mdl-rdfile'
Unknown media type in type 'chemical/x-mdl-rxnfile'
Unknown media type in type 'chemical/x-mdl-sdfile'
Unknown media type in type 'chemical/x-mdl-tgf'
Unknown media type in type 'chemical/x-mmcif'
Unknown media type in type 'chemical/x-mol2'
Unknown media type in type 'chemical/x-mopac-graph'
Unknown media type in type 'chemical/x-mopac-input'
Unknown media type in type 'chemical/x-mopac-out'
Unknown media type in type 'chemical/x-msi-car'
Unknown media type in type 'chemical/x-msi-hessian'
Unknown media type in type 'chemical/x-msi-mdf'
Unknown media type in type 'chemical/x-msi-msi'
Unknown media type in type 'chemical/x-ncbi-asn1'
Unknown media type in type 'chemical/x-ncbi-asn1-binary'
Unknown media type in type 'chemical/x-ncbi-asn1-xml'
Unknown media type in type 'chemical/x-pdb'
Unknown media type in type 'chemical/x-shelx'
Unknown media type in type 'chemical/x-vmd'
Unknown media type in type 'chemical/x-xyz'
Unknown media type in type 'all/all'
Unknown media type in type 'all/allfiles'
[37/84] Deinstalling getdp-2.8.0_1...
[37/84] Deleting files for getdp-2.8.0_1: 100%
[38/84] Deinstalling py27-numpy-1.11.2_2,1...
[38/84] Deleting files for py27-numpy-1.11.2_2,1: 100%
[39/84] Deinstalling ltl-1.9.1_4...
[39/84] Deleting files for ltl-1.9.1_4: 100%
[40/84] Deinstalling R-cran-optparse-1.3.2...
[40/84] Deleting files for R-cran-optparse-1.3.2: 100%
[41/84] Deinstalling R-cran-randomForest-4.6.7_6...
[41/84] Deleting files for R-cran-randomForest-4.6.7_6: 100%
[42/84] Deinstalling xlapack-3.5.0_3...
[42/84] Deleting files for xlapack-3.5.0_3: 100%
[43/84] Deinstalling hs-random-1.1...
[43/84] Deleting files for hs-random-1.1: 100%
[44/84] Deinstalling hs-primitive-0.6...
[44/84] Deleting files for hs-primitive-0.6: 100%
[45/84] Deinstalling hs-mtl-2.2.1...
[45/84] Deleting files for hs-mtl-2.2.1: 100%
[46/84] Deinstalling hs-text-1.2.1.3...
[46/84] Deleting files for hs-text-1.2.1.3: 100%
[47/84] Deinstalling hs-old-locale-1.0.0.7...
[47/84] Deleting files for hs-old-locale-1.0.0.7: 100%
[48/84] Deinstalling hs-extensible-exceptions-0.1.1.4_7...
[48/84] Deleting files for hs-extensible-exceptions-0.1.1.4_7: 100%
[49/84] Deinstalling hs-parallel-3.2.0.6...
[49/84] Deleting files for hs-parallel-3.2.0.6: 100%
[50/84] Deinstalling suitesparse-4.0.2_5...
[50/84] Deleting files for suitesparse-4.0.2_5: 100%
[51/84] Deinstalling lapack-3.5.0_1...
[51/84] Deleting files for lapack-3.5.0_1: 100%
[52/84] Deinstalling arpack-96_14...
[52/84] Deleting files for arpack-96_14: 100%
[53/84] Deinstalling linpack-1.0_7...
[53/84] Deleting files for linpack-1.0_7: 100%
[54/84] Deinstalling lapack++-2.5.4_1...
[54/84] Deleting files for lapack++-2.5.4_1: 100%
[55/84] Deinstalling mcmc-jags-4.2.0...
[55/84] Deleting files for mcmc-jags-4.2.0: 100%
[56/84] Deinstalling R-cran-getopt-1.20.0...
[56/84] Deleting files for R-cran-getopt-1.20.0: 100%
[57/84] Deinstalling R-cran-permute-0.9.4...
[57/84] Deleting files for R-cran-permute-0.9.4: 100%
[58/84] Deinstalling R-cran-RColorBrewer-1.1.2...
[58/84] Deleting files for R-cran-RColorBrewer-1.1.2: 100%
[59/84] Deinstalling blas-3.5.0_3...
[59/84] Deleting files for blas-3.5.0_3: 100%
[60/84] Deinstalling libint-1.1.6_1...
[60/84] Deleting files for libint-1.1.6_1: 100%
[61/84] Deinstalling cblas-1.0_5...
[61/84] Deleting files for cblas-1.0_5: 100%
[62/84] Deinstalling mpich2-1.5_5,5...
[62/84] Deleting files for mpich2-1.5_5,5: 100%
[63/84] Deinstalling mopac-7.1.15_1,1...
[63/84] Deleting files for mopac-7.1.15_1,1: 100%
[64/84] Deinstalling GraphicsMagick-1.3.25_1,1...
[64/84] Deleting files for GraphicsMagick-1.3.25_1,1: 100%
[65/84] Deinstalling trlan-201009_4...
[65/84] Deleting files for trlan-201009_4: 100%
[66/84] Deinstalling slatec-4.1_5...
[66/84] Deleting files for slatec-4.1_5: 100%
[67/84] Deinstalling qd-2.3.7_5...
[67/84] Deleting files for qd-2.3.7_5: 100%
[68/84] Deinstalling netcdf-fortran-4.4.4_1...
[68/84] Deleting files for netcdf-fortran-4.4.4_1: 100%
[69/84] Deinstalling miracl-5.6_1,1...
[69/84] Deleting files for miracl-5.6_1,1: 100%
[70/84] Deinstalling ghc-7.10.2_1...
[70/84] Deleting files for ghc-7.10.2_1: 65%
pkg: /usr/local/share/doc/ghc-7.10.2/html/libraries/doc-index.html different from original checksum, not removing
[70/84] Deleting files for ghc-7.10.2_1: 100%
[71/84] Deinstalling eispack-1.0_7...
[71/84] Deleting files for eispack-1.0_7: 100%
[72/84] Deinstalling cgnslib-3.2.1_6,1...
[72/84] Deleting files for cgnslib-3.2.1_6,1: 100%
[73/84] Deinstalling openblas-0.2.19,1...
[73/84] Deleting files for openblas-0.2.19,1: 100%
[74/84] Deinstalling ised-2.7.1_1...
[74/84] Deleting files for ised-2.7.1_1: 100%
[75/84] Deinstalling star-2.5.2a...
[75/84] Deleting files for star-2.5.2a: 100%
[76/84] Deinstalling cd-hit-4.6.6...
[76/84] Deleting files for cd-hit-4.6.6: 100%
[77/84] Deinstalling FastTree-2.1.8_1...
[77/84] Deleting files for FastTree-2.1.8_1: 100%
[78/84] Deinstalling openmpi-1.10.6...
[78/84] Deleting files for openmpi-1.10.6: 100%
[79/84] Deinstalling R-3.3.3...
[79/84] Deleting files for R-3.3.3: 100%
[80/84] Deinstalling py27-ipython-5.3.0...
[80/84] Deleting files for py27-ipython-5.3.0: 100%
[81/84] Deinstalling sumaclust-1.0.20...
[81/84] Deleting files for sumaclust-1.0.20: 100%
[82/84] Deinstalling py27-scikit-bio-0.2.3...
[82/84] Deleting files for py27-scikit-bio-0.2.3: 100%
[83/84] Deinstalling gcc-4.9.4...
[83/84] Deleting files for gcc-4.9.4: 100%
[84/84] Deinstalling gcc-ecj-4.5...
[84/84] Deleting files for gcc-ecj-4.5: 100%
<<<ROOT@unixdev.ceas>>> /home/bacon 1006 # pkg install py27-numpy
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 8 package(s) will be affected (of 0 checked):
New packages to be INSTALLED:
py27-numpy: 1.11.2_2,1
suitesparse: 4.0.2_5
openblas: 0.2.19,1
gcc: 4.9.4
gcc-ecj: 4.5
lapack: 3.5.0_1
blas: 3.5.0_3
cblas: 1.0_5
Number of packages to be installed: 8
The process will require 576 MiB more space.
18 MiB to be downloaded.
Proceed with this action? [y/N]: y
[1/8] suitesparse-4.0.2_5.txz : 3% 48 KiB 49.2kB/s 00:2[1/8] suitesparse-4.0.2_5.txz : 100% 1 MiB 1.3MB/s 00:01
[2/8] openblas-0.2.19,1.txz : 10% 1 MiB 1.1MB/s 00:0[2/8] openblas-0.2.19,1.txz : 37% 4 MiB 2.6MB/s 00:0[2/8] openblas-0.2.19,1.txz : 64% 6 MiB 2.7MB/s 00:0[2/8] openblas-0.2.19,1.txz : 90% 9 MiB 2.6MB/s 00:0[2/8] openblas-0.2.19,1.txz : 100% 9 MiB 2.5MB/s 00:04
[3/8] gcc-ecj-4.5.txz : 58% 800 KiB 819.2kB/s 00:0[3/8] gcc-ecj-4.5.txz : 100% 1 MiB 1.4MB/s 00:01
[4/8] lapack-3.5.0_1.txz : 27% 2 MiB 1.8MB/s 00:0[4/8] lapack-3.5.0_1.txz : 67% 4 MiB 2.6MB/s 00:0[4/8] lapack-3.5.0_1.txz : 100% 6 MiB 3.2MB/s 00:02
[5/8] blas-3.5.0_3.txz : 100% 128 KiB 131.3kB/s 00:01
[6/8] cblas-1.0_5.txz : 100% 48 KiB 48.8kB/s 00:01
Checking integrity... done (0 conflicting)
[1/8] Installing gcc-ecj-4.5...
[1/8] Extracting gcc-ecj-4.5: 100%
[2/8] Installing gcc-4.9.4...
[2/8] Extracting gcc-4.9.4: 100%
[3/8] Installing openblas-0.2.19,1...
[3/8] Extracting openblas-0.2.19,1: 100%
[4/8] Installing blas-3.5.0_3...
[4/8] Extracting blas-3.5.0_3: 100%
[5/8] Installing suitesparse-4.0.2_5...
[5/8] Extracting suitesparse-4.0.2_5: 100%
[6/8] Installing lapack-3.5.0_1...
Extracting lapack-3.5.0_1: 100%
[1/8] Installing cblas-1.0_5...
[1/8] Extracting cblas-1.0_5: 100%
[2/8] Installing py27-numpy-1.11.2_2,1...
[2/8] Extracting py27-numpy-1.11.2_2,1: 100%
Message from gcc-4.9.4:
To ensure binaries built with this toolchain find appropriate versions
of the necessary run-time libraries, you may want to link using
-Wl,-rpath=/usr/local/lib/gcc49
For ports leveraging USE_GCC, USES=compiler, or USES=fortran this happens
transparently.
Message from cblas-1.0_5:
===> NOTICE:
The cbl:
FreeBSD unixdev.ceas bacon ~ 405: python junk.py
Hello
*** Bug 217968 has been marked as a duplicate of this bug. ***
This port is deprecated; you may wish to reconsider installing it:
Unsupported by upstream. Use GCC 6 or newer instead..
seems overcome by events.
|
https://bugs.freebsd.org/bugzilla/show_bug.cgi?format=multiple&id=207750
|
CC-MAIN-2020-50
|
refinedweb
| 3,269
| 56.11
|
I learned about a module called shlex. It's stated to be a simple lexical analyzer, and I don't really know what this means, but I found at least one of its uses. It provides a convenience method that lets me split a command line string, to feed into subprocess module.
Let's say I want to run the command /bin/cat 'file with spaces' from within python. A normal split won't work, because it uses white space as a delimiter (by default). To test, I will create a file named "file with spaces" and add text (content of 'file with spaces').
$ echo 'content of file with spaces' > 'file with spaces'
And here's the code, using the normal split method:
import subprocess cmd = "/bin/cat 'file with spaces'" formatted_cmd = cmd.split() subprocess.Popen(formatted_cmd)
Output:
/bin/cat: 'file: No such file or directory /bin/cat: with: No such file or directory /bin/cat: spaces': No such file or directory
That's when shlex module gets to be useful.
import shlex, subprocess cmd = "/bin/cat 'file with spaces'" formatted_cmd = shlex.split(cmd) subprocess.Popen(formatted_cmd)
Output:
content of file with spaces
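The difference between the two splits can also be seen directly, without running subprocess at all:

```python
import shlex

cmd = "/bin/cat 'file with spaces'"

# Plain str.split breaks on every space and keeps the stray quotes.
print(cmd.split())
# shlex.split understands shell quoting, so the filename stays whole.
print(shlex.split(cmd))
```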
|
http://demo-reveal.tshepang.net/shlex-and-subprocess
|
CC-MAIN-2018-09
|
refinedweb
| 193
| 62.27
|
Forum:Tolololpedia.wikia.com : Official Uncyclopedia Branch / Not?
From Uncyclopedia, the content-free encyclopedia.
Forums: Index > Village Dump > Tolololpedia.wikia.com : Official Uncyclopedia Branch / Not?
Note: This topic has been unedited for 469 days. It is considered archived - the discussion is over. Do not add to unless it really needs a response.
anyone has seen ?
The site is in Indonesian. Is it really an official branch of Uncyclopedia? --—The preceding unsigned comment was added by DNL17 (talk • contribs)
- There's no such thing as an "official branch" as the different-language Uncyclopedias are free to work however they wish. Usually, any Uncyclopedia hosted by Carlb or Wikia can be considered legitimate. --Sir Starnestommy
(Talk • Contribs • CUN) 03:07, Mar. 20, 2008
- I say we make them pay royalties to us for using the Uncyclopedia aspect. :) -
Admiral Enzo Aquarius-Dial the Gate
03:10, 20 March 2008 (UTC)
- CC-BY-NC-SA, biatch.
- Also, Forum:Interwiki links covers pretty much all the foreign language uncyclopedias. The "official" the ones are wherever the users of the language think the official one is. • Spang • ☃ • talk • 08:06, 20 Mar 2008
- I don't like the idear of all them ferriners sneakin' into our innernets and stealin' all our jobs. Sir Modusoperandi Brute! 14:08, 20 March 2008 (UTC)
- There seem to be two Spanish ones: Inciclopedia and Frikipedia. Does anyone know how this could possibly have happened? Also, more importantly, what's with Encyclopedia Daemonica? --Syndrome 01:20, 22 March 2008 (UTC)
- The Frikipedia closure and re-creation are mentioned here: wikipedia:Inciclopedia, although not a whole lot of detail is provided. As for Indonesia? If you're not sure that they're a real country, it would be best to ask them for id: :) --02:57, 22 March 2008 (UTC)
- Frikipedia is not uncyc's Spanish version, never intended as such, never claimed to be so, it's one of several humor wikis in Spanish. Inciclopedia is the only one claiming to be uncyc's Spanish version so far, since it took off from uncyc's babel namespace.---Asteroid B612
(aka Rataube) - Ñ 18:17, 25 March 2008 (UTC)
http://uncyclopedia.wikia.com/wiki/Forum:Tolololpedia.wikia.com_:_Official_Uncyclopedia_Branch_/_Not%3F
Introduction to Python Map Function
A lot of code is spent analyzing, filtering, and combining the items in a list, and Python gives you built-in functions to streamline these tasks. One such function is map(). Map, filter, and reduce are built-in higher-order functions in Python. Often a generator expression is preferred over the map function; which one to use is up to the user, but when the function would have to be defined inline with lambda, a generator expression is usually the cleaner choice.
Python is known as a brilliant programming language with respect to speed, efficiency, and reliability, and it suits a wide range of project environments. map() is a function that executes a given function on each element of an iterable; the items of the iterable are passed to that function as arguments. map() returns a map object. A map object is an iterator, so we can iterate over its elements.
When the map function is applied, it returns the results of calling the given function on each item of an iterable such as a tuple or list (a list of results can be produced with the list() factory function). Unlike functions in Java, functions in Python can return multiple values: a function given a name, roll number, age, marks, and other things can return all of those parameters at once, or only those the user wants displayed.
The below can be taken as an example of such functions in Python:
What are Lambda Expressions?
A lambda expression is an anonymous, in-line declaration of a function, usually passed as an argument. A lambda function can do whatever a regular function can, with one notable limitation: it cannot be called by name from anywhere outside the line where it is defined. It is called anonymous for the same reason.
Lambda functions are quite useful when you require a short, throwaway anonymous function, often used only once. They are applied frequently when filtering and sorting data.
lambda arguments: expression
Type the keyword lambda followed by zero or more inputs. Just like regular functions, it is perfectly acceptable to have anonymous functions with no inputs.
Next, type a colon, then finally a single expression. The value of this expression is the return value. Using such expressions for more than a line, or for multi-line functionality, is not possible.
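To make the syntax concrete, here is a named function next to its lambda equivalent (the names are made up for the example):

```python
# A named function and its lambda equivalent
def add(x, y):
    return x + y

add_lambda = lambda x, y: x + y

print(add(2, 3))         # 5
print(add_lambda(2, 3))  # 5

# Lambdas shine as short, throwaway arguments, e.g. as a sort key:
print(sorted(["pear", "fig", "banana"], key=lambda w: len(w)))
# ['fig', 'pear', 'banana']
```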
Syntax
map (function, iterable)
Terms of Syntax:
- Function: the function to be executed for each item
- Iterable: a sequence or collection; map() returns an iterator object. As many iterables as one desires can be sent, but the function must accept one parameter per iterable.
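The multiple-iterables rule above can be sketched in one line: with two iterables, the function must take two parameters, and map() pairs the items up positionally.

```python
# map() with two iterables: the lambda takes one parameter per iterable
sums = map(lambda a, b: a + b, [1, 2, 3], [10, 20, 30])
print(list(sums))  # [11, 22, 33]
```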
Try typing "import this" in a Python interpreter; you'll find a poem highlighting the philosophies of Python:
A Functional Preview of Map
- Data: a1, a2, a3,..,an
- Function: f
- map (f, data) :
1. Suppose you have a list, tuple, or other iterable collection of data (call it Data: a1, a2, a3, ..., an for the time being).
2. Now you want to apply the function f to each piece of data.
3. With the map function, map(f, Data), you first specify the function and then the data to iterate over.
4. The map function will iterate over the collection, yielding f(a1), f(a2), ..., f(an), i.e. f applied to each piece of data.
Examples of Python Map Function
Some examples are given below:
Example #1
A simple example that prints the squares of a list
Code:
def SquareOf_Num(a):
return a*a
list_numbers = (5, 10, 15, 20)
result = map(SquareOf_Num, list_numbers)
print(result)
#map object to a set conversion
Square_numbers = set(result)
print(Square_numbers)
Output:
The first line of the output shows the map object itself (something like <map object at 0x...>). Converting the map object to a set and printing it gives the squares: {25, 100, 225, 400} (a set's display order may vary).
Example #2
Understanding how to deal with Lambda expressions in the map() function
Code:
list_ = (1, 2, 3, 4)
ans = map(lambda a: a*a, list_)
print(ans)
#map object to set conversion
squarenum = set(ans)
print(squarenum)
Output:

Again, the first line is a map object; converting it to a set gives {1, 4, 9, 16} (a set's display order may vary).
Conclusion – Python Map Function
Recommended Articles
This is a guide to the Python map function. Here we discuss the introduction to the Python map function, what lambda expressions are with their syntax, and the examples. You can also go through our other related articles to learn more.
https://www.educba.com/python-map-function/
Starling has several classes that use variables marked as internal. Is this necessary? It makes me have to put my subclasses in the same folders as the Starling classes, which I'd rather not do.
As it says in starling_internal.as
/**
* This namespace is used for undocumented APIs -- usually implementation
 * details -- which can't be private because they need to be visible
* to other classes.
*
* APIs in this namespace are completely unsupported and are likely to
* change in future versions of Starling.
*/
something to be aware of if you are using them. If there’s an internal feature you’d like exposed you could put in a feature request.
If it's not public or protected, it's not meant to be used outside of Starling.
Perhaps there's another way to implement what you're trying to do without using internal APIs.
Exactly - the internal namespace is reserved for stuff that I'd actually wanted private within Starling, but needed it across different packages.
You can still use them, most of the APIs are now rather solid. However, if I ever need to modify one of them, you will be affected. (Which might happen with public APIs, too, but I'm a little more careful with them.)
You can always send me feature requests with specific use-cases, though!
https://forum.starling-framework.org/d/20913-are-starling-internal-variables-necessary
Hi,

I just started using Team System this week, and I've been able to muddle along fairly well, but I've just discovered something that to me is a real show stopper. And based on the searches that I have done in Google and in these forums, I'm beginning to think there is something fundamental I'm missing, given the small number of responses to this problem.

My problem is that the Team System build does not build the Setup projects. I understand that this is because MSBuild does not support them, but I am flabbergasted by this fact. What use is a build system that does not create the fundamental deployment deliverable? Obviously Microsoft feels there is value, or Team System would not have been RTM'ed without this feature. That is where I'm stumped.

My existing projects and solutions all have setup projects within them, and my current build process builds these setup projects, installs them, and then runs the NUnit tests for that product. It then proceeds on to the next product/solution that depends on the previous build target, builds it, installs it, and runs the tests. And so on through all of the 8 or 9 solutions.

I think that my nightly build scenario seems reasonable, and I don't see how running the tests on the output executables and DLLs in the drop directory even makes any sense, since they won't run: the drop directory doesn't contain the files needed to execute the programs, let alone the log sources and performance counters, etc. created by the custom install code.

So obviously I'm missing something fundamental or have taken a completely naive approach to Team System. Do I need to supplement Team System with my own hand-crafted MSBuild script that handles my build and testing requirements? If so, since I'm completely new to MSBuild, can someone point me to examples of how to do this?

Thanks for reading my rant.

]Monty[
I would also advice anyone who wants to integrate setup creation into the build process to have a look at WIX. I believe it is a much nicer way to create MSI setups than the build in Visual Studio solution.
One possible workaround could be:
Please refer to Microsoft.TeamFoundation.Build.targets, which is located at %ProgramFiles%\MSBuild\Microsoft\VisualStudio\v8.0\TeamBuild on the Team Build machine, for a sample of an MSBuild task.
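For reference, the workaround pattern commonly suggested in this era was to shell out to devenv from a custom MSBuild target, since the Visual Studio IDE (unlike MSBuild) can build .vdproj setup projects. This is only a sketch: the solution/project names are placeholders, and the devenv path may differ per machine.

```xml
<!-- Custom target added to the team build project (names are placeholders). -->
<Target Name="BuildSetupProjects" DependsOnTargets="CoreCompile">
  <!-- devenv.com understands .vdproj files; MSBuild does not. -->
  <Exec Command="&quot;$(ProgramFiles)\Microsoft Visual Studio 8\Common7\IDE\devenv.com&quot; MySolution.sln /Build Release /Project MySetup.vdproj" />
</Target>
```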
Thanks, Monty!
The only remaining question is how to avoid the
warning MSB4078: The project file "xxx.vdproj" is not supported by MSBuild and cannot be built.
Since the vdproj file needs to be part of the solution, I see no way of getting rid of this warning (which obscures other, "real", warnings). Any ideas?
Thanks!
/F
Wait for the next version of Team System and hope they get a clue and support real-world deployment :)
]Monty[
https://databaseforum.info/30/500101.aspx
What should I do next for a (newbie) program?
I started working on a program (just for practice) to make a color-coded HTML page when it's given a C++ code file.
Right now all it does is colorize the single-line comments.
If you want you can build on this. It consists of two files, cc2html.cc and defs.h.

defs.h
Code:
#define DOCUMENT_TITLE "Html Generated by cc2html"
#define DOCUMENT_BGCOLOR "white"
#define DOCUMENT_COMMENTCOLOR "red"
#define DOCUMENT_KEYWORDCOLOR "blue"
#define DOCUMENT_STRINGCOLOR "green"
#define DOCUMENT_TEXTCOLOR "black"

cc2html.cc
Code:
#include <iostream.h>
#include <fstream.h>
#include <string>
#include "defs.h"

#define ADD outFile <<

using namespace std;

void html_header(ofstream &outFile);
void html_footer(ofstream &outFile);

int main(int argc, char *argv[])
{
    if(argc != 3)
    {
        cout << "error - usage : cc2html <filename>.cc <filename>.html\n";
        return 0;
    }
    int numKeywords;
    ifstream inFile(argv[1]);  //the cc file
    ofstream outFile(argv[2]); //the html file
    html_header(outFile);
    char c;
    while(inFile.get(c))
    {
        switch(c)
        {
            case '<'  : ADD "&lt;"; break;
            case '>'  : ADD "&gt;"; break;
            case ' '  : ADD "&nbsp;"; break;
            case '\t' : ADD "&nbsp;"; break;
            default   : if(c != EOF) ADD c; break;
        }//end switch
    }//end while
    html_footer(outFile);
    inFile.close();
    outFile.close();
}//end main

void html_header(ofstream &outFile)
{
    ADD "<html>\n";
    ADD "<head>\n";
    ADD "<title>" << DOCUMENT_TITLE << "</title>\n";
    ADD "</head>\n";
    ADD "<body bgcolor=" << DOCUMENT_BGCOLOR << ">\n";
    outFile.flush();
}

void html_footer(ofstream &outFile)
{
    ADD "</body>\n";
    ADD "</html>\n";
    outFile.flush();
}
I need an idea that is simple enough for a programmer who started about 3 weeks ago.
Why don't you create a simple calculator type program? Create a class for each operand +-*/ and, using a switch or if statement, have the user select which one he wants to do. Then in each class there would be a function for getting the user's numbers. So for example, class Add:
getnumb(); // get the numbers
addnumb(); // add the two numbs
printnumb(); // print them
Try it!
Go to the Contest board. Look around until you find something you think you can do.
C Code. C Code Run. Run Code Run... Please!
"Love is like a blackhole, you fall into it... then you get ripped apart"
Just three weeks of programming....hmm....

Originally posted by fuh
I need an idea that is simple enough for a programmer who started about 3 weeks ago.
Make a program which asks the user for a string and then responds telling them if it's a palindrome (the same backwards and forwards) or exits if they enter "exit". For example:

Enter a string: fuh
fuh is not a palindrome

Enter a string: anna
anna is a palindrome

Enter a string: exit
exiting program....Goodbye!
Last edited by beege31337; 12-15-2002 at 06:36 PM.
Make a simple RPG
I am against the teaching of evolution in schools. I am also against widespread
literacy and the refrigeration of food.
rpg

Originally posted by abrege
Make a simple RPG

i started at least 5 weeks ago, how is that possible? graphics can come in last.
fuh, first register, then tell me your age. But for now just stick with numbers, or make a program that does something you need (if it's made already, make it again or improve it).
hope it helps
try char for words
any problems just pm (privet message)me
I wouldn't jump into anything too overwhelming (well knowing me I would but I wouldn't suggest it). How about a program that deletes files. Like del or rm.
I would take golfinguy's idea. Write a simple calculator program
ok, thats pretty much a virus, get it on another computer and get arested

Originally posted by master5001
I wouldn't jump into anything too overwhelming (well knowing me I would but I wouldn't suggest it). How about a program that deletes files. Like del or rm.
. are you drunk
missles on metriods
http://cboard.cprogramming.com/cplusplus-programming/30768-can%27t-think-any-new-programs.html
Other packages may define their own REPL modes in addition to the default modes. For instance, the Cxx package defines the cxx> shell mode for a C++ REPL. These modes are usually accessible with their own special keys; see package documentation for more details.
After installing Julia, to launch the read-eval-print-loop (REPL):
Open a terminal window, then type julia at the prompt, then hit Return. You should see something like this come up:
Find the Julia program in your start menu, and click it. The REPL should be launched.
The Julia REPL is an excellent calculator. We can start with some simple operations:
julia> 1 + 1
2

julia> 8 * 8
64

julia> 9 ^ 2
81
The ans variable contains the result of the last calculation:

julia> 4 + 9
13

julia> ans + 9
22
We can define our own variables using the assignment operator =:

julia> x = 10
10

julia> y = 20
20

julia> x + y
30
Julia has implicit multiplication for numeric literals, which makes some calculations quicker to write:
julia> 10x
100

julia> 2(x + y)
60
If we make a mistake and do something that is not allowed, the Julia REPL will throw an error, often with a helpful tip on how to fix the problem:
julia> 1 ^ -1
ERROR: DomainError: Cannot raise an integer x to a negative power -n. Make x a float by adding a zero decimal (e.g. 2.0^-n instead of 2^-n), or write 1/x^n, float(x)^-n, or (x//1)^-n.
 in power_by_squaring at ./intfuncs.jl:82
 in ^ at ./intfuncs.jl:106

julia> 1.0 ^ -1
1.0
To access or edit previous commands, use the ↑ (Up) key, which moves to the last item in history. The ↓ moves to the next item in history. The ← and → keys can be used to move and make edits to a line.
Julia has some built-in mathematical constants, including e and pi (or π).

julia> e
e = 2.7182818284590...

julia> pi
π = 3.1415926535897...

julia> 3π
9.42477796076938
We can type characters like π quickly by using their LaTeX codes: press \, then p and i, then hit the Tab key to substitute the \pi just typed with π. This works for other Greek letters and additional unicode symbols.
We can use any of Julia's built-in math functions, which range from simple to fairly powerful:
julia> cos(π)
-1.0

julia> besselh(1, 1, 1)
0.44005058574493355 - 0.7812128213002889im
Complex numbers are supported using im as an imaginary unit:

julia> abs(3 + 4im)
5.0
Some functions will not return a complex result unless you give it a complex input, even if the input is real:
julia> sqrt(-1)
ERROR: DomainError: sqrt will only return a complex result if called with a complex argument. Try sqrt(complex(x)).
 in sqrt at math.jl:146

julia> sqrt(-1+0im)
0.0 + 1.0im

julia> sqrt(complex(-1))
0.0 + 1.0im
Exact operations on rational numbers are possible using the // rational division operator:

julia> 1//3 + 1//3
2//3
See the Arithmetic topic for more about what sorts of arithmetic operators are supported by Julia.
Note that machine integers are constrained in size, and will overflow if the result is too big to be stored:
julia> 2^62
4611686018427387904

julia> 2^63
-9223372036854775808
This can be prevented by using arbitrary-precision integers in the computation:
julia> big"2"^62
4611686018427387904

julia> big"2"^63
9223372036854775808
Machine floating points are also limited in precision:
julia> 0.1 + 0.2
0.30000000000000004
More (but still limited) precision is possible by again using big:

julia> big"0.1" + big"0.2"
3.000000000000000000000000000000000000000000000000000000000000000000000000000017e-01
Exact arithmetic can be done in some cases using Rationals:

julia> 1//10 + 2//10
3//10
There are three built-in REPL modes in Julia: the Julia mode, the help mode, and the shell mode.
The Julia REPL comes with a built-in help system. Press ? at the julia> prompt to access the help?> prompt.
At the help prompt, type the name of some function or type to get help for:
Even if you do not spell the function correctly, Julia can suggest some functions that are possibly what you meant:
help?> printline
search:

Couldn't find printline
Perhaps you meant println, pipeline, @inline or print
No documentation found.

Binding printline does not exist.
This documentation works for other modules too, as long as they use the Julia documentation system.
julia> using Currencies

help?> @usingcurrencies
Export each given currency symbol into the current namespace. The individual
unit exported will be a full unit of the currency specified, not the smallest
possible unit. For instance, @usingcurrencies EUR will export EUR, a currency
unit worth 1€, not a currency unit worth 0.01€.

@usingcurrencies EUR, GBP, AUD
7AUD  # 7.00 AUD

There is no sane unit for certain currencies like XAU or XAG, so this macro
does not work for those. Instead, define them manually:

const XAU = Monetary(:XAU; precision=4)
See Using Shell from inside the REPL for more details about how to use Julia's shell mode, which is accessible by hitting ; at the prompt. This shell mode supports interpolating data from the Julia REPL session, which makes it easy to call Julia functions and make their results into shell commands:
https://sodocumentation.net/julia-lang/topic/5739/repl
> at91rm9200bsp.rar > sysALib.s
/* sysALib.s - ARM Integrator system-dependent routines */ /* Copyright 1999-2001 ARM Limited */ /* Copyright 1999-2001 Wind River Systems, Inc. */ /* modification history -------------------- 2004/10/23 this file is modified form VxWorks demo bsp integrator920t */ /* DESCRIPTION This module contains system-dependent routines written in assembly language. It contains the entry code, sysInit(), for VxWorks images that start running from RAM, such as 'vxWorks'. These images are loaded into memory by some external program (e.g., a boot ROM) and then started. The routine sysInit() must come first in the text segment. Its job is to perform the minimal setup needed to call the generic C routine usrInit(). sysInit() masks interrupts in the processor and the interrupt controller and sets the initial stack pointer. Other hardware and device initialisation is performed later in the sysHwInit routine in sysLib.c. NOTE The routines in this module don't use the "C" frame pointer %r11@ ! or establish a stack frame. SEE ALSO: .I "ARM Architecture Reference Manual," .I "ARM 7TDMI Data Sheet," .I "ARM 720T Data Sheet," .I "ARM 740T Data Sheet," .I "ARM 920T Technical Reference Manual", .I "ARM 940T Technical Reference Manual", .I "ARM 946E-S Technical Reference Manual", .I "ARM 966E-S Technical Reference Manual", .I "ARM Reference Peripherals Specification," .I "ARM Integrator/AP User Guide", .I "ARM Integrator/CM7TDMI User Guide", .I "ARM Integrator/CM720T User Guide", .I "ARM Integrator/CM740T User Guide", .I "ARM Integrator/CM920T User Guide", .I "ARM Integrator/CM940T User Guide", .I "ARM Integrator/CM946E User Guide", .I "ARM Integrator/CM9x6ES Datasheet". 
*/ #define _ASMLANGUAGE #include "vxWorks.h" #include "asm.h" #include "regs.h" #include "sysLib.h" #include "config.h" #include "arch/arm/mmuArmLib.h" .data .globl VAR(copyright_wind_river) .long VAR(copyright_wind_river) /* internals */ .globl FUNC(sysInit) /* start of system code */ .globl FUNC(sysIntStackSplit) /* routine to split interrupt stack */ /* externals */ .extern FUNC(usrInit) /* system initialization routine */ .extern FUNC(vxSvcIntStackBase) /* base of SVC-mode interrupt stack */ .extern FUNC(vxSvcIntStackEnd) /* end of SVC-mode interrupt stack */ .extern FUNC(vxIrqIntStackBase) /* base of IRQ-mode interrupt stack */ .extern FUNC(vxIrqIntStackEnd) /* end of IRQ-mode interrupt stack */ .text .balign 4 /******************************************************************************* * * sysInit - start after boot * * This routine is the system start-up entry point for VxWorks in RAM, the * first code executed after booting. It disables interrupts, sets up * the stack, and jumps to the C routine usrInit() in usrConfig.c. * * The initial stack is set to grow down from the address of sysInit(). This * stack is used only by usrInit() and is never used again. Memory for the * stack must be accounted for when determining the system load address. * * NOTE: This routine should not be called by the user. * * RETURNS: N/A * sysInit () /@ THIS IS NOT A CALLABLE ROUTINE @/ */ _ARM_FUNCTION(sysInit) */ /* set initial stack pointer so stack grows down from start of code */ ADR sp, FUNC(sysInit) /* initialise stack pointer */ /* now call usrInit */ MOV fp, #0 /* initialise frame pointer */ MOV r0, #BOOT_WARM_AUTOBOOT /* pass startType */ #if (ARM_THUMB) LDR r12, L$_usrInit BX r12 #else B FUNC(usrInit) #endif /* (ARM_THUMB) */ /******************************************************************************* * * sysIntStackSplit - split interrupt stack and set interrupt stack pointers * * This routine is called, via a function pointer, during kernel * initialisation. 
It splits the allocated interrupt stack into IRQ and * SVC-mode stacks and sets the processor's IRQ stack pointer. Note that * the pointer passed points to the bottom of the stack allocated i.e. * highest address+1. * * IRQ stack needs 6 words per nested interrupt; * SVC-mode will need a good deal more for the C interrupt handlers. * For now, use ratio 1:7 with any excess allocated to the SVC-mode stack * at the lowest address. * * Note that FIQ is not handled by VxWorks so no stack is allocated for it. * * The stacks and the variables that describe them look like this. * .CS * * - HIGH MEMORY - * ------------------------ <--- vxIrqIntStackBase (r0 on entry) * | | * | IRQ-mode | * | interrupt stack | * | | * ------------------------ <--{ vxIrqIntStackEnd * | | { vxSvcIntStackBase * | SVC-mode | * | interrupt stack | * | | * ------------------------ <--- vxSvcIntStackEnd * - LOW MEMORY - * .CE * * NOTE: This routine should not be called by the user. * void sysIntStackSplit * ( * char *pBotStack /@ pointer to bottom of interrupt stack @/ * long size /@ size of stack @/ * ) */ _ARM_FUNCTION_CALLED_FROM_C(sysIntStackSplit) /* * r0 = base of space allocated for stacks (i.e. 
highest address) * r1 = size of space */ SUB r2, r0, r1 /* r2->lowest usable address */ LDR r3, L$_vxSvcIntStackEnd STR r2, [r3] /* == end of SVC-mode stack */ SUB r2, r0, r1, ASR #3 /* leave 1/8 for IRQ */ LDR r3, L$_vxSvcIntStackBase STR r2, [r3] /* now allocate IRQ stack, setting irq_sp */ LDR r3, L$_vxIrqIntStackEnd STR r2, [r3] LDR r3, L$_vxIrqIntStackBase STR r0, [r3] MRS r2, cpsr BIC r3, r2, #MASK_MODE ORR r3, r3, #MODE_IRQ32 | I_BIT /* set irq_sp */ MSR cpsr, r3 MOV sp, r0 /* switch back to original mode and return */ MSR cpsr, r2 #if (ARM_THUMB) BX lr #else MOV pc, lr #endif /* (ARM_THUMB) */ /******************************************************************************/ /* * PC-relative-addressable pointers - LDR Rn,=sym is broken * note "_" after "$" to stop preprocessor preforming substitution */ .balign 4 L$_vxSvcIntStackBase: .long VAR(vxSvcIntStackBase) L$_vxSvcIntStackEnd: .long VAR(vxSvcIntStackEnd) L$_vxIrqIntStackBase: .long VAR(vxIrqIntStackBase) L$_vxIrqIntStackEnd: .long VAR(vxIrqIntStackEnd) #if (ARM_THUMB) L$_usrInit: .long FUNC(usrInit) #endif /* (ARM_THUMB) */
http://read.pudn.com/downloads56/sourcecode/embed/198274/at91rm9200bsp/at91rm9200/sysALib.s__.htm
On Sat, Aug 8, 2015 at 2:49 PM, wm4 <nfxjfg at googlemail.com> wrote:
> On Sat, 8 Aug 2015 14:31:21 +0200
> Hendrik Leppkes <h.leppkes at gmail.com> wrote:
>
>> On Sat, Aug 8, 2015 at 1:36 PM, Andreas Cadhalpun
>> <andreas.cadhalpun at googlemail.com> wrote:
>> > They are used by the not deprecated av_frame_{g,s}et_qp_table.
>> >
>> > Signed-off-by: Andreas Cadhalpun <Andreas.Cadhalpun at googlemail.com>
>> > ---
>> >  libavutil/frame.h | 6 ++----
>> >  1 file changed, 2 insertions(+), 4 deletions(-)
>> >
>> > diff --git a/libavutil/frame.h b/libavutil/frame.h
>> > index 196b578..c4e333c 100644
>> > --- a/libavutil/frame.h
>> > +++ b/libavutil/frame.h
>> > @@ -285,21 +285,19 @@ typedef struct AVFrame {
>> >  #if FF_API_AVFRAME_LAVC
>> >      attribute_deprecated
>> >      int reference;
>> > -
>> > +#endif
>
> Stray change.
>
>> >      /**
>> >       * QP table
>> >       */
>> > -    attribute_deprecated
>> >      int8_t *qscale_table;
>> >      /**
>> >       * QP store stride
>> >       */
>> > -    attribute_deprecated
>> >      int qstride;
>> >
>> > -    attribute_deprecated
>> >      int qscale_type;
>> >
>> > +#if FF_API_AVFRAME_LAVC
>> >      /**
>> >       * mbskip_table[mb]>=1 if MB didn't change
>> >       * stride= mb_width = (width+15)>>4
>>
>> Didn't this stuff move into sidedata?
>
> In FFmpeg. It's completely gone in Libav. (FFmpeg "needs" it for their
> relatively useless postproc filters.)
>
> Removing the deprecation won't make this work either; it just makes
> projects referencing it compile. And apparently distros can't be
> bothered to patch this, even though making sure the projects actually
> _work_ as opposed to merely compiling them got to be much more work.
> Makes no sense to me.

Then we should move it into side data, just like all the other video
metadata which was already moved; it has no place in a generic AVFrame
(and makes it consistent with the other stuff as well).

- Hendrik
http://ffmpeg.org/pipermail/ffmpeg-devel/2015-August/176948.html
#include <sys/pccard.h>

int32_t csx_DupHandle(acc_handle_t handle1, acc_handle_t *handle2,
    uint32_t flags);
Solaris DDI Specific (Solaris DDI)
The access handle returned from csx_RequestIO(9F) or csx_RequestWindow(9F) that is to be duplicated.
A pointer to the newly-created duplicated data access handle.
The access attributes that will be applied to the new handle.
This function duplicates the handle, handle1, into a new handle, handle2, that has the access attributes specified in the flags argument. Both the original handle and the new handle are active and can be used with the common access functions.
Both handles must be explicitly freed when they are no longer necessary.
The flags argument is bit-mapped. The following bits are defined: flags. Setting this bit also implies re-ordering.
The CPU may cache the data it fetches and reuse it until another store occurs. The default behavior is to fetch new data on every load. Setting this bit also implies merging and re-ordering.
The CPU may keep the data in the cache and push it to the device (perhaps with other data) at a later time. The default behavior is to push the data right away. Setting this bit also implies load caching, merging, and re-ordering.
These values are advisory, not mandatory. For example, data can be ordered without being merged or cached, even though a driver requests unordered, merged and cached together.
Successful operation.
Error in flags argument or handle could not be duplicated for some reason.
No PCMCIA hardware installed.
This function may be called from user or kernel context.
csx_Get8(9F), csx_GetMappedAddr(9F), csx_Put8(9F), csx_RepGet8(9F), csx_RepPut8(9F), csx_RequestIO(9F), csx_RequestWindow(9F)
PC Card 95 Standard, PCMCIA/JEIDA
https://docs.oracle.com/cd/E36784_01/html/E36886/csx-duphandle-9f.html
Detection of rust with OpenCV (Python) Part 2
This is a follow up to the previous post: Detection of rust with OpenCV (Python)
Original Image: Rust Image
After reading through the comments on the previous post, we tried working in HSV instead of BGR. However, there wasn't too much of a difference.
This is our current code with reference from here: Colour Detection HSV
import cv2
import numpy as np

img = cv2.imread('/home/brendanloe/img.jpeg', 1)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

boundaries1 = [([169, 100, 100], [189, 255, 255])]
boundaries2 = [([3, 100, 100], [17, 255, 255])]
boundaries3 = [([2, 100, 100], [22, 255, 255])]
boundaries4 = [([6, 100, 100], [26, 255, 255])]

for (lower1, upper1) in boundaries1:
    lower1 = np.array(lower1, dtype="uint8")
    upper1 = np.array(upper1, dtype="uint8")
    mask = cv2.inRange(hsv, lower1, upper1)
    output1 = cv2.bitwise_and(img, img, mask=mask)

for (lower2, upper2) in boundaries2:
    lower2 = np.array(lower2, dtype="uint8")
    upper2 = np.array(upper2, dtype="uint8")
    mask = cv2.inRange(hsv, lower2, upper2)
    output2 = cv2.bitwise_and(img, img, mask=mask)

for (lower3, upper3) in boundaries3:
    lower3 = np.array(lower3, dtype="uint8")
    upper3 = np.array(upper3, dtype="uint8")
    mask = cv2.inRange(hsv, lower3, upper3)
    output3 = cv2.bitwise_and(img, img, mask=mask)

for (lower4, upper4) in boundaries4:
    lower4 = np.array(lower4, dtype="uint8")
    upper4 = np.array(upper4, dtype="uint8")
    mask = cv2.inRange(hsv, lower4, upper4)
    output4 = cv2.bitwise_and(img, img, mask=mask)

final = cv2.bitwise_or(output1, output2, output3)
final1 = cv2.bitwise_or(output4, final)

cv2.imshow("final", final1)
while(1):
    k = cv2.waitKey(0)
    if(k == 27):
        break
cv2.destroyAllWindows()
How can we remove the yellow colour "parking" sign which is still showing in our output? Is it because we are using a jpeg image?
please add the original, bgr image so folks here can try your (and alternative) ideas
Hi, I have added the original image if that is what you were asking for.
it is, thank you ;)
It seems that the rust is a dark red, and the parking sign is yellow-green. Can you use that difference in their properties as a way to exclude the sign?
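Following that last suggestion, the hue channel alone can separate the two: rust sits near the red end of OpenCV's 0-179 hue scale, while a yellow-green sign sits roughly in the 25-45 band. The thresholds below are assumptions to tune against the real image; this numpy-only sketch just shows the masking idea:

```python
import numpy as np

# Toy hue channel (OpenCV scale 0-179); real code would use hsv[:, :, 0].
hue = np.array([[5, 12, 30],
                [170, 35, 8]], dtype=np.uint8)

red_low  = hue <= 20                   # reds just above hue 0 (rust-like)
red_high = hue >= 165                  # reds wrapping around toward 180
sign     = (hue >= 25) & (hue <= 45)   # assumed yellow-green band

mask = (red_low | red_high) & ~sign
print(mask)
# [[ True  True False]
#  [ True False  True]]
```

The resulting boolean mask (converted to uint8 and scaled to 255) could then be fed to cv2.bitwise_and in place of the four separate boundary loops.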
https://answers.opencv.org/question/178883/detection-of-rust-with-opencv-python-part-2/
Document: WG14 N1464
Submitter: Fred J. Tydeman (USA)
Submission Date: 2010-05-10
Related documents: N818, SC22WG14.8195, N1399, N1419, N1431
Subject: Creation of complex value
Problem: (x + y*I) will NOT do the right thing if "I" is complex and "y" is NaN or infinity. It does work fine if "I" is imaginary. Users and library implementors have noticed this deficiency in the standard and have been surprised that there is no easy to use portable way to create a complex number that can be used in both assignment and static initialization.
WG14 paper N818 presented more details on why the problem exists as well as many possible solutions. Papers N1419 and N1431 added some more possible solutions.
Proposal
This has been shipping for several years from HP.
Add 3 new function-like macros to <complex.h> in section 7.3.9 Manipulation functions:
7.3.9.x The CMPLX macros
Synopsis
#include <complex.h>
double complex CMPLX( double x, double y );
float complex CMPLXF( float x, float y );
long double complex CMPLXL( long double x, long double y );
Description
The function-like macros CMPLX(x,y), CMPLXF(x,y), and CMPLXL(x,y) each expands to an expression of the specified complex type, with real part having the value of x (converted) and imaginary part having the value of y (converted). Each macro can be used for static initialization if and only if both x and y could be used as static initializers for the corresponding real type.
The macros act "as if" an implementation supports imaginary and the macros were defined as:
#define CMPLX(x,y)  ((double)(x)+_Imaginary_I*(double)(y))
#define CMPLXF(x,y) ((float)(x)+_Imaginary_I*(float)(y))
#define CMPLXL(x,y) ((long double)(x)+_Imaginary_I*(long double)(y))
Returns
The CMPLX macros return the complex value x + i*y created from a pair of real values, x and y.
Add to the rationale in the section on complex:
x + y*I will not create the expected value x + iy if I is complex and "y" is a NaN or an infinity; however, the expected value will be created if I is imaginary. Because of this, CMPLX(x,y) as an initializer of a complex object was added to C1x to allow a way to create a complex number from a pair of real values.
http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1464.htm
I'm looking for a way for my algorithm to only trade during regular stock market hours. Is there any helper method for this?
Hi Andrew,
You probably want to use schedule_function and specify the calendar of the scheduler to be the US_EQUITIES calendar.
Here's an example:
from quantopian.algorithm import calendars

def initialize(context):
    # Runs at equity market open.
    schedule_function(
        func=myfunc,
        date_rule=date_rules.every_day(),
        time_rule=time_rules.market_open(minutes=1),
        calendar=calendars.US_EQUITIES,
    )
If you want to run something every minute of equity market hours, you could schedule a function at equity market open and close and flip a boolean flag and then in handle_data, only execute your trade logic when the boolean flag is true.
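The open/close flag idea can be sketched in plain Python; Context, market_open_handler, market_close_handler, and the trades list are made-up stand-ins for Quantopian's context object and scheduled callbacks, since the actual scheduling is done by the platform's runtime:

```python
# Minimal sketch of the boolean-flag pattern (names are illustrative).
class Context:
    in_market_hours = False

def market_open_handler(context):
    # Would be scheduled at equity market open via schedule_function.
    context.in_market_hours = True

def market_close_handler(context):
    # Would be scheduled at equity market close via schedule_function.
    context.in_market_hours = False

def handle_data(context, trades):
    # Runs every minute; only trade while the flag is set.
    if context.in_market_hours:
        trades.append("order")

ctx = Context()
trades = []
handle_data(ctx, trades)      # before open: no trade
market_open_handler(ctx)
handle_data(ctx, trades)      # during equity hours: trades
market_close_handler(ctx)
handle_data(ctx, trades)      # after close: no trade
```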
Does this help?
https://www.quantopian.com/posts/es-futures-only-trade-during-spy-hours
Locky: JavaScript Deobfuscation
Last Updated: 2016-02-20 18:35:06 UTC
by Didier Stevens (Version: 1)
Yesterday, Wayne Smith submitted a sample (MD5 F1F31B18259DC9768D8B6132E543E3EE) to the ISC. Xavier, handler on duty, analyzed the (malicious) JavaScript in his sandbox, but it failed with an error. As I wrote in a previous diary, if malware malfunctions, you can still use static analysis.
Here is the script:
The expression I labeled 1 is a list of strings. The last string has a method call (e.()). This String method is defined farther down in the script: look at the function definition I labeled 2. Method e() returns the first character of the string to which it is applied. So the expression ('office', 'modal', 'dialect', '\u0074informer'.e()) can be replaced with the expression ('office', 'modal', 'dialect', '\u0074'), or ('office', 'modal', 'dialect', 't'). When a list is evaluated in JavaScript, it evaluates to its last element. So the expression finally becomes 't'. You can see that this script contains many expressions similar to the one I just reduced: this is the kind of string obfuscation used in this sample.
So what I would like to do is replace each expression with the character it evaluates to. Python has an interesting function I want to use in this case: re.sub. re.sub takes a regular expression and applies it to a given string. For each match in the string, it will replace the matched character sequence with a string or (and this is what I need) the return value of a function that is called for each match. So I can write a regular expression that will match strings like ('office', 'modal', 'dialect', '\u0074informer'.e()), and then write a function that will evaluate this expression (to 't' in this case). I won't write a Python program from scratch to do this, but I will use my translate.py tool. Here is the Python code (decode-1.py) I will use:
import re
def DecodeExpression(oMatch):
    return "'" + chr(int(oMatch.group(1), 16)) + "'"

def Decode(data):
    return re.sub(r"\([^\\\(]+\\u([0-9a-f]{4})[a-z]+'\.e\(\)\)", DecodeExpression, data)
Function Decode does the re.sub call with the regular expression and DecodeExpression function:
This translates the expressions as we wanted, except for one: ('sabotage', 'arctic', 'special', 'minimal', 'gram(me)', 'memorial', '\u0045international'.e()). Our translation failed for this expression because my regular expression is not designed to match words that contain parentheses: 'gram(me)'. Instead of trying to design a regular expression that will also match this expression, we can just remove the parentheses: gramme.
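The first script can be exercised on a made-up line in the same style as the sample (the variable name and word are invented for illustration):

```python
import re

# Demo of the first decoding step: reduce each ('w1', ..., '\uXXXXword'.e())
# expression to the single character its \uXXXX escape denotes.
def DecodeExpression(oMatch):
    return "'" + chr(int(oMatch.group(1), 16)) + "'"

def Decode(data):
    return re.sub(r"\([^\\\(]+\\u([0-9a-f]{4})[a-z]+'\.e\(\)\)",
                  DecodeExpression, data)

sample = r"var c = ('office', 'modal', 'dialect', '\u0074informer'.e());"
print(Decode(sample))  # var c = 't';
```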
If you look closely, you will see some keywords and maybe a URL. But to make it easier to read, we will concatenate the string expressions with this Python script (decode-2.py):
import re
def DecodeExpression(oMatch):
    return "'" + eval(oMatch.group(0)) + "'"

def Decode(data):
    return re.sub(r"('[^']*' \+ )+'[^']*'", DecodeExpression, data)
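The second script can likewise be tried on a made-up concatenation (the URL here is invented, not the one from the sample):

```python
import re

# Demo of the second step: collapse runs of 'a' + 'b' + ... string
# concatenations into a single quoted literal. eval is safe here only
# because the regex restricts matches to quoted-string concatenations.
def DecodeExpression(oMatch):
    return "'" + eval(oMatch.group(0)) + "'"

def Decode(data):
    return re.sub(r"('[^']*' \+ )+'[^']*'", DecodeExpression, data)

print(Decode("var url = 'ht' + 'tp' + '://example' + '.com';"))
```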
Now you can clearly see the URL, but let's add a newline after each semi-colon (;) to make the script a bit more readable:
I downloaded this Locky sample with the deobfuscated URL: MD5 91d8ab08a37f9c26a743380677aa200d
Didier Stevens
SANS ISC Handler
Microsoft MVP Consumer Security
blog.DidierStevens.com DidierStevensLabs.com
IT Security consultant at Contraste Europe.
https://isc.sans.edu//diary/Locky:+JavaScript+Deobfuscation/20749
Steven Van de Craen's Blog feed for the Posts list.en-US2018-07-15T18:23:40-07:00Subscribe with BloglinesSubscribe with GoogleSubscribe with Live.com Van de Craen's Blog sync client and green locks (read-only sync) 365SharePointOneDriveContent TypesPnPTroubleshootingSteven Van de Craen2017-12-07T07:31:07-08:00The issue foll... (More)<img src="" height="1" width="1" alt=""/> 10 Creators Update: Slow wireless connection Van de Craen2017-05-08T04:07:07-07:00A... (More)<img src="" height="1" width="1" alt=""/> GetTermSets Failed to compare two elements in the array Van de Craen2016-12-23T08:36:51-08:00This sys... (More)<img src="" height="1" width="1" alt=""/> - Upgrading SharePoint - Some views not upgraded to XsltListViewWebPart Van de Craen2016-08-24T08:03:00-07:00The old st... (More)<img src="" height="1" width="1" alt=""/> 2013: InfoPath client forms may open twice [Oct15CU bug] Van de Craen2015-12-16T12:47:00-08:00Oops After a recent Patch Night one of my customers had pulled in SharePoint updates along with Windows Updates and people started complaining about changed behavior PDF files no longer immediately open in the browser. Instead the PDF client (Adobe Reader) opens up and provides rich integration with... (More)<img src="" height="1" width="1" alt=""/> 2013: Programmatically set crawl cookies for forms authenticated web sites Van de Craen2015-08-29T00:54:29-07:00Last week I was having difficulties in crawling forms authenticated web sites. When configuring a crawl rule to store the authentication cookies the login page was returning multiple cookies with same name but different domain. This gave issues in a later stage (during crawl) because all cookies... 
(More)<img src="" height="1" width="1" alt=""/> 2013: Some observations on Enterprise Search Van de Craen2015-08-13T09:44:08-07:00I’m doing some testing with the Enterprise Search in SharePoint 2013 for a customer scenario and here are some observations… Content Source as Crawled Property The “Content Source” name is out of the box available as Managed Property on all content in the search index This makes it possible t... (More)<img src="" height="1" width="1" alt=""/>: Portal navigation limited to 50 dynamic items Van de Craen2015-08-12T04:07:28-07:00Is... (More)<img src="" height="1" width="1" alt=""/>: Users cannot create new subsites Van de Craen2015-07-13T10:02:49-07:00Issue “Sit... (More)<img src="" height="1" width="1" alt=""/> 2013: Open PDF files in client application Van de Craen2015-06-26T09:23:14-07:00Share colleagu... (More)<img src="" height="1" width="1" alt=""/> 2013: Enable 'Change Item Order' Ribbon Action Van de Craen2015-06-04T10:00:00-07:00My S... (More)<img src="" height="1" width="1" alt=""/> site collections from explicit managed path to wildcard managed path Van de Craen2015-03-14T00:41:20-07:00 o... (More)<img src="" height="1" width="1" alt=""/> Publishing Feature activation failed Van de Craen2014-12-16T07:19:22-08:00Last week I ran into an issue while reactivating the Publishing Feature on webs in a migrated (Dutch) site collection. If you have ever upgraded localized SharePoint 2007 Publishing sites this should sound familiar to you. What happens is that while in SharePoint 2007 the Pages library was called “P... (More)<img src="" height="1" width="1" alt=""/>: Rendering inside iframes ServicesOffice Web ApplicationsInfoPathSteven Van de Craen2014-10-31T09:35:00-07:00This-hos... (More)<img src="" height="1" width="1" alt=""/> a Summary Links Web Part: List does not exist 365TroubleshootingSteven Van de Craen2014-10-23T08:23:12-07:00Issue i... 
(More)<img src="" height="1" width="1" alt=""/> 2013: Bulk Content Approval of list items fails if user has read permissions on the web 365TroubleshootingSteven Van de Craen2014-10-17T09:51:42-07:00Update 6/08/2015 Issue is still present in May 2015 Cumulative Update and July 2015 Cumulative Update. Will contact Microsoft on this. 3/12/2014 Microsoft has confirmed this issue and will roll out a fix in the next Cumulative Update. Issue Last week I was notified of an issue wher... (More)<img src="" height="1" width="1" alt=""/> 2013 Web Applications requests not logged to ULS logs Van de Craen2014-10-16T15:04:23-07:00Issue m... (More)<img src="" height="1" width="1" alt=""/> 10 Technical Preview and Cisco AnyConnect Van de Craen2014-10-03T14:26:01-07:00Today I decided to look into Windows 10 Technical Preview without safety net and run it on my main work machine. No real issues so far, except connecting to our corporate network via Cisco AnyConnect (version 3.1.04059). Failed to initialize connection subsystem This can easily be resolved ... (More)<img src="" height="1" width="1" alt=""/> Server: allow multiple RDP sessions per user Van de Craen2014-10-02T04:56:09-07:00I’ve often worked on SharePoint environments where I accidentally got kicked or kicked others because we were working with the same account on the same server via Remote Desktop. By default each user is restricted to a single session but there’s a group policy to change this. In Windows Server 2008... (More)<img src="" height="1" width="1" alt=""/> web services in Nintex Workflow and different authentication mechanisms Van de Craen2014-09-12T13:40:06-07:00With the rise of claims based authentication in SharePoint we’ve faced new challenges in how to interact with web services hosted on those environments. Claims based authentication allows many different scenario’s with a mixture of Windows, Forms and SAML Authentication. When you’re working with ... 
(More)<img src="" height="1" width="1" alt=""/> and Content Databases Van de Craen2014-08-13T04:05:25-07:00Today I found a gotcha with the Restore-SPSite command when restoring “over” an existing Site Collection. The issue occurs if all Content Databases are at a maximum of their maximum Site Collection count. The error you’ll receive is that there is basically no room for the new Site Collection:... (More)<img src="" height="1" width="1" alt=""/> creating subsites when a built-in field is modified Van de Craen2014-06-30T07:26:43-07:00One of our site collections in a migration to SharePoint 2013 experienced an issue with creating sub sites: Sorry, something went wrong The URL 'SitePages/Home.aspx' is invalid. It may refer to a nonexistent file or folder, or refer to a valid file or folder that is not in the cur... (More)<img src="" height="1" width="1" alt=""/>: How to troubleshoot issues with Save as template Van de Craen2014-05-23T07:32:17-07:00On an upgrade project to SharePoint 2013 we ran into an issue where a specific site couldn’t be saved as a template (with or without content). You get the non-descriptive “Sorry, something went wrong” and “An unexpected error has occurred” messages. Funny enough the logged Correlation Id is totally ... (More)<img src="" height="1" width="1" alt=""/> SharePoint DCOM errors the easy way - revised Van de Craen2014-05-08T09:17:39-07:00Tagline: Fix your SharePoint DCOM issues with a single click ! - revised for Windows Server 2012 and User Account Control-enabled systems Update 8/05/2014: Scripts were revised to work with Windows Server 2008 R2 and Windows Server 2012 with User Account Control enabled. Original post&#... (More)<img src="" height="1" width="1" alt=""/> 2013: CreatePersonalSite fail when user license mapping incorrectly configured Van de Craen2014-05-05T06:49:14-07:00Last week I was troubleshooting a farm with ADFS where MySite creation failed. 
The ULS logs indicated that the user was not licensed to have a MySite. 04/29/2014 17:34:10.15 w3wp.exe (WS12-WFE1:0x031C) 0x1790 SharePoint Portal Server Personal Site Instantiation af1lc High Skipping cr... (More)<img src="" height="1" width="1" alt=""/> 2013: Workflows failing on start Van de Craen2014-04-29T05:10:30-07:00Recently I helped out a colleague with an issue in a load balanced SharePoint 2013 environment with Nintex Workflow 2013 on it. All the workflows that were started on WFE1 worked fine, but all started on WFE2 failed on start with the following issue logged to the SharePoint ULS logs: Load Wo... (More)<img src="" height="1" width="1" alt=""/> Saturday Belgium 2014 - Content Enrichment in SharePoint Search Van de Craen2014-04-28T08:06:22-07:00Last Saturday I delivered a session on “Content Enrichment in SharePoint Search” on the Belgian SharePoint Saturday 2014, showing how to configure it, its potential and some development tips and tricks. Although it was a very specific and narrow topic there was a big audience for it. We even had to ... (More)<img src="" height="1" width="1" alt=""/> 2013 search open in client 365Steven Van de Craen2014-03-26T09:26:43-07:00Issue SharePoint 2013 search results uses Excel Calculation Services to open workbooks found in the search results, despite having "open in client" specified on the Document Library and/or the Site Collection level. Notice the URL pointing to _layouts/xlviewer.aspx at the bottom of t... (More)<img src="" height="1" width="1" alt=""/> and PowerShell remoting Van de Craen2014-02-28T05:30:00-08:00In my current project I’m dabbling with PowerShell to query different servers and information from different SharePoint 2010 farms in the organization. This blog contains a brief overview of the steps I took in order to get a working configuration. Enable remoting and credential pass-through You n... 
(More)<img src="" height="1" width="1" alt=""/> REST API not refreshing data ServicesTroubleshootingSteven Van de Craen2014-02-19T23:00:00-08:00We’re using the Excel REST API in SharePoint 2010 to visualize some graphs directly on a web page. The information is stored in an Excel workbook in a document library and that had connections to backend data stores. The connection settings inside the workbook were configured with credentials ins... (More)<img src="" height="1" width="1" alt=""/> Workflow and emailing to groups Van de Craen2014-02-19T07:08:20-08:00Nintex Workflow is able to send emails via the Send notification action. A question often asked is if it can send emails to SharePoint groups or Active Directory groups. The answer is; Yes it can! There are some things you need to know though… Send to an Active Directory group You can use AD s... (More)<img src="" height="1" width="1" alt=""/> and NAT Van de Craen2014-01-15T23:13:21-08:00Networking “challenges” I like Hyper-V. I really like it. But I’m not blind for shortcomings either. The biggest frustration for me has always been the lack of NAT. Up until now I was using ICS (Internet Connection Sharing) but this was far from perfect; » It used the same IP address range as t... (More)<img src="" height="1" width="1" alt=""/> Foundation 2013 broken search experience Van de Craen2013-12-10T02:33:41-08:00Issue I recently examined a SharePoint Foundation 2013 environment where all Search Boxes had gone missing overnight. Also, when browsing to the Search Center I received an error. The ULS logs showed the following error: System.InvalidProgramException: Common Language Runtime d... (More)<img src="" height="1" width="1" alt=""/> GroupBy ordering with calculated field 365Steven Van de Craen2013-12-06T09:16:08-08:00Something that almost every client asks me is how to change the display order of the GroupBy field in a SharePoint List. For instance, let’s say you have a grouping on a status field. 
Unfortunately the List Settings only allow you to sort them alphabetically, either ascending or descending. ... (More)<img src="" height="1" width="1" alt=""/> a broken People Picker in Office 2010 Van de Craen2013-11-27T12:41:04-08:00Recently I was on a troubleshooting mission in a SharePoint 2010 / Office 2010 environment where the People Picker in the Document Information Panel of Word wasn’t resolving input, nor did the address book pop up after clicking it. I fired up Fiddler to see a HTTP 500 System.ServiceModel.ServiceAct... (More)<img src="" height="1" width="1" alt=""/> SharePoint Lookup Fields Van de Craen2013-11-20T13:32:03-08:00Recently I was in an upgrade project and, as any good upgrade project, there are some kinks that needed ironing out. The issue was a corrupted list that had to be recreated and repopulated. Now there’s a challenge in that itself, but it’s not the subject of this post. Let’s just say that we recreate... (More)<img src="" height="1" width="1" alt=""/> system Content Types TypesSteven Van de Craen2013-11-07T11:22:01-08:00I was at a client recently that couldn’t access ANY of their document libraries anymore. New libraries were also affected by this. The SharePoint ULS logs kept spawning the following error: 10/18/2013 14:11:10.08 w3wp.exe (0x128C) 0x1878 SharePoint Foundation Runtime tkau Unexpected... (More)<img src="" height="1" width="1" alt=""/> the Search Service Application for a specific site (programmatically) Van de Craen2013-10-30T06:46:35-07:00IApplica... (More)<img src="" height="1" width="1" alt=""/> Forms Services and the xsi:nil in code behind ServerInfoPathOfficeSteven Van de Craen2013-10-25T05:27:35-07:00Yesterday I had the requirement to programmatically add and remove the xsi:nil attribute from an InfoPath 2010 browser form hosted in InfoPath Forms Services in SharePoint 2010. There are several solutions for adding and removing xsi:nil to be found on the internet, but I’ve found only one ... 
(More)<img src="" height="1" width="1" alt=""/> SharePoint Farm Solutions from the Config Database Van de Craen2013-10-18T17:03:51-07:00Why I was working on what was supposed to be a quick final-run migration from an old SharePoint 2010 farm to a new SharePoint 2010 farm. There had been a test-run and testing period so it should have gone breezy. After I shut down the SharePoint servers in the old environment I used SQL backup... (More)<img src="" height="1" width="1" alt=""/> at me Van de Craen2013-10-11T09:12:07-07:00Hey you, look at me! I’m a blog. A SharePoint blog would you believe it? Don’t I look fancy? A new design For the last few weeks the incredible Tom Van Bortel has been working on a new blog design for this blog. A task I wouldn’t dare to commit myself to, I have very little design skills. But ... (More)<img src="" height="1" width="1" alt=""/> the MySite Url broke the Activity Feed de Craen Steven2013-09-04T02:54:00-07:00Yesterday I was confronted with the issue of a changed My Site Url. The client had asked to change to. The first step is to recreate the Web Application with the new primary Url and set up IIS (certificates). You could extend but they didn't re... (More)<img src="" height="1" width="1" alt=""/> Auto SignIn Van de Craen2013-05-13T06:32:48-07:00Claims SS... (More)<img src="" height="1" width="1" alt=""/> Saturday Belgium 2013 - Claims for developers Van de Craen2013-05-04T01:02:45-07:00Last Saturday I delivered a session on “Claims for developers” at the 3rd Belgian SharePoint Saturday edition, focusing on Claims Based Authentication. It was great to see that there was a lot of interest in this topic, since it’s something that allows you to do some very cool things. It was a re... (More)<img src="" height="1" width="1" alt=""/> 2013 crashes when opening a document (SharePoint 2013) Van de Craen2013-04-17T08:32:20-07:00Since Co... 
(More)<img src="" height="1" width="1" alt=""/> and Claims: Map Network Drive issue Van de Craen2013-02-06T05:42:09-08:00Scenario If a SharePoint Web Application is configured with Claims Authentication, you might run into an issue when trying to map SharePoint as a network drive. If you only have Windows Authentication configured on the Zone… …you’ll be either automatically signed in or get a credential prompt... (More)<img src="" height="1" width="1" alt=""/> Development: Cannot connect to the targeted site StudioSharePointSteven Van de Craen2013-02-05T03:38:32-08:00Do you have your SharePoint 2013 / Visual Studio 2012 development environment up and running ? If so, you might encounter the following error when you create a new SharePoint Project and enter the URL to your site: Cannot connect to the targeted site. This error can occur if the specified ... (More)<img src="" height="1" width="1" alt=""/> 2013 and anonymous users accessing lists and libraries Van de Craen2013-01-26T11:41:37-08:00The ... (More)<img src="" height="1" width="1" alt=""/> migration to SharePoint 2013 - Part 2 Van de Craen2013-01-21T04:44:00-08:00Here’s a second post regarding the upgrade of my old SharePoint 2007 blog to a newer SharePoint version. Right now I’m running my blog on our new SharePoint 2013 infrastructure in SharePoint 2010 modus (deferred Site Collection upgrade). CKS:EBE I deployed the original SharePoint 2007 WSP to t... (More)<img src="" height="1" width="1" alt=""/> SharePoint - Some views not upgraded to XsltListViewWebPart Van de Craen2013-01-08T06:14:37-08:00Update 24/08/16 Revisited post: Update 16/07/13 Included the tool and source for SharePoint 2013 and expanded functionality so that yo... (More)<img src="" height="1" width="1" alt=""/> migration to SharePoint 2013 - Part 1 Van de Craen2013-01-02T13:33:19-08:00I... (More)<img src="" height="1" width="1" alt=""/> Information Management Policy: Invalid field name. 
Van de Craen2012-12-12T09:11:45-08:00One of our projects makes use of Information Management Policies in SharePoint Server. We were programmatically adding these policies to Content Types, but for some reason this didn’t work when we migrated the application to SharePoint 2010. Issue We receive the following error: System.Argume... (More)<img src="" height="1" width="1" alt=""/> migration issue: List does not exist Van de Craen2012-12-11T06:50:51-08:00Recently bumped into this weird issue when migrating from SharePoint 2007 to SharePoint 2010. The site collections were correctly upgraded using the DB ATTACH method. Issue A first look at the site and libraries showed no issues, but when going to the List Settings of certain libraries and lists, ... (More)<img src="" height="1" width="1" alt=""/> Data integration in Office and SharePoint using BDC or BCS Van de Craen2012-12-07T05:42:38-08:00Today I discovered a rather unpleasant change during a migration of SharePoint 2007 to SharePoint 2010. The customer is still using Office 2007 but is planning on upgrading next year. Situation In SharePoint 2007 you had Business Data Connectivity (BDC) to bring external data (think back-end syst... (More)<img src="" height="1" width="1" alt=""/> Hyper-V 3.0 machine to VMWare Workstation Van de Craen2012-12-05T05:41:00-08:00There is a lot of ink already on the subject of converting Microsoft Virtual Machines to VMWare Virtual Machines, but I’m writing down what worked for me to get it up and running. VMWare vCenter Converter Download it and install it. If you’re not carrying multiple machines then install it on your W... (More)<img src="" height="1" width="1" alt=""/> SharePoint DCOM errors the easy way Van de Craen2012-11-29T05:23:38-08:00Tagline: Fix your SharePoint DCOM issues with a single click ! Update 8/05/2014: Scripts were revised to work with Windows Server 2008 R2 and Windows Server 2012 with User Account Control enabled. 
Revised post: Fixing SharePoint DCOM errors the easy way - revised Direct... (More)<img src="" height="1" width="1" alt=""/> Conference 2012 Report - Mental Overload Van de Craen2012-11-14T14:17:00-08:00Share... (More)<img src="" height="1" width="1" alt=""/> Conference 2012 Report - Viva Las Vegas Van de Craen2012-11-12T07:47:00-08:00My colleague Dimitri and I got the opportunity to go to this year’s SharePoint Conference (#SPC12) in Las Vegas. I have to admit that I had mixed feelings at the beginning; I’m not much of a traveler to begin with and also leaving my wife and kids didn’t sound too appealing. But of course we’re t... (More)<img src="" height="1" width="1" alt=""/> Forms Services 2010: glitch with repeating dynamic hyperlinks Van de Craen2012-05-22T04:15:00-07:00 t... (More)<img src="" height="1" width="1" alt=""/> 8 Van de Craen2012-03-17T03:16:00-07:00A t... (More)<img src="" height="1" width="1" alt=""/> Computed Field - updated with XSL Computed FieldSharePointSteven Van de Craen2012-03-02T00:28:00-08:00Yesterday I updated the download and source code for the Advanced Computed Field at the Ventigrate Public Code Repository with an XSL stylesheet. This was needed to fix an issue with the field not rendering values when filtered through a Web Part Connection. The Advanced Computed Field relies on CA... (More)<img src="" height="1" width="1" alt=""/> Saturday Belgium 2012 Van de Craen2012-03-01T08:31:57-08:00Join SharePoint architects, developers, and other professionals on 28th April for the second Belgian ‘SharePoint Saturday’ event. SharePoint Saturday is an educational, informative & lively day filled with sessions from respected SharePoint professionals & MVPs, covering a wide variety of Sh... 
(More)<img src="" height="1" width="1" alt=""/> All Items Ribbon Button SolutionsRibbonjQuery/JavaScriptSteven Van de Craen2012-02-08T09:38:12-08:00A SharePoint 2010 Sandboxed Solution that adds a Ribbon Button that recycles all items in a Document Library with a single mouse click. I' have created this miniproject more as an academic exercise in creating a Ribbon Button than for real business value. It can come in handy for development enviro... (More)<img src="" height="1" width="1" alt=""/> 2010: Taxonomy issue and Event Receivers Receivers.NETTroubleshootingSteven Van de Craen2012-02-02T03:59:00-08:00Issue c... (More)<img src="" height="1" width="1" alt=""/> 2010 SOAP Service Error (The top XML element 'string'...) Van de Craen2012-01-26T05:09:00-08:00Today we were experimenting with SharePoint 2010 CSOM (Client Side Object Model) and we noticed strange errors such as HTTP ERROR 417 were returned. When browsing to Lists.asmx and Sites.asmx we got the following error: The top XML element 'string' from namespace '.... (More)<img src="" height="1" width="1" alt=""/> Server 2010 and PDF Indexing Van de Craen2012-01-05T03:22:25-08:00Posting this for personal reference: SharePoint 2010 - Configuring Adobe PDF iFilter 9 for 64-bit platforms Windows Registry Editor Version 5.00 [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office Server\14.0\Search\Setup\ContentIndexCommon\Filters\Extension\.pdf] @=hex(7):7b,00,45,00... (More)<img src="" height="1" width="1" alt=""/> 2007: Anonymous Version History Van de Craen2012-01-02T23:20:33-08:00You can configure the permission level of anonymous users to allow for viewing versions of documents and items, but no matter what you do, they get prompted for credentials. The Version History page has the property “AllowAnonymousAccess” set to false. It is a virtual property in the Microsoft.Shar... 
(More)<img src="" height="1" width="1" alt=""/> 2010 User Profile Page: Add as colleague–Link Fixup Van de Craen2011-12-27T02:43:00-08:00Here’s a small fix for which I didn’t have time to investigate more properly: Issue The User Profile Page of another user shows a link “Add as colleague” when that user isn’t already a colleague: It seems however that the link behind “Add as colleague” directs to the Default AAM URL rather... (More)<img src="" height="1" width="1" alt=""/>–link-fixup.aspxThe sandbox is too busy to handle the request ReceiversSandbox SolutionsSharePointTroubleshootingSteven Van de Craen2011-12-07T08:32:00-08:00SharePoint 2010 and SharePoint Online (Office 365) allow for custom development in the form of Sandbox Solutions. Your code is restricted to a subset of the SharePoint API but allows you do most regular operations inside a Site Collection. Problem Here’s a scenario that fails every time and everywh... (More)<img src="" height="1" width="1" alt=""/> - Default Library Content Type is Folder TypesSteven Van de Craen2011-10-19T05:01:55-07:00Ever Ty... (More)<img src="" height="1" width="1" alt=""/> APIs in SharePoint 2010 Service Pack 1 (SP1) Van de Craen2011-10-17T14:57:46-07:00New APIs in SharePoint 2010 Service Pack 1 (SP1) Funny enough a colleague pointed out to me he couldn’t find the UserProfile.ModifyProfilePicture method. Neither could I (running Service Pack 1 + June 11 CU). Can you ? Perhaps it was removed in the June 2011 Cumulative Update ? UPDATE (18 ... (More)<img src="" height="1" width="1" alt=""/> Administration Time Zone incorrect Van de Craen2011-10-13T05:04:16-07:00If you happen to change the Windows Time Zone settings AFTER Central Administration has been provisioned, you will see that the time zone/date format is not updated in the administration pages: Luckily, the fix is quite easy. You can just update the Regional Settings of the Central Administra... 
- … October 2011 (Steven Van de Craen, 2011-10-02). …Stop thinking about feature…
- External User Management (Steven Van de Craen, 2011-09-30). This project is further maintained at the Ventigrate Codeplex Repository. Please go there to get the latest news or for any questions regarding this topic. Page was cross-posted to this blog on 09/30/2011. The External User Manage…
- … and InfoPath Promoted Fields (Steven Van de Craen, 2011-08-25). …
- Follow Up – Programmatically approving a workflow (Steven Van de Craen, 2011-06-28). In my previous post "SharePoint 2010: Programmatically Approving/Rejecting Workflow" I mentioned the possibility of an issue when programmatically altering a workflow task. I have been testing it on several SharePoint 2010 farms with different parameters (different patch level, etc). UPDATED Ju…
- SharePoint 2010: Programmatically Approving/Rejecting Workflow (Steven Van de Craen, 2011-06-23). Interacting with a workflow in SharePoint is relatively straight-forward if you understand the underlying process. I've done multiple projects in both SharePoint 2007 and SharePoint 2010 with some level of workflow interaction: starting a workflow, approving or rejecting a workflow task, or r…
- Announcing the Advanced Computed Field for SharePoint 2010 [Advanced Computed Field, SharePoint, Custom Field Types] (Steven Van de Craen, 2011-05-26). Finally available: the SharePoint 2010 version of the Advanced Computed Field. Don't know what it is? Check this out: (the Advanced Computed Field rendering a highlighted/italic item title)
- … issues with DispForm, EditForm and NewForm (Steven Van de Craen, 2011-03-11). Each SharePoint List can be altered with a custom DispForm.aspx, EditForm.aspx and NewForm.aspx to display, edit or create list items and metadata. This post is about restoring them to a working point. Botched Forms: so one of these forms is edited to a lost state or deleted altogether. If this is…
- InfoPath Forms Server 2010 Parameterized Submit issue [Forms Server, SharePoint, Troubleshooting] (Steven Van de Craen, 2011-03-09). I'm currently testing our InfoPath Web Forms for an upcoming migration to InfoPath 2010 and SharePoint 2010 and have come up with a reproducible issue. Issue: InfoPath Web Forms cannot use values from other Data Sources as parameters in a Submit Data Connection. When the form is submitted a warning…
- … URL Changes and InfoPath Forms (Steven Van de Craen, 2011-03-08). InfoPath Form Template URL…
- SharePoint: SPWeb.Properties versus SPWeb.AllProperties (Steven Van de Craen, 2011-03-05). Property Bag: SPWeb exposes two ways of interacting with the property bag: SPWeb.Properties exposes a Microsoft.SharePoint.Utilities.SPPropertyBag, and SPWeb.AllProperties exposes a System.Collections.Hashtable. The former is considered legacy; also, it stores the Key value as lowercase, whi…
- SharePoint 2010: Content Type Syndication experiments [Content Types, SharePoint] (Steven Van de Craen, 2011-03-05). In my post yesterday I raised the question about Content Type syndication using the Content Type Hub mechanism, and how this would work together with Lookup Fields, since they don't support crossing Site Collection boundaries. Another question is in regard to the "challenges" of using OOXML (docx, …
- Custom Lookup Field Types: migration from 2007 to 2010 [Custom Field Types] (Steven Van de Craen, 2011-03-04). Custom Field Types are a rather advanced topic, but very powerful as well. They allow for real integration of custom components inside standard List and Library rendering. (See my other posts on Custom Field Types.) There are some things you can run into, especially if you have custom Lookup Field T…
- … multiple credential prompts for Office in combination with SharePoint (Steven Van de Craen, 2011-02-22). I had seen and tried most of this already, but didn't know the Network Location bit: Multiple Authentication (login) Prompts – Office Products with SharePoint.
- User Policy For Web Application: Account operates as System (Steven Van de Craen, 2011-02-12). Both…
- SharePoint 2010: Increase debugging time by configuring Health Monitoring (Steven Van de Craen, 2011-02-06). …
- … Content Productivity Hub 2010 – updated (Steven Van de Craen, 2011-01-18). Recently updated: the Productivity Hub. The Productivity Hub is a Microsoft SharePoint Server 2010 site collection that offers training materials for end-users. The Hub is a SharePoint Server site collection that serves as a learning community and is fully customizable. It provides a centr…
- SharePoint 2007 and the mysterious ever required field [Content Types, SharePoint] (Steven Van de Craen, 2011-01-06). Today is trouble solving day at a customer. One of the issues was a SharePoint Library on their Intranet having a required field, even though "Require that this column contains information" was set to "No". So I opened up SharePoint Manager 2007 to inspect the SchemaXml: <Field Name="…
- Windows Server 2008 R2 unable to connect to locally shared folder (Steven Van de Craen, 2011-01-05). …
- … Office Web Applications for an Upgrade (a day of agony) [Office Web Applications, SharePoint] (Steven Van de Craen, 2011-01-04). Yesterday…
- Windows Live Writer Twitter Plug-in with OAuth (Steven Van de Craen, 2010-11-25). Since the introduction of OAuth in Twitter, the Twitter Plug-in for Windows Live Writer to tweet new blog posts stopped working. A while back an update was released that works with the new authentication scheme. (This post is really for testing if it works correctly :))
- Slide Library Thumbnail Size (Steven Van de Craen, 2010-11-23). Today I was looking into a way to increase the Thumbnail size of slides in a Slide Library (MOSS 2007, SharePoint Server 2010). The SPPictureLibrary has a ThumbnailSize property which you can set, but this is not available for a slide library. So I tried reflection on an SPList to update the relat…
- SharePoint: Slide Library and Folders (Steven Van de Craen, 2010-10-11). The Slide Library is available in MOSS 2007 and SharePoint Server 2010 and allows you to upload several slides from a PowerPoint presentation as separate slides. Then it is really easy to compose a new presentation based on slides you select from the library. Working with folders in Slide Libraries is …
- Claims to Windows Token Service (Steven Van de Craen, 2010-10-07). In SharePoint 2010 the Claims to Windows Token Service (c2wts) is a very nice addition that allows for conversion of claims credentials to Windows tokens. The service is required to run as the LocalSystem account. If you (like me) have accidentally switched it to a specific user there's no way in t…
- … for my colleagues (Steven Van de Craen, 2010-09-27). Another colleague of mine has started his blog. And don't be fooled by the host name, it's for posts on SharePoint 2010 as well!! Find them here and add them to your feed readers :) Tom Van Rousselt… Sebastian Bouckaert…
- SharePoint 2010: User cannot be found after using stsadm migrateuser [Event Receivers, SharePoint, Troubleshooting] (Steven Van de Craen, 2010-09-07). …
- SharePoint 2010 exams behind me (Steven Van de Craen, 2010-09-01). I decided to have a go at the SharePoint 2010 exams with little or no preparation and see how it'd go. The first three went smoothly. I have to admit I struggled somewhat with the PRO Administrator exam today, but cleared it nevertheless. Exam 70-573: TS: Microsoft SharePoint 2010, Appli…
- Asynchronous Event Receivers and HttpContext [Event Receivers, SharePoint] (Steven Van de Craen, 2010-08-24). A lesser known trick to make use of the HttpContext in asynchronous Event Receivers in SharePoint 2007 or SharePoint 2010 is to define a constructor on the Event Receiver class that assigns the HttpContext to an internal or private member variable. Next, you can access it from the method overrides fo…
- Office Web Apps – creating new documents [Office Web Applications, Office, SharePoint] (Steven Van de Craen, 2010-08-16). The Office Web Apps allow users without Microsoft Office installed to display or work on Word, Excel or PowerPoint files from the browser. It is a separate installation to your SharePoint Farm and controllable by two Site Collection Features. When active it will render Office 2007/2010 file f…
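The "Asynchronous Event Receivers and HttpContext" entry above only survives as a truncated summary. A minimal sketch of the constructor trick it describes might look like this (class and field names are my own, and the code only compiles against the SharePoint server assemblies):

```csharp
using System.Web;
using Microsoft.SharePoint;

// Sketch of the trick from the post: the receiver's constructor still runs
// inside the originating web request, so HttpContext.Current is available
// there, while the asynchronous ItemAdded override runs on a worker thread
// where HttpContext.Current is null.
public class ContextAwareReceiver : SPItemEventReceiver
{
    private readonly HttpContext context;

    public ContextAwareReceiver()
    {
        // Capture the request context before SharePoint moves us off-thread.
        context = HttpContext.Current;
    }

    public override void ItemAdded(SPItemEventProperties properties)
    {
        if (context != null)
        {
            // Example use: read request data captured from the original request.
            string userAgent = context.Request.UserAgent;
        }
    }
}
```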
- SharePoint 2010 and Remote Blob Storage for better file versioning (Steven Van de Craen, 2010-08-06). A feature of using Remote Blob Storage with SharePoint 2010 (FILESTREAM provider in SQL Server 2008) is that document versions do not always create a new full copy of the original file, as they would in a non-RBS environment. This is a huge improvement, since 5 versions of a 10 MB file were only …
- SharePoint 2010: June 2010 Cumulative Update [SharePoint Updates, SharePoint] (Steven Van de Craen, 2010-07-31). Spreading the news… The first Cumulative Update (called "June 2010 CU") for SharePoint 2010 was made available a few days ago: SharePoint Foundation 2010: KB 2028568; SharePoint Server 2010: KB 983319, KB 983497, KB 2182938…
- Blog upgraded to CKS:EBE 3.0 (Steven Van de Craen, 2010-07-24). The smart people behind the Community Kit for SharePoint have released 3.0 of the Enhanced Blog Edition. Check out the improvements it brings. Cheers!
- SharePoint 2010 Development: Replaceable Tokens (Steven Van de Craen, 2010-07-24). Quick reference: $SharePoint.Project.FileName$ is the name of the containing project file, such as "NewProj.csproj"; $SharePoint.Project.FileNameWithoutExtension$ is the name of the containing p…
- Wiki Page Home Page (Steven Van de Craen, 2010-07-19). In SharePoint Server 2010 you have the option to create an Enterprise Wiki, but what if you have SharePoint Foundation 2010 or Search Server 2010 and you want to create a Wiki site? You'll notice that there's no Wiki Site Template available like in SharePoint 2007, but nowadays you can activate Wik…
- Blog availability (Steven Van de Craen, 2010-06-25). There have been a lot of issues recently with the availability of my blog. This is because we're currently cleaning out the server room with old servers and virtualizing them. In that process some of the IP addresses got mixed around and of course the ISA rules were not updated yet. Apologies for tha…
- Change the language of the Spell Check in MOSS 2007 (Steven Van de Craen, 2010-06-04). Want a quick way to change the language of the Spell Check for publishing pages in MOSS 2007? Place this script in your masterpage to set the language parameter before the call to the SpellCheck Web Service is made: <script type="text/javascript"> L_Language_Text = 100c; …
- BIWUG | SPSaturday: wrap up [Sandbox Solutions, SharePoint] (Steven Van de Craen, 2010-05-12). Last Saturday (8 May 2010) the first SharePoint Saturday event in Belgium took place. It was a day full of SharePoint 2010 aimed specifically at developers. As promised, here's the slide deck and demo files I used: slide deck; Demo 1: creating a sandboxed web part; Demo 2: display sandbox res…
- Office 2010 Protected View and SharePoint (Steven Van de Craen, 2010-05-10). …
- SharePoint Saturday coming near you on May 8th! (Steven Van de Craen, 2010-03-30). BIWUG is organizing the first SharePoint Saturday in Belgium ever. It's held in Hof Ter Elst, Edegem on the 8th of May 2010. Topics include Visual Studio 2010 Tools, LINQ to SharePoint, Client Object Model, Sandbox solutions, Managed Metadata and WCF and REST in SharePoint. I'll be there presenti…
- … Search Scopes: Approximate Item Count is incorrect (Steven Van de Craen, 2010-03-18). …
- Blog is back in the air (Steven Van de Craen, 2010-03-17). It took about a week to resolve the DNS issue but now this site is back online to provide you with the occasional posting on SharePoint, .NET, Silverlight and the like. Regards, Steven
- How does SharePoint know the Content Type of an InfoPath form saved to a document library? [Content Types, InfoPath, SharePoint] (Steven Van de Craen, 2010-03-10). …
- … pages to meeting workspaces programmatically (Steven Van de Craen, 2010-03-10). …
- Office 2007 document templates and Content Types in SharePoint – lessons learned [Content Types, Debugging, Office, SharePoint] (Steven Van de Craen, 2010-01-29). A while ago I stumbled upon a serious design limitation regarding Content Types and centralized document templates. What then followed was a series of testing, phone calls with Microsoft, finding alternative solutions and a deep dive into Office Open XML. Request from the customer: "We want to use MOSS…
- Programmatically change the Toolbar on a List View Web Part (Steven Van de Craen, 2010-01-28). A refresher…
- … + serialize an InfoPath Form loses the processing instructions [Forms Server, InfoPath] (Steven Van de Craen, 2010-01-21). …
- … Search indexes some files with fldXXXX_XXXX file names [SharePoint Updates] (Steven Van de Craen, 2010-01-20). …
- Word 2007 Content Controls with empty Placeholder Text (Steven Van de Craen, 2009-12-03). Word 2007 uses Content Controls to display document fields inside the document (e.g. the header). These document fields can be standard fields but also your SharePoint Fields. By default, when the value of a Content Control is empty it will display the Placeholder Text between square brackets as foll…
- … Thumbnails in SharePoint (Steven Van de Craen, 2009-11-12). Recently stumbled upon this old article: Microsoft Office Thumbnails in SharePoint. Has anyone actually done anything like this?
- … and screen manipulation flickering (Steven Van de Craen, 2009-11-09). …
- Conditional Formatting using the Advanced Computed Field [Advanced Computed Field, SharePoint, Custom Field Types] (Steven Van de Craen, 2009-11-07). I got a question on how to use the Advanced Computed Field for conditional formatting and when I finished writing up the response I figured I might as well share it with the community, being you all :) Here's the config I used: <FieldRefs> <FieldRef Name="TestField" />…
- … Edit Mode doesn't trigger an update (Steven Van de Craen, 2009-10-26). Funny thing last week when I wrote a "Page Information Web Part" (something that showed something like 'This page was last modified by X on Y') and it didn't update at all when Web Parts were added, modified or deleted. I can see why it wouldn't, but I still think this is a flaw because there's no w…
- Rooms and Equipment Reservations: Consistency Check Web Part (Steven Van de Craen, 2009-10-26). Still flaw…
- … on list item body using jQuery [Custom Field Types, jQuery/JavaScript, SharePoint, Advanced Computed Field] (Steven Van de Craen, 2009-10-01). …
- … Search for MOSS 2007 (Steven Van de Craen, 2009-09-23). …
- … Document Templates in a library: Document Information Panel shows incorrect properties [Content Types, Office, SharePoint] (Steven Van de Craen, 2009-08-20). …
- Content Types cannot be created declaratively on a child web (Steven Van de Craen, 2009-08-07). One can easily create a Content Type on a child web through the SharePoint interface. One can easily create a Content Type on a child web through the SharePoint Object Model. One cannot create a Content Type on a child web through a Feature declaratively (you could create it through …
- SharePoint 2007 SP2 fixed activation issue [SharePoint Updates] (Steven Van de Craen, 2009-08-04). The download for SharePoint 2007 SP2 has been updated to no longer change your environment to trial mode.
- Alerts: Someone changes an item that appears in the following view (Steven Van de Craen, 2009-07-27). Recently I was asked how to receive notifications when specific metadata for an item changed. I recalled this was somehow possible with the out of the box Alerts, configuring them to send a notification when something in the View changed. "This feature is not available when I create an aler…
- … on Win7 RC: issues with OWA 2007 [General, Internet Explorer, Windows] (Steven Van de Craen, 2009-07-22). I would have preferred if Win7 RTM came sooner so that I could avoid migrating from Beta to RC and then from RC to RTM, but no point to keep on whining about it :) So I decided to install the 64-bit issue of Windows 7 Release Candidate. I love how smooth those Win7 installs are. Very little inte…
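The "Content Types cannot be created declaratively on a child web" summary breaks off where it notes that the object model does work. A hedged sketch of that object-model route (URL, web name and content type name are placeholders) could be:

```csharp
using Microsoft.SharePoint;

// Sketch: creating a Content Type on a child web via the object model,
// which the post says works where the declarative Feature route fails.
using (SPSite site = new SPSite("http://server/sites/demo")) // placeholder URL
using (SPWeb childWeb = site.OpenWeb("childweb"))            // placeholder web
{
    // Derive from an inherited parent content type and add it to this web.
    SPContentType parent = childWeb.AvailableContentTypes[SPBuiltInContentTypeId.Document];
    SPContentType ct = new SPContentType(parent, childWeb.ContentTypes, "Demo Child CT");
    childWeb.ContentTypes.Add(ct);
}
```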
- Event Receiver Definition: Data and Filter [Content Types, Event Receivers, SharePoint] (Steven Van de Craen, 2009-06-30). I recently declared an Event Receiver through a SharePoint Feature: Event Registrations. Some of the samples online had surprising elements such as <Data /> and <Filter />. I have never seen them used before and didn't know their purpose. Perhaps they are not to be messed with? Per…
- Content and Structure Reports troubleshooting (Steven Van de Craen, 2009-06-29). If you have MOSS 2007 then you get the Site Manager for advanced management and with it the "Content and Structure Reports" (http://<server>/Reports List). You can easily display the results of a report through the Site Actions menu. You can also add custom reports since the underlying t…
- Event Receivers on Content Types [Content Types, Event Receivers] (Steven Van de Craen, 2009-05-07). I'm currently doing quite some research on Event Receivers (ERs) on Content Types (CTs) and activating those CTs on a document library in SharePoint 2007 (WSS 3.0 to be exact, but the same applies for MOSS 2007 and MSS(X) 2008). The setup is a single document library with the out of the box Document C…
- SharePoint 2007: Update system properties (Created, Created By, Modified, Modified By) (Steven Van de Craen, 2009-04-13). The system metadata can be changed to any value desired using the object model. I did notice a bug that the "Created By" (aka Author) field wouldn't update unless the "Modified By" (aka Editor) field was also set (either to a new or its current value). SPListItem item = ...; item["Created By…
- MVP Renewal 2009 (Steven Van de Craen, 2009-04-01). I'm so happy I get to be a SharePoint MVP for another year! The email arrived only two hours ago and I really wasn't convinced of my renewal, but nevertheless it is a fact. Also congrats to all the other guys and gals that have gotten their renewal!! Steven
- Advanced Computed Field [Custom Field Types, Advanced Computed Field] (Steven Van de Craen, 2009-03-31). Introduction: this project originally started as 'TitleLinkField' because I needed a way to display the Title field as a hyperlink to the document in a document library, but it ended up being more than just that so I chose a more generic name for it. I had some experience with Custom Field Type…
- … custom document properties on a file share (Steven Van de Craen, 2009-01-08). A file share with Word and Excel documents (.doc, .docx, .xls, .xlsx) having custom document properties is indexed via MOSS 2007 or MSS 2008. When the crawl has finished the custom properties are listed in 'Crawled properties' but the details view mentions "There are zero documents in the in…
- SharePoint 2007: December Update [Forms Server, SharePoint, SharePoint Updates] (Steven Van de Craen, 2008-12-29). This update really combines all previous updates so installation order is simplified: WSS SP1 (+ all language packs); MOSS SP1 (+ all language packs); WSS December Update: x86 / x64 (separate downloads!); MOSS December Update: x86 / x64 (separate downloads!). More info about that here…
- Item-level permissions: 'Only their own' (Steven Van de Craen, 2008-12-19). The title of this blog post is referring to this screen in the List Settings. It interests me because it allows you to control ownership of the item, is only available to Lists but not to Document Libraries, and doesn't use unique permissions but some other mechanism. One thing it mentions is th…
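The "Update system properties" summary above is cut off mid-snippet. Its gist can be reconstructed as a sketch (the account and date values are placeholders; the Modified By re-assignment is the workaround for the bug the post describes):

```csharp
using System;
using Microsoft.SharePoint;

// Sketch: rewriting Created/Created By through the object model.
SPListItem item = list.Items[0];                // assumes an existing SPList 'list'
SPUser author = web.EnsureUser(@"DOMAIN\jdoe"); // placeholder account

item["Created"] = new DateTime(2009, 1, 1);
item["Created By"] = new SPFieldUserValue(web, author.ID, author.LoginName);
// Workaround for the bug the post mentions: "Created By" only sticks when
// "Modified By" is also written, even if just re-assigned its current value.
item["Modified By"] = item["Modified By"];
item.SystemUpdate(false); // write without creating a new version
```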
- SharePoint 2007 Event Receiver and Enterprise Library: TargetInvocationException [Event Receivers] (Steven Van de Craen, 2008-12-15). Today I got into some code reviewing of an Item Event Receiver using Enterprise Library for data access. The problem occurred when registering the Event Receiver for a SharePoint List using the object model (SPList.EventReceivers.Add): "Exception has been thrown by the target of an invocation." Here…
- Adding a new security group to an Audience (Steven Van de Craen, 2008-12-11). I had a new Active Directory security group created a few days ago but it still wasn't showing up in the Audience management pages when I tried to create a rule 'User member of AD group'. After double checking my group settings I couldn't find anything out of the ordinary, and my MOSS 2007 server had …
- … permissions for 'Save list as template' (Steven Van de Craen, 2008-12-09). Consider the following scenario: a MOSS 2007 Portal where everyone has read permissions but some 'content owners' have elevated permissions on libraries and lists and want to save a list as template (.stp). In this case they need to have elevated access to the List Template Catalog (http://…
- SharePoint and jQuery coolness (Steven Van de Craen, 2008-11-27). Yes I know, you already have it first hand from Jan's blog, but still... How cool is that teaser?
- …: Sharing data over applications? (Steven Van de Craen, 2008-11-12). Introduction…
- MOSS 2007 and ZIP indexing (Steven Van de Craen, 2008-11-10). Introduction: here's a post about indexing ZIP archives in the same style as the one I did on PDF indexing. The search engine makes use of IFilters to be able to read the specific structure of a certain file type and retrieve information from it that it puts in an index. When you perform a search qu…
- … Update, August Update, October Update, Service Pack 2 [SharePoint Updates, Excel Services, Forms Server] (Steven Van de Craen, 2008-10-30). Update (December Update): the December Update was released recently and is a real cumulative update which simplifies installation quite a bit. Read more about it here: SharePoint 2007 – December Update. It's hard to keep up with the updates for SharePoint these day…
- InfoPath Forms Server: userName() function and Forms Based Authentication [Forms Server, SharePoint] (Steven Van de Craen, 2008-10-20). So you have an InfoPath 2007 form that renders as a Web page and you use the userName() function to get the current user. This works fine when you're using Windows Authentication but stays empty when you're using Forms Based Authentication!! Also note that for Windows Authentication it doesn't ret…
- … Forms Server: XmlFormView and setting the InitialView on load [Forms Server, SharePoint] (Steven Van de Craen, 2008-10-10). I have been struggling with this for too long and the result is still not quite satisfying, but it'll have to do for now. I have a Web Control that renders an InfoPath form using the Microsoft.Office.InfoPath.Server.Controls.XmlFormView control and want the form to open with a specific View rather t…
- Failure decompressing data from a cabinet file (Steven Van de Craen, 2008-10-07). Introduction: a SharePoint 2007 site with configured lists, content types, web parts, data, etc. is saved into a template (.STP) for future creation of sites. However you receive the error "Failure decompressing data from a cabinet file". In one of our customer cases the culprit here were t…
- … without hardcoding the modifications (Steven Van de Craen, 2008-10-03). Description: not sure if this has been done or not, but it seems illogical to have the web.config modifications hard coded into the Feature Receiver, so I whipped up a quick mechanism to read them from an external config XML file. Feel free to use and improve as desired. The code uses LINQ to XML for r…
- Office Automation: extract embedded Word Documents (Steven Van de Craen, 2008-09-18). I'm normally not into Office Automation but today I needed to extract all embedded files from a Word Document. Those files were Word, Excel and PDF documents. Luckily the majority were Word documents, because the quick solution I whipped up only works for those types, not for Excel or PDF. Here's t…
- … a list based on a multivalue column and filter (Steven Van de Craen, 2008-09-17). Today I got into testing how to filter a list with a multivalued Choice (or Lookup) column based on a multivalued filter. The setup I'm using is to connect a Choice Filter Web Part with the List View Web Part. Once you have configured the Filter Web Part and connected it to the List View Web Part …
- … and restore Site Collections between localized SharePoint installations (Steven Van de Craen, 2008-08-21). Question, pop quiz: I have a Dutch installation of SharePoint (WSS 3.0 or MOSS 2007) that already contains a lot of data. I want to restore the main site collection on a different server with an English installation of SharePoint. Can it be done? Considerations: take into account that a localized v…
- …: TODAY + 1 hour formula? (Steven Van de Craen, 2008-08-19). I clicked together an easy custom announcements list with Begin and End Date and wanted for each new item the following default values: Begin Date = TODAY; End Date = TODAY + 1 hour. Setting the Begin Date is easy but I scratched my head a few times for the End Date. I found the following info…
- … what's new? (Steven Van de Craen, 2008-07-22). I've been out of the game for about a month now and it seems I've been missing quite a bit. I'm not going to make another announcement about the Infrastructure Update (oops... I think I just did) because it has already been widely blogged in the community. Can't wait to run it on some of ou…
- … Action Button [Custom Field Types] (Steven Van de Craen, 2008-06-20). Introduction: don't you just miss the possibility to have buttons on a SharePoint List or Library View similar to an ASP.NET GridView? You could add Edit or Delete functionality or even cooler things such as navigating to a LAYOUTS page rather than having to use the item context menu drop down…
- … Hyperlink fields in SharePoint 2007 (Steven Van de Craen, 2008-06-04). Description: here's a small solution for those of you lucky enough to have MOSS 2007. It will not work if you only have WSS 3.0 installed. This project will add a browse button to all Link fields in your SharePoint Lists by using the AssetUrlSelector control. For more information about this see…
- … with the AssetUrlSelector (Steven Van de Craen, 2008-06-04). I'm doing some tests with the AssetUrlSelector control to improve user experience in a SharePoint 2007 environment. The AssetUrlSelector control gives your end users a nice interface for selecting a link or image URL from a site collection. You can read the MSDN documentation here: AssetUrlSel…
- stsadm -o export: FatalError: Failed to compare two elements in the array (Steven Van de Craen, 2008-04-28). Community Update: it's nice to see the community providing feedback on my tools and improving them. I strongly encourage this and here's another example of this kind of interaction. Achim Ismaili has improved the FaultyFeatureTool and added it to CodePlex…
- Rooms and Equipment Reservations v2 (UNOFFICIAL) (Steven Van de Craen, 2008-04-21). Introduction: Microsoft released the Fabulous 40 Application Templates for SharePoint 2007 a (long) while ago. One of them is Rooms and Equipment Reservations, where you can make bookings for conference rooms, material, etc. and get a visual overview of all bookings. Issues: if you really star…
- … a SharePoint MVP... (Steven Van de Craen, 2008-04-17). This post says it all. Hope to be there next MVP Summit :)
- New MVP on the block (Steven Van de Craen, 2008-04-02). Yesterday I got notice about being awarded MVP for SharePoint Server! At first I figured it to be an April Fool's joke, but no such thing :) I'm really loving this!!!
- … MySite Blogs (Steven Van de Craen, 2008-03-28). Introduction: I was recently tasked with writing a handler to display the combined RSS feed of all MySite blogs on a SharePoint Intranet. The handler takes into account the current user permissions (no rights to see the blog or post means you don't see it in the feed). Deployment: add the SharePoint…
- … and asynchronous Event Handlers (Steven Van de Craen, 2008-03-21). In SharePoint 2007 it is possible to handle events that occur on list items. These types of events come in two flavours: synchronous and asynchronous. Synchronous: happens 'before' the actual event; you have the HttpContext and you can show an error message in the browser and cancel the ev…
- CKS:EBE 2.0 Final Release (Steven Van de Craen, 2008-03-14). CKS:EBE 2.0 has been released since a few days! Check it out here…
- TechDays 2008 wrap up (Steven Van de Craen, 2008-03-13). Today was the last day of TechDays 2008 and it was nice to see all those familiar faces again. The group of people you meet at such events just increases every year; I wonder what it would be like in 20 or 30 years from now... I didn't sit through many SharePoint sessions but decided to go broader …
- Feature Activation Dependency + Hidden = Automagic (Steven Van de Craen, 2008-02-23). In SharePoint 2007 you can make features dependent on each other so that FeatureB can only be activated if FeatureA is. If this isn't the case it will prompt you with a message to activate FeatureA first. Now, when you mark FeatureA as HIDDEN the Feature Dependency will automagically activate i…
- … Errors: 6482, 7888, 6398 and 7076 (Steven Van de Craen, 2008-02-23). Problem: I was receiving multiple complaints from clients with the above errors in the machine's Event Log: "Unhandled Exception: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt." Besides the errors it …
- FieldRefs node in new Content Type XML (Steven Van de Craen, 2008-02-23). If you declare your Site Columns and Content Types in XML to deploy them as a Site Collection feature, you must make sure to ALWAYS include the <FieldRefs> node. If you leave it out then your Content Type will not have any of the inherited fields of the parent Content Type!!! <ContentTyp…
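The synchronous/asynchronous distinction in the "… and asynchronous Event Handlers" entry can be sketched as a receiver that cancels in the synchronous override and does follow-up work in the asynchronous one (the field check and message are illustrative, not from the original post):

```csharp
using Microsoft.SharePoint;

public class TwoFlavoursReceiver : SPItemEventReceiver
{
    // Synchronous ("-ing"): runs before the change is committed; the event
    // can be cancelled and an error message shown in the browser.
    public override void ItemAdding(SPItemEventProperties properties)
    {
        if (string.IsNullOrEmpty(properties.AfterProperties["Title"] as string))
        {
            properties.Cancel = true;
            properties.ErrorMessage = "Title is required."; // illustrative message
        }
    }

    // Asynchronous ("-ed"): runs after the event on a worker thread; no
    // cancelling, and (without the constructor trick) no HttpContext.
    public override void ItemAdded(SPItemEventProperties properties)
    {
        SPListItem item = properties.ListItem;
        item["Title"] = item.Title + " (checked)";
        item.SystemUpdate(false);
    }
}
```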
(More)<img src="" height="1" width="1" alt=""/> 2008 Van de Craen2008-01-30T03:00:00-08:00Thought I'd promote the upcoming TechDays 2008 event (previously the Developer & IT Pro Days) in Ghent. Three days packed with loads of presentations about upcoming technologies and releases. Find out about the program, speakers and more on the following site:... (More)<img src="" height="1" width="1" alt=""/>: Selected node CSS bug Van de Craen2008-01-24T02:14:00-08:00I may have found a bug in the SPTreeView control. When you select a node the CSS class declared in the SelectedNodeStyle isn't applied to the node. Other than that it seems to behave correctly. Sample markup (in a LAYOUTS page) <wssuc:InputFormControl in XP: Cannot add this hardware Van de Craen2008-01-19T02:05:00-08:00I had an issue for some time now where I could not add new USB Plug & Play devices (memory sticks, hard drives, camera's, etc). Devices that were added before the problem occurred still functioned properly, just the new ones wouldn't install. I tried: Changing stuff in the registry (values... (More)<img src="" height="1" width="1" alt=""/> start workflow Van de Craen2008-01-18T08:51:00-08:00Programmatically starting a workflow requires the following code: Guid wfBaseId = new Guid("{6BE0ED92-BB12-4F8F-9687-E12DC927E4AD}"); SPSite site = ...; SPWeb web = site.OpenWeb(); SPList list = web.Lists["listname"]; SPListItem item = list.Items[0]; SPWorkflowAs... (More)<img src="" height="1" width="1" alt=""/> Indicator not working on client Van de Craen2008-01-17T03:58:00-08:00If the Online Presence Indicator is not working properly on some clients it could be that the IE plugin is corrupted. Client has Office 2003 Try reinstalling the Office Web Components. Client has Office 2007 Try the following: Close all IE windows Rename the file C:\Program Files\Microso... 
(More)<img src="" height="1" width="1" alt=""/>: FullTextQuery RowLimit Van de Craen2008-01-14T02:05:00-08:00When you query the SharePoint Search Service the number of rows returned defaults to 100 but can be increased as required. Note that when you specify a value above the maximum RowLimit the query will only return the default value of 100 items ! ServerContext ctx = ServerContext.Defaul... (More)<img src="" height="1" width="1" alt=""/> updated with CKS:EBE Van de Craen2007-12-27T12:42:00-08:00 late... (More)<img src="" height="1" width="1" alt=""/>: Text Property Builder in a custom ToolPane/EditorPart Van de Craen2007-12-27T06:49:00-08:00Introduction By default text properties in the Web Part Property pane can be filled in using a Property Builder as shown below: This only applies to the default SharePoint ToolPanes (or EditorParts as they're called in the new terminology). If you develop a custom ToolPane or EditorPart with ... (More)<img src="" height="1" width="1" alt=""/> Services Trusted Locations and Alternate Access Mappings ServicesSteven Van de Craen2007-12-20T04:10:00-08:00Yesterday I discovered that Excel Services' Trusted Locations don't use the Alternate Access Mappings collection from MOSS 2007 to grant or deny access to a workbook inside a SharePoint Document Library. I did a search on the Web and apparently it is already a known issue: Excel Services will ... (More)<img src="" height="1" width="1" alt=""/> a SharePoint Workflow Van de Craen2007-12-20T03:35:00-08:00Terminating the workflow will set its status to Canceled and will delete all tasks created by the workflow. Via browser interface Via code // Cancel SPWorkflowManager.CancelWorkflow(workflowProperties.Workflow); Applies To Windows SharePoint Services 3.0 (+ Service Pack 1) Microsoft Office Share... 
(More)<img src="" height="1" width="1" alt=""/> Framework 2.0 and 3.0 Service Pack 1 Van de Craen2007-12-15T07:23:00-08:00I just noticed that Service Pack 1 for .NET Framework 2.0 and 3.0 has been released. I missed it completely due to the large number of announcements regarding Office 2007 Service Pack 1. .NET FX 2.0 SP1 redist x86 .NET FX 2.0 SP1 redist x64 .NET FX 3.0 SP1 redist x86 .NET FX 3.0 SP1 redist x64 Th... (More)<img src="" height="1" width="1" alt=""/> Part Gallery: Change Web Part metadata tool Van de Craen2007-12-10T06:27:00-08:00Using the Web Part Gallery you can easily add new Web Parts to a Site Collection and it will even generate the .webpart or .dwp XML file for you. Just drop the assembly in the BIN or GAC, add a correct <SafeControl> node in the web.config and you should see your Web Part(s) in the New dialog ... (More)<img src="" height="1" width="1" alt=""/>: Some things worth mentioning Van de Craen2007-12-06T09:09:00-08:00There are some things you need to know as a SharePoint developer... Creating a list When you create a list using the browser you get to specify the name for the list. Basically this means both the url part as the title. Not all characters are allowed in a URL so SharePoint will just filter them ou... (More)<img src="" height="1" width="1" alt=""/> Columns in CAML: No option to modify values ? Van de Craen2007-12-06T06:41:00-08:00One of our projects is being automated using solution files and Features. When the Site Collection Feature is activated it automatically creates Site Columns, Site Content Types and a Document Library using both of the aforementioned. Currently the entire process is via Object Model code. However, ... (More)<img src="" height="1" width="1" alt=""/> Part Properties and the Event Life Cycle Van de Craen2007-11-30T04:12:00-08:00Something I noticed during Web Part development was that the Web Part Properties were not loaded at constructor call time. 
Makes sense since the control has to be instanciated before the properties can be loaded but I recently forgot and it had me troubled for a while. Q. So when are the Web Part P... (More)<img src="" height="1" width="1" alt=""/> Server 2008: installation gimmick Van de Craen2007-11-30T03:15:00-08:00I just installed the Release Candidate of Microsoft Search Server 2008 Express edition. Although the documentation mentioned a Basic and Advanced installation I didn't get that option. Another thing I noticed: SharePoint is everywhere :)<img src="" height="1" width="1" alt=""/>: Immediate alert notifications stopped working Van de Craen2007-11-23T02:09:00-08:00One of our MOSS servers was not sending out any alert notifications anymore. I'm not sure when it stopped working but recently users started complaining about it. When I subscribe to a new alert I still get the 'New alert' notification, but that's actually a different mechanism than the actual aler... (More)<img src="" height="1" width="1" alt=""/> Search: Basic Authentication issues Van de Craen2007-11-22T09:15:00-08:00One of our MOSS 2007 servers has a single Web Application (no extended Web Apps) and is configured to use Basic Authentication. I have confirmed that my dedicated crawl account has sufficient permissions in the Policy for Web Application section of Central Administration > Application management... (More)<img src="" height="1" width="1" alt=""/> Exception: Maximum retry on the connection exceeded NAVSOAPSteven Van de Craen2007-11-22T02:12:00-08:00Setup. C/AL // Create SOAP mes... (More)<img src="" height="1" width="1" alt=""/> 2007 and PDF indexing Van de Craen2007-11-21T07:16:00-08:00Introduction By default the SharePoint 2007 Search indexed only the meta data of a PDF document. By installing and configuring a PDF IFilter the Search will also index the contents of the PDF document. This allows users to find documents based on text inside the document. This process is called ful... 
(More)<img src="" height="1" width="1" alt=""/> Ed 2007: Day 5 Van de Craen2007-11-12T02:27:00-08:00Time sure flies when you're having fun. It has been an exciting week with a lot of interesting sessions which I have blogged about the last couple of days. Following now will be a variety of workshops for my colleagues at home about the new things we picked up here. The flight back was at 5:30 ... (More)<img src="" height="1" width="1" alt=""/> Ed 2007: Day 4 Van de Craen2007-11-08T14:38:00-08:00My head feels as if had single-handedly emptied the Sal Cafe's wine cellar... But other than the after effects I'm currently experiencing I had a great time at the Tech Ed Country Drink for Belgians and Luxemburgers. Workflow in Microsoft SharePoint Products and Technologies 2007 This session on ... (More)<img src="" height="1" width="1" alt=""/> Ed 2007: Day 3 Van de Craen2007-11-07T10:18:00-08:00Develop a Community Solution with VSTO 3.0, Office Open XML and WSS 3.0 Mario Szpuszta builds a restaurant rating web solution from scratch. Restaurant owners can submit their restaurant details in the form of a Word 2007 document containing Content Controls mapped to a custom XML part. A custom ... (More)<img src="" height="1" width="1" alt=""/> 2007: Exception when handling a document renaming event Van de Craen2007-11-07T06:07:00-08:00Situation I have written a small Event Handler to automatically copy the file name and version to custom text fields in order to be able to use them in Microsoft Word. ItemAdded ItemCheckedOut ItemUpdated Here's a small sample of the code I'm using: public override void ItemUpdated(SPItemE... (More)<img src="" height="1" width="1" alt=""/> Ed 2007: Day 2 Van de Craen2007-11-06T15:50:00-08:00I didn't get a lot of sleep last night because here in Spain they tend to have dinner really late in the evening. 
.NET Developers Advanced Introduction to SharePoint 2007 Ted Pattison delivered an excellent session for .NET developers explaining in detail some of the aspects and pitfalls... (More)<img src="" height="1" width="1" alt=""/> Ed 2007: Day 1 Van de Craen2007-11-06T15:49:00-08:00The first day of Tech Ed 2007 started very early for my flight from Brussels to Barcelona followed by a brief check-in at the hotel. This was my first year of Tech Ed and it was amazing to see how massive this event really is. Keynote S. Somasegar provided the keynote with some interesting upcomin... (More)<img src="" height="1" width="1" alt=""/> Kit For SharePoint: Beta 2 Van de Craen2007-10-16T07:18:00-07:00If you're wondering why my RSS feed was acting funny today; my blog was updated to CKS:EBE 2.0 Beta 2. There were no issues installing this release but I still needed to update my theme files (master page, XSL) because of all the new stuff in Beta 2. I immediately fixed some minor bugs where ... (More)<img src="" height="1" width="1" alt=""/> by FeedBurner Van de Craen2007-10-12T05:00:00-07:00From now on I'm syndicated by FeedBurner:<img src="" height="1" width="1" alt=""/> Forms Services problem with anonymous submitted forms using administrator-approved form templates Van de Craen2007-10-12T04:32:00-07:00Introduction We have designed an InfoPath 2007 Form that can be filled in from the browser using InfoPath Forms Services (using MOSS 2007). The MOSS 2007 Enterprise environment has an NTLM authenticated Web Application () and an anonymous extended Web Application (... (More)<img src="" height="1" width="1" alt=""/> Explorer crash when opening a MS Office document on SharePoint Van de Craen2007-10-12T02:53:00-07:00Issue You click on a Microsoft Office document inside a SharePoint Document Library and the browser crashes. Event Type: ErrorEvent Source: Application ErrorEvent Category: NoneEvent ID: 1000Date: 10/08/2007Time: 14:47:01 AMUser: N/AComputer: PC001Descr... 
(More)<img src="" height="1" width="1" alt=""/> Method Van de Craen2007-09-24T05:48:00-07:00A post about the ReplaceLink method of a Microsoft.SharePoint.SPListItem object. This method replaces all instances of a given absolute URL with a new absolute URL inside a SharePoint List Item. Remarks Only applies to URLs formatted as hyperlink (Rich Text Field) or inside Hyperlin... (More)<img src="" height="1" width="1" alt=""/>: 25 September 2007 Van de Craen2007-09-12T05:04:00-07:00 18:00 – 18:30 Registration and Welcome 18:30 – 20:15 Session 1: Guidelines and Best Practices for a Successful SharePoint Deployment within Your Organization Join this session if you are looking for answers to questions like ‘When is it appropr... (More)<img src="" height="1" width="1" alt=""/> Kit for SharePoint Van de Craen2007-09-10T04:27:00-07:00In case you were wondering what I've been up to lately: I have joined the Community Kit for SharePoint team to work on the Enhanced Blog Edition (EBE). This amazing project really extends some of the basic SharePoint 2007 functionality such as Intranet/Extranet deployments, blogging, wik... (More)<img src="" height="1" width="1" alt=""/> 2007: People Search Options shown by default (repost) Van de Craen2007-08-28T02:03:00-07:00Instructions Go to the People Search Page in your SharePoint Portal and modify the page (eg. /SearchCenter/Pages/people.aspx) Add a Content Editor Web Part right below the People Search Box Web Part Add the following javascript to the CEWP: <script type="text/javascript">var a... (More)<img src="" height="1" width="1" alt=""/> 1.1 CTP released Van de Craen2007-08-21T07:05:00-07:00Via Microsoft SharePoint Products and Technologies Team Blog: What's New in VSeWSS 1.1? WSP View, aka "Solution Package editing" New Item Templates: "List Instance" project item "List Event Handler" project item "_layout file" project item Fas... 
(More)<img src="" height="1" width="1" alt=""/>, Antivirus and Alert Notification Van de Craen2007-08-21T01:24:00-07:00Problem You receive a notification about an alert being created for a SharePoint List but receive no actual alerts after that. Cause Alerts not being received can be caused by so many things but one of the possible causes is your antivirus software on the SharePoint Server. Our antivirus softw... (More)<img src="" height="1" width="1" alt=""/> Server 2005: Importing from Excel error (repost) Van de Craen2007-08-14T02:55:00-07:00Something weird that occured today: I had an Excel file that I needed to import into my SQL Server 2005 database. I followed the instructions but kept getting an error. Problem Error 0xc020901c: Data Flow Task: There was an error with output column "Agenda 2" (63) on output &... (More)<img src="" height="1" width="1" alt=""/> Van de Craen2007-08-13T01:46:00-07:00Description By default when you upload multiple files they are stored as the default 'Document' Content Type. Since I had a lot of templates to upload (over and over again) I created this tool to ease the pain. It's a Windows application that can upload multiple files directly as a specified Conte... (More)<img src="" height="1" width="1" alt=""/> and Site Templates (repost) Van de Craen2007-07-24T04:52:00-07:00By default your SharePoint pages are ghosted, which means the layout template is stored on the web front end server's file system, while the content is stored in SQL Server. Once you make modifications to a page using SharePoint Designer 2007, the page becomes unghosted; both layout and content ar... (More)<img src="" height="1" width="1" alt=""/> to connect publishing custom string handler for output caching (repost) Van de Craen2007-07-24T04:46:00-07:00Issue This had me troubling for quite a while and looking on the Internet didn't really give any real solutions (here and here). 
In my case the errors were caused by a custom Web Service that I added to SharePoint. When one of the Web Parts called the Web Service an event log entry was written:... (More)<img src="" height="1" width="1" alt=""/> characters in Site URL (repost) Van de Craen2007-07-24T04:43:00-07:00General When creating sites through the SharePoint Object Model, the Web Services or the Web Interface you need to filter the illegal characters when specifying the Site Name (which is the part of the URL that references your site). Here's a list of common illegal characters: # % & * {... (More)<img src="" height="1" width="1" alt=""/> WebPart Life Cycle reminder (repost) Van de Craen2007-07-24T04:38:00-07:00I'm currently programming some connectable Web Parts for MOSS 2007 and want to make this a small note to self: Web Part Life Cycle on page load OnInit OnLoad Connection Consuming CreateChildControls OnPreRender Render (RenderContents, etc) Web Part Life Cycle on button click OnInit CreateChi... (More)<img src="" height="1" width="1" alt=""/> 2007 Content Control mapping tool (repost) Van de Craen2007-07-24T04:36:00-07:00Here's a really handy tool for mapping your custom XML to Content Controls inside the Word document.<img src="" height="1" width="1" alt=""/> 2007 and SQL Server collation: Latin1_General_CI_AS_KS_WS (repost) Van de Craen2007-07-24T04:25:00-07:00Make sure the SQL Server collation for your SharePoint 2007 databases is set to Latin1_General_CI_AS_KS_WS. 
Case insensitive Accent sensitive Kana sensitive Width sensitive This was implemented because this collation most resembles NTFS file name restrictions.<img src="" height="1" width="1" alt=""/> Belux article: Customizing the Content Query Web Part (repost) Van de Craen2007-07-24T04:16:00-07:00My first article on MSDN Belux: It was written for MOSS 2007 Beta 2 but still very much applies to MOSS 2007 RTM.<img src="" height="1" width="1" alt=""/> Van de Craen2007-07-24T04:08:00-07:00Description A SharePoint Library event handler to automatically link a picture to a MOSS 2007 User Profile. The file names must be of the following format: [accountName].[ext] ItemAdded The event handler extracts the account name from the new file The Profile Picture URL property of the c... (More)<img src="" height="1" width="1" alt=""/> Van de Craen2007-07-24T03:40:00-07:00Description A tool for registering/unregistering Event Handlers from a SharePoint List or Library. It uses an XML file to register event handlers. <definitions> <definition assembly="ProfilePictureEventHandler, Version=1.0.0.0, Culture=neutral, PublicKeyToken=4... (More)<img src="" height="1" width="1" alt=""/> create site using custom site definition error Van de Craen2007-07-23T05:28:00-07:00A while ago I blogged about creating a custom site definition for WSS 3.0/MOSS 2007. The problem I experienced today was with a WSS 3.0 installation and creating sites programmatically. Say I have copied the STS Site Definition and made some changes so only the 'blank' template remains. I ad... (More)<img src="" height="1" width="1" alt=""/> Studio 2005 Extensions for Windows SharePoint Services 3.0 Van de Craen2007-07-23T05:27:00-07:00They have been available for quite a while now: I have just started experimenting with them and they do have some specific oddities: "Object reference not set to... 
(More)<img src="" height="1" width="1" alt=""/> Query and "Attempted to perform an unauthorized operation" Van de Craen2007-07-23T05:26:00-07:00When I was trying some of the SharePoint 2007 (both MOSS 2007 and WSS 3.0) Search Query tools available... MOSS Query Tool Search Query Web Service Test Tool ... I kept getting the following error: System.Web.Services.Protocols.SoapException: Server was unable to process request. --... (More)<img src="" height="1" width="1" alt=""/> and try/catch Van de Craen2007-07-23T05:24:00-07:00When doing some SharePoint manipulations using the Object Model you will likely have an instance of a SPWeb object. When you try to update an item or list located in the SPWeb instance or update the instance itself you will most likely set the AllowUnsafeUpdates property to true before calling the ... (More)<img src="" height="1" width="1" alt=""/> 2007: Login failed for user 'DOMAIN\user' periodically (per minute) Van de Craen2007-07-23T04:36:00-07:00Issue One of our SharePoint installations suffered a crash, so we recreated the farm and restored a content backup. The search indexes and user profiles were rebuilt/imported. Since the crash we noticed a lot of 'Login failed' messages in the SQL Server machine's event log. These events o... (More)<img src="" height="1" width="1" alt=""/> 3.0 Event Handler: pre-event cancelling issues Van de Craen2007-07-20T08:37:00-07:00I'm currently implementing an Event Handler for a Picture Library - a project I will blog about in the near future - and have learned the hard way about handling event handlers... The issue I am experiencing applies to WSS 3.0 (and MOSS 2007) and seems to be about the pre-event event types. Th... (More)<img src="" height="1" width="1" alt=""/> 2003 Web Parts and Office 2007 clients Van de Craen2007-07-20T02:13:00-07:00The Office 2003 Web Parts can still be installed on WSS 3.0 and MOSS 2007. 
STSADM.EXE -o addwppack -filename "Microsoft.Office.DataParts.cab" -globalinstall For your clients to see the Web Parts correctly rendered they will need the Office 2003 client libraries and Internet E... (More)<img src="" height="1" width="1" alt=""/> Van de Craen2007-07-18T08:50:00-07:00Description A default List View Web Part allows most types of columns to be filtered by a user. Unfortunately it is not possible to filter on eg. the name of a file in a document library. The FilteredViewWebPart allows to dynamically specify an additional filter before retrieving ite... (More)<img src="" height="1" width="1" alt=""/> Van de Craen2007-07-18T08:45:00-07:00Description A default Content Query Web Part has an item limit that just retrieves the first X items. For displaying it allows to group these items based on a property (Created Date, Author, Site, ...). The GroupByItemLimitWebPart allows to specify an item limit per group (when grouping is enabled... (More)<img src="" height="1" width="1" alt=""/>
|
http://feeds.feedburner.com/vandest
|
CC-MAIN-2018-30
|
refinedweb
| 13,671
| 60.55
|
I would like to make an interactive world map that consists of clickable regions, just like in this map.
I have created two textures: one to display the world map, and a painted "id" texture where each province is colored differently.
When I click on the map I cast a ray and read the texture coordinate that was hit.
This is my C# code
public class MouseManager : MonoBehaviour {
public Texture2D id_map;
public Color color;
private void Update()
{
Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
RaycastHit hit;
if (Physics.Raycast(ray, out hit))
{
if (Input.GetMouseButtonDown(0))
{
Vector2 textcoord = hit.textureCoord;
textcoord.x *= id_map.width;
textcoord.y *= id_map.height;
color = id_map.GetPixel((int)textcoord.x, (int)textcoord.y);
}
}
}
}
I want to change each province's color to the color of its owner.
For example, the province of Provence belongs to France and is colored blue, while the province of England belongs to the United Kingdom and is colored red...
Do you know any ways to do that?
Thanks for the help!
Answer by tormentoarmagedoom
·
Sep 19, 2018 at 10:37 PM
Good day.
Why are you reading the pixel? If you want to change the color of a whole region, you only need to have all textures stored (a texture for each region in each color) and detect the object (region) you clicked. You can do this with the Event System, or with colliders and:
OnMouseOver()
OnMouseDown()
OnMouseExit()
Then just change the object sprite.
Bye!
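The object-per-region approach described in this answer could be sketched as follows. This is only an illustration: RegionClickHandler and ownerColor are invented names, each region must be its own GameObject with a Collider attached (OnMouseDown only fires on colliders), and the material is assumed to use a color-tintable shader.

```
using UnityEngine;

// One instance per region object: clicking the region's collider
// recolors that region's material directly, no pixel lookup needed.
public class RegionClickHandler : MonoBehaviour
{
    public Color ownerColor = Color.blue; // set per region, e.g. France = blue

    void OnMouseDown()
    {
        // Called by Unity when this object's collider is clicked.
        GetComponent<Renderer>().material.color = ownerColor;
    }
}
```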
Thanks for your reply @tormentoarmagedoom
I am reading the pixel because in the painted map each province has its own color; in game, when I click the mouse I cast a ray and get the texture coordinate that was hit. Then I look up the color at that pixel to find the corresponding province.
My problem is how to mark, in game, all the provinces belonging to the same country with that country's color.
That's a rather roundabout way to detect which region you clicked... It works, yes, but it's like closing your eyes, going outside, and asking someone what fruit you're holding, when you could simply open your eyes.
If you intend to change the color of one region, you will need all regions separated as different objects, one object per region. Then, by detecting the object, you immediately know which region it is and can change its texture, color, or anything else about it, instead of reading a color, mapping the color to a region, then searching for the region with that name, and so on.
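For the original single-texture, pixel-based approach, recoloring a province could be sketched like this. This is an illustration only: ownerByProvince and PaintProvince are invented names, both textures must be imported with Read/Write enabled, and the owner mapping is assumed to be filled in elsewhere (Unity does not serialize Dictionary fields in the Inspector).

```
using System.Collections.Generic;
using UnityEngine;

public class ProvincePainter : MonoBehaviour
{
    public Texture2D idMap;      // painted texture: one distinct color per province
    public Texture2D displayMap; // visible map texture (Read/Write enabled)

    // Illustrative mapping: province id color -> owner country color.
    public Dictionary<Color32, Color32> ownerByProvince = new Dictionary<Color32, Color32>();

    // Call with the id-map color sampled at the clicked texture coordinate.
    public void PaintProvince(Color32 provinceId)
    {
        if (!ownerByProvince.TryGetValue(provinceId, out Color32 ownerColor))
            return; // unknown province: nothing to repaint

        Color32[] ids = idMap.GetPixels32();
        Color32[] pixels = displayMap.GetPixels32();
        for (int i = 0; i < ids.Length; i++)
        {
            // Repaint every pixel whose id-map color matches the clicked province.
            if (ids[i].Equals(provinceId))
                pixels[i] = ownerColor;
        }
        displayMap.SetPixels32(pixels);
        displayMap.Apply(); // upload the modified pixels to the GPU
    }
}
```

To color all provinces of one country at once, the same loop can simply be run for each province id mapped to that country.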
|
https://answers.unity.com/questions/1554865/interactive-world-map-with-selectable-regions-1.html
|
CC-MAIN-2019-22
|
refinedweb
| 470
| 70.84
|
Context bounds were introduced in Scala 2.8.0, and are typically used with the so-called type class pattern, a pattern of code that emulates the functionality provided by Haskell type classes, though in a more verbose manner.
A context bound requires a parameterized type, such as Ordered[A], but unlike String.
A context bound describes an implicit value. It is used to declare that for some type A, there is an implicit value of type B[A] available. The syntax goes like this:
def f[A : B](a: A) = g(a) // where g requires an implicit value of type B[A]
The common example of usage in Scala is this:
def f[A : ClassTag](n: Int) = new Array[A](n)
An Array initialization on a parameterized type requires a ClassTag to be available: the ClassTag is what makes it possible to initialize new arrays without concrete types.
Context bounds are implemented with implicit parameters, given their definition. The syntax shown above is actually syntactic sugar for what really happens. See below how it de-sugars:
def f[A : B](a: A) = g(a)
def f[A](a: A)(implicit ev: B[A]) = g(a)
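As a small sketch of the type class pattern mentioned at the top, here is a context bound used with a hand-rolled type class. Show is an invented example, not part of the Scala standard library:

```scala
// A minimal type class: evidence that values of A can be rendered as text.
trait Show[A] {
  def show(a: A): String
}

object Show {
  // Type class instances, provided as implicit values in the companion object
  // so they are found automatically during implicit resolution.
  implicit val intShow: Show[Int] = new Show[Int] {
    def show(a: Int) = a.toString
  }
  implicit val stringShow: Show[String] = new Show[String] {
    def show(a: String) = "\"" + a + "\""
  }
}

// The context bound [A : Show] de-sugars to an implicit Show[A] parameter,
// which is retrieved inside the method body with implicitly.
def describe[A : Show](a: A): String = implicitly[Show[A]].show(a)

describe(42)   // "42"
describe("hi") // the string hi wrapped in quote characters
```

Calling describe on a type with no Show instance in scope fails at compile time, which is the point of the pattern: the availability of the capability is checked statically.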
This answer was originally submitted in response to this question on Stack Overflow.
|
http://docs.scala-lang.org/tutorials/FAQ/context-bounds
|
CC-MAIN-2017-04
|
refinedweb
| 195
| 53.21
|