text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
On 10/08/16 22:45, Samuel Thibault wrote:
> It looks a bit odd to be remapping something that was just allocated,
> but I guess it makes portability easier.
Remapping needs the page allocator to be active as page tables need
to be allocated. :-)
>
> Juergen Gross, on Fri 05 Aug 2016 19:36:00 +0200, wrote:
>> diff --git a/balloon.c b/balloon.c
>> index 4c18c5c..75b87c8 100644
>> --- a/balloon.c
>> +++ b/balloon.c
>> @@ -44,3 +44,20 @@ void get_max_pages(void)
>> nr_max_pages = ret;
>> printk("Maximum memory size: %ld pages\n", nr_max_pages);
>> }
>> +
>> +void alloc_bitmap_remap(void)
>> +{
>> + unsigned long i;
>> +
>> + if ( alloc_bitmap_size >= ((nr_max_pages + 1) >> (PAGE_SHIFT + 3)) )
>> + return;
>> +
>> + for ( i = 0; i < alloc_bitmap_size; i += PAGE_SIZE )
>> + {
>> + map_frame_rw(virt_kernel_area_end + i,
>> + virt_to_mfn((unsigned long)(alloc_bitmap) + i));
>> + }
>> +
>> + alloc_bitmap = (unsigned long *)virt_kernel_area_end;
>> + virt_kernel_area_end += round_pgup((nr_max_pages + 1) >> (PAGE_SHIFT +
>> 3));
>
> Ditto here, better check against hitting VIRT_DEMAND_AREA.
Okay.
>
>> diff --git a/include/balloon.h b/include/balloon.h
>> index b8d9335..0e2340b 100644
>> --- a/include/balloon.h
>> +++ b/include/balloon.h
>> @@ -31,11 +31,13 @@ extern unsigned long virt_kernel_area_end;
>>
>> void get_max_pages(void);
>> void arch_remap_p2m(unsigned long max_pfn);
>> +void alloc_bitmap_remap(void);
>>
>> #else /* CONFIG_BALLOON */
>>
>> static inline void get_max_pages(void) { }
>> static inline void arch_remap_p2m(unsigned long max_pfn) { }
>> +static inline void alloc_bitmap_remap(void) { }
>
> I'd say call it rather mm_alloc_bitmap_remap(). We have C namespace
> issues with the stubdom applications, and the alloc_bitmap_ prefix seems
> quite generic (even if less that bitmap_), mm_ adds some kernelish
> notion.
Okay.
>
>> extern unsigned long nr_free_pages;
>>
>> +extern unsigned long *alloc_bitmap;
>> +extern unsigned long alloc_bitmap_size;
>
> Ditto, mm_bitmap and mm_bitmap_size.
Okay.
>
> Otherwise it looks good.. | https://lists.xenproject.org/archives/html/minios-devel/2016-08/msg00045.html | CC-MAIN-2019-09 | refinedweb | 252 | 55.44 |
This post is part of a series on .NET 6 and C# 10 features. Use the following links to navigate to other articles in the series and build up your .NET 6/C# 10 knowledge! While the articles are seperated into .NET 6 and C# 10 changes, these days the lines are very blurred so don’t read too much into it.
.NET 6
Minimal API Framework
DateOnly and TimeOnly Types
LINQ OrDefault Enhancements
Implicit Using Statements
IEnumerable Chunk
SOCKS Proxy Support
Priority Queue
MaxBy/MinBy
C# 10
Global Using Statements
File Scoped Namespaces
In a previous post, we talked about the coming ability to use global using statements in C# 10. The main benefit being that you were now able to avoid the clutter of declaring namespaces over and over (Things like using System etc) in every single file. I personally think it’s a great feature!
So it only makes sense that when you create a new .NET 6 project, that global usings are implemented right off the bat. After all, if you create a new web project, there are many many files auto generated as part of the template that will call upon things like System or System.IO, and it makes sense to just use GlobalUsings straight away from the start right?
Well… .NET 6 have solved the problem in a different way. With Implicit Using statements, your code will have almost invisible using statements declared globally! Let’s take a look at this new feature, and how it works.
Getting Setup With .NET 6 Preview
At the time of writing, .NET 6 is in preview, and is not currently available in general release. That doesn’t mean it’s hard to set up, it just means that generally you’re not going to have it already installed on your machine if you haven’t already been playing with some of the latest fandangle features.
To get set up using .NET 6, you can go and read out guide here :
Remember, this feature is *only* available in .NET 6. Not .NET 5, not .NET Core 3.1, or any other version you can think of. 6.
Implicit Global Usings
Implicit Global Usings are actually a hidden auto generated file, inside your obj folder, that declares global using statements behind the scenes. Again, this is only the case for .NET 6 and C# 10. In my case, if I go to my project folder then go obj/Debug/net6.0, I will find a file titled “RandomNumbers.ImplicitNamespaceImports.cs”.
Opening this file, I can see it contains the following :
global using global::System;
global using global::System.Collections.Generic;
global using global::System.IO;
global using global::System.Linq;
global using global::System.Net.Http;
global using global::System.Threading;
global using global::System.Threading.Tasks;
Note the fact this is an auto generated file, and we can’t actually edit it here. But we can see that it declares a whole heap of global using statements for us.
The project I am demoing this from is actually a console application, but each main project SDK type has their own global imports.
Console/Library
System
System.Collections.Generic
System.IO
System.Linq
System.Net.Http
System.Threading
System.Threading.Tasks
Web
In addition to the console/library namespaces :
System.Net.Http.Json
Microsoft.AspNetCore.Builder
Microsoft.AspNetCore.Hosting
Microsoft.AspNetCore.Http
Microsoft.AspNetCore.Routing
Microsoft.Extensions.Configuration
Microsoft.Extensions.DependencyInjection
Microsoft.Extensions.Hosting
Microsoft.Extensions.Logging
Worker
In addition to the console/library namespaces :
Microsoft.Extensions.Configuration
Microsoft.Extensions.DependencyInjection
Microsoft.Extensions.Hosting
Microsoft.Extensions.Logging
Of course, if you are unsure, you can always quickly create a project of a certain type and check the obj folder for what’s inside.
If we try and import a namespace that previously appeared in our implicit global usings, we will get the usual warning.
Opting Out
As previously mentioned, if you are using .NET 6 and C# 10 (Which in a years time, the majority will be), then this feature is turned on by default. I have my own thoughts on that, but what if you want to turn this off? This might be especially common when an automatically imported namespace has a type that conflicts with a type you yourself are wanting to declare. Or what if you just don’t like the hidden magic full stop?
The only way to turn off implicit using statements completely is to add the following line to your .csproj file :
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net6.0</TargetFramework>
<LangVersion>preview</LangVersion>
<DisableImplicitNamespaceImports>true</DisableImplicitNamespaceImports>
</PropertyGroup>
This turns off all implicit imports. However, there is also another option to selectively remove (and add) implicit namespaces like so :
<ItemGroup>
<Import Remove=”System.Threading” />
<Import Include=”Microsoft.Extensions.Logging” />
</ItemGroup>
Here we are removing System.Threading and adding Microsoft.Extensions.Logging to the global implicit using imports.
This can also be used as an alternative to using something like a GlobalUsings.cs file in your project of course, but it is a somewhat “hidden” feature.
Is This A Good Feature? / My Thoughts
I rarely comment on new features being good or bad. Mostly because I presume that people much smarter than me know what they are doing, and I’ll get used to it. I’m sure when things like generics, lambdas, async/await got introduced, I would have been saying “I don’t get this”.
On the surface, I like the idea of project types having some sort of implicit namespace. Even imports that I thought were kinda dumb to be global such as “System.Threading.Tasks”, I soon realized are needed in every single file if you are using async/await since your methods must return a type of Task.
That being said, I don’t like the “hidden”-ness of everything. I’m almost certain that stackoverflow will be overwhelmed with people asking why their compiler is saying that System previously appeared in the namespace when it clearly didn’t. It’s not exactly intuitive to go into the obj folder and check what’s in there. In fact, I can’t think of a time I have in the past 15 years. Remember, if you upgrade an existing .NET 5 solution to .NET 6, you will automatically be opted in.
I feel like a simpler idea would have been to edit the Visual Studio/Command Line templates that when you create a new web project, it automatically creates a GlobalUsings.cs file in the root of the project with the covered implicit namespaces already in there. To me, that would be a heck of a lot more visible, and wouldn’t lead to so much confusion over where these hidden imports were coming from.
That being said, maybe in a years time we just get used to it and it’s just “part of .NET” like so many other things. What’s your thoughts?
The post Implicit Using Statements In .NET 6 appeared first on .NET Core Tutorials. | https://online-code-generator.com/implicit-using-statements-in-net-6/ | CC-MAIN-2021-43 | refinedweb | 1,169 | 58.38 |
import "github.com/go-acme/lego/challenge"
challenges.go provider.go
const ( // HTTP01 is the "http-01" ACME challenge // Note: ChallengePath returns the URL path to fulfill this challenge HTTP01 = Type("http-01") // DNS01 is the "dns-01" ACME challenge // Note: GetRecord returns a DNS record which will fulfill this challenge DNS01 = Type("dns-01") // TLSALPN01 is the "tls-alpn-01" ACME challenge TLSALPN01 = Type("tls-alpn-01") )
func GetTargetedDomain(authz acme.Authorization) string
type Provider interface { Present(domain, token, keyAuth string) error CleanUp(domain, token, keyAuth string) error }
Provider enables implementing a custom challenge provider. Present presents the solution to a challenge available to be solved. CleanUp will be called by the challenge if Present ends in a non-error state.
ProviderTimeout allows for implementing a Provider where an unusually long timeout is required when waiting for an ACME challenge to be satisfied, such as when checking for DNS record propagation. If an implementor of a Provider provides a Timeout method, then the return values of the Timeout method will be used when appropriate by the acme package. The interval value is the time between checks.
The default values used for timeout and interval are 60 seconds and 2 seconds respectively. These are used when no Timeout method is defined for the Provider.
Type is a string that identifies a particular challenge type and version of ACME challenge.
Package challenge imports 3 packages (graph) and is imported by 14 packages. Updated 2019-11-06. Refresh now. Tools for package owners. | https://godoc.org/github.com/go-acme/lego/challenge | CC-MAIN-2019-51 | refinedweb | 251 | 52.39 |
Long term TODO list
This list is primarily for entertainment. However, it might turn out to be useful should the unthinkable happen, and Silver needs to get into proper shape, fast, before it has too many users. ;)
A principle to start:
- This list isn’t a general TODO list. It’s specifically for things that would have serious effects on code written in Silver. i.e. things that should be fixed before there ever hits a critical mass of code that would make otherwise good changes impractical.
- Replace the entire IO portion of the language. Fortunately this is as small as it is bad.
- Full featured FFI. We should be able to ‘import java …’ (Perhaps also structured in such a way that you can write ‘import scala …’ as a pure extension.)
- Package management built-in. Must include versioning support. (Things to look into: maven & OSGi, since integration with the ecosystem might be nice. What does Scala do? Anyone implementing this should look at Go, and consider any pitfalls from Hackage, too. Other comparisons? Perl CPAN? Ruby Gems?)
- Fully solve Unicode and text handling.
- Fully solve code documentation.
- Uniform error/exception handling. Right now, error handling is awful. Trying to read from a non existent file means unrecoverable program termination.
- Prelude redesign. Pretty much it’s just accumulating at the moment. It should be actually designed. I also think IO shouldn’t be in the prelude. Also need a better picture of what’s actually in the standard library.
- Fix up scoping a bit. The compiler could be a lot more efficient with a small tweak to module import semantics, I think… Certainly at the moment we’re importing the prelude on a per-file basis which is kinda bogus!
- Need efficient libraries for basic things. Arrays, sets, different kinds of maps. Need to de-emphasize lists. Again, the key here is design. The interface to standard libraries will affect user developed ones. In addition, we need to figure out how to deal with “variant proliferation.” e.g. Problems likes Haskell’s “strict vs lazy, boxed vs unboxed, etc.” combinatorial explosion.
- Language features that would induce a dramatic redesign of standard library components:
- “Pseudo attributes” that allow functions to be moved within the dot namespace of a type. e.g. lists should not be munched with
head(l)but with
l.head, as this completely removes
headfrom the global namespace. (bonus: this will aid completion, later, too.) (aside: well, okay, really you should use pattern matching for lists, but you get the example.)
- OO-style abstraction mechanism. Typeclasses, probably. Not certain just how fully-featured, yet. But certainly this will be used extensively in libraries. There are dragons here, though, the interaction with attributes needs careful thought. (The trouble is attributes sorta ARE one function typeclasses, without the ability to abstract i.e. there is no type for “anything with .attr”. So a typeclass declaration should allow not just functions supplied in instances, but perhaps also let it ensure an attribute occurs on the type, too.)
- There is also the matter of Location and possibly annotation, but this might actually be accomplished in the near term!
- There is a substantial matter of dealing with state, but I suspect this is so difficult to do really well, and the pay off is too far on the practice side of “practice vs research,” that it simply doesn’t pay to do it until after this hypothetical critical mass, suffering whatever consequences of that it must. (State would play into doing well the things remote attributes, circular attributes, and threaded attributes are often tasked to do in other systems. The bonus of state is largely efficiency, and not much more than that.)
- There are very few features of the language itself that imply any kind of poor performance. The only possible one is the choice to be lazy by default. This should be reconsidered only carefully. Very, very carefully. Strictness could impact extensibility in unknown ways. (On the other hand, extensibility that relies on incidental strictness is bad.)
We should probably:
- Maintain “silver grammar = java package.”
- A jar file should consist of a “Silver element.” Elements should declare version and versioned dependencies on other elements. And version of Silver.
- Should probably be compatible with Java jar file schemes of some sort… OSGi/Maven2/Ivy? This needs investigation.
- Must tolerate multiple versions of a library being “installed.” At first, however, we could require that any particular build use just one version. Long term, however, we might even want to think about multiple versions functioning in parallel? (So long as their types are considered distinct.)
- Metadata should probably be external to the package, though. It’d be really nice if we could have a CI server determine what versions are compatible, etc. (Also, we might have a unique ability to statically check semantic versioning, to some extent…)
- Much of the “do” list above should probably be taken care of. This could be summarized as making the Silver language more general purpose.
- IO, Typeclasses, library redesign, namespace management, FFI, SilverDoc
- Need packaging.
- Need runtime composition. There for silver grammars, missing Copper!
- Q: How far should the IDE component progress? At least mildly. Certainly many languages have a 1 without an IDE!
- Q: Debugging? Perhaps not a show stopper for a 1? | https://melt.cs.umn.edu/silver/dev/long-term-todo/ | CC-MAIN-2021-49 | refinedweb | 884 | 59.8 |
Intel To Make A Greener Microprocessor
simoniker posted more than 10 years ago | from the blooming-silicon dept.... (5, Funny)
Anonymous Coward | more than 10 years ago | (#8801158)
Haha, just kidding, I own an AMD.
Re:So, wait... (1)
the_thunderbird (682833) | more than 10 years ago | (#8801362)
Reduced lead? (5, Insightful)
mkaiser (20342) | more than 10 years ago | (#8801160)
Next step: reduce power consumption.
Re:Reduced lead? (4, Insightful)
Specialist2k (560094) | more than 10 years ago | (#8801231)
This is definitely a necessity as the major ecological impact of modern consumer and IT products occurs during the utilization phase and not during the production or disposal phase.
Banias and Dothan (4, Insightful)
ciroknight (601098) | more than 10 years ago | (#8801285) (4, Insightful)
sfe_software (220870) | more than 10 years ago | (#8801364):Reduced lead? (0)
Anonymous Coward | more than 10 years ago | (#8801315)
Good job (2, Funny)
SinistarJAB (558943) | more than 10 years ago | (#8801161)
Greener Chips? (0, Offtopic)
FS1 (636716) | more than 10 years ago | (#8801165)
Re:Greener Chips? (1, Interesting)
DeathPenguin (449875) | more than 10 years ago | (#8801232)
Re:Greener Chips? (-1, Flamebait)
Anonymous Coward | more than 10 years ago | (#8801398)
You lot must mod your own buddies. Do you have a pretend you're my friend arrangement ?
Retards.
Re:Greener Chips? (1)
MrIrwin (761231) | more than 10 years ago | (#8801431)
Re:Greener Chips? (2, Informative)
sotonboy (753502) | more than 10 years ago | (#8801233)
Re:Greener Chips? (5, Insightful)
joe_bruin (266648) | more than 10 years ago | (#8801257)
intel is meeting its upcoming legal requirements. the real win here (for intel), is turning something they are legally obligated to do into an "environmentally friendly" pr victory. the news media seems to be eating it up.
Re:Greener Chips? (1)
Beeswarm (693535) | more than 10 years ago | (#8801415)
This product is known by the state of California to contain lead. Wash hands after using.
This warning was printed on the front of the box my mouse came in. It isn't just the Europeans.
Re:Greener Chips? (1)
MrIrwin (761231) | more than 10 years ago | (#8801439)
first post (-1, Offtopic)
Anonymous Coward | more than 10 years ago | (#8801166)
question (5, Interesting)
weekendwarrior1980 (768311) | more than 10 years ago | (#8801169)
Re:question (5, Informative)
Max Romantschuk (132276) | more than 10 years ago | (#8801193) (5, Informative)
krosk (690269) | more than 10 years ago | (#8801224)
Actually, intel is moving away from measuring chip speed by GHZ. Wired just had this article [wired.com] about it.
Basically, Intel is a couple years behind AMD who is now using numbers like 2300+ to describe chip speed.
Re:question (0)
Anonymous Coward | more than 10 years ago | (#8801244)
Re:question (5, Interesting)
ciroknight (601098) | more than 10 years ago | (#8801251)
Re:question (3, Informative)
MrIrwin (761231) | more than 10 years ago | (#8801449) (2, Interesting)
ciroknight (601098) | more than 10 years ago | (#8801470)
Apple got it right by using Benchmarks to sell their product, even if the benchmarks are strange and deceptive. Hey, lying, cheating, and stealing are what got Microsoft to the top, everyone's gotta play a little dirty.
And yes, buying a PC should be an emotional experience, as well as a scientific one. PC's, like Cars, are where us humans now spend a great deal of our lives. (nerds/geeks especially). We need to have some level of attachment to the machine we're using just so that it doesn't drive us mad.
Re:question (0)
Anonymous Coward | more than 10 years ago | (#8801455)
Anyway. Back to the rant about ignorant regular people...
Re:question (1)
kdart (574) | more than 10 years ago | (#8801474)
"I got more bogmips than you! "
"Yeah! prove it!"
"watch me 'cat
"Oooh... duuuude...."
Re:question (0)
tarunthegreat (746088) | more than 10 years ago | (#8801283)
Hey the 3200 series computers cost $800, how come the 2100 series computers cost $1500?
And then the inevitable: "That's all dandy, but what's the clock speed?". I think they'd be better off keeping things simple:
Category I/Fast/Home-SOHO use
Category II/SuperFast/Medium-Large Business
Category III/!#@$!#@/Industrial Strength
Category IV/Only visible after smoking up...
Okay, maybe I've confirmed that I'm not getting any jobs in marketing, but seriously, they can come up with something better than the esiting proposal...
Re:question (0)
Anonymous Coward | more than 10 years ago | (#8801366)
Re:question (4, Insightful)
sfe_software (220870) | more than 10 years ago | (#8801388) (1)
kidgenius (704962) | more than 10 years ago | (#8801203)
Re:question (1)
weekendwarrior1980 (768311) | more than 10 years ago | (#8801302)
Re:question (1)
sfe_software (220870) | more than 10 years ago | (#8801409) secondary (IOW, it's not so much Intel that makes this a secondary concern, but the market).
Transmeta's market is all about power consumption; thus, their CPUs are designed to be more efficient, and slower.
AMD is more toward Intel in my opinion, though they've gotten a lot better (remember when the original Athlons were known to have major heat issues?)
To sum it up, I agree with you for the most part, but power consumption is something they all have to deal with quite heavily. That final balance, however, is mostly about what their target market wants.
Re:question (1, Insightful)
Anonymous Coward | more than 10 years ago | (#8801213)
So... (4, Funny)
scrame (767779) | more than 10 years ago | (#8801176)
There not doing it out of the kindness of... (3, Insightful)
Anonymous Coward | more than 10 years ago | (#8801177)
They're (-1, Flamebait)
Anonymous Coward | more than 10 years ago | (#8801199)
Re:They're (0)
Anonymous Coward | more than 10 years ago | (#8801421)
Seriously, is using the right word so damn hard? Or should we forget all this written English nonsense and just post links to mp3s of us saying our comments, since nobody can seem to remember the definitions of the dozen or so commonly used words that sound or are spelled similarly? It's not like learning that "their" and "they're" (or "lose" and "loose, or "your" and "you're", or "to" and "too", etc...) are two completely different words is that hard... You probably spent more effort installing whatever OS you're running than it'd take to learn this little bit of grammar that makes it a hell of a lot easier to understand what you're trying to say.
Re:There not doing it out of the kindness of... (1)
reub2000 (705806) | more than 10 years ago | (#8801294)
Re:There not doing it out of the kindness of... (1)
Propagandhi (570791) | more than 10 years ago | (#8801390)
And as for the amount of lead in a CPU being too little to do any serious damage I'll have you know that according to the U.S. Environmental Protection Agency water with more than 15 parts per billion of lead poses potential health risks. So (let's see if I can divide correctly) 0.000000015% is a health risk. Obviously even if the amount of lead is in the grams or milligrams range it's enough for just 1 cpu to cause serious drinking water risks.
Re:There not doing it out of the kindness of... (1, Insightful)
Anonymous Coward | more than 10 years ago | (#8801360)
Re:There not doing it out of the kindness of... (1)
sfe_software (220870) | more than 10 years ago | (#8801420) cost might just go down). I just think that throwing in a new rule (relating to the particular materials used) into the process at such a potentially deep level would cause a lot of changes to be made (ultimately the added cost may be mostly R&D recoup)...
Green (-1)
Anonymous Coward | more than 10 years ago | (#8801179) [wookielove.net]
Green friendly? (3, Insightful)
DeathPenguin (449875) | more than 10 years ago | (#8801182)
Re:Green friendly? (0)
Anonymous Coward | more than 10 years ago | (#8801210)
Re:Green friendly? (4, Insightful)
ciroknight (601098) | more than 10 years ago | (#8801234)? (2, Funny)
the_womble (580291) | more than 10 years ago | (#8801314)
I am confused: I thought StrongARM was an Intel processor [intel.com]
Re:Green friendly? (3, Insightful)
Yokaze (70883) | more than 10 years ago | (#8801322)? (1)
ciroknight (601098) | more than 10 years ago | (#8801373)), and likewise, they've made a motherboard format for those communities (Pico and Micro BTX). Prescott will most likely be dismissed internally as a mistake, passed into the Xeon line, where Tejas will pick up later. So, they're still hitting 5GHz, but it's a big whoop, as their desktop processor Dothon will be hitting 6000+ (and only running roughly 3.6Ghz, if my predictions on the scalability of Banias and Dothan hold out) The 3.2GHz Prescott consumes even a fair amount more energy than its 3.2Ghz predecessor.
If I were Intel, I'd (falsely) argue that the power it uses extra makes it a more reliable chip for a server, and therefore should be there. Companies are going to hate that it's a thermal hogg, but they'll buy because it's Intel, and they've got the best reputation. No, it is because the mainboards and psu can't deliver the 100A those devices would require. And it is quite a problem to dissipate the heat of such a thing. Remember the new mainboard-layout which shoud cope with that thing? Also an idea of Intel.
In fact, I believe they not only could, but are, delivering more than enough power, although your old 200 Watt would have to say goodnight. Most PC's built by Dell ship with a proprietary atx-like PSU running a measly 220 Watts or so, on the P4 line none-the-less. The power subsystem's there, but using it is more of a risk than anyone wants to take, no?
Re:Green friendly? (4, Interesting)
sfe_software (220870) | more than 10 years ago | (#8801444)? (0)
Anonymous Coward | more than 10 years ago | (#8801338)
They are giving their processors model numbers. There's quite a big difference.
The model number is not necesarily related in any way to performance. As such they will still be publishing the clock speed along with the model number.
RTFA (1)
ciroknight (601098) | more than 10 years ago | (#8801426)
Re:Green friendly? (0)
Anonymous Coward | more than 10 years ago | (#8801374)
Can't dispose of computer parts? (2, Funny)
Anonymous Coward | more than 10 years ago | (#8801191)
What are we supposed to do with our old computers, a beowulf cluster?
Re:Can't dispose of computer parts? (1)
corporatewhore (308338) | more than 10 years ago | (#8801230)
seriously - the third world would love a supply of last year's machines and some tech support to get up on the curve...
Re:Can't dispose of computer parts? (0)
Anonymous Coward | more than 10 years ago | (#8801248)
- some "old 486" boxens are BROKEN, they won't run
- working old boxens have abysmal MIPS/watt ratios
Re:Can't dispose of computer parts? (0)
corporatewhore (308338) | more than 10 years ago | (#8801263)
Don't dispose computer, just install the right SW (1, Informative)
Anonymous Coward | more than 10 years ago | (#8801250)
You can and should make "old" PCs new again with projects like RULE [rule-project.org] (temporarily on idle, will come back for Fedora Core 2)
Reducing waste (3, Insightful)
jmv (93421) | more than 10 years ago | (#8801197)
Anybody know how this is done? (4, Insightful)
benchbri (764527) | more than 10 years ago | (#8801205)? (0)
Anonymous Coward | more than 10 years ago | (#8801220)
Re:Anybody know how this is done? (0)
Anonymous Coward | more than 10 years ago | (#8801271)
Re:Anybody know how this is done? (0)
Anonymous Coward | more than 10 years ago | (#8801352)
Re:Anybody know how this is done? (0)
Anonymous Coward | more than 10 years ago | (#8801471)
Offtopic whining: 40% tin? What was I thinking? I've got a spool of solder on my desk that clearly says 60% Sn! Well, I got the 40% part right at least...
Re:Anybody know how this is done? (1, Funny)
ColaMan (37550) | more than 10 years ago | (#8801432)
Maybe some alloy with cadmium could replace it
hype (4, Insightful)
frovingslosh (582462) | more than 10 years ago | (#8801209)
Re:hype (0)
Anonymous Coward | more than 10 years ago | (#8801288)
That sounds like a really heavy processor.
They should consider reducing the power consumption of the power LED as well, because a typical computer power LED and mainboard consume... well, a lot of power
;-)
a new Viet Nam (0)
Anonymous Coward | more than 10 years ago | (#8801441)
It's a quagmire.
Reservists are being killed. Rumfeld wants to keep the troops that are there now even longer.
SUPPORT THE TROOPS! Toss out Bush - bring them HOME.
(Policticians love to say that a comparison with Viet Nam is "dangerous". Yes, it is - for the ones that sent us into Iraq.)
Alternative heating. (2, Funny)
AndyFewt (694753) | more than 10 years ago | (#8801215)
Re:Alternative heating. (1, Funny)
tarunthegreat (746088) | more than 10 years ago | (#8801326)
How much (0)
Anonymous Coward | more than 10 years ago | (#8801216)
Re:How much (0)
Anonymous Coward | more than 10 years ago | (#8801236)
As far as we know, maybe Intel has been artifically adding 20 atoms of lead to their otherwise lead-free processors, planning ahead for a nice "95% reduction" PR move in 2004, and a "now totally lead-free" PR move in 2008.
How much lead is present in a microprocessor? (5, Insightful)
unixwin (569813) | more than 10 years ago | (#8801223)? (4, Informative)
mhifoe (681645) | more than 10 years ago | (#8801321)
A flip-chip package currently contains 0.4 grams of lead. A tiny amount compared to that in the solder in a motherboard, let alone a monitor.
Possible Solution to Terrorism? (0, Flamebait)
Anonymous Coward | more than 10 years ago | (#8801238)
Perhaps the older parts could be given (or sold!) to any number of Islamic countries that foster a "idiots-with-guns" mentality. Eventually... no more terrorists.!
where's the 8 lbs of lead?? (4, Insightful)
iamhassi (659463) | more than 10 years ago | (#8801253)?? (4, Informative)
JKR (198165) | more than 10 years ago | (#8801268)
Seriously, look at the bigger monitor tubes (especially in the EU); they have a radio-dosage sticker certifying the level of beta radiation emitted, usually at the preset acceleration voltage.
Jon.
Re:where's the 8 lbs of lead?? (5, Informative)
mhifoe (681645) | more than 10 years ago | (#8801275)
The amount of lead in a base unit is limited to solder and tiny amounts within the ICs.
Bravo! (1, Interesting)
kdachev (471319) | more than 10 years ago | (#8801258)
Thumbs up, Intel!
Lesser power consumption, better optimizing compilers, new technology in place of the older x86 and creating more jobs are still in your list... don't forget it!
I myself have doubts they are doing this only because of environmental reasons..., but nevertheless it is a step in the right direction.
What a Load of Twaddle (5, Insightful)
nukenerd (172703) | more than 10 years ago | (#8801284).
Re:What a Load of Twaddle (1, Insightful)
tarunthegreat (746088) | more than 10 years ago | (#8801329)
U also won't stop first-world countries from trying to kill 3rd-world countries either.
Watch out for unwanted side-effects (2, Insightful)
pe1chl (90186) | more than 10 years ago | (#8801292).
Publicity Stunt? (-1, Redundant)
Anonymous Coward | more than 10 years ago | (#8801296)
How many dead computers does it take to have the same enviromental impact of a discarded transformer?
Anonymous Joe
Lead is the least of our worries (5, Interesting)
pdxdada (684092) | more than 10 years ago | (#8801306).
It's just PR (4, Interesting)
RockyMountain (12635) | more than 10 years ago | (#8801308) (2, Informative)
mhifoe (681645) | more than 10 years ago | (#8801424)
The main problem relates to the higher temperatures needed to melt lead free solder. These higher temperatures can stress components and are particuarly worrying in products that have to last 20 years.
Pb Free - Not just Intel (4, Informative)
Anonymous Coward | more than 10 years ago | (#8801313) (2, Funny)
xtronics (259660) | more than 10 years ago | (#8801317)!
Re:Chemophobes - Metalic lead not a danger (0)
Anonymous Coward | more than 10 years ago | (#8801389)
aQazaQa
No deposit No return (1)
zakezuke (229119) | more than 10 years ago | (#8801324)
If manufacturers actually took into account the cost of disposal, it would likely raise prices but could have the benifit of actually not making its way into landfills. The design can actually in theory take into account the fact that all materials used be recovered. Unforunatly I can't see this happening anytime soon.
Since 2000 I've gone though the following CPUs
Pentium 166
pentium 200
AMD k6-3 400
Pentium III 500 [motherboard change]
Pentium III 733
AMD athlon 1700xp (motherboard change]
I have found homes for all the the above... but pretty damn soon they will reach end of life and no bugger would want them anymore. Chances are it'll just end up in a landfill at such time.
Eutectic alloys vs pure tin (4, Interesting)
haggar (72771) | more than 10 years ago | (#8801327).
Re:Eutectic alloys vs pure tin (1, Informative)
Anonymous Coward | more than 10 years ago | (#8801412)
If it gets any greener... (2, Funny)
DatAsian (626692) | more than 10 years ago | (#8801344)
Maybe someone will finally answer my question... (2, Funny)
ChiralSoftware (743411) | more than 10 years ago | (#8801383)
--------
WAP hosting [chiralsoftware.net]
Fair trade (-1, Flamebait)
Doc Ruby (173196) | more than 10 years ago | (#8801391)
Recycling (-1, Offtopic)
Anonymous Coward | more than 10 years ago | (#8801403)
Problems with gold (2, Interesting)
Maljin Jolt (746064) | more than 10 years ago | (#8801417)
The problem with the pure gold was it was contaminated with about 0.9% of mix of platinum and iridium so it was much harder then normal soft pure gold. It was not usable for local dentist nor for making jewellery.
We did not find any usable process how to separate platinum and/or iridium from the gold, so the only practical purpose of the pure gold was.. a magic stick.
Fuddy-duddy leftie destroying the environment (-1, Flamebait)
Anonymous Coward | more than 10 years ago | (#8801423)
I May switch! (1)
BeCre8iv (563502) | more than 10 years ago | (#8801440)
Lead is expensiv and if you make chips by the truckload there is significant savings to be made on logistics.
But if by some quirk of fate, Intel actually start to givashit I may actually pay a little more for an Intel.
'burn' (1)
1u3hr (530656) | more than 10 years ago | (#8801464) (4, Interesting)
__david__ (45671) | more than 10 years ago | (#8801467)
-David | http://beta.slashdot.org/story/44856 | CC-MAIN-2014-41 | refinedweb | 3,090 | 70.02 |
current position:Home>Python crawler from introduction to mastery (IV) extracting information from web pages
Python crawler from introduction to mastery (IV) extracting information from web pages
2022-01-31 17:37:51 【zhulin1028】
「 This is my participation 11 The fourth of the yuegengwen challenge 17 God , Check out the activity details :2021 One last more challenge 」
One 、 Type of data
The types of data in web pages can be divided into the following three categories :
1、 Structured data
Data that can be represented by a unified structure . You can use a relational database to represent and store , Data in two dimensions . The general characteristics are : Data in behavioral units , A row of data represents information about an entity , The properties of each row of data are the same .
such as MySQL Data in database tables :
id name age gender
aid1 ma 46 male
aid2 Jack ma, 53 male
aid3 Robin Li 49 male
2、 Semi-structured data
Is a form of structured data , It does not conform to the data model structure associated in the form of relational database or other data tables , But contains related tags , Used to separate semantic elements and to layer records and fields . therefore , It is also known as a self-describing structure . Common semi-structured data are HTML,XML and JSON etc. , In fact, it is stored in the structure of tree or graph .
such as , A simple XML Express :
<person> <name>A</name> <age>13</age> <class>aid1710</class> <gender>female</gender> </person> Copy code
perhaps
<person> <name>B</name> <gender>male</gender> </person> Copy code
The order of attributes in a node is not important , The number of attributes of different semi-structured data is not necessarily the same . This data format , Free to express a lot of useful information , Include self describing information ( Metadata ). therefore , The scalability of semi-structured data is very good , It is especially suitable for large-scale dissemination on the Internet .
3、 Unstructured data
Data with no fixed structure . All kinds of documents 、 picture 、 video / Audio and so on are unstructured data . For this type of data , We tend to store them directly as a whole , And is generally stored in a binary data format ;
All data except structured and semi-structured data are unstructured data .
Two 、 About XML,HTML,DOM and JSON file
1、XML, HTML, DOM
XML namely Extentsible Markup Language( Extensible markup language ), It's a meta language used to define other languages , Its predecessor is SGML( Standard universal markup language ). It has no label set (tagset), There are no grammatical rules (grammatical rule), But it has syntactic rules (syntax rule). whatever XML Documents must be well constructed for any type of application and for proper parsing (well-formed), That is, every open label must have a matching end label , Do not contain labels in reverse order , And the sentence structure should meet the requirements of technical specifications .XML Documents can be valid (valid), But it doesn't have to be effective . A valid document is one that conforms to its document type definition (DTD) Documents . If a document conforms to a pattern (schema) The provisions of the , So this document is schema valid (schema valid).
HTML(Hyper Text Mark-up Language) Hypertext markup language , yes WWW Description language of .HTML And XML The difference and connection :
XML and HTML Are used to manipulate data or data structures , It is roughly the same in structure , But they are obviously different in essence . The comprehensive information on the Internet is summarized as follows .
( One ) Different grammar requirements :
stay HTML Case insensitive in , stay XML Medium strict distinction .
stay HTML in , Sometimes it's not strict , If the context clearly shows where the paragraph or list key ends , Then you can omitperhaps
stay XML in , An element with a single tag and no matching end tag must have a / Character as end . So the parser knows that it doesn't have to look up the end tag .
stay XML in , Property values must be enclosed in quotation marks . stay HTML in , Quotation marks are available .
stay HTML in , Can have property names without values . stay XML in , All attributes must have corresponding values .
stay XML In the document , The white space is not automatically removed by the parser ; however html It's filtering out spaces .
XML The grammar requirements are better than HTML Strictly .
( Two ) Different marks :
HTML Use inherent markers ; and XML No inherent markers .
HTML Labels are predefined ; XML The label is free 、 Self defined 、 Extensible .
( 3、 ... and ) The effect is different :
HTML It's used to show data ; XML Is used to describe data 、 Storing data , So it can be used as a persistent medium .HTML Combine data and display , Show this data on the page ;xml Separate data from display . XML Designed to describe data , The focus is on the content of the data .HTML Designed to display data , The focus is on the appearance of the data .
XML No HTML substitute ,XML and HTML It's two different languages . XML Not to replace HTML; actually XML It can be regarded as right HTML A supplement to .XML and HTML Different goals HTML The design goal of is to display data and focus on data appearance , and XML The goal of the design is to describe the data and focus on the content of the data .
There's no action XML, And HTML be similar , XML No operations ( Common ground ).
about XML The best description might be : XML It's a cross platform , And soft 、 Hardware independent , Tools for processing and transmitting information .
XML The future will be everywhere ,XML Will become the most common data processing and data transmission tools .
About DOM:
Document object model (Document Object Model, abbreviation DOM), yes W3C Organization recommended standard programming interface for handling extensible markup language . On the web , Organize pages ( Or document ) Objects are organized in a tree structure , The standard model used to represent objects in a document is called DOM.Document Object Model Our history can be traced back to 1990 Microsoft and Netscape Of “ Browser Wars ”, Both sides in order to JavaScript And JScript Life and death , So large scale gives browser powerful function . Microsoft has added a lot of proprietary things to its Web technology , both VBScript、ActiveX、 And Microsoft's own DHTML Format, etc. , So that many web pages using non Microsoft platforms and browsers can not be displayed normally .DOM It is a masterpiece produced at that time .
DOM= Document Object Model, Document object model ,DOM The content and structure of a document can be accessed and modified in a platform and language independent way . let me put it another way , This is to show and deal with a HTML or XML Common methods of documentation .DOM Very important ,DOM The design of object management organization (OMG) Based on the rules of , So it can be used in any programming language . At first people thought it was a kind of letting JavaScript Portability between browsers , however DOM The application of has gone far beyond this scope .DOM Technology enables user pages to change dynamically , For example, you can dynamically show or hide an element , Change their properties , Add an element, etc ,DOM Technology greatly enhances the interactivity of pages .
DOM It's actually a document model described in an object-oriented way .DOM Defines the objects needed to represent and modify documents 、 The behavior and properties of these objects and the relationship between them . You can put DOM It is considered as a tree representation of data and structure on the page , But of course, the page may not be implemented in this tree way .
adopt JavaScript, You can refactor the entire HTML file . You can add 、 remove 、 Change or rearrange items on a page . To change something on the page ,JavaScript You need to get the right HTML Access to all elements in the document . This entrance , Together with HTML Element to add 、 Move 、 Methods and properties changed or removed , They are all obtained through the document object model (DOM).
2、JSON file
JSON(JavaScript Object Notation, JS Object tag ) Is a lightweight data exchange format . It's based on ECMAScript (w3c To formulate the JS standard ) A subset of , Use text format completely independent of programming language to store and represent data . A simple and clear hierarchy makes JSON Become the ideal data exchange language . Easy to read and write , At the same time, it is also easy for machine analysis and generation , And effectively improve the network transmission efficiency .
JSON Rule of grammar :
stay JS In language , Everything is an object . therefore , Any type of support is available through JSON To express , Like strings 、 Numbers 、 object 、 Array etc. .
But objects and arrays are two special and common types :
1. Objects are represented as key value pairs
2. Data is separated by commas
3. Curly braces hold objects
4. Square brackets hold arrays
JSON Key value pairs are used to hold JS A way of targeting , and JS The writing method of the object is also similar ,
key / The key name in the value pair combination is written before and in double quotation marks "" The parcel , Use a colon : Separate , And then it's worth
{"firstName": "Json","class":"aid1710"}
It's easy to understand , Equivalent to this JavaScript sentence :
{firstName : "Json","class":"aid1710"}
JSON And JS Relationship of objects :
A lot of people don't know JSON and JS Relationship of objects , Even who is not clear . Actually , It's understandable :JSON yes JS String representation of object , It uses text to represent a JS Object information , The essence is a string .
Such as :
var obj = {a: 'Hello', b: 'World'}; // This is an object , Note that key names can also be enclosed in quotation marks var json = '{"a": "Hello", "b": "World"}'; // This is a JSON character string , The essence is a string . Copy code
Python About China JSON Simple demonstration of the operation of :
See... For code examples josnTest.py
JSON and XML Comparison :
1. Readability :
JSON and XML Its readability is comparable , On one side is simple grammar , One side is the standard label form , It's hard to tell the difference .
2. Extensibility :
XML It's naturally extensible ,JSON Of course, there are , Nothing is XML Can be extended and JSON But it can't be extended . however JSON stay Javascript Home game , Can be stored Javascript Compound objects , with xml Incomparable advantages .
3. Coding difficulty :
XML There are plenty of coding tools , such as Dom4j、JDom etc. ,JSON There are also tools provided . Without tools , I believe that skilled developers can write what they want quickly xml Documentation and JSON character string , however ,xml There are many more structural characters in the document .
4. Decoding difficulty
XML There are two ways of parsing :
One is to parse through the document model , That is, a group of tags are exported through the parent tag . for example :xmlData.getElementsByTagName("tagName"), But this is to be used when the document structure is known in advance , General encapsulation is not possible .
Another way is to traverse nodes (document as well as childNodes). This can be achieved by recursion , However, the parsed data are still in different forms , Often can not meet the pre requirements . All such extensible structure data must be difficult to parse .JSON The same is true . If you know in advance JSON In the case of structure , Use JSON It's wonderful to transfer data , Can write very practical, beautiful and readable code .
If you are a pure front-end Developer , I'm sure I'll like it very much JSON. But if you're an application developer , I don't like it so much , After all xml Is the real structured markup language , For data transfer . And if you don't know JSON To analyze the structure of JSON Words , It was a nightmare . It takes time and effort not to say , Code can also become redundant and procrastinating , The results are not satisfactory .
However, this does not affect the choice of many foreground developers JSON. because json.js Medium toJSONString() You can see that JSON The string structure of . Of course not using this string , This is still a nightmare . Commonly used JSON When people see this string , That's right JSON The structure of the is clear , It's easier to operate JSON. The above is in Javascript For data transfer only xml And JSON Parsing .
stay Javascript In the territory ,JSON It's home after all , Of course, its advantages are far superior to xml.
If JSON Storage in Javascript Compound objects , And if you don't know its structure , I believe many programmers are crying and parsing JSON Of . In addition to the above ,JSON and XML Another big difference is the effective data rate .JSON It has higher efficiency when transmitted as a packet format , This is because JSON Unlike XML That requires a strict closed label , This greatly improves the ratio of effective data volume to total packets , So as to reduce the same data traffic , The transmission pressure of the network .
Example comparison :
XML and JSON All use structured methods to tag data , Let's make a simple comparison .
use XML Data of some provinces and cities in China are as follows :
<?xml version="1.0" encoding="utf-8"?> <country> <name> China </name> <province> <name> heilongjiang </name> <cities> <city> Harbin </city> <city> Daqing </city> </cities> </province> <province> <name> guangdong </name> <cities> <city> Guangzhou </city> <city> Shenzhen </city> <city> zhuhai </city> </cities> </province> <province> <name> Taiwan </name> <cities> <city> Taipei </city> <city> Kaohsiung </city> </cities> </province> <province> <name> xinjiang </name> <cities> <city> urumqi </city> </cities> </province> </country> Copy code
use JSON Shown by the following :
{ "name": " China ", "province": [{ "name": " heilongjiang ", "cities": { "city": [" Harbin ", " Daqing "] } }, { "name": " guangdong ", "cities": { "city": [" Guangzhou ", " Shenzhen ", " zhuhai "] } }, { "name": " Taiwan ", "cities": { "city": [" Taipei ", " Kaohsiung "] } }, { "name": " xinjiang ", "cities": { "city": [" urumqi "] } }] } Copy code
You can see :JSON Simple grammar format and clear hierarchy are obviously better than XML Easy to read , And in terms of data exchange , because JSON The characters used are more than XML much less , It can greatly save the bandwidth of data transmission .
3、 ... and 、 How to extract information from web pages
1、 XPath And lxml
XPath Is a door in XML The language in which information is found in a document , Yes XPath Our understanding is a lot of advanced XML The basis of application ,XPath stay XML Navigate through elements and attributes in .
lxml It's a XML The third party of Python library , It is encapsulated in the bottom layer with C language-written libxml2 and libxslt, And with simple and powerful Python API, Compatible with and enhanced the famous Element Tree API.
install :pip install lxml
Use :from lxml import etree
1. XPath The term :
stay XPath In context ,XML The document is treated as a node tree , The root node of the node tree is also called the document node . XPath The nodes in the node tree (Node) Divided into seven categories : Elements (Element), attribute (Attribute), Text (Text), Namespace (Namespace), A processing instruction (Processing-instruction), notes (Comment) And document nodes (Document nodes).
to glance at XML Document examples :
<?xml version="1.0" encoding="ISO-8859-1"?> <bookstore> <book> <title lang="en">Harry Potter</title> <author>J K. Rowling</author> <year>2005</year> <price>29.99</price> </book> </bookstore> Copy code
The above XML In the document :
( This is a “ root ”)
J K. Rowling ( This is a “ Elements ”)
lang="en" ( This is a “ attribute ”)
From another perspective, it :
bookstore ( root )
book ( Elements )
title ( Elements )
lang = en ( attribute )
text = Harry Potter ( Text )
author ( Elements )
text = J K. Rowling ( Text )
year ( Elements )
text = 2005 ( Text )
price ( Elements )
text = 29.99 ( Text )
2. The relationship between nodes
Father (Parent): Every element must have a parent node , The parent of the topmost element is the root node . Similarly, every attribute must have a parent , Their parent is the element . The above example XML In the document , root bookstore Is an element book Parent node ,book Is an element title, author, year, price Parent node ,title yes lang Parent node .
Son (Children): An element can have zero or more children . The above example XML In the document ,title, author, year, price yes book Child nodes of .
Compatriot (Sibling): Nodes with the same parent node are siblings of each other , Also known as each other's brother nodes . The above example XM In the document ,title, author, year, price Each other's compatriots .
Forefathers (Ancestor): The parent node of a node 、 Father's father , And so on, all the nodes between the root nodes are traced . The above example XM In the document ,title, author, year, price Our ancestors were book, bookstore.
Progeny (Descendant): The child node of a node 、 Son of son , And so on to all nodes between the last child node . The above example XM In the document ,bookstore Our offspring are title, author, year, price .
3. Select node
The following is the expression of the basic path , remember XPath All path expressions are based on a node , For example, the original current node is usually the root node , This is related to Linux The principle of lower path switching is the same .
Expression description :
nodename Select the node named... Under the matched node nodename The child element node of
/ If the / start , Indicates that the root node is used as the selection starting point .
// Select nodes from the descendants of matched nodes , Regardless of the location of the target node .
. Select the current node .
.. Select the parent element node of the current node .
@ Select Properties .
4. wildcard
* Match any element .
@* Match any property .
node() Match any type of node .
5. Anticipation (Predicates) or Conditional selection
Prediction is used to find a specific node or a node that meets certain conditions , The prediction expression is in square brackets . Use “|” Operator , You can choose to match “ or ” Several paths of conditions .
See the following code for specific examples lxmlTest.py.
6. Axis
XPath Axis : The coordinate axis is used to define the node set for the current node .
Axis name meaning
ancestor Select all predecessor elements and root nodes of the current node .
ancestor-or-self Select all predecessors of the current node and the current node itself .
attibute Select all attributes of the current node .
child Select all child elements of the current node .
descendant Select all descendant elements of the current node .
descendant-or-self Select all descendant elements of the current node and the current node itself .
following Select all nodes after the end tag of the current node in the document .
following-sibling Select all peers after the current node .
namespace Select all namespace nodes of the current node .
parent Select the parent of the current node .
preceding Select all nodes before the start label of the current node .
preceding-sibling Select all peers before the current node .
self Select the current node .
7. Expression for location path
The location path can be an absolute path , It could be a relative path . Absolute path to “/” start . Each path includes one or more steps , Between each step with “/” Separate .
Absolute path :/step/step/…
Relative paths :step/step/…
Each step is calculated according to the nodes in the current node set .
Step (step) Including three parts :
Axis (axis): Defines the relationship between the selected node and the current node .
Node test (node-test): Identify nodes inside a coordinate axis .
Anticipation (predicate): A pre judgment condition is proposed to filter the node set .
Step grammar : Axis :: Node test [ Anticipation ]
2、 BeautifulSoup4
Beautiful Soup Yes, it is Python Write a HTML/XML The parser , It can handle non-standard tags well and generate parse trees (parse tree). It provides simple and common navigation (navigating), Search and modify the parse tree . It can save you a lot of programming time .
install :(sudo) pip install beautifuilsoup4
Use :
Import... Into the program Beautiful Soup library :
from BeautifulSoup import BeautifulSoup # For processing HTML from BeautifulSoup import BeautifulStoneSoup # For processing XML import BeautifulSoup # To get everything Copy code
# Code example from bs4() Copy code
Locate some soup The element is simple , For example, the above example :> Copy code
You can also use soup, Get a specific tag or tag with a specific attribute , modify soup It's also very simple. ;
BS4 And lxml Comparison :
lxml C Realization , Only local traversal , fast ; complex , Grammar is not very friendly ;
BS4 Python Realization , The entire document will be loaded , slow ; Simple ,API Hommization ;
3、 Regular expressions re
Used to retrieve \ Replace those that match a pattern ( The rules ) The text of , For text filtering or rule matching , The most powerful is regular expressions , yes python An indispensable weapon in reptiles .
Basic matching rules :
[0-9] Any number , Equivalent \d
[a-z] Any lowercase letter
[A-Z] Any capital letter
[^0-9] Match non numeric , Equivalent \D
\w Equivalent [a-z0-9_], Alphanumeric underline
\W Equivalent pair \w Take the
. Any character
[] Match any internal character or subexpression
[^] Take non... For character set
- Match the preceding character or subexpression 0 Times or times
- Match the previous character at least 1 Time
? Match the previous character 0 Times or times
^ Match the beginning of a string
$ End of match string
Python Using regular expressions
Python Of re modular
pattern Compiled regular expressions
Several important methods :
match: Match once from the beginning ;
search: Match once , From a certain position ;
findall: Match all ;
split: Separate ;
sub: Replace ;
Two modes that need attention :
Greedy mode :(.*)
Lazy mode :(.*?)
- Use regular expressions to achieve the following effect :
hold i=d%0A&from=AUTO&to=AUTO&smartresult=dict
Convert to the following form :
i:d%0A
from:AUTO
to:AUTO
smartresult:dict
summary : Regular ,BS,lxml Comparison
author[zhulin1028] | https://en.pythonmana.com/2022/01/202201311737494815.html | CC-MAIN-2022-27 | refinedweb | 3,622 | 51.07 |
I added a link in the original post above. Here you go:
Wiki: Lesson 1
I’ve written a few basic scripts that help with the workflow for setting up problems similar to
Cats and Dogs. Deep Learning Utilities
There are three simple scripts:
image_download.py,
make_train_valid.py and
filter_img.py I developed these tools because I wanted to very simply experiment with the
Lesson 1 image classifier on a variety of different datasets.
Here is a sample work flow:
image_download.py 'bmw' 500 --engine 'bing' --gui image_download.py 'cadillac' 500 --engine 'google' mv dataset cars filter_img.py cars/bmw filter_img.py cars/cadillac make_train_valid.py cars --train .75 --valid .25
I’ve used these on a variety of training exercises. All I need to do is change the
path in
Lesson1.ipynb and it’s very straight-forward to try out new sets of images.
I’ve added these to
GitHub. Please feel free to clone and use. If anyone has questions or feedback, please feel free to let me know.
I use the Google cloud platform to run the jupyter notebook. However, i often face one problem.
The Kernel is always restarting. The error message is “The kernel appears to have died. It will restart automatically.”
Is there something wrong with my GCP set? I wonder if I ran out of memory.
Hi, I discovered the ML course kind of by accident here. A previous question’s answer implies that it’s okay for us lay public people to watch it, that you won’t get in trouble if I do BUT the lecture says to watch it on course.fast.ai and not on YouTube for reasons which were highly unclear. However, I can’t find those lessons on course.fast.ai.
Should I be waiting until they are up on course.fast.ai or is it okay to watch them on YouTube?
Hi Kaitlin,
Here is the link for Deep Learning Part 1 lessons at fast.ai.
It’s ok to watch them on YouTube until the videos are posted to the website (possibly later with some edits)
machine learning videos
Hi Hwang,
I am running GCE too, and although there may be some weirdness around GCE (google compute engine) the errors, the most common error I have is the GPU running out of memory. You can lower memory by manually lowering the batch size
bs in the
ConvLearner constructor.
Hello,
Is it just me or is Paperspace now requiring security clearance even on the lowest machine after selecting Public Templates. Actually the security block comes on right after selecting Public Templates. This is my 2nd day of waiting for clearance and I am eager to get started. Perhaps they have more customers than they can handle. What are they worried about? North Korean hackers?
I reached the end of the Lesson 1 video, but it stopped at the “Improving our model” section of the Jupyter notebook. Will the rest of the notebook be covered in a later video, or are we supposed to explore the rest of the notebook ourselves?
Encountered the same Paperspace problem, been days and I still can not have access to create even an Ubuntu instance! How long do you guys need to wait to be approved?
Is there a good alternative to Paperspace?
Hello,
I found out from the first lesson that I have to configure web services like ‘paperspace’. But, I have a laptop with GTX1050, is it not possible to configure this GPU itself and run the programs??
Please help me understand this.
Thank you!
Anil
I came across the same issue last week when I was trying to set up my machine. I waited about 3 days before I sent a support ticket and they approved my request within 24 hours after.
I am using Google Cloud Computing, I get 300$ credit free when register. I follow this guide for setup. It take 2-3 days for the step increasing GPU quotas. Everything works fine.
As long as your GPU has CUDA 3.0 you’re good to go. I has GTX 745 but still works fine.
Hi, I had run the lesson 1 notebook earlier on Google Colab without any issue. However yesterday when I tried to re-run it, facing below error.
AxisError: axis 1 is out of bounds for array of dimension 1
The notebook is same as what @manikanta_s put up in this post here
Step: Fine-tuning and differential learning rate annealing
Code:
log_preds,y = learn.TTA()
probs = np.mean(np.exp(log_preds),0)
accuracy(probs, y) <<<<< Error in this line
Error:
---------------------------------------------------------------------------
AxisError Traceback (most recent call last)
in ()
----> 1 accuracy(probs, y)
/usr/local/lib/python3.6/dist-packages/fastai/metrics.py in accuracy(preds, targs) 3 4 def accuracy(preds, targs): ----> 5 preds = np.argmax(preds, axis=1) 6 return (preds==targs).mean() 7 /usr/local/lib/python3.6/dist-packages/numpy/core/fromnumeric.py in argmax(a, axis, out) 1002 1003 """ -> 1004 return _wrapfunc(a, 'argmax', axis=axis, out=out) 1005 1006 /usr/local/lib/python3.6/dist-packages/numpy/core/fromnumeric.py in _wrapfunc(obj, method, *args, **kwds) 50 def _wrapfunc(obj, method, *args, **kwds): 51 try: ---> 52 return getattr(obj, method)(*args, **kwds) 53 54 # An AttributeError occurs if the object does not have AxisError: axis 1 is out of bounds for array of dimension 1
Any idea why I am getting this error now?
-Thanks
Nikhil
Lesson 1: name 'accuracy_np' is not defined
Well, looks like ‘log_preds’ should be used instead of ‘probs’. Is this a recent change in the accuracy function? Thanks.
Hi,
Thanks for the reply!
I am new to these GPU configurations. Could you please clarify my below thoughts.
- We use WEB services for training when one does not have a GPU in his\her PC.
- If the appropriate GPU is available in PC, we do not need to use any WEB services. Right? We just have to configure the GPU available on PC for training.
If my understanding is correct, how do I configure GPU on my laptop for trainin g?
Thank you!
Anil | http://forums.fast.ai/t/wiki-lesson-1/9398?page=9 | CC-MAIN-2018-13 | refinedweb | 1,022 | 75.71 |
Building a Minimal Glibc with Componentization
Glibc componentization is a process to
build a custom minimal set of the glibc C libraries, using only the
necessary objects required by a specific executable or group of
executables. By minimizing the footprint of the libraries,
resource-limited embedded targets can maximize resources available
for applications and storage. This article discusses the
feasibility of componentizing glibc as well as the development of
some custom analysis tools. With the help of these tools it was
possible to build test executables successfully, each with a custom
minimal version of libc.
Embedded systems typically have tighter resource constraints
than desktop computers or servers, although they often are expected
to perform similar functions such as serving web pages and storing
important information. Therefore, the applications they run use
much of the same functionality from the system libraries as their
desktop and server counterparts. With a reduced expectation of
expandability, it is logical to provide a minimal subset of the
same libraries.Independent embedded versions of the system libraries do
exist. While these libraries greatly reduce the footprint, they
sacrifice functionality (such as pthreads), do not guarantee
complete API compatibility with a complete glibc and must be
maintained separately.There are several advantages to building a minimal library
from the source of the complete library. The primary advantage is a
guaranteed equivalent API. Because there is only one source tree to
maintain, whenever glibc is updated so are all the minimal
libraries built from glibc. For example, developers don't have to
concern themselves with whether or not the embedded library's
printf function supports the %f parameter. This enables developers
to design applications on a desktop system, with all the amenities
they have to offer, and deploy them to an embedded target without
concerning themselves with API compatibility. The difficulty of
this approach involves how to create a minimal library from such a
large source tree without over-complicating the source code. This
study investigates the possibility of building a custom libc.so
from only the necessary prebuilt object files of a complete glibc
build.When glibc is linked as the final step of the build
processes, the various objects (1,756 total) satisfy undefined
symbols among themselves. Glibc contains nearly 250,000 implicit
dependencies among its various objects. With this many
dependencies, manually selecting which objects to include would be
tedious at best and impossible at worst. To make this task
manageable, a MySQL database containing all the object dependencies
for all of glibc was implemented. A detailed description of the
library analysis tool can be found in the Sidebar ``Library
Analysis Tool''. With this tool, a list of all the objects needed
to build a custom library can be generated based on the required
symbols of a given application set. From the output of this tool,
three test executables were successfully built, each with a custom
minimal version of glibc. These custom libraries are considerably
smaller than the complete versions, as small as 19% of the original
size for the simplest case.Library Analysis ToolBuilding GlibcThe first step was to build glibc, understand its build
process and note the size of each of its libraries. This analysis
was performed on a clean build of a recent version (2.1.3) with the
crypt and linuxthreads add-ons. The glibc library set consists of
21 libraries and the linker (ld.so); Table 1 lists all of them and
their respective sizes. It should be noted that glibc builds 21
libraries, and of these 21, libc is the largest, accounting for
nearly 50% of the total size. For this reason, this research is
focused on componentizing libc.so, with the reasoning that the
other 20 libraries are already sufficiently modular.Table 1. Original Libraries and
SizesBy default, glibc builds three versions of its libraries:
static, shared and profiled. Only the process of building the
shared libraries is relevant to this study. This process consists
of five steps:
- All the object files (.os) are built with the -fPIC
flag to gcc, creating position-independent code.
- For each directory, a listing of every object from
that directory to be linked into libc is created in a stamp.os
file.
- An archive, libc_pic.a, is created from these lists
using ar.
- This archive is made relocatable with the -r flag
to gcc.
- The relocatable archive is linked into a shared
library, libc.so.
Preparing an ApplicationPrior to building a custom shared library, it is necessary to
determine which objects from libc.so will be needed for the target
application(s). This is done by compiling and linking the
application(s) to the newly built glibc, not the system glibc, and
then adding that application to the database managed by the
analysis tool. In order to avoid the need to install the newly
built glibc, the correct options must be passed at compile time to
link against the new library set.The sample application, test_printf.c, follows:
#include "stdio.h" int main() { int i; for (i = 0; i < 10; i++) { printf("iteration: %02d\n", i); } return 0; }
It is compiled with the commands shown in Listing 1. Note
that the system startup files and default libraries are omitted
with the -nostdlib and -nostartfiles options. They are replaced
with the startup files from the new glibc build (crt1.o, crti.o,
crtn.o, etc.), and the newly built libraries are explicitly
specified.
Listing 1. Compiling
test_printf.cThis application must be executed with the new loader as well
(or it will not find the right libraries). The command in Listing 2
specifies the new loader and library path and executes the
application. It can be verified that the appropriate libraries are
loaded by prepending strace to the previous command and examining
the output (the lines starting with open are of interest).Listing 2. Specifying the New Loader
and Library Path and Executing the ApplicationThe program is then added to the database with the
addApplication.pl script:
./addApplication ../projects/testcases/test_printf
Building a Minimal libc.soA minimal libc.so can be built based on any set of
applications in the database. The following example will use a
single application (test_printf from above) as the source for
required objects. The process, outlined below, consists of the
following five steps:
- Generate a list of required object files,
libc_objects.master.
- Generate a customized set of libc_objects
files.
- Create an archive, libc_pic.a, from these lists
using ar.
- Make the archive relocatable with the -r flag to
gcc.
- Link the relocatable archive into a shared library,
libc.so.
This process should be executed in the minilib directory,
containing only the Makefile and associated scripts. The Makefile
variable GLIBCPATH has to be updated to the path where glibc was
built; the rest of the process is automated with the
make command. The library analysis tool provides
a list of the object files that provides the symbols explicitly
required by an application, as well as the implicitly required
objects. This list, libc_objects.master, is generated by the
getAppDeps.pl script and should be copied to the minilib directory.
Running make first executes the script
getstamps, which descends into the glibc source directory and
recursively copies every stamps.os file to an equivalent tree
within the current directory. These stamps.os files are formatted
to list one object per line and are then sorted alphabetically. The
newly formatted stamp.os files are then joined with
libc_objects.master to create an intersection of the two files,
effectively removing any unnecessary objects from the list. The
full path is appended to the objects in the list, and the result is
stored in libc_objects (one per directory). With all the
libc_objects files in place, the custom library is ready to be
linked.The various commands needed to link the final shared library
were taken from the glibc make process and modified to account for
the new build location and object-list filenames (libc_objects).
Linking is done in three steps. First, ar is
used to link all the objects listed in the libc_objects files into
one archive with the command in Listing 3.Listing 3. Linking the Objects in
libc_objects into One ArchiveSecond, the archive is made relocatable:
gcc -nostdlib -nostartfiles -r -o libc_pic.os -Wl,-d -Wl,--whole-archive libc_pic.a
The -r option here generates relocatable code in the output
file, libc_pic.os; -nostdlib and -nostartfiles prevent gcc from
linking in the standard system libraries and startup files;
--whole-archive instructs gcc to include everything from the
archives listed after --whole-archive and before
--no-whole-archive, and not just the symbols explicitly required by
the other objects scheduled for link.
Finally, the shared library is created, as shown in Listing
4.Listing 4. The Shared
LibraryThe linker option, --version-script, acts as a filter for
exported symbols, providing complete control over which symbols are
exported. Even if a symbol exists in the objects and archives
linked into the library, they will not be exported by the final
shared library unless they are listed in the version-script,
libc.map. The -e option forces __libc_main as the library's entry
point. The -u option forces the symbol __register_frame to be
undefined, forcing a link with libgcc.a, which provides this
symbol. And then -rpath-link specifies the first set of directories
to search for share libraries specified on the command line, such
as ld.so. It should be noted that as these commands were taken from
the partially automatically generated commands from the glibc build
process, it is likely that there are some unnecessary paths and
even unnecessary options listed.The resulting library is placed in the top-level directory as
libc.so, a nonstripped shared library.When linking the application it is possible that the
libc_objects.master list is not complete, and undefined symbol
errors are the result. These symbols must be tracked down (using
the findsymbol script), and their providing objects should be
appended to the libc_objects.master list. Running make
clean and make will attempt to rebuild
the shared library with the updated object list. In its current
state, the library analysis tool provides information assuming that
a custom version of every library will be built. Since only libc.so
is being rebuilt in this example, if the application requires
pthreads, the complete libpthread.so library will be used. If it
requires something of libc.so that the application does not, it
must be added manually. There are generally one or two objects that
must be added to the list. This manual step should be eliminated
with future versions of the analysis tool.Testing the Minimal LibraryTo test the custom library, the application for which it was
built must be relinked, using the new library. The new libc.so must
be copied into the glibc source tree, replacing the old one.
Running make again recompiles the test
application, linking to the new minimal library. This analysis
tested three test applications, each with unique requirements of
libc.so (see Table 2).Table 2. Test Cases and Minimal
Library StatisticsConclusionGlibc componentization offers the most customizable
libraries, while requiring very little from the developer. The
advantages for componentization include rapid development, API
consistency and by using the stock glibc source tree, zero
maintenance due to a forked tree. Target devices that are resource
limited, but that will be used for varying tasks (such as PDAs),
should consider other options such as glibc profiling. A profiled
version of glibc could be built so that frequently accessed
functions are grouped together in pages. Devices not so restricted
as to resources may find the best solution simply is to use the
complete library. This approach allows for future development of
new and more functional applications, without the need to redeploy
the system libraries as well. Componentization finds its
application in very specialized devices where resources are at a
premium, and the applications it must run are fixed and known prior
to deployment.This process defines dependencies at the object level; it
does not offer as high a level of granularity as a system based on
symbols could, but it is relatively simple and in no way modifies
the glibc source tree. The library could be reduced further by
implementing simplified versions of some of the larger components,
but this too would require modifying the source code. The test
cases show that glibc can be componentized with reasonable
granularity at the object level, and although not as fine as at the
symbol level, this process is far easier and requires less effort
from all parties involved. The process discussed can be used to
implement any standards-compliant library proposed by third parties
as well as to create completely customized minimal libraries for a
specific application set when no standard is appropriate.GlossaryResources
Darren Hart is a
24-year-old senior in Brigham Young University's undergraduate
Computer Engineering program. His fields of interest and study
include embedded systems and embedded application development as
well as operating systems--Linux in particular. He has done three
consecutive co-ops with IBM, most recently with the Linux
Technology Center where he researched glibc
componentization. | https://www.linuxjournal.com/article/5457 | CC-MAIN-2020-40 | refinedweb | 2,184 | 55.03 |
React Docs Net
Take the JSON output from react-docgen, and convert it to C# ViewModels for consumption in .NET projects. Why would you do this? It allows us to define the ViewModels in the Frontend where we actually use them.
⚠️ Warning
This is a basic rewrite of the React props to C#/.NET. No validation is done on the actual files. Not all Flow features are supported, since there's not a simple way to convert them to C#.
- All models are converted to upper camelcase, with
ViewModelappendend.
- Enum models are converted to upper camelcase, with
Typesappended
- Flow
numberis converted to
int. Use
@type {TYPE}in comment tag for the prop, to change the number type.
Requirements
- Node 8.x+
- Flow Currently only supports extracting models from Flow Types.
- react-docgen JSON files
Usage
Add the dependency
yarn add @charlietango/react-docs-net --dev
Generate JSON files with
react-docgen, and process them:
const docNet = require('@charlietango/react-docs-net'); docNet.createModels([{name: 'CustomModel', docs: {...}}], { namespace: 'Dk.CharlieTango', dest: 'dist/models', // Add dest to write to files });
or calling the
bin
$ react-doc-net src/models/**/*.json --ns Dk.CharlieTango --dest dist/models
The
.cs view models will be created in
dist/models.
Config
JSDoc flags
You can use these flags in JS comments to modify how a prop is handled.
@internal- Ignore this prop - It's only used internally in the React App.
@type- Set a specific C# type for this prop - Like
decimal
@static- Marks classes or fields as static.
@generic- Should always be put above a generic prop
@genericTypes T: Enum- Optional. Should be placed before the current type definition | https://www.npmtrends.com/@charlietango/react-docs-net | CC-MAIN-2022-27 | refinedweb | 274 | 60.01 |
I'm having trouble reading and writing a dynamically allocated array of structs to a file. I think it has to do with the way I'm allocating memory, because if I statically allocate them (i.e. vector vectout[3]) then it seems to work better, but I still get a debug assertion error with something about an invalid heap.
With the code the way it is displayed below, the file it outputs has random numbers in it, and I get a debug assertion error about an invalid block type.
Note: this is just some test code...in the real code, I'm reading the number of vectors from a header, dynamically allocating an array, and then reading them all into the array.
Any help is appreciated. Thanks.Any help is appreciated. Thanks.Code:#include <iostream> #include <fstream> using namespace std; struct vector { float x, y, z; }; int main(int argc, char* argv[]) { vector *vectout; vector *vectin; vectout = new vector[3]; vectin = new vector[3]; vectout[0].x = 1.0f; vectout[0].y = 11.0f; vectout[0].z = 111.0f; vectout[1].x = 2.0f; vectout[1].y = 22.0f; vectout[1].z = 222.0f; vectout[2].x = 3.0f; vectout[2].y = 33.0f; vectout[2].z = 333.0f; for (int i = 0; i < 3; i++) { cout << "vectout[" << i << "] x: " << vectout[i].x << " y: " << vectout[i].y << " z: " << vectout[i].z << endl; } ofstream out; out.open("vector.dat", ios::out | ios::binary); out.write((char *) &vectout, sizeof(vector) * 3); out.close(); ifstream in; in.open("vector.dat", ios::in | ios::binary); in.read((char *) &vectin, sizeof(vector) * 3); in.close(); for (int i = 0; i < 3; i++) { cout << "vectin[" << i << "] x: " << vectin[i].x << " y: " << vectin[i].y << " z: " << vectin[i].z << endl; } delete [] vectin; delete [] vectout; return 0; } | https://cboard.cprogramming.com/cplusplus-programming/78833-file-i-o-problem-dynamically-allocated-struct-array.html | CC-MAIN-2017-43 | refinedweb | 300 | 69.28 |
Web cam and prosilica camera did not work.
Hello, I installed vimbaviewer on windows 7. Then, I installed opencv 2.4.8. I build it using cmake with With_PVAPI and WITH_QT. I am using QT 5.2.0 with mingw32. At this stade the webcam is running correctly but the prosilica camera did not work.
After that I found the link link. So I nstalled the SDK AVT for windows and I rebuild it with With_PVAPI and WITH_QT. At this phase, the webcam did not work and also the prosilica camera always did not work. SEE my code to get video:
#include "opencv2/highgui/highgui.hpp" #include <iostream> using namespace cv; using namespace std; int main() { cout<<"Start"<<endl; VideoCapture cap(0); // open the video camera no. 0 //VideoCapture cap(CV_CAP_PVAPI); // open the video camera prosilica cout<<"end"<<endl;; }
When I execute this code, It seems that exit directly without any error, I did not execute the code. In fact , I get this window:
Thanks for help me
I get the solution | https://answers.opencv.org/question/27965/web-cam-and-prosilica-camera-did-not-work/ | CC-MAIN-2019-39 | refinedweb | 172 | 68.36 |
Ads
i have to write a program that stores vowels (a,e,i,o and u) in an array. ask the user to enter any character. the program should ignore the case of that character (uppercase or lowercase) and determine whether the entered character is vowel or not.
Hello Friend,
Try the following code:
import java.util.*; class Vowel { public static void main(String[] args) { char charr[]={'a','e','i','o','u'}; Scanner input=new Scanner(System.in); System.out.println("Enter a character: "); char ch=input.next().toLowerCase().charAt(0); if((charr[0]==ch)||(charr[1]==ch)||(charr[2]==ch)||(charr[3]==ch)||(charr[4]==ch)){ System.out.println("Entered character is vowel"); } else{ System.out.println("Entered character is not a vowel"); } } }
Thanks
tq so much 4 helping me...i really appreciate it... :)... i've another question..n try to do it...
an internet service provider has three different subscription packages for its customers:
package A : for RM9.99 per month 10 hours for access are provided. additional hours are RM2.99 per hour.
package B : for RM19.99 per month 20 hours of access are provided. additional hours are RM1.99 per hour.
package C : for RM29.99 per month unlimited access is provided.
write a program that calculates a customer's monthly bill. it should store the letter of the package that customer has purchased (A,B or C) and the number of hours that were used. the program should display the total charges.
here my coding so far...
import java.util.*; public class MonthlyBill { public static void main (String[]args) { double payrate=0; int hours;
Scanner scan=new Scanner(System.in); System.out.println("Enter customer's name: "); String name=scan.nextLine(); System.out.println("Enter package('A' or 'B' or 'C') : "); String pack=scan.nextLine(); if(pack.equalsIgnoreCase("A")) { payrate=9.99; System.out.println("Enter your customer's hour here:"); int hour=scan.nextInt(); if(hours >= 10) payrate= payrate * 2.99; } else if(pack.equalsIgnoreCase("B")) { payrate=19.99; System.out.println("Enter your customer's hour here:"); int hour=scan.nextInt(); if(hours >= 20) payrate= payrate * 1.99; } else if(pack.equalsIgnoreCase("C")) { payrate=29.99; System.out.println ("No extra charge for your additional hours"); System.out.println ("You get unlimited access"); } System.out.println ("Total monthly bill \n" + payrate); }
}
i think there is so many error...please try check it out.. | http://www.roseindia.net/answers/viewqa/Java-Beginners/12479-hello.html | CC-MAIN-2017-34 | refinedweb | 401 | 55.4 |
Agenda
See also: IRC log
<trackbot> Date: 06 October 2009
<fjh> Scribe 13 October is Bruce Rich
<fjh> 13 October scribe Bruce, chair is Thomas
fjh: f2f early bird registration has been extended
<fjh>
<fjh>
RESOLUTION: Minutes from 29 September approved
fjh: updated status section in
c14n 2.0
... Shivaram minor comments updated
updated to 1.1 draft
put in transition request to publish 2.0 docs
<fjh> transition request out for 2.0 C14N and Signature, approved
<fjh>
publish date set for Oct. 8
tlr: may not make Oct. 8 but soon
<fjh> issue-124?
<trackbot> ISSUE-124 -- Does w3c support conformance clauses for specification and minimum conformance levels, how to do properly -- OPEN
<trackbot>
fjh: ISSUE-124
... probably can close this issue
if we have test case document with readmes we are probably ok
<fjh> issue-124 close
<fjh> we do not have to have a test document, but do need coverage with tests and associated readme's
<fjh> issue-142?
<trackbot> ISSUE-142 -- Is a single schema needed for XML Signature 1.1 to validate against, given that we have 2nd edition schema plus 1.1 additional schema -- OPEN
<trackbot>
<fjh>
pdatta: multiple schemas are common
<fjh> ACTION: fjh ask xml coordination about use of multiple schemas and validation [recorded in]
<trackbot> Created ACTION-384 - Ask xml coordination about use of multiple schemas and validation [on Frederick Hirsch - due 2009-10-13].
<fjh> I do not believe this would be an issue to stop last call
<fjh> defining multiple schemas is common practice
<fjh> issue-135?
<trackbot> ISSUE-135 -- Review transforms for XML Encryption 1.1 and alignment with Signature 1.1 -- OPEN
<trackbot>
<fjh>
fjh: issue 135
... should we put transforms in alg section?
<fjh> The syntax of the URI and Transforms is defined in XML Signature [XML- DSIG], however XML Encryption places the Transforms element in the XML Encryption namespace since it is used in XML Encryption obtain an octet stream for decryption.
<fjh> currently says
<fjh> The syntax of the URI and Transforms is similar to that of [XML- DSIG]. However, there is a difference between signature and encryption processing.
<bal> small in proposed text:
<bal> The syntax of the URI and Transforms is defined in XML Signature [XML- DSIG], however XML Encryption places the Transforms element in the XML Encryption namespace since it is used in XML Encryption **TO** obtain an octet stream for decryption.
<fjh> proposed resolution - accept change to XML Encryption proposed in , adding "to" before "obtain"
RESOLUTION: accept change to XML Encryption proposed in , adding "to" before "obtain"
<fjh> ACTION: fjh to edit xml encryption 1.1 with change in , adding "to" before "obtain" [recorded in]
<trackbot> Created ACTION-385 - Edit xml encryption 1.1 with this change [on Frederick Hirsch - due 2009-10-13].
<fjh> issue: XML Encryption 1.1 table of contents incomplete, some headings not numbered correctly in document
<trackbot> Created ISSUE-147 - XML Encryption 1.1 table of contents incomplete, some headings not numbered correctly in document ; please complete additional details at .
<fjh> issue-137?
<trackbot> ISSUE-137 -- Normative reference to DRAFT-HOUSLEY-KW-PAD -- OPEN
<trackbot>
bal: make change later today for issue 137
fjh: make decision at TPAC to go
to last call for 1.1 docs
... will need to make decision about ECC requirements
... will make recommendation on ECC before going to last call t
<fjh> include XML Signature 1.1, XML Encryption 1.1, XML Security Generic Hybrid Ciphers, XML Signature Properties
fjh: list of docs: dsig an enc
1.1, generic hybrid ciphers, signature properties
... shouldn't go to last call w/o some impl experience?
tlr: impl exp not critical for last call
<fjh> Also publish an update to XML Security Algorithms Cross-Reference
<fjh> Plan is to resolve at TPAC F2F to bring XML Signature 1.1, XML Encryption 1.1, Generic Hybrid Ciphers and XML Signature Properties to Last Call
<fjh> issue-9?
<trackbot> ISSUE-9 -- Review WS-I BSP constraints on DSig -- OPEN
<trackbot>
fjh: need some help on this one
hal: will take a look at it, what about 1.0 vs 1.1?
fjh: look at 1.0, then 1.1, signature may be the same
<scribe> ACTION: hal to look at WS-I BSP constraints on DSig [recorded in]
<trackbot> Created ACTION-386 - Look at WS-I BSP constraints on DSig [on Hal Lockhart - due 2009-10-13].
<fjh> issue-32?
<trackbot> ISSUE-32 -- Define metadata that needs to be conveyed with signature, e.g. profile information -- OPEN
<trackbot>
fjh: will wait on this one for Scott to comment on
<fjh> issue-45?
<trackbot> ISSUE-45 -- Multiple or layered signatures -- OPEN
<trackbot>
<fjh> multiple signature blocks discussed last week
fjh: need to check requirements to see if we address this
<scribe> ACTION: Gerald to propose text for requirements for issue-45 [recorded in]
<trackbot> Created ACTION-387 - Propose text for requirements for issue-45 [on Gerald Edgar - due 2009-10-13].
<fjh> issue-60?
<trackbot> ISSUE-60 -- Define requirements for XML Security and EXI usage -- OPEN
<trackbot>
<fjh> requirement -ability to sign an EXI serialization without reformatting it
<scribe> ACTION: Gerald to propose text for requirements for issue-60 [recorded in]
<trackbot> Created ACTION-388 - Propose text for requirements for issue-60 [on Gerald Edgar - due 2009-10-13].
pdatta: added EXI in one of the encoding types in c14n 2.0
<fjh> issue-63?
<trackbot> ISSUE-63 -- Namespace requirements: undeclarations, QNames, use of partial content in new contexts -- OPEN
<trackbot>
fjh: should be a requirement to support QNames in content
<pdatta> EXI is one option for the serialization parameter in C14n 2.0 See
hal: need to be more precise: support QNames in content
pdatta: there is an option in c14n 2.0 to support this
<fjh> c14n2 has option related to QNames in content
<fjh> suggest - add to requirements that should be possible to have QNames in content
<scribe> ACTION: Gerald to propose requirements text for issue-63 [recorded in]
<trackbot> Created ACTION-389 - Propose requirements text for issue-63 [on Gerald Edgar - due 2009-10-13].
<pdatta> This is the section in C14N 2.0 about QNames in content -> , QNames in xsi:type are considered separately
fjh: be careful there are 2 requirements docs, transforms and general
<fjh> have we dealt with issue-63 ?
hal: couple of unresolved
points
... 1) false positives
may be a QName, might not be
<fjh> detecting colon might not be good enough notes hal
some software preprocesses doc and look for QName prefix and add to list of exclusive C14N
extra pass over data so not good for streaming
2) rare cases changes to namespace decl outside of what is signed can still cause false positives
<fjh> QNames in content are inherently ambiguous, since colon is also legitimate text
hal: an example will make it
clear for #2
... you can get two different signature values
<fjh> see Hal's workshop paper
hal: think both are called in out in workshop paper
pdatta: for QNames in content in xsi tags (80%) we have addressed
<fjh> suggestion - record difficult issues in requirements document, note approach taken
<fjh> pratik notes we need more use cases where we use QNames in content
fjh: need to define use cases in reqmts doc and show how we addressed them
hal: section in paper: spurious
validation and QNames in content
... has examples of edge case
<fjh> issue-65?
<trackbot> ISSUE-65 -- Define requirements on transforms -- OPEN
<trackbot>
fjh: thinks this is answered in 2.0 drafts
pdatta: 2.0 doc based on requirements but can take a look
<fjh> suggest this issue can be closed, based on 2.0 requirements and 2.0 signature and 2.0 C14n drafts
<fjh> issue-65 close
<fjh> issue-65 closed
<trackbot> ISSUE-65 Define requirements on transforms closed
<fjh> issue-66?
<trackbot> ISSUE-66 -- Which constraints can we impose on xml data model for simplification -- OPEN
<trackbot>
<fjh> issue-66 dealt with in 2.0 C14N and 2.0 Signature
fjh: think we have addressed this in 2.0 docs, recommend closing
<fjh> issue-66 closed
<trackbot> ISSUE-66 Which constraints can we impose on xml data model for simplification closed
<fjh> issue-68?
<trackbot> ISSUE-68 -- Enable generic use of randomized hashing -- OPEN
<trackbot>
bal: think we decided it didn't
have to be in 1.1 because someone could extend and add it
... but look at it for 2.0
... keep it open for 2.0
... support it depending how much demand/support in community
fjh: next step for someone to make a proposal
<fjh> this one deserves consideration
<fjh> issue-127?
<trackbot> ISSUE-127 -- Should XML Security WG consider supporting and/or defining EXI canonicalization -- OPEN
<trackbot>
fjh: combine with issue-60
<fjh> suggest consolidating this one with ISSUE-60
<fjh> any objection?
<fjh> ACTION: fjh consolidate ISSUE-127 and issue-60 [recorded in]
<trackbot> Created ACTION-390 - Consolidate ISSUE-127 and issue-60 [on Frederick Hirsch - due 2009-10-13].
<fjh> issue-131?
<trackbot> ISSUE-131 -- Is semantic equivalence robustness in requirements document -- OPEN
<trackbot>
<scribe> ACTION: Gerald to see if issue-31 is covered in requirements doc [recorded in]
<trackbot> Created ACTION-391 - See if issue-31 is covered in requirements doc [on Gerald Edgar - due 2009-10-13].
<fjh> action-391 closed
<trackbot> ACTION-391 See if issue-31 is covered in requirements doc closed
<scribe> ACTION: Gerald to see if issue-131 is covered in requirements doc [recorded in]
<trackbot> Created ACTION-392 - See if issue-131 is covered in requirements doc [on Gerald Edgar - due 2009-10-13].
<fjh> issue-131?
<trackbot> ISSUE-131 -- Is semantic equivalence robustness in requirements document -- OPEN
<trackbot>
<fjh> issue-136?
<trackbot> ISSUE-136 -- Is normalization of prefixes a goal for 2.0 c14n -- OPEN
<trackbot>
<fjh> believe we have option for this 2.0, need to check
<fjh> issue-139?
<trackbot> ISSUE-139 -- Need to collect streaming XPath requirements -- OPEN
<trackbot>
pdatta: still under discussion what the subset is
fjh: suggest we need to put something in reqmts doc
pdatta: looked at some of the
papers, they still have some restrictions on XPath for
streaming
... we need to define our subset somewhere in the middle
... will try to define our subset more clearly
hal: is there enough interest in defining an XPath subset is acceptable to community?
pdatta: many uses are just simple
XPath expressions
... ws security policy group should review 2.0 doc
<fjh> ws-sx, sstc
<fjh> ACTION: fjh announce 2.0 to oasis security tcs, draw attention to points [recorded in]
<trackbot> Created ACTION-393 - Announce 2.0 to oasis security tcs, draw attention to points [on Frederick Hirsch - due 2009-10-13].
<fjh>
fjh: streamable XPath
pdatta: in our subset it didn't have the issues in jeni's blog
<fjh> pratik notes that proposed 2.0 subset of XPath is different from other subsets that have been critiqued in research papers
pdatta: some expressions cannot
be done in one pass
... can do it in one pass but would require a lot of memory
<fjh> when we publish XPath subset as part of our 2.0 FPWD we can then seek constructive feedback
pdatta: proposed a simpler subset - we can now decide if we want to do any advanced ones
<fjh>
fjh: need to move forward on 1.1
interop
... can we use TPAC f2f to get things moving?
mullan: will be able to participate in interop testing DEREncodedKeyValue | http://www.w3.org/2009/10/06-xmlsec-minutes.html | CC-MAIN-2015-06 | refinedweb | 1,911 | 60.95 |
This is the first of a series of tutorials on Programming in Objective-C. It's not about iOS development though that will come with time. Initially though, these tutorials will teach the Objective-C language. You can run them using ideone.com.
Eventually we'll want to go a bit further than this, compiling and testing Objective-C on Windows and I'm looking at GNUStep or using Xcode on Macx.
- Want to learn C Programming? Try our free C Programming Tutorials
Before we can learn to write code for the iPhone, we really need to learn the Objective-C language. Although I'd written a developing for iPhone tutorial before, I realized that the language could be a stumbling block.
Also memory management and compiler technology has changed dramatically since iOS 5, so this is a restart.
To C or C++ developers, Objective-C can look quite odd with it's message sending syntax [likethis] so, a grounding in a few tutorials on the language will get us moving in the right direction.
What is Objective-C?
Developed over 30 years ago, Objective-C was backwards compatible with C but incorporated elements of the programming language Smalltalk.
In 1988 Steve Jobs founded NeXT and they licensed Objective-C. NeXT was acquired by Apple in 1996 and it was used to build the Mac OS X Operating System and eventually iOS on iPhones and iPads.
Objective-C is a thin layer on top of C and retains backward compatability such that Objective-C compilers can compile C programs.
Installing GNUStep on Windows
These instructions came from this StackOverflow post. They explain how to install GNUStep for Windows.
GNUStep is a MinGW derivative that lets you install a free and open version of the Cocoa APIs and tools on many platforms. These instructions are for Windows and will let you compile Objective-C programs and run them under Windows.
From the Windows Installer page go to the FTP site or HTTP Access and download the latest version of the three GNUStep installers for the MSYS System, Core and Devel. I downloaded gnustep-msys-system-0.30.0-setup.exe, gnustep-core-0.31.0-setup.exe and gnustep-devel-1.4.0-setup.exe. I then installed them in that order, system, core and devel.
Having installed those, I ran a command line by clicking start, then clicking run and typing cmd and pressing enter. Type gcc -v and you should see several lines of text about the compiler ending in gcc version 4.6.1 (GCC) or similar.
If you don't, ie it says File not found then you may have another gcc already installed and need to correct the Path. Type in set at the cmd line and you'll lots of environment variables. Look for Path= and many lines of text which should end in ;C:\GNUstep\bin;C:\GNUstep\GNUstep\System\Tools.
If it doesn't, then open the Windows Control Panel look for System and when a Window opens, click Advanced System Settings then click the Environment variables. Scroll down the System Variables list on the Advanced tab until you find Path. Click Edit and select All on the Variable Value and paste it into Wordpad.Now edit the paths so you add the bin folder path then select all and paste it back into the Variable value then close all the windows. Press ok, open a new cmd line and now gcc -v should work.
Mac Users
You should sign up to the free Apple development programs and then download Xcode. There's a bit of setting up a Project in that but once it's done (I'll cover that in a separate tutorial), you will be able to compile and run Objective-C code. For now the Ideone.com website provides the easiest method of all for doing that.
What's Different about Objective-C?
About the shortest program you can run is this:
#import <Foundation/Foundation.h>
int main (int argc, const char *argv[])
{
NSLog (@"Hello World") ;
return (0) ;
}
You can run this on Ideone.com. The output is (unsurprsingly) Hello World, though it will be sent to stderr as that's what NSLOG does.
Some Points.
- #import is the Objective-C equivalent of #include in C.
- Instead of zero terminated C string I've used Objective-C's strings. These always start with @ as in @"Example of a string".
- The main function is no different.
In the next Objective-C tutorial I'll look at objects and OOP in Objective-C. | http://cplus.about.com/od/iphonecodingtutorials/a/Objective-c-Programming-Online-Tutorial-One.htm | CC-MAIN-2014-15 | refinedweb | 762 | 64.91 |
C Pointer To Anonymous Struct
Is there a way to get a pointer to an anonymous struct? With out anonymous structs I could write the following:
struct a{ int z; }; struct b{ int y; struct a *x; }
This works fine, but I only use
struct a within
struct b and it seems redundant to pollute the global namespace with it. Is there a way I could define a pointer (
x) to an anonymous struct. Something that would probably look like the following:
struct b{ int y; struct { int z; } *x; }
Or is this valid on its own?
1 answer
- answered 2018-11-08 08:07 Antti Haapala
Yes you can do this. But there is a complication: there is no way to directly declare another pointer to same type - or an object of that type, because... the struct type is anonymous.
It is still possible to use it however, by allocating memory for it with
malloc, as conversions from
void *to any pointer to object are possible without an explicit cast:
struct b { int y; struct { int z; } *x; } y; y.x = malloc(sizeof *y.x * 5);
Why would you think that this is better than polluting the namespace is beyond my imagination.
GCC provides the
typeofso you can increase insanity by things like
typeof(y.x) foo;
or even declare a structure of that type
struct b y; typeof(y.x[0]) foo; foo.z = 42; y.x = &foo; | http://quabr.com/53203535/c-pointer-to-anonymous-struct | CC-MAIN-2019-09 | refinedweb | 240 | 71.95 |
While this blog is typically strictly for Scala developers interested in strongly-typed programming, this particular article is of interest to Java developers as well. You don’t need to know Scala to follow along.
Scala makes a welcome simplification in its type system:
type arguments
are always required. That is, in Java, you may (unsafely) leave off
the type arguments for compatibility with pre-1.5 code,
e.g.
java.util.List, forming a
raw type.
Scala does not permit this, and requires you to pass a type argument.
The most frequent trouble people have with this rule is being unable
to implement some Java method with missing type arguments in its
signature, e.g. one that takes a raw
List as an argument. Let us
see why they have trouble, and why this is a good thing.
Stripping the type argument list, e.g. going from
java.util.List<String> to
java.util.List is an unsafe cast.
Wildcarding
the same type argument, e.g. going from
java.util.List<String> to
java.util.List<?>, is safe. The latter type is written
java.util.List[_], or
java.util.List[T] forSome {type T}, in
Scala. In both Java and Scala, this is an
existential type.
As compiled with
-Xlint:rawtypes -Xlint:unchecked:
import java.util.Arrays; import java.util.ArrayList; import java.util.List; public abstract class TestEx { public static List<String> words() { return new ArrayList<>(Arrays.asList("hi", "there")); } // TestEx.java:17: warning: [rawtypes] found raw type: List // missing type arguments for generic class List<E> // where E is a type-variable: // E extends Object declared in interface List // ↓ public static final List wordsRaw = words(); // there is no warning for this public static final List<?> wordsET = words(); }
Also note that there is no warning for the equivalent to
wordsET in
Scala. Because it, like javac, knows that it’s safe.
scala> TestEx.words res0: java.util.List[String] = [hi, there] scala> val wordsET = TestEx.words : java.util.List[_] wordsET: java.util.List[_] = [hi, there]
The reason that existentials are safe is that the rules in place for values of existential type are consistent with the rest of the generic system, whereas raw types contradict those rules, resulting in code that should not typecheck, and only does for legacy code support. We can see this in action with two Java methods.
public static void addThing(final List xs) { xs.add(42); } public static void swapAround(final List<?> xs) { xs.add(84); }
These methods are the same, except for the use of raw types versus existentials. However, the second does not compile:
TestEx.java:26: error: no suitable method found for add(int) xs.add(84); ^ method Collection.add(CAP#1) is not applicable (argument mismatch; int cannot be converted to CAP#1) method List.add(CAP#1) is not applicable (argument mismatch; int cannot be converted to CAP#1) where CAP#1 is a fresh type-variable: CAP#1 extends Object from capture of ?
Why forbid adding 42 to the list? The element type of list is unknown. The answer lies in that statement: its unknownness isn’t a freedom for the body of the method, it’s a restriction. The rawtype version treats its lack of knowledge as a freedom, and the caller pays for it by having its data mangled.
public static void testIt() { final List<String> someWords = words(); addThing(someWords); System.out.println("Contents of someWords after addThing:"); System.out.println(someWords); System.out.println("Well that seems okay, what's the last element?"); System.out.println(someWords.get(someWords.size() - 1)); }
And it compiles:
TestEx.java:23: warning: [unchecked] unchecked call to add(E) as a member of the raw type List xs.add(42); ^ where E is a type-variable: E extends Object declared in interface List
But when we try to run it:
scala> TestEx.testIt() Contents of someWords after addThing: [hi, there, 42] Well that seems okay, what's the last element? java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.String at rawtypes.TestEx.testIt(TestEx.java:32) ... 43 elided
It is a mistake to think that just because some code throws
ClassCastException, it must be to blame for a type error. This line
is blameless. It is the fault of the unchecked cast when we called
addThing, and more specifically, the unsafe assumption about the
List’s element type that was made in its body.
When we used the wildcard, we were forbidden from doing the unsafe thing. But what kinds of things can we do with the safe, existential form? Here’s one:
private static <E> void swapAroundAux(final List<E> xs) { xs.add(xs.get(0)); } public static void swapAround(final List<?> xs) { swapAroundAux(xs); }
In other words: let
E be the unknown element type of
xs.
xs.get() has type
E, and
xs.add has argument type
E. They
line up, so this is okay, no matter what the element type of
xs
turns out to be. Let’s try a test:
scala> val w = TestEx.words w: java.util.List[String] = [hi, there] scala> TestEx.swapAround(w) scala> w.get(w.size - 1) res1: String = hi
The body of
swapAround is guaranteed not to mangle its argument by
the type checker, so we, as a caller, can safely call it, and know
that our argument’s type integrity is protected.
Scala has more features to let us get away without
swapAroundAux.
This translation uses a lowercase
type variable pattern
to name the existential. To the right of the
=>, we can declare
variables of type
e and use
e to construct more types, while still
referring to the
_ in the
xs argument’s type. But in this case,
we just do the same as
swapAroundAux above.
def swapAround(xs: java.util.List[_]): Unit = xs match { case xs2: java.util.List[e] => xs2.add(xs2.get(0)) }
Let’s consider the
xs.get() and
xs.add methods, which have return
type and argument type
E, respectively. As you can’t write the name
of an existential type in Java, what happens when we “crush” it,
choosing the closest safe type we can write the name of?
First, we can simplify by considering every existential to be bounded.
That is, instead of
E, we think about
E extends Object super
Nothing, or
E <: Any >: Nothing in Scala. While
Object or
Any
is the “top” of the type hierarchy, which every type is a subtype
of,
Nothing is the “bottom”, sadly left out of Java’s type system,
which every type is a supertype of.
For
get, the
E appears in the result type, a covariant position.
So we crush it to the upper bound,
Any.
scala> wordsET.get _ res2: Int => Any = <function1>
However, for
add, the
E appears in the argument type, a
contravariant position. So if it is to be crushed, it must be
crushed to the lower bound,
Nothing, instead.
scala> (wordsET: java.util.Collection[_]).add _ : (Any => Boolean) <console>:12: error: type mismatch; found : _$1 => Boolean where type _$1 required: Any => Boolean (wordsET: java.util.Collection[_]).add _ : (Any => Boolean) ^ scala> (wordsET: java.util.Collection[_]).add _ : (Nothing => Boolean) res8: Nothing => Boolean = <function1>
Each occurrence of an existential in a signature may be crushed
independently. However, a variable that appears once but may be
distributed to either side, such as in a generic type parameter, is
invariant, and may not be crushed at that point. That is why the
existential is preserved in the inferred type of
wordsET itself.
scala> wordsET res9: java.util.List[_] = [hi, there]
Herein lies something closer to a formalization of the problem with
raw types: they crush existential occurrences in contravariant and
invariant positions to the upper bound,
Object, when the only safe
positions to crush in this way are the covariant positions.
Listand
List<?>relate?
It is well understood that, in Java,
List<String> is not a subtype
of
List<Object>. In Scala terms, this is because all type
parameters are invariant, which has exactly the meaning it had in
the previous section. However, that doesn’t mean it’s impossible to
draw subtyping relationships between different
Lists for different
type arguments; they must merely be mediated by existentials, as is
common in the Java standard library.
The basic technique is as follows: we can convert any
T in
List<T>
to
? extends T super T. Following that, we can raise the argument
to
extends and lower the argument to
super as we like. A
? by
itself, I have described above, is merely the most extreme course of
this formula you can take. So
List<T> for any
T is a subtype of
List<?>. (This only applies at one level of depth;
e.g.
List<List<T>> is not necessarily a subtype of
List<List<?>>.)
Does this mean that
List is a subtype of
List<?>? Well, kind of.
Following the rule for specialization of method signatures in
subclasses, we should be able to override a method that returns
List<?> with one that returns
List, and override a method that
takes
List as an argument with one that takes
List<?> as an
argument. However, this is like building a house on a foam mattress:
the conversion that got us a raw type wasn’t sound in the first place,
so what soundness value does this relationship have?
Let’s see the specific problem that people usually encounter in Scala.
Suppose
addThing, defined above, is an instance member of
TestEx:
class TestEx2 extends TestEx { @Override public void addThing(final List<?> xs) {} }
Or the Scala version:
class TestEx3 extends TestEx { override def addThing(xs: java.util.List[_]): Unit = () }
javac gives us this error:
TestEx.java:48: error: name clash: addThing(List<?>) in TestEx2 and addThing(List) in TestEx have the same erasure, yet neither overrides the other public void addThing(final List<?> xs) {} ^ TestEx.java:47: error: method does not override or implement a method from a supertype @Override ^
scalac is forgiving, though. I’m not sure how forgiving it is. However, the forgiveness is unsound: it lets us return less specific types when overriding methods than we got out.
Stop using raw types.
If you maintain a Java library with raw types in its API, you are doing a disservice to your users. Eliminate them.
If you are using such a library, report a bug, or submit a patch,
to eliminate the raw types. If you add
-Xlint:rawtypes to the
javac options, the compiler will tell you where you’re using
them. Fix all the warnings, and you’re definitely not using raw
types anymore.
Help Java projects, including your own, avoid introducing raw types
by adding
-Xlint:rawtypes permanently to their
javac options.
rawtypes is more serious than
unchecked; even if you do not
care about
unchecked warnings, you should still turn on and fix
rawtypes warnings.
You may also turn on
-Xlint:cast to point out casts that are no
longer necessary now that your types are cleaner. If possible, add
-Werror to your build as well, to convert
rawtypes warnings to
errors.
Adding wildcards isn’t a panacea. For certain raw types, you need to add a proper type parameter, even adding type parameters to your own API. The Internet has no copy and paste solutions to offer you; it all depends on how to model your specific scenario. Here are a few possibilities.
Pass a type argument representing what’s actually in the structure.
For example, replace
List with
List<String> if that’s what it
is.
Pass a wildcard.
Propagate the type argument outward. For example, if you have a
method
List doThis(final List xs), maybe it should be
<E>
List<E> doThis(final List<E> xs). Or if you have a
class
Blah<X> containing a
List, maybe it should be a
class Blah<A,
X> containing a
List<A>. This is often the most flexible
option, but it can take time to implement.
Combine any of these. For example, in some circumstances, a more
flexible version of #3 would be to define
Blah<A, X> containing a
List<? extends A>.
Wildcards and existentials are historically misunderstood in the Java community; Scala developers have the advantage of more powerful language tools for talking about them. So if you are unsure of how to eliminate some raw types, consider asking a Scala developer what to do! Perhaps they will tell you “use Scala instead”, and maybe that’s worth considering, but you’re likely to get helpful advice regardless of how you feel about language advocacy.
As you can see, the Java compatibility story in Scala is not as simple as is advertised. However, I favor the strong stance against this unsound legacy feature. If Scala can bring an end to the scourge of raw types, it will have been worth the compatibility trouble.
This article was tested with Scala 2.11.5 and javac 1.8.0_31.
Unless otherwise noted, all content is licensed under a Creative Commons Attribution 3.0 Unported License.Back to blog | https://typelevel.org/blog/2015/02/26/rawtypes.html | CC-MAIN-2019-13 | refinedweb | 2,189 | 66.94 |
Internationalize react apps within a lunch break
i18nize-react
Internationalize legacy react apps in a lunch break.
i18nize-react finds and replaces all the hardcoded string literals in your react project with i18n bindings. It uses babel to walk on react components and process them.
Getting started
- First install the
i18nize-reactglobally using npm
npm i -g i18nize-react
- Now in your react app run
npm install i18next
Tested on
i18next other variants should work with minor changes.
Make sure there are no unstaged changes, you may need to
git reset --hard.
- Now run.
i18nize-react
Go for lunch
Run your favourite linter to clean things up.
It should create four files
src/i18n/init.js,
src/i18n/keys.js,
src/i18n/english.js,
src/i18n/chinese.js. Add the line
import ./i18n/init.js;in your App's entry point. Usually it is
src/index.js.
Change the
lngkey in your browser's local storage to see changes.
Contributions
Create an issue ticket with a before and after code snippets, before writing any code and raising a PR.
For bugs create a minimum reproducible piece of code with original, received and expected snippets.
Troubleshooting
Sometimes
i18ize-reactmight conflict with the babel plugins installed in your project. If that happens go up one folder (
cd ..) and then run
i18ize-react ./your-dir ./your-dir
By default
i18ize-reactassumes that your code is in
<your workspace dir>/srcbut if you want to change that you can use the third argument. e.g.
i18ize-react ./ ./ webwill crawl
<your workspace dir>/webinstead.
Constant initialization outside react lifecycle is not guaranteed. To resolve this, move all initialized strings inside the component.
// String 1 might not load correctly const string1 = i18next.t(k.STRING1); const MyComponent = () => { // String 2 will load correctly const string2 = i18next.t(k.STRING2); return ( <div> {string1} {string2} </div> ) }
- TIP: Babel's parse and generate often shifts code around which causes files, with no programatic change, to show up in git diff. Sometimes running the linter alone does not fix this problem. A good way to fix this problem is to do a dry run
i18nize-react ./ ./ src true, run your linter and commit the code. Now run
i18nize-reactto run the transform and lint again. Now only the transformed changes should show up in git diff. | https://reactjsexample.com/internationalize-react-apps-within-a-lunch-break/ | CC-MAIN-2019-35 | refinedweb | 385 | 67.45 |
#include <stropts.h> int fdetach(const char *path);
The fdetach() function detaches a STREAMS-based file from the file to which it was attached by a previous call to fattach(3C). The path argument points to the pathname of the attached STREAMS file. The process must have appropriate privileges or be the owner of the file. A successful call to fdetach() causes all pathnames that named the attached STREAMS file to again name the file to which the STREAMS file was attached. All subsequent operations on path will operate on the underlying file and not on the STREAMS file.
All open file descriptions established while the STREAMS file was attached to the file referenced by path, will still refer to the STREAMS file after the fdetach() has taken effect.
If there are no open file descriptors or other references to the STREAMS file, then a successful call to fdetach() has the same effect as performing the last close (2) on the attached file.
Upon successful completion, fdetach() returns 0. Otherwise, it returns −1 | http://docs.oracle.com/cd/E36784_01/html/E36874/fdetach-3c.html | CC-MAIN-2016-40 | refinedweb | 173 | 69.31 |
In my earlier tutorial on creating a Scala REST client using the Apache HttpClient library, I demonstrated how to download the contents of a Yahoo Weather API URL, and then parse those contents using the Scala XML library. I didn't discuss the XML searching/parsing process used in that source code, so in this article I'll take a few moments to look at that code.
First, here's a revised version of that earlier source code, this time showing how to load an XML file from disk using Scala:
import java.io._ import scala.xml.XML object ScalaApacheHttpRestClient1 { def main(args: Array[String]) { // get the xml content from our sample file val xml = XML.loadFile("/Users/al/Projects/Scala/yahoo-weather.xml") // find what i want val temp = (xml \\ "channel" \\ "item" \ "condition" \ "@temp") text val text = (xml \\ "channel" \\ "item" \ "condition" \ "@text") text val currentWeather = format("The current temperature is %s degrees, and the sky is %s.", temp, text.toLowerCase()) println(currentWeather) } }
As you can see, I'm loading a file from disk named
yahoo-weather.xml, and I'm only concerned about the "temp" and "text" nodes within that XML document.
I created this file by saving the contents from the Yahoo Weather API URL. That way I wouldn't have to keep hitting their URL during my tests. Here are the contents of that file:
<?xml version="1.0" encoding="UTF-8" standalone="yes" ?> <rss version="2.0" xmlns: <channel> <title>Yahoo! Weather - Broomfield, CO</title> <link>*</link> <description>Yahoo! Weather for Broomfield, CO</description> <language>en-us</language> <lastBuildDate>Thu, 10 Nov 2011 2:54 pm MST</lastBuildDate> <ttl>60</ttl><yweather:location <yweather:units <yweather:wind <yweather:atmosphere <yweather:astronomy  <item> <title>Conditions for Broomfield, CO at 2:54 pm MST</title> <geo:lat>39.92</geo:lat> <geo:long>-105.09</geo:long> <link>*</link> <pubDate>Thu, 10 Nov 2011 2:54 pm MST</pubDate> <yweather:condition <description> <![CDATA[<img src=""/><br /> <b>Current Conditions:</b><br />Partly Cloudy, 61 F<BR /> <BR /><b>Forecast:</b> <BR />Thu - Partly Cloudy. High: 58 Low: 37 <br />Fri - Mostly Cloudy. High: 58 Low: 39<br /> <br /><a href="*">Full Forecast at Yahoo! Weather</a><BR/> <BR/>(provided by <a href="" >The Weather Channel</a>)<br/>]]></description> <yweather:forecast <yweather:forecast <guid isPermaLink="false">USCO0044_2011_11_11_7_00_MST</guid> </item> </channel> </rss> <!-- api1.weather.sp2.yahoo.com uncompressed/chunked Thu Nov 10 14:23:09 PST 2011 -->
Searching XML using the Scala XML library
Cutting this down to just the basics, by looking at the Yahoo Weather XML document, I know I want to retrieve the "temp" attribute of the yweather:condition node:
<yweather:condition
To search this XML document in Scala, I write the following line of code, using the Scala XPath syntax:
val temp = (xml \\ "channel" \\ "item" \ "condition" \ "@temp") text
This code can be read as "Search the variable named 'xml' for the XML node attribute named 'temp', where that attribute is within a node named 'condition', which is a child of the 'item' tag, which itself is a child of the 'channel' tag."
In that XPath search pattern I'm being very explicit about what I'm searching for, specifying all of the parent and child nodes in the search path. Knowing what the XML document looks like, I could simplify that search path to just look like this, skipping the "channel" and "item" elements in the search path:
val temp = (xml \\ "condition" \ "@temp") text
I'm not an XML parsing wizard, so I'll leave the decision of which XPath expression you want to use up to you. Either way, when this expression is run, the Scala variable named "temp" will have the value "61" after this line of code is run.
The Scala XML \\, \, and @ operators
A couple of notes about the Scala XML \, \\, and @ operators:
- The \ operator doesn't descend into child elements.
- Therefore you have to use the \\ operator to find child elements like "item" and "condition" above.
- You use the "@" character to search for XML tag attributes, such as the "temp" attribute of the yweather:condition tag.
- \ and \\ are called "projection functions", and they return a NodeSeq object.
- The \ and \\ operators (functions) are based on XPath operators, but Scala uses backslashes instead of forward-slashes because forward-slashes are already used for math operations.
Scala, XMLNS namespaces, and attributes
As I just wrote, you use the "@" character to search for XML tag attributes. However, I skipped over the XMLNS namespace issue which caused me so much grief. In short, if you have an XML tag like this which uses an XMLNS namespace which has been properly declared at the beginning of the XML document:
<yweather:condition
you can ignore the "yweather" portion of the tag, and just access the node (and in this case, the desired node attribute) like this:
val temp = (xml \\ "condition" \ "@temp") text
Scala XML parsing, searching, XMLNS namespaces, and XPath - Summary
I ended up covering a lot more ground in this Scala XML tutorial than I planned to, but I hope it has been helpful. In the end, this tutorial covered all of these topics:
- How to load an XML document from a file in Scala.
- How to search XML documents using the Scala XPath syntax.
- Discussed the difference between the \ and \\ XPath operators.
- How to deal with XMLNS namespaces in Scala.
- How to find XML node attributes.
In summary, I hope this article has been helpful. I know the Scala code looks pretty simple, but finding this solution caused me a surprising amount of grief, particularly dealing with the XMLNS namespace in Scala.
Add new comment | https://alvinalexander.com/scala/scala-xml-searching-xmlns-namespaces-xpath-parsing | CC-MAIN-2018-30 | refinedweb | 956 | 59.23 |
Question 1 Assign coin_model_probabilities to a two-item array containing the chance of heads as the first element and the chance of tails as the second element under Gary’s model. Make sure your values are between 0 and 1.
Question 2 We believe Gary’s model is incorrect. In particular, we believe there to be a smaller chance of heads. Which of the following statistics can we use during our simulation to test between the model and our alternative? Assign statistic_choice to the correct answer. 1. The distance (absolute value) between the actual number of heads in 10 flips and the ex- pected number of heads in 10 flips (5) 2. The expected number of heads in 10 flips 3. The actual number of heads we get in 10 flips . In [25]: def coin_simulation_and_statistic (sample_size, model_proportions): return (sample_size * sample_proportions(sample_size,model_proportions)[ 0 ]) coin_simulation_and_statistic( 10 , coin_model_probabilities) Out[25]: 8.0 10
Question 4 Use your function from above to simulate the flipping of 10 coins 5000 times under the pro- portions that you specified in problem 1. Keep track of all of your statistics in coin_statistics .
Let’s take a look at the distribution of statistics, using a histogram.
11
Question 5 Given your observed value, do you believe that Gary’s model is reasonable, or is our alternative more likely? Explain your answer using the distribution drawn in the previous problem.
1.6 6. Submission Once you’re finished, select "Save and Checkpoint" in the File menu and then select "Download as" from the File menu, choosing "Notebook" and then "PDF". Upload the Notebook (.ipynb) into Gradescope and upload the PDF to Moodle under the HW5 assignment. 12 | https://www.coursehero.com/file/p4opk2c/Question-1-Assign-coinmodelprobabilities-to-a-two-item-array-containing-the/ | CC-MAIN-2021-49 | refinedweb | 279 | 56.35 |
#include <wx/richtext/richtextbuffer.h>
The base class for custom field types.
Each type definition handles one field type. Override functions to provide drawing, layout, updating and property editing functionality for a field.
Register field types on application initialisation with the static function wxRichTextBuffer::AddFieldType. They will be deleted automatically on application exit.
Creates a field type definition.
Copy constructor.
Returns true if we can edit the object's properties via a GUI.
Draw the item, within the given range.
Some objects may ignore the range (for example paragraphs) while others must obey it (lines, to implement wrapping)
Implemented in wxRichTextFieldTypeStandard.
Edits the object's properties via a GUI.
Returns the field type name.
There should be a unique name per field type object.
Returns the label to be used for the properties context menu item.
Returns the object size for the given range.
Returns false if the range is invalid for this object.
Implemented in wxRichTextFieldTypeStandard.
Returns true if this object is top-level, i.e. contains its own paragraphs, such as a text box.
Reimplemented in wxRichTextFieldTypeStandard.
Lay the item out at the specified position with the given size constraint.
Layout must set the cached size. rect is the available space for the object, and parentRect is the container that is used to determine a relative size or position (for example if a text box must be 50% of the parent text box).
Implemented in wxRichTextFieldTypeStandard.
Sets the field type name.
There should be a unique name per field type object.
Update the field.
This would typically expand the field to its value, if this is a dynamically changing and/or composite field. | https://docs.wxwidgets.org/trunk/classwx_rich_text_field_type.html | CC-MAIN-2018-51 | refinedweb | 275 | 61.43 |
In this post, we're going to analyze an XKCD comic about dating pools and derive the statistical analysis that's behind the curves shown in the comic.
First of all, let's take a look at the comic:
from IPython.display import Image Image(url="").
Formally, we can denote these two bounds as:
Still more mathematically, if we denote the creepiness rule by $f$, upper and lower bounds by $\text{upper_bound}$ and $\text{lower_bound}$ and your age by $x$ then we have:
$$\text{lower_bound} = f(x)$$ $$f(\text{upper_bound}) = x \Leftrightarrow \text{upper_bound} = f^{-1}(x)$$
This can be easily implemented, as the inverse creepiness rule is "you can't date people older than twice (your age minus 7 years)".
def creepiness_rule(age): return age / 2. + 7 def inverse_creepiness_rule(age): return 2. * (age - 7)
We can prepare a little plot to look at the lower and upper bounds as a function of your age:
from pylab import * %matplotlib inline
ages = linspace(15, 70) plot(ages, creepiness_rule(ages), label='youngest people you can date') plot(ages, inverse_creepiness_rule(ages), label='oldest people you can date') legend(loc=2) xlim((15, 70)) grid(True) xlabel("your age (years)") ylabel("age (years)")
<matplotlib.text.Text at 0x1067969b0>
As one can see from the plot above, the dating interval is getting larger with your age. When computed, the interval can be found to be equal to $\frac{3}{2}age - 14$..
def dating_bounds(age): return (creepiness_rule(age), inverse_creepiness_rule(age))
When you're 18, you can date people in the following interval:
dating_bounds(18)
(16.0, 22.0)
When you're 30, your dating pool is in the following age group:
dating_bounds(30)
(22.0, 46.0)
In the next section, we apply the previous functions to some real numbers to see if we can indeed derive the dating pool curves Randall Munroe is showing in his comic.
Figures such as the ones said to exist in the comic can indeed be found on the interwebz:. Downloading the excel file at, we can continue the analysis and replicate Randall's findings.. As an example, there are 7993 thousand singles of both sexes between the ages of 18 and 19.
ages_lower = array([15, 18, 20, 25, 30, 35, 40, 45, 50, 55, 65, 75, 85]) ages_upper = array([17, 19, 24, 29, 34, 39, 44, 49, 54, 64, 74, 84, 100]) data_both_sexes = array([12740, 7993, 19063, 13970, 9233, 7277, 7403, 7971, 7708, 12733, 7825, 6206, 3428]) data_males = array([6559, 4082, 10079, 7726, 4952, 3639, 3729, 3860, 3557, 5461, 2614, 1592, 826]) data_females = array([6183, 3910, 8986, 6244, 4281, 3638, 3673, 4112, 4151, 7272, 5212, 4616, 2601])
Due to the irregular sampling of the data, the next section will be devoted to building an interpolation function that returns the number of singles between given age bounds.
In this section, we'll be writing a function that computes the number of singles between two age bounds. We first compute the number of singles above a certain age:
def singles_above_age(age, data): """ returns the number of singles found in vector data above a given age """ if age >= ages_lower[-1]: return float(ages_upper[-1] - age) / (ages_upper[-1] - ages_lower[-1] + 1) * data[-1] ind = (ages_lower >= age).nonzero()[0][0] singles = sum(data[ind:]) if ind != 0: # delta_age = ages_lower[ind] - age singles += float(delta_age) / (ages_upper[ind - 1] - ages_lower[ind - 1] + 1) * data[ind - 1] return singles
Let's check if the function works.
print(singles_above_age(20, data_females)) print(sum(data_females[2:]))
54786.0 54786
print(singles_above_age(19, data_females)) print(sum(data_females[2:]) + data_females[1] / 2)
56741.0 56741.0
print(singles_above_age(18.5, data_females)) print(sum(data_females[2:]) + data_females[1] / 2 * 1.5)
57718.5 57718.5
print(singles_above_age(99, data_females)) print(data_females[-1] / 16)
162.5625 162.5625
This seems to work. So let's build a function that returns singles between two age bounds. We are using the fact that the number of people between
age1 and
age2 are all those above
age1 minus the ones above
age2.
def singles_between_bounds(bounds, data): return singles_above_age(bounds[0], data) - singles_above_age(bounds[1], data)
Let's check that the function works.
print(singles_between_bounds((15, 18), data_males)) print(data_males[0])
6559.0 6559
This is quite neat. We can now move on to the analysis we want to conduct.
First, let's look at the number of singles in 1 year intervals between 18 and 80!
ages = arange(18, 80, 1) both = [singles_between_bounds((age, age + 1), data_both_sexes) for age in ages] males = [singles_between_bounds((age, age + 1), data_males) for age in ages] females = [singles_between_bounds((age, age + 1), data_females) for age in ages] plot(ages, both, label="both sexes") plot(ages, males, label="males") plot(ages, females, label="females") xlabel("age class (lower bound)") ylabel("singles (in thousands)") title("number of singles as a function of age class") legend() grid()
It's interesting to comment on this curve:
Now that our preliminary work is done, we can easily compute dating pools using the previously defined functions.
We define a function that returns the size of the dating pool for a given age:
def dating_pool(age, dataset): age_bounds = dating_bounds(age) return singles_between_bounds(age_bounds, dataset)
Let's get a sample result:
dating_pool(26, data_females)
21693.800000000003
For a 26 year old woman, the dating pool consists of 21 million men.
To replicate the comic's data, we will now plot the dating pool as a function of age for males and females.
ages = arange(18, 80) pool = [dating_pool(age, data_females) for age in ages] plot(ages, pool, label='males') pool = [dating_pool(age, data_males) for age in ages] plot(ages, pool, label='females') title("Dating pool") xlabel("age (years)") ylabel("thousands of people") legend(loc=2) grid(True)
So, to comment on the comic's conclusion: indeed, the dating pool grows for men and women until ages 50 and 40, respectively. However, this assumes that people all share the standard creepiness function.
What if instead we assumed a more "normal" rule for dating bounds, say +- 5 years. We can easily plot this curve to see what it looks like.
def dating_pool2(age, dataset): age_bounds = (age - 5, age + 5) return singles_between_bounds(age_bounds, dataset) ages = arange(18, 80) pool = [dating_pool2(age, data_females) for age in ages] plot(ages, pool, label='males') pool = [dating_pool2(age, data_males) for age in ages] plot(ages, pool, label='females') title("Dating pool") xlabel("age (years)") ylabel("thousands of people") legend(loc='upper right') grid(True). Feel free to comment if you spot some inaccuracies or have some remarks.
This post was entirely written using the IPython notebook. Its content is BSD-licensed. You can see a static view or download this notebook with the help of nbviewer at 20150131_XKCDDatingPools.ipynb. | http://nbviewer.jupyter.org/github/flothesof/posts/blob/master/20150131_XKCDDatingPools.ipynb | CC-MAIN-2017-47 | refinedweb | 1,113 | 58.62 |
$; #
Let's first clean it up (which takes all the fun out of
it, but still...):
$; = $";
$;{Just=>another=>Perl=>Hacker=>} = $/;
print %;
[download]
$foo{$a, $b, $c}
[download]
$foo{ join $;, $a, $b, $c }
[download]
Next line, then:
$;{Just=>another=>Perl=>Hacker=>} = $/;
[download]
$;{Just,another,Perl,Hacker,} = $/;
[download]
Anway, though, now it makes more sense, doesn't it? Because it looks
like the example above, the example from perlvar. We're
just assigning to a hash element in the hash %;.
And $/ is the input record separator, the default of which
is a carriage return ("\n"). So we assign that value to
the hash element, so what we really have is something like
this:
$;{ join ' ', "Just", "another", "Perl", "Hacker", "" }
= "\n";
[download]
$;{"Just another Perl Hacker"} = "\n";
[download]
print %;
[download]
("Just another Perl Hacker", "\n")
[download]
Just another Perl Hacker
[download]
At first I thought, what the...? But then I realized that $; was the name of a variable. Next I say, "AHA! Does it work with use strict ?" At first I didn't think so since I thought that you would have to declare $; by saying "my $;", but then I hopped on over to perlvar and realized that $; is a "special variable" and that set me straight.
And a few moments later, I said, "Doh! I should have just read btrott's post and it would have saved me a lot of trouble. Oh, well."
Zenon Zabinski | zdog | zdog7@hotmail | http://www.perlmonks.org/index.pl/jacques?node_id=22319 | CC-MAIN-2014-23 | refinedweb | 236 | 71.24 |
Tinano is a local persistence library for flutter apps based on sqflite. While sqflite is an awesome library to persist data, having to write all the parsing code yourself is tedious and can become error-prone quickly. With Tinano, you specify the queries and the data structures they return, and it will automatically take care of all that manual and boring stuff, giving you a clean and type-safe way to manage your app's data.
First, let's prepare your
pubspec.yaml to add this library and the tooling
needed to automatically generate code based on your database definition:
dependencies: tinano: # ... dev_dependencies: tinano_generator: build_runner: # test, ...
The
tinano library will provide some annotations for you to write your
database classes, whereas the
tinano_generator plugs into the
build_runner
to generate the implementation. As we'll only do code-generation during
development (and not at runtime), these two can be a dev-dependency.
With Tinano, creating a database is simple:
import 'package:tinano/tinano.dart'; import 'dart:async'; part 'database.g.dart'; // this is important! @TinanoDb(name: "my_database.sqlite", schemaVersion: 1) abstract class MyDatabase { static DatabaseBuilder<MyDatabase> createBuilder() => _$createMyDatabase(); }
It is important that your database class is abstract and has a static
method called
createBuilder that uses the
=> notation. The
_$createMyDatabase() method will be generated automatically later on. Of
course, you're free to choose whatever name you want, but the method to create
the database has to start with
_$.
Right now, this code will give us a bunch of errors because the implementation
has not been generated yet. A swift
flutter packages pub run build_runner build
in the terminal will fix that. If you want to automatically rebuild your
database implementation every time you change the specification (might be useful
during development), you can use
flutter packages pub run build_runner watch.
To get an instance of your
MyDatabase, you can just use the builder function
like this:
Future<MyDatabase> openMyDatabase() async { return await (MyDatabase .createBuilder() .doOnCreate((db, version) async { // This await is important, otherwise the database might be opened before // you're done with initializing it! await db.execute("""CREATE TABLE `users` ( `id` INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT, `name` TEXT NOT NULL )"""); }) .build()); }
The
doOnCreate block will be executed for the first time your database is
opened. The
db parameter will give you access to the raw sqflite database, the
version parameter is the schema version specified in your
@TinanoDb annotation.
You can use the
addMigration methods to do schema migrations - more info on
that below.
Of course, just opening the database is pretty boring. In order to actually
execute some queries to the database, just create methods annotated with either
@Query,
@Update,
@Delete or
@Insert. Here is an example that fits to
the
doOnCreate method defined above:
@TinanoDb(name: "my_database.sqlite", schemaVersion: 1) abstract class MyDatabase { static DatabaseBuilder<MyDatabase> createBuilder() => _$createMyDatabase(); @Query("SELECT * FROM users") Future<List<UserRow>> getAllUsers(); // If we know we'll only get one user, we can skip the List<>. Note that this // really expects there to be one row -> if there are 0, it will throw an // exception. @Query("SELECT * FROM users WHERE id = :id") Future<UserRow> getUserById(int id); // For queries with only one column that is either a String, a num or a // Uint8List, we don't have to define a new class. @Query("SELECT COUNT(id) FROM users") Future<int> getAmountOfUsers(); // Inserts defined to return an int will return the insert id. Could also // return nothing (Future<Null> or Future<void>) if we wanted. @Insert("INSERT INTO users (name) VALUES (:name)") Future<int> createUserWithName(String name); // Inserts return values based on their return type: // For Future<Null> or Future<void>, it won't return any value // For Future<int>, returns the amount of changed rows // For Future<bool>, checks if the amount of changed rows is greater than zero @Update("UPDATE users SET name = :updatedName WHERE id = :id") Future<bool> changeName(int id, String updatedName); // The behavior of deletes is identical to those of updates. @Delete("DELETE FROM users WHERE id = :id") Future<bool> deleteUser(int id); } // We have to annotate composited classes as @row. They should be immutable. @row class UserRow { final int id; final String name; UserRow(this.id, this.name); }
As you can see, you can easily map the parameters of your method to sql
variables by using the
:myVariable notation directly in your sql. If you want
to use a
: character in your SQL, that's fine, just escape them with a
backslash
\. Note that you will have to use two of them (
"\\:") in your dart strings.
The variables will not be inserted into the query directly (which could easily result in an sql injection vulnerability), but instead use prepared statements to first send the sql without data, and then the variables. This means that you won't be able to use variables for everything, see this for some examples where you can't.
After bumping your version in
@TinanoDb, you will have to perform some
migrations manually. You can do this directly with your
DatabaseBuilder by
using
addMigration:
MyDatabase .createBuilder() .doOnCreate((db, version) {...}) .addMigration(1, 2, (db) async { await db.execute("ALTER TABLE ....") });
For bigger migrations (e.g. from 1 to 5), just specify all the migrations for each step. Tinano will then apply them sequentially to ensure that the database is ready before it's opened.
As the database access is asynchronous, all methods must return a
Future.
A
Future<int> will resolve to the amount of updated rows, whereas a
Future<bool> as return type will resolve to
true if there were any changes
and to
false if not.
A
Future<int> will resolve to the last inserted id. A
Future<bool> will
always resolve to
true, so using it is not recommended here.
You'll have to use a
List<T> if you want to receive all results, or just
T
right away if you're fine with just receiving the first one. Notice that, in either
case, the entire response will be loaded into memory at some point, so please
set
LIMITs in your sql.
Now, if your result is just going to have one column, you can use that type directly:
@Query("SELECT COUNT(*) FROM users") Future<int> getAmountOfUsers();
This will work for
int,
num,
Uint8List and
String. Please see the
documentation from sqflite
to check which dart types are compatible with which sqlite types.
If your queries will return more than one column, you'll have to define it in a new immutable class that must only have the unnamed constructor to set the fields:
@row class UserResult { final int id; final String name; UserResult(this.id, this.name); } // this should be a method in your @TinanoDb class.... @Query("SELECT id, name FROM users") Future<List<UserResult>> getBirthmonthDistribution();
Each
@row class may only consist of the primitive fields
int,
num,
Uint8List and
String.
If you want to use
Tinano, but also have some use cases where you have to use
the
Database from
sqflite directly to send queries, you can just define a
field
Database database; in your
@TinanoDb class. It will be generated and
available after your database has been opened.
doOnCreateand instead define these methods right in our database class with some more annotations. This can also apply to migration functions.
Streamemitting new values as the underlying data changes. Could be similar to the Room library on Android.
DateTimeright from the library, auto-generating code to store it as a timestamp in the database.
@rowclasses that have other
@rowtypes as fields.
WHERE id = :user.idin your sql and then having a
User useras a parameter.
@FromColumn("my_column").
This library is still in quite an early stage and will likely see some changes on the way, so please feel free to open an issue if you have any feedback or ideas for improvement. Also, even though there are some awesome dart tools doing most of the work, automatic code generation based on your database classes is pretty hard and there are a lot of edge-cases. So please, if you run into any weird issues or unhelpful error messages during the build step, please do let me know so that I can take a look at them. Thanks! Of course, I greatly appreciate any PRs made to this library, but if you wankt to implement some new features, please let me know first by creating an issue first. That way, we can talk about how to approach that.
Add this to your package's pubspec.yaml file:
dependencies: tinano: ^0.1.1
You can install packages from the command line:
with Flutter:
$ flutter packages get
Alternatively, your editor might support
flutter packages get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:tinano/tinano.dart';
We analyzed this package on Jan 15, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
Detected platforms: Flutter
References Flutter, and has no conflicting libraries.
Document public APIs. (-0.06 points)
30 out of 31 API elements (library, class, field or method) have no adequate dartdoc content. Good documentation improves code readability and discoverability through search.
Fix
lib/api/builder.dart. (-0.50 points)
Analysis of
lib/api/builder.dart reported 1 hint:
line 5 col 25: Name non-constant identifiers using lowerCamelCase.
Maintain an example. (-10 points)
Create a short demo in the
example/ directory to show how to use this package. Common file name patterns include:
main.dart,
example.dart or you could also use
tinano.dart. | https://pub.dartlang.org/packages/tinano | CC-MAIN-2019-04 | refinedweb | 1,596 | 62.78 |
This chapter narrates you to create a web application project using BlueBream. If you complete this chapter, you should be able to:
Before proceeding further, let’s have an overview of the sections.
This document assumes that user has already installed Python 2.4, 2.5 or 2.6 and Distribute (or Setuptools). If Distribute (or Setuptools) is installed, the user will get an easy_install command which can be used to install the bluebream distribution. Another way of installing bluebream is by using PIP, a replacement for easy_install which is meant to improve on it. Among advantages, you will be able to uninstall packages.
The user can also install BlueBream inside an isolated Python environment created using Virtualenv. Although, virtualenv is not required when working on the application itself, because Buildout is available by default in the application created. Buildout is the recommended approach to create repeatable, isolated working environments. Buildout is a declarative, configuration-driven build system created by Jim Fulton.
It is recommended to use a custom-built Python for working with BlueBream. The user is required to install a C compiler (gcc) in his system, and to have internet access to PyPI to perform installation of the bluebream distribution, to bootstrap the buildout, and to build the application using Buildout. Internet access is not required for deployment if zc.sourcerelease package is used.
If the user has installed Distribute (or Setuptools), an easy_install command will be available and can be used to install BlueBream.
# easy_install bluebream
or:
$ sudo easy_install bluebream
Try to avoid running “easy_install” commands as root or with sudo for larger installation because it can lead to conflicts with the packaging system of your OS. Installing the bluebream template like this is ok, because it does not pull a lot of dependencies.
As mentioned earlier, Internet access to PyPI is required to perform installation of the bluebream distribution. If the user is behind a proxy, it’s up to him to make sure it works. The easy_install will look for the environment variable named http_proxy on a GNU/Linux platforms. The user can set it like this:
$ export http_proxy=""
Apart from the bluebream distribution, easy_install will download and install a few dependencies. The dependencies are:
Installing the bluebream template package is a one time process. Once the project package is ready, the user does not need the bluebream template package anymore because the package user is about to create will be self-bootstrappable.
The bluebream distribution provides a project template based on PasteScript templates. Once BlueBream is installed, run the paster command to create the project directory structure. The create sub-command provided by paster will show a command-line wizard to create the project directory structure.
$ paster create -t bluebream
This will bring a wizard asking details about the new project. The user can choose the package name and version number in the wizard itself. These details can also be modified later. Now, the user gets a working application with the project name as the name of the egg. The project name can be a dotted name, if the user wants his project to be part of a namespace. Any number of namespace levels can be used. The project can be called ‘sample’, ‘sample.main’ or ‘sample.app.main’ or anything deeper if necessary. The subfolder structure will be created accordingly.
Here is a screenshot of sample project creation:
The project name and other parameters can be given as a command line argument:
$ paster create -t bluebream sampleproject $ paster create -t bluebream sampleproject version=0.1
The user does not get asked by the wizard for the options whose values are already passed through command line. Other variables can also be given values from the command line, if required:
Note
Recommended use of Wizard
It is recommended to provide the details in the wizard itself but user can choose to provide the details at a later stage by simply pressing Enter/Return key.
As mentioned earlier, the generated package is bundled with a Buildout configuration (buildout.cfg) and a bootstrap script (bootstrap.py). First, the user needs to bootstrap the buildout itself:
$ cd sampleproject $ python bootstrap.py
The bootstrap script will download and install the zc.buildout and distribute packages. Also, it will create the basic directory structure.
Here is a screenshot of bootstrapping the buildout:
The next step is building the application. To build the application, run the buildout:
$ ./bin/buildout
Here is a screenshot of the application being built:
The buildout script will download all dependencies and setup the environment to run your application. This can take some time because many packages are downloaded. If you don’t want these packages to be downloaded again the next time you create a project, you can set a shared directory in your personal buildout configuration: create a file ~/.buildout/default.cfg (and the .buildout folder if needed), with the following contents:
[buildout] newest = false unzip = true download-cache = /opt/buildout-download-cache
You can choose any value for the download-cache, buildout will create it for you.
If you set the newest = false option, buildout will not look for new version of packages in package server by default. The unzip = true makes Buildout to unzip all eggs irrespective of whether it is Zip safe or not. The download-cache is the directory where Buildoout keeps a cached copy the source eggs downloaded.
The next section will show the basic usage.
The most common thing a user needs while developing an application is running the server. BlueBream uses the paster command provided by PasteScript to run the WSGI server. To run the server, the user can pass the PasteDeploy configuration file as the argument to the serve sub-command as given here:
$ ./bin/paster serve debug.ini
After starting the server, the user can access the site from his browser on this URL: . The port number (8080) can be changed in PasteDeploy configuration file (debug.ini) to user choice.
When the user opens the browser, it will look like as shown in this screenshot:
The second most common thing that should be run are the unit tests. BlueBream creates a testrunner using the zc.recipe.testrunner Buildout recipe. The user can see a test command inside the bin directory. To run the test cases, the following command is used:
$ ./bin/test
Sometimes the user may want to get the debug shell. BlueBream provides a Python prompt with your application object. You can invoke the debug shell in the following way:
$ ./bin/paster shell debug.ini
More details about the test runner and debug shell are explained in the BlueBream Manual.
The default directory structure created by the bluebream paster project template is as shown:
myproject/ |-- bootstrap.py |-- buildout.cfg |-- debug.ini |-- deploy.ini |-- etc | |-- site.zcml | |-- zope.conf | `-- zope-debug.conf |-- setup.py |-- src | |-- myproject | | |-- __init__.py | | |-- configure.zcml | | |-- debug.py | | |-- securitypolicy.zcml | | |-- startup.py | | |-- tests | | | |-- __init__.py | | | |-- ftesting.zcml | | | `-- tests.py | | `-- welcome | | |-- __init__.py | | |-- app.py | | |-- configure.zcml | | |-- ftests.txt | | |-- index.pt | | |-- interfaces.py | | |-- static | | | |-- logo.png | | | `-- style.css | | `-- views.py | `-- myproject.egg-info | |-- PKG-INFO | |-- SOURCES.txt | |-- dependency_links.txt | |-- entry_points.txt | |-- not-zip-safe | |-- requires.txt | `-- top_level.txt `-- var |-- filestorage | `-- README.txt `-- log `-- README.txt
The name of the top-level directory will always be the project name as given in the wizard. The name of the egg will also be the same as the package name by default. The user can change it to something else from setup.py. Here are the details about the other files in the project.
The next few sections will explain how to create a hello world applications.
You can watch the video creating hello world application here:
To create a web page which displays Hello World!, you need to create a view class and register it using the browser:page ZCML directive. In BlueBream, this is called a Browser Page. Sometimes more generic term, Browser View is used instead of Browser Page which can be used to refer to HTTP, XMLRPC, REST and other views. By default, the page which you are getting when you access: is a page registered like this. You can see the registration inside configure.zcml, the name of the view will be index. You can access the default page by explicitly mentioning the page name in the URL like this:. You can refer the Default view for objects HOWTO for more details about how the default view for a container object is working.
First you need to create a Python file named myhello.py at src/myproject/myhello.py:
$ touch src/myproject/myhello.py
You can define your browser page inside this module. All browser pages should implement the zope.publisher.interfaces.browser.IBrowserView interface. An easy way to do this would be to inherit from zope.publisher.browser.BrowserView which is already implementing the IBrowserView interface.
The content of this file could be like this:
from zope.publisher.browser import BrowserView class HelloView(BrowserView): def __call__(self): return "Hello World!"
Now you can register this page for a particular interface. So that it will be available as a browser page for any object which implement that particular interface. Now you can register this for the root folder, which is implementing zope.site.interfaces.IRootFolder interface. So, the registration will be like this:
<browser:page
Since you are using the browser XML namespace, you need to advertise it in the configure directive:
<configure xmlns="" xmlns:
You can add this configuration to: src/myproject/configure.zcml. Now you can access the view by visiting this URL:
Note
The @@ symbol for view
@@ is a shortcut for ++view++. (Mnemonically, it kinda looks like a pair of goggle-eyes)
To specify that you want to traverse to a view named bar of content object foo, you could (compactly) say .../foo/@@bar instead of .../foo/++view++bar.
Note that even the @@ is not necessary if container foo has no element named bar - it only serves to disambiguate between views of an object and things contained within the object.
In this example, you will create a hello world using a page template.
First you need to create a page template file inside your package. You can save it as src/myproject/helloworld.pt, with the following content:
<html> <head> <title>Hello World!</title> </head> <body> <div> Hello World! </div> </body> </html>
Update configure.zcml to add this new page registration.
<browser:page
This declaration means: there is a web page called hello2, available for any content, rendered by the template helloworld.pt, and this page is public. This kind of XML configuration is very common in BlueBream and you will need it for every page or component.
In the above example, instead of using zope.site.interfaces.IRootFolder interface, * is used. So, this view will be available for all objects.
Restart your application, then visit the following URL:
This section explain creating a dynamic hello world application.
In the src/myproject/hello.py file, add a few lines of Python code like this:
class Hello(object): def getText(self): name = self.request.get('name') if name: return "Hello %s !" % name else: return "Hello ! What's your name ?"
This class defines a browser view in charge of displaying some content.
Now you need a page template to render the page content in HTML. So let’s add a hello.pt in the src/myproject directory:
<html> <head> <title>hello world page</title> </head> <body> <div tal: fake content </div> </body> </html>
The tal:content directive tells BlueBream to replace the fake content of the tag with the output of the getText method of the view class.
The next step is to associate the view class, the template and the page name. This is done with a simple XML configuration language (ZCML). Edit the existing file called configure.zcml and add the following content before the final </configure>:
<browser:page
This declaration means: there is a web page called hello3, available for any content, managed by the view class Hello, rendered by the template hello.pt, and this page is public.
Since you are using the browser XML namespace, you need to declare it in the configure directive. Modify the first lines of the configure.zcml file so it looks like this (You can skip this step if the browser namespace is already there from the static hello world view):
<configure xmlns="" xmlns:
Restart your application, then visit the following URL:
You should then see the following text in your browser:
Hello ! What's your name ?
You can pass a parameter to the Hello view class, by visiting the following URL:
You should then see the following text:
Hello World !
This chapter walked through the process of getting started with web application development with BlueBream. It also introduced a few simple Hello World example applications. The Tutorial — Part 1 chapter will go through a bigger application to introduce more concepts.blog comments powered by Disqus | http://bluebream.zope.org/doc/1.0/gettingstarted.html | CC-MAIN-2018-30 | refinedweb | 2,156 | 58.08 |
Lab 2: Higher Order Functions
Due at 11:59pm on Friday, 07/05 question 3 is in lab02.py.
- Questions 4 and 5 (Environment Diagrams) do not need to be submitted, but highly recommended. Try to work on at least one of these if you finish the required section early.
- Questions 6-8 are also optional. It is recommended that you complete these problems on your own time. Starter code for the questions are in lab02_extra.py. lambda: x + y ... else: ... return x + y >>> snake(10, 20)______Function>>> snake(10, 20)()______30>>>>> snake(10, 20)______30
Coding Practice
Q3: or refer to the
textbook
if you're not sure what this means.
Your solution to this problem should fit entirely on the return line. You can try writing it first without this restriction, but rewrite it after in one line to test your understanding of this topic.) # Video Walkthrough:
Use Ok to test your code:
python3 ok -q lambda_curry2
Environment Diagram Practice
There is no submission for this component. However, we still encourage you to do these problems on paper to develop familiarity with Environment Diagrams, which will appear on the exam.
Q:
Optional Questions
Note: The following questions are in lab02_extra.py.
Q.
You may use the
compose1 function defined below. ***"def identity(x): return compose1(f, g)(x) == compose1(g, f)(x) return identity # Alternative solution return lambda x: f(g(x)) == g(f(x)) # Video Walkthrough:
Use Ok to test your code:
python3 ok -q composite_identity
Q7: Count van Count
Consider the following implementations of
count_factors and
count_primes:
def count_factors(n): """Return the number of positive factors that n has. >>> count_factors(6) 4 # 1, 2, 3, 6 >>> count_factors(4) 3 # 1, 2, 4 """ i, count = 1,, count = 1, counts all the numbers
from 1 to
n that satisfy
condition. ***"def counter(n): i, count = 1, 0 while i <= n: if condition(n, i): count += 1 i += 1 return count return counter # Video Walkthrough:
Use Ok to test your code:
python3 ok -q count_cond
Q ***"def ret_fn(n): def ret(x): i = 0 while i < n: if i % 3 == 0: x = f1(x) elif i % 3 == 1: x = f2(x) else: x = f3(x) i += 1 return x return ret return ret_fn # Video Walkthrough:
Use Ok to test your code:
python3 ok -q cycle | https://inst.eecs.berkeley.edu/~cs61a/su19/lab/lab02/ | CC-MAIN-2020-29 | refinedweb | 386 | 58.82 |
Py, you can read the article How To Make A Website With Python And Django to learn how to create Django project with Eclipse and PyDev ( which is totally free and open-source).
1. Create Django Project In PyCharm.
- Open PyCharm, click File —> New Project menu item in the top toolbar.
- Then select Django menu item in left navigation menu list, and select the Django project target directory in right panel, and then click Create button.
- Now you will find the Django project has been created successfully in the project explorer panel. Below Django project files list contains a Django application my_hello_world which will be created later. Now you should not see it because it has not been created.
2. Create Django Application my_hello_world.
As you know, one Django project can contain multiple Django applications, now we will run
$ python manage.py startapp my_hello_world command in PyCharm to create the my_hello_world application.
- Click Tools —> Run manage.py Tasks… menu item to open the python command window at bottom of PyCharm.
- Then input Django command
startapp my_hello_worldin the bottom console window and click enter key, you will find the my_hello_world application has been created in the project files list.
3. Coding The Django Hello World Example.
- Edit
DjangoHelloWorld / DjangoHelloWorld / settings.pyfile and add my_hello_world app at the end of INSTALLED_APPS section.
INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'my_hello_world', ]
- Edit
DjangoHelloWorld / DjangoHelloWorld / urls.pyfile and add a new URL pattern.
from django.contrib import admin from django.urls import path, include urlpatterns = [ path('admin/', admin.site.urls), # when client request, the project will find the mapped process method in my_hello_world app's urls.py file. path('index/', include('my_hello_world.urls')) ]
- Create urls.py file in DjangoHelloWorld / my_hello_world folder and add below Python code in it.
from django.urls import path # import views from local directory. from . import views urlpatterns = [ # When user request home page, it will invoke the home function defined in views.py. path('', views.index_page, name='index'), ]
- Add index_page view function in DjangoHelloWorld / my_hello_world / views.py file. This view function will return an Html template file index.html back to the client browser
from django.shortcuts import render # Create your views here. # This function will return and render the home page when url is. def index_page(request): # Get the index template file absolute path. # index_file_path = PROJECT_PATH + '/pages/home.html' # Return the index file to client. return render(request, 'index.html')
- Create index.html in DjangoHelloWorld / templates folder and add the below code in it.
<!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Django Hello World Example</title> </head> <body> <h2> Hello Django! :) </h2> </body> </html>
4. Run / Debug Django Hello World Example.
Now all the source code has been created. You need to follow below steps to run or debug it.
- Click Run —> Run ( or Run —> Debug )menu item in PyCharm top tool bar to run it.
- Select DjangoHelloWorld menu item in the popup menu list to run the Django project.
- When the application startup, input url in the web browser, then you will get below web page.
5. Create Django Project In PyCharm Community Edition.
If you are familiar with PyCharm community edition, and you want to create Django project in it, you can follow below steps.
- Install Django in a virtual environment, you can read article How To Install Python Django In Virtual Environment.
- Create Django project in command line, you can read article How To Make A Website With Python And Django.
- Create Django application in command line, you can read article How To Create / Remove Django Project Application.
- Import the Django project into PyCharm community edition, you can read article How To Import Existing Django Project And Enable Django Support In PyCharm.
- You just use pycharm to edit source code, and run / debug the Django project application in command line.
it was pretty much helpful, thanks a lot dev2qa.com | https://www.dev2qa.com/hello-world-django-pycharm-example/ | CC-MAIN-2021-31 | refinedweb | 660 | 59.5 |
In article <roy-57494B.09031926102003 at reader1.panix.com>, Roy Smith <roy at panix.com> wrote: >In article <bnfmnv$fqd$1 at panix3.panix.com>, aahz at pythoncraft.com (Aahz) >wrote: >> In article <roy-D86C06.22163525102003 at reader1.panix.com>, >> Roy Smith <roy at panix.com> wrote: >>> >>>My personal opinion is that you should be able to do the simplier: >>> >>>for key in myDict.keys().sort() >>> print key >>> >>>but unfortunately, sort doesn't work like that. It sorts the list >>>in-place and does NOT return the sorted list. >> >> Yup. Guido doesn't want you copying the list each time you sort; it's >> easy enough to make your own copy function. Nevertheless, it appears >> likely that 2.4 will grow list.sorted() (yes, a static method on the >> list type). > >What do you mean by "a static method on the list type"? Will I be able >to do: > > for key in myDict.keys().sorted(): > print key Nope: for key in list.sorted(myDict.keys()): print key >If that's what you're talking about, there's an obvious downside, which >is that now we'll have list.sort() and list.sorted() which do two >different things. This will be confusing. Yup. I pointed that out; I preferred copysort(), but the fact that you have to actually use the list object to access the method separates the namespace at least. I'm -0 on the current name, but Guido likes it. >Is there a PEP on this I could read? A quick look at the PEP index >didn't show anything that looked appropos. No, this was considered a small enough addition to warrant restricting the discussion to python-dev. If other people pipe up with objections, I'll forward that back to python-dev. >I certainly understand the efficiency aspects of in-place sorting, but >this has always seemed like premature optimization to me. Most of the >time (at least in the code I write), the cost of an extra copy is >inconsequential. I'll be happy to burn a few thousand CPU cycles if it >lets me avoid an intermediate assignment or a couple of extra lines of >code. When things get too slow, then is the time to do some profiling >and figure out where I can speed things up. It's not so much that as that it's much easier to get a copying sort (by making your own copy) than it is to get a non-copying sort. The efficiency issue here is less the CPU cycles than the RAM, and people *do* sort lists with thousands or millions of elements. Python is in many ways about keeping people from shooting themselves in the "worst case". -- Aahz (aahz at pythoncraft.com) <*> "It is easier to optimize correct code than to correct optimized code." --Bill Harlan | https://mail.python.org/pipermail/python-list/2003-October/220386.html | CC-MAIN-2016-36 | refinedweb | 471 | 76.72 |
Encryption
.
Open-source encryption program
Developers...Open Source Encryption
Open Source encryption module loses FIPS... certification of the open-source encryption tool OpenSSL under the Federal Information
Open Source E-mail
is considering making its Java Enterprise System server software open-source, John.. DRM
available) program under the open-source Common Development and Distribution...Open Source DRM
SideSpace releases open source DRM solution
SideSpace Solutions released Media-S, an open-source DRM solution. Media-S is format
Open Source E-mail Server
making its Java Enterprise System server software open-source, John Loiacono...Open Source E-mail Server
MailWasher Server Open Source
MailWasher Server is an open-source, server-side junk mail filter package
Encryption and Decryption - Java Beginners
Encryption and Decryption Hello sir,
i need Password encryption and decryption program using java program.
I dont know how to write the program... the source code of encyption and decryption using java.
Thanking you...
open source - Java Beginners
open source hi! what is open source? i heard that SUN has released open source .what is re requisite to understand that open source. i know core concepts only Hi friend,
Open source is an approach to design
Open Source Business Model
Open Source Business Model
What is the open source business
model
It is often confusing to people to learn that an open source company may give its... that open source companies do not generate stable and scalable revenue
streams Browser
, with an open-source program called Firefox aiming to challenge Microsoft's dominant... program.
Open
source browser improves Windows...
Open Source Browser
Building an Open Source Browser
One
Java open source software
Open source software for Java
In this page we will tell list down the most used Open source in Java. Java
is one of the programming language used... is the list of java open source
software used for the development and deployment
open source help desk
Open Source Help Desk
Open Source Help Desk Software
As my help desk... of the major open source help desk software offerings. I?m not doing...?s out there.
The OneOrZero Open Source Task Management
SSL Authentication - JSP-Servlet
SSL Authentication With the help of a sample code describe the use of SSL Authentication of java clients
Open Source GPS
Open Source GPS
Open
Source GPSToolKit
The goal of the GPSTk project is to provide a world class, open source computing suite to the satellite....
Open Source GPS Software
Working with GPS
Open Source Software
;
Open Source Aspect-Oriented Frameworks in Java
AspectJ...Open Source Software
Open Source Software
Open source doesn't just mean access to the source code
Open Source Windows
Open Source Windows
...;
Gnucleus
Gnucleus was the beginning of a few open-source Gnutella related... open-source office software suite for word processing,
spreadsheets...;
All About Open Source
Open source usually refers to a program in which
Open Source
In this section we are discussing Open source software... Open source software
available on the internet that can be used in the company. Open source software
is very useful because source code also distributed
Open Source software written in Java
Open Source software written in Java
Open Source Software or OSS...
Code Coverage
Open Source Java Collections API... Purpose
ERP/CRM Written in Java
Open Source
Open Source JMS (Java Message Service) Implementations written in Java JMS
Open Source JMS
Open Source JMS
OpenJMS is an open source... detection
.
Open Source JMS Implementations
JMS4Spread.... JMS4Spread is being actively developed as a open source project at the Center
Open Source Project Management
Open Source Project Management
Open Source Project Management... to access our support forums.
An Open-Source Based... provide these features thanks to the open-source nature of ]project-open
SSL Certificates
and
e-commerce websites with Strong 128-Bit Digital Certificate SSL Encryption... digital certificate products and 128-bit encryption SSL.
thawte is one...
SSL Certificates
Secure Sockets Layer or SSL for short is a protocol or Open Source Software
Open or Open Source Software what is mean by open or open source... or open source software means any technology or a software that are freely available... source operating system where as PHP is a open source scripting language web services tool java
Open
Source web services tool in java
...:
The Web Services Invocation Framework (WSIF) is a simple Java API
Palm Open Source
Palm Open Source
Open
source software for Palm OS... synchronise your Palm device.
Open Source Business Models..., use a simple open source business model to such beneficial effect. Not only
decryption and encryption - Java Beginners
decryption and encryption hi everyone. my question is that i have one problem which i have to solve and the issue is that i have to do the encryption and decryption thing there in my code, e.g if the user enters the abc the code
Open Source Database
Source Java Database
One$DB is an Open Source version of Daffodil...;
Open Source Database Program for PalmOS
Pilot-DB...Open Source Database
Open Source
Open Source e-commerce
;
Open Source E-Commerce Education Program
Zelerate, Inc...Open Source e-commerce
Open Source Online Shop E-Commerce Solutions
Open source Commerce is an Open
encryption and decryption - Java Beginners
encryption and decryption i need files encryption and decryption program using java. but i dont know how to write the program.pls help me
thank you so much Hi Friend,
Try the following code:
import java.io.
Open Source CMS
Management
Apache Lenya is an Open Source Java/XML Content...Open Source CMS
Open Source Content... and developers of Open Source Content Management solutions. OSCOM organizes
Open Source Identity
of "open source" stuff written in Java, here now is a review of "open source... is to open-source its single sign-on Java technologies for authenticating users remotely... sign-on code. There is, however, an open source project called Java Open Single
Open Source XML Editor
* multi-platform (Java 1.3+)
* free open-source software
...Open Source XML Editor
Open
source Extensible XML Editor... powerful XML editing features
.
Open Source XML Database Toolkit
Encryption Decryption In Scriptlets/Java
Encryption Decryption In Scriptlets/Java I have this code in my scriptlet, when I use the Encryption code to encrypt my data it executes... where the program stopped working and it stopped in the decryption part and I get
integrate its Windows Server system with open source Java middleware from...Open Source CRM
Open Source CRM
solution
Daffodil CRM is a commercial Open Source CRM Solution that helps enterprise businesses to manage customer
Open Source Database Connection Pools written in Java
Open Source Cache Softwares written in Java
Open Source Charting & Reporting written in Java
Open Source Reports
Open Source Reports
ReportLab Open Source
ReportLab, since its early beginnings, has been a strong supporter of open-source. In the past we have found the feedback and input from our open-source community an invaluable aid
Open Source Java Database
program, you might consider one of the many Java open-source options.
Because...Open Source Java Database
An Open Source Java Database
One$DB is an Open Source version of Daffodil DB, our commercial Java Database. One$DB Intelligence
Open Source Intelligence
Open
source Intelligence
The Open Source..., a practice we term "Open Source Intelligence". In this article, we use three...;
Open source intelligence Wikipedia
Open Source Intelligence (OSINT
Open Source ERP/CRM written in Java - Java Beginners
Open Source ERP/CRM written in Java I seek an Open Source ERP/CRM RECRUITMENT database. to integrate - candidate/client/tasks/records/ and for functionality use. Please advise
Open Source Chat
Open Source Chat
Open Source Chat Program
FriendlyTalk is a simple...;
Open Source APIs for Java Technology Games
Welcome to today's Java Live chat on open source APIs for Java Technology Games. These APIs include Java Binding
Open Source Blog
Open Source Blog
About Roller
Roller is the open source blog server...;
The Open Source Law Blog
This blog is designed to let you know about developments in the law and business of open source software. It also provides
Open Source Router
Open Source Router
Open-source router firm looks
Vyatta, an open... as an open-source router software project. Vyatta's code combines a modified Linux... from Cisco, Juniper or Alcatel.
Open source router
Open Source Antivirus
for the open-source model, and while there is at least one active program..
Open Source Installer
, install or update shared components.
Open Source Java Tool... of Java development tools and a few common open source tools that aren't just...Open Source Installer
Open source installer tool
NSIS (Null
Open Source CMS written in Java
Free Open Source Proxy Servers Developed in Java
Open Source Books
Open Source Books
Open Books
O'Reilly has published a number... books.
Open Source Revolution
Linux creator Linus... for the Open Source community in the story of Pauling's foundational work that made
;
IBM urges Sun to make Java open source
IBM's vice president... shepherd development of Java through an open-source development model...-source Java would speed the development of a first-class and compatible open are using the SecureAMFChannel as we deploy the application in SSL. We use
Open Source Workflow Engines in Java
Open Source Workflow Engines in Java
The Open For Business Project: Workflow... upon restart.
Open Source Graphical XPDL Java Workflow Editor
Enhydra JaWE (Java Workflow Editor) is the first open source graphical Java workflow process Accounting
;
Open Source Accounting and Inventory Program Launched
A web based, open source accounting and inventory program called NOLA has been...Open Source Accounting
Turbocase open source Accounting software
Open Source Document Management Solutions written in Java
Open Source Code Coverage Tools For Analyzing Unit Tests written in Java
Open Source Java
Open Source Java
Open Source Software in Java
AspectJ...-source Java
Sun's Java technology evangelist Raghavan Srinivas said an open... to be the first time Sun has explicitly stated its intention to open-source Java. Sun
Mac OS X Open Source
with the Java
Web Services pack.
Open Source software for Mac OS X...Mac OS X Open Source
Mac
os X wikipedia
Mac OS X was a radical....
Open Source
Mac OS X Server
Mac OS X
Open Source Groupware Software written in Java
Open Source VoIP
Open Source VoIP/TelePhony
Open source VoIP/Telephony
One of the first open source VoIP projects -and one of the earliest VoIP PBXes... for the platform, both commercial and open source.
why php is open source - PHP
why php is open source i know it's silly question but i would like to know why php is open source
Open Source Web Frameworks in Java
Open Source Web Frameworks in Java
Struts
Struts Frame work...-source, all-Java framework for creating leading edge web applications in Java.... Struts is maintained as a part of Apache Jakarta
project and is open
Java Encryption using AES with key password
Java Encryption using AES with key password /* AES alogrithm...);
}
}
/* AES encryption alogrithm in JAVA */
/* AES alogrithm... string: " + output);
}
}
/* AES encryption alogrithm in JAVA... your program under a source code version control management software from the very...;
Two Open-Source Version Control Programs Spring
Critical flaws have been
Open Source Wiki
Open Source Wiki
Open Source Wiki Roundup
The purpose of this article... that choice if the need arises.
The open source wiki behind... in the growth of
Wikipedia.
SourceLabs Open-Source Catalog Boasts | http://www.roseindia.net/tutorialhelp/comment/72329 | CC-MAIN-2014-52 | refinedweb | 1,902 | 63.39 |
Version 1.2 Beta 3
Jean-Marc Valin
December 8, 2007
c
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation
License, Version 1.1 or any later version published by the Free Software Foundation; with no Invariant Section, with no Front-
Cover Texts, and with no Back-Cover. A copy of the license is included in the section entitled "GNU Free Documentation
License".
2
Contents
1 Introduction to Speex 6
1.1 Getting help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2 About this document . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2 Codec description 7
2.1 Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Codec . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 Preprocessor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.4 Adaptive Jitter Buffer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.5 Acoustic Echo Canceller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.6 Resampler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3 Compiling and Porting 10
3.1 Platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.2 Porting and Optimising . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.2.1 CPU optimisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.2.2 Memory optimisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
4 Command-line encoder/decoder 13
4.1 speexenc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
4.2 speexdec . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
5 Using the Speex Codec API (libspeex) 15
5.1 Encoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
5.2 Decoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
5.3 Codec Options (speex_*_ctl) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
5.4 Mode queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
5.5 Packing and in-band signalling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
6 Speech Processing API (libspeexdsp) 19
6.1 Preprocessor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
6.1.1 Preprocessor options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
6.2 Echo Cancellation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
6.2.1 Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
6.3 Jitter Buffer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
6.4 Resampler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
6.5 Ring Buffer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
7 Formats and standards 24
7.1 RTP Payload Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
7.2 MIME Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
7.3 Ogg file format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
8 Introduction to CELP Coding 26
8.1 Source-Filter Model of Speech Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
8.2 Linear Prediction (LPC) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
8.3 Pitch Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
8.4 Innovation Codebook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
8.5 Noise Weighting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
8.6 Analysis-by-Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3
Contents
9 Speex narrowband mode 30
9.1 Whole-Frame Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
9.2 Sub-Frame Analysis-by-Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
9.3 Bit allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
9.4 Perceptual enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
10 Speex wideband mode (sub-band CELP) 34
10.1 Linear Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
10.2 Pitch Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
10.3 Excitation Quantization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
10.4 Bit allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
A Sample code 36
A.1 sampleenc.c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
A.2 sampledec.c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
B Jitter Buffer for Speex 39
C IETF RTP Profile 41
D Speex License 60
E GNU Free Documentation License 61
4
List of Tables
5.1 In-band signalling codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
7.1 Ogg/Speex header packet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
9.1 Bit allocation for narrowband modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
9.2 Quality versus bit-rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
10.1 Bit allocation for high-band in wideband mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
10.2 Quality versus bit-rate for the wideband encoder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
5
1 Introduction to Speex
The Speex codec () exists because there is a need for a speech codec that is open-source and
free from software patent royalties. These are essential conditions for being usable in any open-source software. In essence,
Speex is to speech what Vorbis is to audio/music. Unlike many other speech codecs, Speex is not designed for mobile phones
but rather for packet networks and voice over IP (VoIP) applications. File-based compression is of course also supported. (adjustable) complexity and a small memory footprint.
All the design goals led to the choice of CELP).
1.1 Getting help
As for many open source projects, there are many ways to get help with Speex. These include:
• This manual
• Other documentation on the Speex website ()
• Mailing list: Discuss any Speex-related topic on speex-dev@xiph.org (not just for developers)
• IRC: The main channel is #speex on irc.freenode.net. Note that due to time differences, it may take a while to get
someone, so please be patient.
• Email the author privately at jean-marc.valin@usherbrooke.ca only for private/delicate topics you do not wish to discuss
publically.
Before asking for help (mailing list or IRC), it is important to first read this manual (OK, so if you made it here it’s already
a good sign)..
Here are some additional guidelines related to the mailing list. Before reporting bugs in Speex to the list, it is strongly
recommended (if possible) to first test whether these bugs can be reproduced using the speexenc and speexdec (see Section 4)
command-line utilities. Bugs reported based on 3rd party code are both harder to find and far too often caused by errors that
have nothing to do with Speex.
1.2 About this document
This document is divided in the following way. Section 2 describes the different Speex features and defines many basic terms
that are used throughout this manual. Section 4 documents the standard command-line tools provided in the Speex distribution.
Section 5 includes detailed instructions about programming using the libspeex API. Section 7 has some information related to
Speex and standards. 8 explains the general idea behind CELP, while sections 9 and 10 are specific to
Speex.
6
2 Codec description
This section describes Speex and its features into more details.
2.1 Concepts
Before introducing all the Speex features, here are some concepts in speech coding that help better understand the rest of the
manual. Although some are general concepts in speech/audio processing, others are specific to Speex.
Sampling rate
The sampling rate expressed in Hertz (Hz) is the number of samples taken from a signal per second. For a sampling rate
of Fs kHz, the highest frequency that can be represented is equal to Fs /2 kHz (Fs .
Bit-rate).
Quality (variable).
Complexity (variable).
7
2 Codec description
Voice Activity Detection (VAD) (DTX)
Perceptual enhancement is a part of the decoder which, when turned on, attempts to reduce the perception of the noise/dis-
tortion produced by the encoding/decoding process. In most cases, perceptual enhancement brings the sound further from the
original objectively (e.g. considering only SNR), but in the end it still sounds better (subjective improvement).
Latency and algorithmic delay
Every speech.
2.2 Codec
The main characteristics of Speex can be summarized as follows:
• Free software/open-source, patent and royalty-free
• Integration of narrowband and wideband using an embedded bit-stream
• Wide range of bit-rates available (from 2.15 kbps to 44 kbps)
• Dynamic bit-rate switching (AMR) and Variable Bit-Rate (VBR) operation
• Voice Activity Detection (VAD, integrated with VBR) and discontinuous transmission (DTX)
• Variable complexity
• Embedded wideband structure (scalable sampling rate)
• Ultra-wideband sampling rate at 32 kHz
• Intensity stereo encoding option
• Fixed-point implementation
2.3 Preprocessor
This part refers to the preprocessor module introduced in the 1.1.x branch. The preprocessor is designed to be used on the
audio before running the encoder. The preprocessor provides three main functionalities:
• noise suppression
• automatic gain control (AGC)
• voice activity detection (VAD)
8
2 Codec description
Figure 2.1: Acoustic echo model
The denoiser can be used to reduce the amount of background noise present in the input signal. This provides higher quality
speech whether or not the denoised signal is encoded with Speex (or at all). However, when using the denoised signal with the
codec, there is an additional benefit. Speech codecs in general (Speex included) tend to perform poorly on noisy input, which
tends to amplify the noise. The denoiser greatly reduces this effect..
2.4 Adaptive Jitter Buffer.
2.5 Acoustic Echo Canceller
In any hands-free communication system (Fig. 2.1),.
2.6 Resampler
In some cases, it may be useful to convert audio from one sampling rate to another. There are many reasons for that. It can
be for mixing streams that have different sampling rates, for supporting sampling rates that the soundcard doesn’t support, for
transcoding, etc. That’s why there is now a resampler that is part of the Speex project. This resampler can be used to convert
between any two arbitrary rates (the ratio must only be a rational number) and there is control over the quality/complexity
tradeoff.
9
3 Compiling and Porting
Compiling Speex under UNIX/Linux or any other platform supported by autoconf (e.g. Win32/cygwin) is as easy as typing:
% ./configure [options]
% make
% make install
The options supported by the Speex configure script are:
–prefix=<path> Specifies the base path for installing Speex (e.g. /usr)
–enable-shared/–disable-shared Whether to compile shared libraries
–enable-static/–disable-static Whether to compile static libraries
–disable-wideband Disable the wideband part of Speex (typically to save space)
–enable-valgrind Enable extra hits for valgrind for debugging purposes (do not use by default)
–enable-sse Enable use of SSE instructions (x86/float only)
–enable-fixed-point Compile Speex for a processor that does not have a floating point unit (FPU)
–enable-arm4-asm Enable assembly specific to the ARMv4 architecture (gcc only)
–enable-arm5e-asm Enable assembly specific to the ARMv5E architecture (gcc only)
–enable-fixed-point-debug Use only for debugging the fixed-point code (very slow)
–enable-epic-48k Enable a special (and non-compatible) 4.8 kbps narrowband mode (broken in 1.1.x and 1.2beta)
–enable-ti-c55x Enable support for the TI C5x family
–enable-blackfin-asm Enable assembly specific to the Blackfin DSP architecture (gcc only)
–enable-vorbis-psycho Make the encoder use the Vorbis psycho-acoustic model. This is very experimental and may be
removed in the future.
3.1 Platforms on which Speex is known to work (it probably
works on many others) are:
• x86 & x86-64
• Power
• SPARC
• ARM
• Blackfin
• Coldfire (68k family)
• TI C54xx & C55xx
10
3 Compiling and Porting
• TI C6xxx
• TriMedia (experimental)
Operating systems on top of which Speex is known to work include (it probably works on many others):
• Linux
• µ Clinux
• MacOS X
• BSD
• Other UNIX/POSIX variants
• Symbian
The source code directory include additional information for compiling on certain architectures or operating systems in
README.xxx files.
3.2 Porting and Optimising
Here are a few things to consider when porting or optimising Speex for a new platform or an existing one.
3.2.1 CPU optimisation
The single that will affect the CPU usage of Speex the most is whether it is compiled for floating point or fixed-point. If your
CPU/DSP does not have a floating-point unit FPU, then compiling as fixed-point will be orders of magnitudes faster. If there
is an FPU present, then it is important to test which version is faster. On the x86 architecture, floating-point is generally
faster, but not always. To compile Speex as fixed-point, you need to pass –fixed-point to the configure script or define the
FIXED_POINT macro for the compiler. As of 1.2beta3, it is now possible to disable the floating-point compatibility API,
which means that your code can link without a float emulation library. To do that configure with –disable-float-api or define
the DISABLE_FLOAT_API macro. Until the VBR feature is ported to fixed-point, you will also need to configure with
–disable-vbr or define DISABLE_VBR.
Other important things to check on some DSP architectures are:
• Make sure the cache is set to write-back mode
• If the chip has SRAM instead of cache, make sure as much code and data are in SRAM, rather than in RAM
If you are going to be writing assembly, then the following functions are usually the first ones you should consider optimising:
• filter_mem16()
• iir_mem16()
• vq_nbest()
• pitch_xcorr()
• interp_pitch()
The filtering functions filter_mem16() and iir_mem16() are implemented in the direct form II transposed (DF2T).
However, for architectures based on multiply-accumulate (MAC), DF2T requires frequent reload of the accumulator, which
can make the code very slow. For these architectures (e.g. Blackfin and Coldfire), a better approach is to implement those
functions as direct form I (DF1), which is easier to express in terms of MAC. When doing that however, it is important to
make sure that the DF1 implementation still behaves like the original DF2T behaviour when it comes to filter values.
This is necessary because the filter is time-varrying and must compute exactly the same value (not counting machine rounding)
on any encoder or decoder.
11
3 Compiling and Porting
3.2.2 Memory optimisation
Memory optimisation is mainly something that should be considered for small embedded platforms. For PCs, Speex is already
so tiny that it’s just not worth doing any of the things suggested here. There are several ways to reduce the memory usage of
Speex, both in terms of code size and data size. For optimising code size, the trick is to first remove features you do not need.
Some examples of things that can easily be disabled if you don’t need them are:
• Wideband support (–disable-wideband)
• Support for stereo (removing stereo.c)
• VBR support (–disable-vbr or DISABLE_VBR)
• Static codebooks that are not needed for the bit-rates you are using (*_table.c files)
Speex also has several methods for allocating temporary arrays. When using a compiler that supports C99 properly (as of 2007,
Microsoft compilers don’t, but gcc does), it is best to define VAR_ARRAYS. That makes use of the variable-size array feature
of C99. The next best is to define USE_ALLOCA so that Speex can use alloca() to allocate the temporary arrays. Note that on
many systems, alloca() is buggy so it may not work. If none of VAR_ARRAYS and USE_ALLOCA are defined, then Speex
falls back to allocating a large “scratch space” and doing its own internal allocation. The main disadvantage of this solution
is that it is wasteful. It needs to allocate enough stack for the worst case scenario (worst bit-rate, highest complexity setting,
...) and by default, the memory isn’t shared between multiple encoder/decoder states. Still, if the “manual” allocation is the
only option left, there are a few things that can be improved. By overriding the speex_alloc_scratch() call in os_support.h, it
is possible to always return the same memory area for all states1 . In addition to that, by redefining the NB_ENC_STACK and
NB_DEC_STACK (or similar for wideband), it is possible to only allocate memory for a scenario that is known in advange.
In this case, it is important to measure the amount of memory required for the specific sampling rate, bit-rate and complexity
level being used.
1 In this case, one must be careful with threads
12
4 Command-line encoder/decoder
The base Speex distribution includes a command-line encoder (speexenc) and decoder (speexdec). Those tools produce and
read Speex files encapsulated in the Ogg container. Although it is possible to encapsulate Speex in any container, Ogg is the
recommended container for files. This section describes how to use the command line tools for Speex files in Ogg.
4.1 speexenc
The speexenc utility is used to create Speex files from raw PCM or wave files. It can be used by calling:
speexenc [options] input_file output_file
The value ’-’ for input_file or output_file corresponds respectively to stdin and stdout. The valid options are:
–narrowband (-n) Tell Speex to treat the input as narrowband (8 kHz). This is the default
–wideband (-w) Tell Speex to treat the input as wideband (16 kHz)
–ultra-wideband (-u) Tell Speex to treat the input as “ultra-wideband” (32 kHz)
–quality n Set the encoding quality (0-10), default is 8
–bitrate n Encoding bit-rate (use bit-rate n or lower)
–vbr Enable VBR (Variable Bit-Rate), disabled by default
–abr n Enable ABR (Average Bit-Rate) at n kbps, disabled by default
–vad Enable VAD (Voice Activity Detection), disabled by default
–dtx Enable DTX (Discontinuous Transmission), disabled by default
–nframes n Pack n frames in each Ogg packet (this saves space at low bit-rates)
–comp n Set encoding speed/quality tradeoff. The higher the value of n, the slower the encoding (default is 3)
-V Verbose operation, print bit-rate currently in use
–help (-h) Print the help
–version (-v) Print version information
Speex comments
–comment Add the given string as an extra comment. This may be used multiple times.
–author Author of this track.
–title Title for this track.
Raw input options
–rate n Sampling rate for raw input
–stereo Consider raw input as stereo
–le Raw input is little-endian
–be Raw input is big-endian
–8bit Raw input is 8-bit unsigned
–16bit Raw input is 16-bit signed
13
4 Command-line encoder/decoder
4.2 speexdec
The speexdec utility is used to decode Speex files and can be used by calling:
speexdec [options] speex_file [output_file]
The value ’-’ for input_file or output_file corresponds respectively to stdin and stdout. Also, when no output_file is specified,
the file is played to the soundcard. The valid options are:
–enh enable post-filter (default)
–no-enh disable post-filter
–force-nb Force decoding in narrowband
–force-wb Force decoding in wideband
–force-uwb Force decoding in ultra-wideband
–mono Force decoding in mono
–stereo Force decoding in stereo
–rate n Force decoding at n Hz sampling rate
–packet-loss n Simulate n % random packet loss
-V Verbose operation, print bit-rate currently in use
–help (-h) Print the help
–version (-v) Print version information
14
5 Using the Speex Codec API (libspeex)
The libspeex library contains all the functions for encoding and decoding speech with the Speex codec. When linking on a
UNIX system, one must add -lspeex -lm to the compiler command line. One important thing to know is that libspeex calls are
reentrant, but not thread-safe. That means that it is fine to use calls from many threads, but calls using the same state from
multiple threads must be protected by mutexes. Examples of code can also be found in Appendix A and the complete API
documentation is included in the Documentation section of the Speex website ().
5.1 Encoding
In order to encode speech using Speex, one first needs to:
#include <speex/speex.h>
Then in the code, a Speex bit-packing struct must be declared, along with a Speex encoder state:
SpeexBits bits;
void *enc_state;
The two are initialized by:
speex_bits_init(&bits);
enc_state = speex_encoder_init(&speex_nb_mode);
For wideband coding, speex_nb_mode will be replaced by speex_wb_mode. In most cases, you will need to know the frame
size used at the sampling rate you are using. You can get that value in the frame_size variable (expressed in samples, not
bytes) with:
speex_encoder_ctl(enc_state,SPEEX_GET_FRAME_SIZE,&frame_size);
In practice, frame_size will correspond to 20 ms when using 8, 16, or 32 kHz sampling rate. There are many parameters that
can be set for the Speex encoder, but the most useful one is the quality parameter that controls the quality vs bit-rate tradeoff.
This is set by:
speex_encoder_ctl(enc_state,SPEEX_SET_QUALITY,&quality);
where quality is an integer value ranging from 0 to 10 (inclusively). The mapping between quality and bit-rate is described
in Fig. 9.2 for narrowband.
Once the initialization is done, for every input frame:
speex_bits_reset(&bits);
speex_encode_int(enc_state, input_frame, &bits);
nbBytes = speex_bits_write(&bits, byte_ptr, MAX_NB_BYTES);
where input_frame is a (short *) pointing to the beginning of a speech frame, byte_ptr is a (char *) where the encoded frame
will be written, MAX_NB_BYTES is the maximum number of bytes that can be written to byte_ptr without causing an overflow
and nbBytes is the number of bytes actually written to byte_ptr (the encoded size in bytes). Before calling speex_bits_write,
it is possible to find the number of bytes that need to be written by calling speex_bits_nbytes(&bits), which returns
a number of_bits_destroy(&bits);
speex_encoder_destroy(enc_state);
That’s about it for the encoder.
15
5 Using the Speex Codec API ( libspeex)
5.2 Decoding
In order to decode speech using Speex, you first need to:
#include <speex/speex.h>
You also need to declare a Speex bit-packing struct
SpeexBits bits;
and a Speex decoder state
void *dec_state;
The two are initialized by:
speex_bits_init(&bits);
dec_state = speex_decoder_init(&speex_nb_mode);
For wideband decoding, speex_nb_mode will be replaced by speex_wb_mode. If you need to obtain the size of the frames
that will be used by the decoder, you can get that value in the frame_size variable (expressed in samples, not bytes) with:
speex_decoder_ctl(dec_state, SPEEX_GET_FRAME_SIZE, &frame_size);
There is also a parameter that can be set for the decoder: whether or not to use a perceptual enhancer. This can be set by:
speex_decoder_ctl(dec_state, SPEEX_SET_ENH, &enh);
where enh is an int with value 0 to have the enhancer disabled and 1 to have it enabled. As of 1.2-beta1, the default is now
to enable the enhancer.
Again, once the decoder initialization is done, for every input frame:
speex_bits_read_from(&bits, input_bytes, nbBytes);
speex_decode_int(dec_state, &bits, output_frame);
where input_bytes is a (char *) containing the bit-stream data received for a frame, nbBytes is the size (in bytes) of that
bit-stream, and output_frame is a (short *) and points to the area where the decoded speech frame will be written. A NULL
value as the second argument indicates that we don’t have the bits for the current frame. When a frame is lost, the Speex
decoder will do its best to "guess" the correct signal.
As for the encoder, the speex_decode() function can still be used, with a (float *) as the output for the audio. After you’re
done with the decoding, free all resources with:
speex_bits_destroy(&bits);
speex_decoder_destroy(dec_state);
5.3 Codec Options (speex_*_ctl)
Entities should not be multiplied beyond necessity – William of Ockham.
Just because there’s an option for it doesn’t mean you have to turn it on – me.
The Speex encoder and decoder support many options and requests that can be accessed through the speex_encoder_ctl and
speex_decoder_ctl functions. These functions are similar to the ioctl system call and their prototypes are:
void speex_encoder_ctl(void *encoder, int request, void *ptr);
void speex_decoder_ctl(void *encoder, int request, void *ptr);
Despite those functions, the defaults are usually good for many applications and optional settings should only be used
when one understands them and knows that they are needed. A common error is to attempt to set many unnecessary
settings.
Here is a list of the values allowed for the requests. Some only apply to the encoder or the decoder. Because the last argument
is of type void *, the _ctl() functions are not type safe, and shoud thus be used with care. The type spx_int32_t is
the same as the C99 int32_t type.
SPEEX_SET_ENH‡ Set perceptual enhancer to on (1) or off (0) (spx_int32_t, default is on)
16
5 Using the Speex Codec API ( libspeex)
SPEEX_GET_ENH‡ Get perceptual enhancer status (spx_int32_t)
SPEEX_GET_FRAME_SIZE Get the number of samples per frame for the current mode (spx_int32_t)
SPEEX_SET_QUALITY† Set the encoder speech quality (spx_int32_t from 0 to 10, default is 8)
SPEEX_GET_QUALITY† Get the current encoder speech quality (spx_int32_t from 0 to 10)
SPEEX_SET_MODE† Set the mode number, as specified in the RTP spec (spx_int32_t)
SPEEX_GET_MODE† Get the current mode number, as specified in the RTP spec (spx_int32_t)
SPEEX_SET_VBR† Set variable bit-rate (VBR) to on (1) or off (0) (spx_int32_t, default is off)
SPEEX_GET_VBR† Get variable bit-rate (VBR) status (spx_int32_t)
SPEEX_SET_VBR_QUALITY† Set the encoder VBR speech quality (float 0.0 to 10.0, default is 8.0)
SPEEX_GET_VBR_QUALITY† Get the current encoder VBR speech quality (float 0 to 10)
SPEEX_SET_COMPLEXITY† Set the CPU resources allowed for the encoder (spx_int32_t from 1 to 10, default is 2)
SPEEX_GET_COMPLEXITY† Get the CPU resources allowed for the encoder (spx_int32_t from 1 to 10, default is
2)
SPEEX_SET_BITRATE† Set the bit-rate to use the closest value not exceeding the parameter (spx_int32_t in bits per
second)
SPEEX_GET_BITRATE Get the current bit-rate in use (spx_int32_t in bits per second)
SPEEX_SET_SAMPLING_RATE Set real sampling rate (spx_int32_t in Hz)
SPEEX_GET_SAMPLING_RATE Get real sampling rate (spx_int32_t in Hz)
SPEEX_RESET_STATE Reset the encoder/decoder state to its original state, clearing all memories (no argument)
SPEEX_SET_VAD† Set voice activity detection (VAD) to on (1) or off (0) (spx_int32_t, default is off)
SPEEX_GET_VAD† Get voice activity detection (VAD) status (spx_int32_t)
SPEEX_SET_DTX† Set discontinuous transmission (DTX) to on (1) or off (0) (spx_int32_t, default is off)
SPEEX_GET_DTX† Get discontinuous transmission (DTX) status (spx_int32_t)
SPEEX_SET_ABR† Set average bit-rate (ABR) to a value n in bits per second (spx_int32_t in bits per second)
SPEEX_GET_ABR† Get average bit-rate (ABR) setting (spx_int32_t in bits per second)
SPEEX_SET_PLC_TUNING† Tell the encoder to optimize encoding for a certain percentage of packet loss (spx_int32_t
in percent)
SPEEX_GET_PLC_TUNING† Get the current tuning of the encoder for PLC (spx_int32_t in percent)
SPEEX_SET_VBR_MAX_BITRATE† Set the maximum bit-rate allowed in VBR operation (spx_int32_t in bits per
second)
SPEEX_GET_VBR_MAX_BITRATE† Get the current maximum bit-rate allowed in VBR operation (spx_int32_t in
bits per second)
SPEEX_SET_HIGHPASS Set the high-pass filter on (1) or off (0) (spx_int32_t, default is on)
SPEEX_GET_HIGHPASS Get the current high-pass filter status (spx_int32_t)
† applies only to the encoder
‡ applies only to the decoder
17
5 Using the Speex Codec API ( libspeex)
5.4 Mode queries
Speex modes have a query system similar to the speex_encoder_ctl and speex_decoder_ctl calls. Since modes are read-only,
it is only possible to get information about a particular mode. The function used to do that is:
void speex_mode_query(SpeexMode *mode, int request, void *ptr);
The admissible values for request are (unless otherwise note, the values are returned through ptr):
SPEEX_MODE_FRAME_SIZE Get the frame size (in samples) for the mode
SPEEX_SUBMODE_BITRATE Get the bit-rate for a submode number specified through ptr (integer in bps).
5.5 Packing and in-band signalling 9.2. 5.
Code Size (bits) Content
0 1 Asks decoder to set perceptual enhancement off (0) or on(1)
1 1 Asks (if 1) the encoder to be less “agressive” due to high packet loss
2 4 Asks encoder to switch to mode N
3 4 Asks encoder to switch to mode N for low-band
4 4 Asks encoder to switch to mode N for high-band
5 4 Asks encoder to switch to quality N for VBR
6 4 Request acknowloedge (0=no, 1=all, 2=only for in-band data)
7 4 Asks encoder to set CBR (0), VAD(1), DTX(3), VBR(5), VBR+DTX(7)
8 8 Transmit (8-bit) character to the other end
9 8 Intensity stereo information
10 16 Announce maximum bit-rate acceptable (N in bytes/second)
11 16 reserved
12 32 Acknowledge receiving packet N
13 32 reserved
14 64 reserved
15 64 reserved
Table 5.1: In-band signalling codes
Finally, applications may define custom in-band messages using mode 13. The size of the message in bytes is encoded with
5 bits, so that the decoder can skip it if it doesn’t know how to interpret it.
18
6 Speech Processing API (libspeexdsp)
As of version 1.2beta3, the non-codec parts of the Speex package are now in a separate library called libspeexdsp. This library
includes the preprocessor, the acoustic echo canceller, the jitter buffer, and the resampler. In a UNIX environment, it can be
linked into a program by adding -lspeexdsp -lm to the compiler command line. Just like for libspeex, libspeexdsp calls are
reentrant, but not thread-safe. That means that it is fine to use calls from many threads, but calls using the same state from
multiple threads must be protected by mutexes.
6.1 Preprocessor
In order to use the Speex preprocessor, you first need to:
#include <speex/speex_preprocess.h>
Then, a preprocessor state can be created as:
SpeexPreprocessState *preprocess_state = speex_preprocess_state_init(frame_size,
sampling_rate);
and it is recommended to use the same value for frame_size as is used by the encoder (20 ms).
For each input frame, you need to call:
speex_preprocess_run(preprocess_state, audio_frame);
where audio_frame is used both as input and output. In cases where the output audio is not useful for a certain frame, it is
possible to use instead:
speex_preprocess_estimate_update(preprocess_state, audio_frame);
This call will update all the preprocessor internal state variables without computing the output audio, thus saving some CPU
cycles.
The behaviour of the preprocessor can be changed using:
speex_preprocess_ctl(preprocess_state, request, ptr);
which is used in the same way as the encoder and decoder equivalent. Options are listed in Section 6.1.1.
The preprocessor state can be destroyed using:
speex_preprocess_state_destroy(preprocess_state);
6.1.1 Preprocessor options
As with the codec, the preprocessor also has options that can be controlled using an ioctl()-like call. The available options are:
SPEEX_PREPROCESS_SET_DENOISE Turns denoising on(1) or off(2) (spx_int32_t)
SPEEX_PREPROCESS_GET_DENOISE Get denoising status (spx_int32_t)
SPEEX_PREPROCESS_SET_AGC Turns automatic gain control (AGC) on(1) or off(2) (spx_int32_t)
SPEEX_PREPROCESS_GET_AGC Get AGC status (spx_int32_t)
SPEEX_PREPROCESS_SET_VAD Turns voice activity detector (VAD) on(1) or off(2) (spx_int32_t)
SPEEX_PREPROCESS_GET_VAD Get VAD status (spx_int32_t)
SPEEX_PREPROCESS_SET_AGC_LEVEL
SPEEX_PREPROCESS_GET_AGC_LEVEL
19
6 Speech Processing API ( libspeexdsp)
SPEEX_PREPROCESS_SET_DEREVERB Turns reverberation removal on(1) or off(2) (spx_int32_t)
SPEEX_PREPROCESS_GET_DEREVERB Get reverberation removal status (spx_int32_t)
SPEEX_PREPROCESS_SET_DEREVERB_LEVEL Not working yet, do not use
SPEEX_PREPROCESS_GET_DEREVERB_LEVEL Not working yet, do not use
SPEEX_PREPROCESS_SET_DEREVERB_DECAY Not working yet, do not use
SPEEX_PREPROCESS_GET_DEREVERB_DECAY Not working yet, do not use
SPEEX_PREPROCESS_SET_PROB_START
SPEEX_PREPROCESS_GET_PROB_START
SPEEX_PREPROCESS_SET_PROB_CONTINUE
SPEEX_PREPROCESS_GET_PROB_CONTINUE
SPEEX_PREPROCESS_SET_NOISE_SUPPRESS Set maximum attenuation of the noise in dB (negative spx_int32_t
)
SPEEX_PREPROCESS_GET_NOISE_SUPPRESS Get maximum attenuation of the noise in dB (negative spx_int32_t
)
SPEEX_PREPROCESS_SET_ECHO_SUPPRESS Set maximum attenuation of the residual echo in dB (negative spx_int32_t
)
SPEEX_PREPROCESS_GET_ECHO_SUPPRESS Set maximum attenuation of the residual echo in dB (negative spx_int32_t
)
SPEEX_PREPROCESS_SET_ECHO_SUPPRESS_ACTIVE Set maximum attenuation of the echo in dB when near
end is active (negative spx_int32_t)
SPEEX_PREPROCESS_GET_ECHO_SUPPRESS_ACTIVE Set maximum attenuation of the echo in dB when near
end is active (negative spx_int32_t)
SPEEX_PREPROCESS_SET_ECHO_STATE Set the associated echo canceller for residual echo suppression (pointer
or NULL for no residual echo suppression)
SPEEX_PREPROCESS_GET_ECHO_STATE Get the associated echo canceller (pointer)
6.2 Echo Cancellation
The Speex library now includes an echo cancellation algorithm suitable for Acoustic Echo Cancellation (AEC). In order to
use the echo canceller, you first need to
#include <speex/speex_echo.h>
Then, an echo canceller state can be created by:
SpeexEchoState *echo_state = speex_echo_state_init(frame_size, filter_length);
where frame_size is the amount of data (in samples) you want to process at once and filter_length is the length
(in samples) of the echo cancelling filter you want to use (also known as tail length). It is recommended to use a frame size in
the order of 20 ms (or equal to the codec frame size) and make sure it is easy to perform an FFT of that size (powers of two are
better than prime sizes). The recommended tail length is approximately the third of the room reverberation time. For example,
in a small room, reverberation time is in the order of 300 ms, so a tail length of 100 ms is a good choice (800 samples at 8000
Hz sampling rate).
Once the echo canceller state is created, audio can be processed by:
speex_echo_cancellation(echo_state, input_frame, echo_frame, output_frame);
20
6 Speech Processing API ( libspeexdsp)
where input_frame is the audio as captured by the microphone, echo_frame is the signal that was played in the
speaker (and needs to be removed) and output_frame is the signal with echo removed.:
write_to_soundcard(echo_frame, frame_size);
read_from_soundcard(input_frame, frame_size);
speex_echo_cancellation(echo_state, input_frame, echo_frame, output_frame);
If you wish to further reduce the echo present in the signal, you can do so by associating the echo canceller to the prepro-
cessor (see Section 6.1). This is done by calling:
speex_preprocess_ctl(preprocess_state, SPEEX_PREPROCESS_SET_ECHO_STATE,echo_state);
in the initialisation.:
speex_echo_playback(echo_state, echo_frame);
every time an audio frame is played. Then, the capture context/thread calls:
speex_echo_capture(echo_state, input_frame, output_frame);
for every frame captured. Internally, speex_echo_playback() simply buffers the playback frame so it can be used by
speex_echo_capture() to call speex_echo_cancel(). A side effect of using this alternate API is that the playback audio is
delayed by two frames, which is the normal delay caused by the soundcard. When capture and playback are already synchro-
nised, speex_echo_cancellation() is preferable since it gives better control on the exact input/echo timing.
The echo cancellation state can be destroyed with:
speex_echo_state_destroy(echo_state);
It is also possible to reset the state of the echo canceller so it can be reused without the need to create another state with:
speex_echo_state_reset(echo_state);
6.2.1 Troubleshooting
There are several things that may prevent the echo canceller from working properly. One of them is a bug (or something
suboptimal) in the code, but there are many others you should consider first
• Using a different soundcard to do the capture and plaback will not work, regardless of what you may think. The only
exception to that is if the two cards can be made to have their sampling clock “locked” on the same clock source. If not,
the clocks will always have a small amount of drift, which will prevent the echo canceller from adapting.
• The delay between the record and playback signals must be minimal. Any signal played has to “appear” on the playback
(far end) signal slightly before the echo canceller “sees” it in the near end signal, but excessive delay means that part of
the filter length is wasted. In the worst situations, the delay is such that it is longer than the filter length, in which case,
no echo can be cancelled.
• When it comes to echo tail length (filter length), longer is *not* better. Actually, the longer the tail length, the longer it
takes for the filter to adapt. Of course, a tail length that is too short will not cancel enough echo, but the most common
problem seen is that people set a very long tail length and then wonder why no echo is being cancelled.
• Non-linear distortion cannot (by definition) be modeled by the linear adaptive filter used in the echo canceller and thus
cannot be cancelled. Use good audio gear and avoid saturation/clipping.
21
6 Speech Processing API ( libspeexdsp)
Also useful is reading Echo Cancellation Demystified by Alexey Frunze1 , which explains the fundamental principles of echo
cancellation. The details of the algorithm described in the article are different, but the general ideas of echo cancellation
through adaptive filters are the same.:
echo_diagnostic(’aec_rec.sw’, ’aec_play.sw’, ’aec_diagnostic.sw’, 1024);
The value of 1024 is the filter length and can be changed. There will be some (hopefully) useful messages printed and echo
cancelled audio will be saved to aec_diagnostic.sw . If even that output is bad (almost no cancellation) then there is probably
problem with the playback or recording process.
6.3 Jitter Buffer
The jitter buffer can be enabled by including:
#include <speex/speex_jitter.h>
and a new jitter buffer state can be initialised by:
JitterBuffer *state = jitter_buffer_init(step);
where the step argument is the default time step (in timestamp units) used for adjusting the delay and doing concealment.
A value of 1 is always correct, but higher values may be more convenient sometimes. For example, if you are only able to do
concealment on 20ms frames, there is no point in the jitter buffer asking you to do it on one sample. Another example is that
for video, it makes no sense to adjust the delay by less than a full frame. The value provided can always be changed at a later
time.
The jitter buffer API is based on the JitterBufferPacket type, which is defined as:
typedef struct {
char *data; /* Data bytes contained in the packet */
spx_uint32_t len; /* Length of the packet in bytes */
spx_uint32_t timestamp; /* Timestamp for the packet */
spx_uint32_t span; /* Time covered by the packet (timestamp units) */
} JitterBufferPacket;
As an example, for audio the timestamp field would be what is obtained from the RTP timestamp field and the span would
be the number of samples that are encoded in the packet. For Speex narrowband, span would be 160 if only one frame is
included in the packet.
When a packet arrives, it need to be inserter into the jitter buffer by:
JitterBufferPacket packet;
/* Fill in each field in the packet struct */
jitter_buffer_put(state, &packet);
When the decoder is ready to decode a packet the packet to be decoded can be obtained by:
int start_offset;
err = jitter_buffer_get(state, &packet, desired_span, &start_offset);
If jitter_buffer_put() and jitter_buffer_get() are called from different threads, then you need to protect
the jitter buffer state with a mutex.
Because the jitter buffer is designed not to use an explicit timer, it needs to be told about the time explicitly. This is done
by calling:
jitter_buffer_tick(state);
This needs to be done periodically in the playing thread. This will be the last jitter buffer call before going to sleep (until
more data is played back). In some cases, it may be preferable to use
1
22
6 Speech Processing API ( libspeexdsp)
jitter_buffer_remaining_span(state, remaining);
The second argument is used to specify that we are still holding data that has not been written to the playback device.
For instance, if 256 samples were needed by the soundcard (specified by desired_span), but jitter_buffer_get()
returned 320 samples, we would have remaining=64.
6.4 Resampler
Speex includes a resampling modules. To make use of the resampler, it is necessary to include its header file:
#include <speex/speex_resampler.h>
For each stream that is to be resampled, it is necessary to create a resampler state with:
SpeexResamplerState *resampler;
resampler = speex_resampler_init(nb_channels, input_rate, output_rate, quality, &
err);
where nb_channels is the number of channels that will be used (either interleaved or non-interleaved), input_rate is the
sampling rate of the input stream, output_rate is the sampling rate of the output stream and quality is the requested quality
setting (0 to 10). The quality parameter is useful for controlling the quality/complexity/latency tradeoff. Using a higher
quality setting means less noise/aliasing, a higher complexity and a higher latency. Usually, a quality of 3 is acceptable for
most desktop uses and quality 10 is mostly recommended for pro audio work. Quality 0 usually has a decent sound (certainly
better than using linear interpolation resampling), but artifacts may be heard.
The actual resampling is performed using
err = speex_resampler_process_int(resampler, channelID, in, &in_length, out, &
out_length);
where channelID is the ID of the channel to be processed. For a mono stream, use 0. The in pointer points to the first sample
of the input buffer for the selected channel and out points to the first sample of the output. The size of the input and output
buffers are specified by in_length and out.
It is also possible to process multiple channels at once.
To be continued...
6.5 Ring Buffer
Put some stuff there...
23
7 Formats and standards
Spe1..
7.1 RTP Payload Format
The RTP payload draft is included in appendix C and the latest version is available at
latest. This draft has been sent (2003/02/26) to the Internet Engineering Task Force (IETF) and will be discussed at the
March 18th meeting in San Francisco.
7.2 MIME Type
For now, you should use the MIME type audio/x-speex for Speex-in-Ogg. We will apply for type audio/speex in the near
future.
7.3 Ogg file format
Speex bit-streams can be stored in Ogg files. In this case, the first packet of the Ogg file contains the Speex header described in
table 7.
1 The wideband bit-stream contains an embedded narrowband bit-stream which can be decoded alone
24
7 Formats and standards
Field Type Size
speex_string char[] 8
speex_version char[] 20
speex_version_id int 4
header_size int 4
rate int 4
mode int 4
mode_bitstream_version int 4
nb_channels int 4
bitrate int 4
frame_size int 4
vbr int 4
frames_per_packet int 4
extra_headers int 4
reserved1 int 4
reserved2 int 4
Table 7.1: Ogg/Speex header packet
25
8 Introduction to CELP Coding
Do 9. The CELP technique is based on
three ideas:
1. The use of a linear prediction (LP) model to model the vocal tract
2. The use of (adaptive and fixed) codebook entries as input (excitation) of the LP model
3. The search performed in closed-loop in a “perceptually weighted domain”
This section describes the basic ideas behind CELP. This is still a work in progress.
8.1 Source-Filter Model of Speech Prediction 8.1.
8.2 Linear Prediction (LPC)
Linear prediction is at the base of many speech coding techniques, including CELP. The idea behind it is to predict the signal
x[n] using a linear combination of its past samples:
N
y[n] = ∑ ai x[n − i]
i=1
where y[n] is the linear prediction of x[n]. The prediction error is thus given by:
N
e[n] = x[n] − y[n] = x[n] − ∑ ai x[n − i]
i=1
The goal of the LPC analysis is to find the best prediction coefficients ai which minimize the quadratic error function:
" #2
L−1 L−1 N
E= ∑ [e[n]] 2
= ∑ x[n] − ∑ ai x[n − i]
n=0 n=0 i=1
∂E
That can be done by making all derivatives ∂ ai equal to zero:
" #2
∂E ∂ L−1 N
= ∑ x[n] − ∑ ai x[n − i] = 0
∂ ai ∂ ai n=0 i=1
26
8 Introduction to CELP Coding
Figure 8.1: The CELP model of speech synthesis (decoder)
For an order N filter, the filter coefficients ai are found by solving the system N × N linear system Ra = r, where
R(0) R(1) · · · R(N − 1)
R(1) R(0) · · · R(N − 2)
R=
.. .. . . ..
. . . .
R(N − 1) R(N − 2) · · · R(0)
R(1)
R(2)
r=
..
.
R(N)
with R(m), the auto-correlation of the signal x[n], computed as:
N−1
R(m) = ∑ x[i]x[i − m]
i=0
Toeplitz, the Levinson-Durbin algorithm can be used, making the solution to the problem O N 2
Because R is Hermitian
instead of O N 3 . Also, it can be proven that all the roots of A(z) are within the unit circle, which means that 1/A(z) is always
stable. This is in theory; in practice because of finite precision, there are two commonly used techniques to make sure we have
a stable filter. First, we multiply R(0) by a number slightly above one (such as 1.0001), which is equivalent to adding noise
to the signal. Also, we can apply a window to the auto-correlation, which is equivalent to filtering in the frequency domain,
reducing sharp resonances.
8.3 Pitch Prediction
During voiced segments, the speech signal is periodic, so it is possible to take advantage of that property by approximating
the excitation signal e[n] by a gain times the past of the excitation:
e[n] ≃ p[n] = β e[n − T ] ,
where T is the pitch period, β is the pitch gain. We call that long-term prediction since the excitation is predicted from e[n − T ]
with T ≫ N.
27
8 Introduction to CELP Coding
30
Speech signal
LPC synthesis filter
Reference shaping
20
10
Response (dB)
0
-10
-20
-30
-40
0 500 1000 1500 2000 2500 3000 3500 4000
Frequency (Hz)
Figure 8.2: Standard noise shaping in CELP. Arbitrary y-axis offset.
8.4 Innovation Codebook
The final excitation e[n] will be the sum of the pitch prediction and an innovation signal c[n] taken from a fixed codebook,
hence the name Code Excited Linear Prediction. The final excitation is given by
e[n] = p[n] + c[n] = β e[n − T ] + c[n] .
The quantization of c[n] is where most of the bits in a CELP codec are allocated. It represents the information that couldn’t
be obtained either from linear prediction or pitch prediction. In the z-domain we can represent the final signal X(z) as
C(z)
X(z) =
A(z) (1 − β z−T )
8.5 Noise Weight W (z) is applied to the error signal in the encoder. In most CELP
codecs, W (z) is a pole-zero weighting filter derived from the linear prediction coefficients (LPC), generally using bandwidth
expansion. Let the spectral envelope be represented by the synthesis filter 1/A(z), CELP codecs typically derive the noise
weighting filter as
A(z/γ1 )
W (z) = , (8.1)
A(z/γ2 )
where γ1 = 0.9 and γ2 = 0.6 in the Speex reference implementation. If a filter A(z) has (complex) poles at pi in the z-plane,
the filter A(z/γ ) will have its poles at p′i = γ pi , making it a flatter version of A(z).
The weighting filter is applied to the error signal used to optimize the codebook search through analysis-by-synthesis
(AbS). This results in a spectral shape of the noise that tends towards 1/W (z). While the simplicity of the model has been
an important reason for the success of CELP, it remains that W (z) is a very rough approximation for the perceptually optimal
noise weighting function. Fig. 8.2 illustrates the noise shaping that results from Eq. 8.1. Throughout this paper, we refer to
W (z) as the noise weighting filter and to 1/W (z) as the noise shaping filter (or curve).
8.6 Analysis-by-Synthesis
One of the main principles behind CELP is called Analysis-by-Synthesis (AbS), meaning that the encoding (analysis) is
performed by perceptually optimising the decoded (synthesis) signal in a closed loop. In theory, the best CELP stream would
28
8 Introduction to CELP Coding.
29
9 Speex narrowband mode
This section looks at how Speex works for narrowband (8 kHz sampling rate) operation. The frame size for this mode is 20 ms,
corresponding to 160 samples. Each frame is also subdivided into 4 sub-frames of 40 samples each.
Also many design decisions were based on the original goals and assumptions:
• Minimizing the amount of information extracted from past frames (for robustness to packet loss)
• Dynamically-selectable codebooks (LSP, pitch and innovation)
• sub-vector fixed (innovation) codebooks
9.1 Whole-Frame Analysis
In narrowband, Speex frames are 20 ms long (160 samples) and are subdivided in 4 sub-frames of 5 ms each (40 samples).
For most narrowband bit-rates (8 kbps and above), the only parameters encoded at the frame level are the Line Spectral Pairs
(LSP) and a global excitation gain g f rame , as shown in Fig. 9.1. All other parameters are encoded at the sub-frame level.
Linear prediction analysis is performed once per frame using an asymmetric Hamming window centered on the fourth sub-
frame. Because linear prediction coefficients (LPC) are not robust to quantization, they are first are converted to line spectral
pairs (LSP). The LSP’s are considered to be associated to the 4th sub-frames and the LSP’s associated to the first 3 sub-frames
are linearly interpolated using the current and previous LSP coefficients. The LSP coefficients and converted back to the LPC
filter Â(z). The non-quantized interpolated filter is denoted A(z) and can be used for the weighting filter W (z) because it does
not need to be available to the decoder.
To make Speex more robust to packet loss, no prediction is applied on the LSP coefficients prior to quantization. The LSPs
are encoded using vector quantizatin (VQ) with 30 bits for higher quality modes and 18 bits for lower quality.
9.2 Sub-Frame Analysis-by-Synthesis
The analysis-by-synthesis (AbS) encoder loop is described in Fig. 9.2. There are three main aspects where Speex significantly
differs from most other CELP codecs. First, while most recent CELP codecs make use of fractional pitch estimation with a
single gain, Speex uses an integer to encode the pitch period, but uses a 3-tap predictor (3 gains). The adaptive codebook
contribution ea [n] can thus be expressed as:
ea [n] = g0 e[n − T − 1] + g1e[n − T ] + g2e[n − T + 1] (9.1)
where g0 , g1 and g2 are the jointly quantized pitch gains and e[n] is the codec excitation memory. It is worth noting that when
the pitch is smaller than the sub-frame size, we repeat the excitation at a period T . For example, when n − T + 1 ≥ 0, we
use n − 2T + 1 instead. In most modes, the pitch period is encoded with 7 bits in the [17, 144] range and the βi coefficients
are vector-quantized using 7 bits at higher bit-rates (15 kbps narrowband and above) and 5 bits at lower bit-rates (11 kbps
narrowband and below).
Figure 9.1: Frame open-loop analysis
30
9 Speex narrowband mode
Figure 9.2: Analysis-by-synthesis closed-loop optimization on a sub-frame.
31
9 Speex narrowband mode
Many current CELP codecs use moving average (MA) prediction to encode the fixed codebook gain. This provides slightly
better coding at the expense of introducing a dependency on previously encoded frames. A second difference is that Speex
encodes the fixed codebook gain as the product of the global excitation gain g f rame with a sub-frame gain corrections gsub f .
This increases robustness to packet loss by eliminating the inter-frame dependency. The sub-frame gain correction is encoded
before the fixed codebook is searched (not closed-loop optimized) and uses between 0 and 3 bits per sub-frame, depending on
the bit-rate.
The third difference is that Speex uses sub-vector quantization of the innovation (fixed codebook) signal instead of an
algebraic codebook. Each sub-frame is divided into sub-vectors of lengths ranging between 5 and 20 samples. Each sub-
vector is chosen from a bitrate-dependent codebook and all sub-vectors are concatenated to form a sub-frame. As an example,
the 3.95 kbps mode uses a sub-vector size of 20 samples with 32 entries in the codebook (5 bits). This means that the
innovation is encoded with 10 bits per sub-frame, or 2000 bps. On the other hand, the 18.2 kbps mode uses a sub-vector size
of 5 samples with 256 entries in the codebook (8 bits), so the innovation uses 64 bits per sub-frame, or 12800 bps.
9.3 Bit allocation
There are 7 different narrowband bit-rates defined for Speex, ranging from 250 bps to 24.6 kbps, although the modes below
5.9 kbps should not be used for speech. The bit-allocation for each mode is detailed in table 9.1. Each frame starts with
the mode ID encoded with 4 bits which allows a range from 0 to 15, though only the first 7 values are used (the others are
reserved). The parameters are listed in the table in the order they are packed in the bit-stream. All frame-based parameters are
packed before sub-frame parameters. The parameters for a certain sub-frame are all packed before the following sub-frame
is packed. Note that the “OL” in the parameter description means that the parameter is an open loop estimation based on the
whole frame.
Parameter Update rate 0 1 2 3 4 5 6 7 8
Wideband bit frame 1 1 1 1 1 1 1 1 1
Mode ID frame 4 4 4 4 4 4 4 4 4
LSP frame 0 18 18 18 18 30 30 30 18
OL pitch frame 0 7 7 0 0 0 0 0 7
OL pitch gain frame 0 4 0 0 0 0 0 0 4
OL Exc gain frame 0 5 5 5 5 5 5 5 5
Fine pitch sub-frame 0 0 0 7 7 7 7 7 0
Pitch gain sub-frame 0 0 5 5 5 7 7 7 0
Innovation gain sub-frame 0 1 0 1 1 3 3 3 0
Innovation VQ sub-frame 0 0 16 20 35 48 64 96 10
Total frame 5 43 119 160 220 300 364 492 79
Table 9.1: Bit allocation for narrowband modes
So far, no MOS (Mean Opinion Score) subjective evaluation has been performed for Speex. In order to give an idea of
the quality achievable with it, table 9.2 presents my own subjective opinion on it. It sould be noted that different people
will perceive the quality differently and that the person that designed the codec often has a bias (one way or another) when
it comes to subjective evaluation. Last thing, it should be noted that for most codecs (including Speex) encoding quality
sometimes varies depending on the input. Note that the complexity is only approximate (within 0.5 mflops and using the lowest
complexity setting). Decoding requires approximately 0.5 mflops in most modes (1 mflops with perceptual enhancement).
9.4 Perceptual enhancement
This section was only valid for version 1.1.12 and earlier. It does not apply to version 1.2-beta1 (and later), for which
the new perceptual enhancement is not yet documented.
This part of the codec only applies to the decoder and can even be changed without affecting inter-operability. For that
reason, the implementation provided and described here should only be considered as a reference implementation. The
enhancement system is divided into two parts. First, the synthesis filter S(z) = 1/A(z) is replaced by an enhanced filter:
A (z/a2 ) A (z/a3 )
S′ (z) =
A (z) A (z/a1 )
32
9 Speex narrowband mode
9 - - - reserved
10 - - - reserved
11 - - - reserved
12 - - - reserved
13 - - - Application-defined, interpreted by callback or skipped
14 - - - Speex in-band signaling
15 - - - Terminator code
Table 9.2: Quality versus bit-rate
where a1 and a2 depend on the mode in use and a3 = 1r 1 − 1−ra 1−ra2 with r = .9. The second part of the enhancement consists
1
of using a comb filter to enhance the pitch in the excitation domain.
33
10 Speex wideband mode (sub-band CELP)
For wideband, the Speex approach uses a quadrature mirror f ilter (QMF) to split the band in two. The 16 kHz signal is thus
divided into two 8 kHz signals, one representing the low band (0-4 kHz), the other the high band (4-8 kHz). The low band is
encoded with the narrowband mode described in section 9 in such a way that the resulting “embedded narrowband bit-stream”
can also be decoded with the narrowband decoder. Since the low band encoding has already been described, only the high
band encoding is described in this section.
10.1 Linear Prediction
The linear prediction part used for the high-band is very similar to what is done for narrowband. The only difference is that
we use only 12 bits to encode the high-band LSP’s using a multi-stage vector quantizer (MSVQ). The first level quantizes the
10 coefficients with 6 bits and the error is then quantized using 6 bits, too.
10.2 Pitch Prediction
That part is easy: there’s no pitch prediction for the high-band. There are two reasons for that. First, there is usually little
harmonic structure in this band (above 4 kHz). Second, it would be very hard to implement since the QMF folds the 4-8 kHz
band into 4-0 kHz (reversing the frequency axis), which means that the location of the harmonics is no longer at multiples of
the fundamental (pitch).
10.3 Excitation Quantization
The high-band excitation is coded in the same way as for narrowband.
10.4 Bit allocation
For the wideband mode, the entire narrowband frame is packed before the high-band is encoded. The narrowband part of the
bit-stream is as defined in table 9.1. The high-band follows, as described in table 10.1. For wideband, the mode ID is the same
as the Speex quality setting and is defined in table 10.2. This also means that a wideband frame may be correctly decoded by
a narrowband decoder with the only caveat that if more than one frame is packed in the same packet, the decoder will need to
skip the high-band parts in order to sync with the bit-stream.
Parameter Update rate 0 1 2 3 4
Wideband bit frame 1 1 1 1 1
Mode ID frame 3 3 3 3 3
LSP frame 0 12 12 12 12
Excitation gain sub-frame 0 5 4 4 4
Excitation VQ sub-frame 0 0 20 40 80
Total frame 4 36 112 192 352
Table 10.1: Bit allocation for high-band in wideband mode
34
10 Speex wideband mode (sub-band CELP)
Mode/Quality Bit-rate (bps) Quality/description
0 3,950 Barely intelligible (mostly for comfort noise)
1 5,750 Very noticeable artifacts/noise, poor intelligibility
2 7,750 Very noticeable artifacts/noise, good intelligibility
3 9,800 Artifacts/noise sometimes annoying
4 12,800 Artifacts/noise usually noticeable
5 16,800 Artifacts/noise sometimes noticeable
6 20,600 Need good headphones to tell the difference
7 23,800 Need good headphones to tell the difference
8 27,800 Hard to tell the difference even with good headphones
9 34,200 Hard to tell the difference even with good headphones
10 42,200 Completely transparent for voice, good quality music
Table 10.2: Quality versus bit-rate for the wideband encoder
35
A).
A.1 sampleenc.c
sampleenc takes a raw 16 bits/sample file, encodes it and outputs a Speex stream to stdout. Note that the packing used is not
compatible with that of speexenc/speexdec.
Listing A.1: Source code for sampleenc
1 #include <speex/speex.h>
2 #include <stdio.h>
3
4 /*The frame size in hardcoded for this sample code but it doesn’t have to be*/
5 #define FRAME_SIZE 160
6 int main(int argc, char **argv)
7 {
8 char *inFile;
9 FILE *fin;
10 short in[FRAME_SIZE];
11 float input[FRAME_SIZE];
12 char cbits[200];
13 int nbBytes;
14 /*Holds the state of the encoder*/
15 void *state;
16 /*Holds bits so they can be read and written to by the Speex routines*/
17 SpeexBits bits;
18 int i, tmp;
19
20 /*Create a new encoder state in narrowband mode*/
21 state = speex_encoder_init(&speex_nb_mode);
22
23 /*Set the quality to 8 (15 kbps)*/
24 tmp=8;
25 speex_encoder_ctl(state, SPEEX_SET_QUALITY, &tmp);
26
27 inFile = argv[1];
28 fin = fopen(inFile, "r");
29
30 /*Initialization of the structure that holds the bits*/
31 speex_bits_init(&bits);
32 while (1)
33 {
34 /*Read a 16 bits/sample audio frame*/
35 fread(in, sizeof(short), FRAME_SIZE, fin);
36 if (feof(fin))
37 break;
38 /*Copy the 16 bits values to float so Speex can work on them*/
36
A Sample code
39 for (i=0;i<FRAME_SIZE;i++)
40 input[i]=in[i];
41
42 /*Flush all the bits in the struct so we can encode a new frame*/
43 speex_bits_reset(&bits);
44
45 /*Encode the frame*/
46 speex_encode(state, input, &bits);
47 /*Copy the bits to an array of char that can be written*/
48 nbBytes = speex_bits_write(&bits, cbits, 200);
49
50 /*Write the size of the frame first. This is what sampledec expects but
51 it’s likely to be different in your own application*/
52 fwrite(&nbBytes, sizeof(int), 1, stdout);
53 /*Write the compressed data*/
54 fwrite(cbits, 1, nbBytes, stdout);
55
56 }
57
58 /*Destroy the encoder state*/
59 speex_encoder_destroy(state);
60 /*Destroy the bit-packing struct*/
61 speex_bits_destroy(&bits);
62 fclose(fin);
63 return 0;
64 }
A.2 sampledec.c
sampledec reads a Speex stream from stdin, decodes it and outputs it to a raw 16 bits/sample file. Note that the packing used
is not compatible with that of speexenc/speexdec.
Listing A.2: Source code for sampledec
1 #include <speex/speex.h>
2 #include <stdio.h>
3
4 /*The frame size in hardcoded for this sample code but it doesn’t have to be*/
5 #define FRAME_SIZE 160
6 int main(int argc, char **argv)
7 {
8 char *outFile;
9 FILE *fout;
10 /*Holds the audio that will be written to file (16 bits per sample)*/
11 short out[FRAME_SIZE];
12 /*Speex handle samples as float, so we need an array of floats*/
13 float output[FRAME_SIZE];
14 char cbits[200];
15 int nbBytes;
16 /*Holds the state of the decoder*/
17 void *state;
18 /*Holds bits so they can be read and written to by the Speex routines*/
19 SpeexBits bits;
20 int i, tmp;
21
22 /*Create a new decoder state in narrowband mode*/
23 state = speex_decoder_init(&speex_nb_mode);
37
A Sample code
24
25 /*Set the perceptual enhancement on*/
26 tmp=1;
27 speex_decoder_ctl(state, SPEEX_SET_ENH, &tmp);
28
29 outFile = argv[1];
30 fout = fopen(outFile, "w");
31
32 /*Initialization of the structure that holds the bits*/
33 speex_bits_init(&bits);
34 while (1)
35 {
36 /*Read the size encoded by sampleenc, this part will likely be
37 different in your application*/
38 fread(&nbBytes, sizeof(int), 1, stdin);
39 fprintf (stderr, "nbBytes: %d\n", nbBytes);
40 if (feof(stdin))
41 break;
42
43 /*Read the "packet" encoded by sampleenc*/
44 fread(cbits, 1, nbBytes, stdin);
45 /*Copy the data into the bit-stream struct*/
46 speex_bits_read_from(&bits, cbits, nbBytes);
47
48 /*Decode the data*/
49 speex_decode(state, &bits, output);
50
51 /*Copy from float to short (16 bits) for output*/
52 for (i=0;i<FRAME_SIZE;i++)
53 out[i]=output[i];
54
55 /*Write the decoded audio to file*/
56 fwrite(out, sizeof(short), FRAME_SIZE, fout);
57 }
58
59 /*Destroy the decoder state*/
60 speex_decoder_destroy(state);
61 /*Destroy the bit-stream truct*/
62 speex_bits_destroy(&bits);
63 fclose(fout);
64 return 0;
65 }
38
B Jitter Buffer for Speex
Listing B.1: Example of using the jitter buffer for Speex packets
1 #include <speex/speex_jitter.h>
2 #include "speex_jitter_buffer.h"
3
4 #ifndef NULL
5 #define NULL 0
6 #endif
7
8
9 void speex_jitter_init(SpeexJitter *jitter, void *decoder, int sampling_rate)
10 {
11 jitter->dec = decoder;
12 speex_decoder_ctl(decoder, SPEEX_GET_FRAME_SIZE, &jitter->frame_size);
13
14 jitter->packets = jitter_buffer_init(jitter->frame_size);
15
16 speex_bits_init(&jitter->current_packet);
17 jitter->valid_bits = 0;
18
19 }
20
21 void speex_jitter_destroy(SpeexJitter *jitter)
22 {
23 jitter_buffer_destroy(jitter->packets);
24 speex_bits_destroy(&jitter->current_packet);
25 }
26
27 void speex_jitter_put(SpeexJitter *jitter, char *packet, int len, int timestamp)
28 {
29 JitterBufferPacket p;
30 p.data = packet;
31 p.len = len;
32 p.timestamp = timestamp;
33 p.span = jitter->frame_size;
34 jitter_buffer_put(jitter->packets, &p);
35 }
36
37 void speex_jitter_get(SpeexJitter *jitter, spx_int16_t *out, int *current_timestamp
)
38 {
39 int i;
40 int ret;
41 spx_int32_t activity;
42 char data[2048];
43 JitterBufferPacket packet;
44 packet.data = data;
45
46 if (jitter->valid_bits)
47 {
39
B Jitter Buffer for Speex
48 /* Try decoding last received packet */
49 ret = speex_decode_int(jitter->dec, &jitter->current_packet, out);
50 if (ret == 0)
51 {
52 jitter_buffer_tick(jitter->packets);
53 return;
54 } else {
55 jitter->valid_bits = 0;
56 }
57 }
58
59 ret = jitter_buffer_get(jitter->packets, &packet, jitter->frame_size, NULL);
60
61 if (ret != JITTER_BUFFER_OK)
62 {
63 /* No packet found */
64
65 /*fprintf (stderr, "lost/late frame\n");*/
66 /*Packet is late or lost*/
67 speex_decode_int(jitter->dec, NULL, out);
68 } else {
69 speex_bits_read_from(&jitter->current_packet, packet.data, packet.len);
70 /* Decode packet */
71 ret = speex_decode_int(jitter->dec, &jitter->current_packet, out);
72 if (ret == 0)
73 {
74 jitter->valid_bits = 1;
75 } else {
76 /* Error while decoding */
77 for (i=0;i<jitter->frame_size;i++)
78 out[i]=0;
79 }
80 }
81 speex_decoder_ctl(jitter->dec, SPEEX_GET_ACTIVITY, &activity);
82 if (activity < 30)
83 jitter_buffer_update_delay(jitter->packets, &packet, NULL);
84 jitter_buffer_tick(jitter->packets);
85 }
86
87 int speex_jitter_get_pointer_timestamp(SpeexJitter *jitter)
88 {
89 return jitter_buffer_get_pointer_timestamp(jitter->packets);
90 }
40
C IETF RTP Profile
AV.
41
C IETF RTP Profile
Herlein, et al. Expires October 24, 2007 [Page 1]
Internet-Draft Speex April 2007
Abstract
Speex is an open-source voice codec suitable for use in Voice over IP
(VoIP) type applications. This document describes the payload format
for Speex generated bit streams within an RTP packet. Also included
here are the necessary details for the use of Speex with the Session
Description Protocol (SDP).
42
C IETF RTP Profile
Herlein, et al. Expires October 24, 2007 [Page 2]
Internet-Draft Speex April. Security Considerations . . . . . . . . . . . . . . . . . . . 14
7. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 15
8. References . . . . . . . . . . . . . . . . . . . . . . . . . . 16
8.1. Normative References . . . . . . . . . . . . . . . . . . . 16
8.2. Informative References . . . . . . . . . . . . . . . . . . 16
Authors’ Addresses . . . . . . . . . . . . . . . . . . . . . . . . 17
Intellectual Property and Copyright Statements . . . . . . . . . . 18
43
C IETF RTP Profile
Herlein, et al. Expires October 24, 2007 [Page 3]
Internet-Draft Speex April
To be compliant with this specification, implementations MUST support
8 kHz sampling rate (narrowband)" and SHOULD support 8 kbps bitrate.
The sampling rate MUST be 8, 16 or 32 kHz.
44
C IETF RTP Profile
Herlein, et al. Expires October 24, 2007 [Page 4]
Internet-Draft Speex April 2007
2. Terminology
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC2119 [RFC2119] and
indicate requirement levels for compliant RTP implementations.
45
C IETF RTP Profile
Herlein, et al. Expires October 24, 2007 [Page 5]
Internet-Draft Speex April 2007
3. RTP usage for Speex
3.1. RTP Speex signaled dynamically out-of-band (e.g., using SDP).
Marker (M) bit: The M bit is set toenc], and present the same sequence to the decoder.
The payload format described here maintains this sequence.
46
C IETF RTP Profile
A typical Speex frame, encoded at the maximum bitrate, is approx. 110
Herlein, et al. Expires October 24, 2007 [Page 6]
Internet-Draft Speex April.
47
C IETF RTP Profile
Speex codecs [speexenc] are able to detect the bitrate from the
Herlein, et al. Expires October 24, 2007 [Page 7]
Internet-Draft Speex April 2007
payload and.. |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
48
C IETF RTP Profile
Herlein, et al. Expires October 24, 2007 [Page 8]
Internet-Draft Speex April :
49
C IETF RTP Profile
Herlein, et al. Expires October 24, 2007 [Page 9]
Internet-Draft Speex April 2007.
50
C IETF RTP Profile
Herlein, et al. Expires October 24, 2007 [Page 10]
Internet-Draft Speex April is typically 8000 for narrow band
operation, 16000 for wide band operation, and 32000 for ultra-wide
band operation.
If for some reason the offerer has bandwidth limitations, the client
may use the "b=" header, as explained in SDP [RFC4566]. The
following example illustrates the case where the offerer cannot
receive more than 10 kbit/s.
m=audio 8088 RTP/AVP 97
b=AS:10
a=rtmap:97 speex/8000
In this case, if the remote part agrees, it should configure its
Speex encoder so that it does not use modes that produce more than 10
kbit/s. Note that the "b=" constraint also applies on all payload
types that may be proposed in the media line ("m=").
An other way to make recommendations to the remote Speex encoder is
to use its specific parameters via the a=fmtp: directive. The
following parameters are defined for use in this way:
ptime: duration of each packet in milliseconds.
sr: actual sample rate in Hz.
ebw: encoding bandwidth - either ’narrow’ or ’wide’ or ’ultra’
(corresponds to nominal 8000, 16000, and 32000 Hz sampling rates).
51
C IETF RTP Profile
Herlein, et al. Expires October 24, 2007 [Page 11]
Internet-Draft Speex April 2007.
mode: Speex encoding mode. Can be {1,2,3,4,5,6,any} defaults to 3
in narrowband, 6 in wide and ultra-wide. ignored and clients MUST use the
default value of 20 instead.
Implementations SHOULD support ptime of 20 msec (i.e. one frame per
packet)
52
C IETF RTP Profile
In the example below the ptime value is set to 40, indicating that
Herlein, et al. Expires October 24, 2007 [Page 12]
Internet-Draft Speex April 2007.
Values of ptime not multiple of 20 msec are meaningless, so the
receiver of such ptime values MUST ignore them. If during the life
of an RTP session the ptime value changes, when there are multiple
Speex frames for example, the SDP value must also reflect the new
value.
Care must be taken when setting the value of ptime so that the RTP
packet size does not exceed the path MTU.
53
C IETF RTP Profile
Herlein, et al. Expires October 24, 2007 [Page 13]
Internet-Draft Speex April 2007.
54
C IETF RTP Profile
Herlein, et al. Expires October 24, 2007 [Page 14]
Internet-Draft Speex April 2007
7.
55
C IETF RTP Profile
Herlein, et al. Expires October 24, 2007 [Page 15]
Internet-Draft Speex April 2007.
8.2. Informative References
[CELP] "CELP, U.S. Federal Standard 1016.", National Technical
Information Service (NTIS) website.
[RFC4288] Freed, N. and J. Klensin, "Media Type Specifications and
Registration Procedures", BCP 13, RFC 4288, December 2005.
[speexenc]
Valin, J., "Speexenc/speexdec, reference command-line
encoder/decoder", Speex website.
56
C IETF RTP Profile
Herlein, et al. Expires October 24, 2007 [Page 16]
Internet-Draft Speex April 2007
Authors’ Addresses
Greg Herlein
2034 Filbert Street
San Francisco, California 94123
United States
Jean-Marc Valin
University of Sherbrooke
Department of Electrical and Computer Engineering
University of Sherbrooke
2500 blvd Universite
Sherbrooke, Quebec J1K 2R1
Canada
Alfred E. Heggestad
Biskop J. Nilssonsgt. 20a
Oslo 0659
Norway
57
C IETF RTP Profile
Herlein, et al. Expires October 24, 2007 [Page 17]
Internet-Draft Speex April).
58
C IETF RTP Profile
Herlein, et al. Expires October 24, 2007 [Page 18]
59
D Speex License
Organisation (CSIRO)
‘‘.
60-
cially. documen-
tation:.
61
E:
•.
• B. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in
the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if
it has less than five)..
62
E GNU Free Documentation.
63
E GNU Free Documentation License.
64
Index
acoustic echo cancellation, 20 variable bit-rate, 7, 8, 17
algorithmic delay, 8 voice activity detection, 8, 17
analysis-by-synthesis, 28
auto-correlation, 27 wideband, 7, 8, 34
average bit-rate, 7, 17
bit-rate, 33, 35
CELP, 6, 26
complexity, 7, 8, 32, 33
constant bit-rate, 7
discontinuous transmission, 8, 17
DTMF, 7
echo cancellation, 20
error weighting, 28
fixed-point, 10
in-band signalling, 18
Levinson-Durbin, 27
libspeex, 6, 15
line spectral pair, 30
linear prediction, 26, 30
mean opinion score, 32
narrowband, 7, 8, 30
Ogg, 24
open-source, 8
patent, 8
perceptual enhancement, 8, 16, 32
pitch, 27
preprocessor, 19
quadrature mirror filter, 34
quality, 7
RTP, 24
sampling rate, 7
speexdec, 14
speexenc, 13
standards, 24
tail length, 20
ultra-wideband, 7
65 | https://www.scribd.com/document/21050124/Speex-Manual | CC-MAIN-2019-09 | refinedweb | 11,712 | 50.06 |
.
At this stage, the JAR file has not been created, but you can nevertheless test the MIDlet suite by selecting an appropriate target device on the KToolbar main window and pressing the Run button. This loads the MIDlet classes, its resources, and any associated libraries from the classes, res, and lib subdirectories. If you select the default gray phone and press the Run button, the emulator starts and displays the list of MIDlets in this suite, as shown in Figure 3-8.
When the MIDlet suite is loaded, the device's application management software displays a list of the MIDlets that it contains and allows you to select the one you want to run. In this case, even though the suite contains only one MIDlet, the list is still displayed, as shown in Figure 3-8. Given the current lack of security for MIDlets imported from external sources, it would be dangerous for the device to run a MIDlet automatically, and, by giving the device user the chance to choose a MIDlet, it allows him the opportunity to decide not to run any of the MIDlets if, for any reason, they are thought to be a security risk or otherwise unsuitable. It is not obvious, though, on what basis such a decision would be made, since the user will see only the MIDlet names at this stage, but requiring the user to confirm that a MIDlet should be run transfers the ultimate responsibility to the user. In this case, the device displays the MIDlet name and its icon (the exclamation mark) as taken from the
MIDlet-1 attribute in the manifest file. The device is not obliged to display an icon, and it may use its own icon in preference to the one specified in the manifest.
When you run the MIDlet suite this way, the Wireless Toolkit compiles the source code with the option set to save debugging information in the class files, and it does not create a JAR file. If you want to create a JAR, you can do so by selecting the Package item from the Project menu. This rebuilds all the class files without debugging enabled, which reduces the size of the class files, a measure intended to keep the time required to download the JAR to a cell phone or PDA as small as possible. It also extracts the content of any JARs or ZIP files it finds in the lib subdirectory and includes them in the MIDlet JAR, after running the preverifier over any class files that it finds in these archives. The JAR can be used, along with the JAD file, to distribute the MIDlet suite for installation into a device over a network, as will be shown in the later section "Delivery and Installation of MIDlets.
Further information on the Wireless Toolkit and other MIDlet development environments can be found in Chapter 9.
Now let's look at the implementation of the
ExampleMIDlet class you have just built and packaged with the Wireless Toolkit. This simple MIDlet demonstrates the lifecycle methods that were described in the earlier section "MIDlet Execution Environment and Lifecycle, and it also illustrates how the MIDlet's foreground activity interacts with background threads, as well as how to create and use timers. The code for this example in shown in Example 3-1. For clarity, the timer-related code has not been included in the code listing; you'll see how that works when timers are discussed later in this chapter.
Example 3-1: A Simple MIDlet
package ora.ch3; import java.util.Timer; import java.util.TimerTask; import javax.microedition.midlet.MIDlet; import javax.microedition.midlet.MIDletStateChangeException; public class ExampleMIDlet extends MIDlet { // Flag to indicate first call to startApp private boolean started = false; // Background thread private Thread thread; // Timer interval private int timerInterval; // Timer private Timer timer; // Task to run via the timer private TimerTask task; // Required public constructor. Can be omitted if nothing to do and no // other constructors are created. public ExampleMIDlet( ) { System.out.println("Constructor executed"); // Get the timer interval from the manifest or JAD file. String interval = getAppProperty("Timer-Interval"); timerInterval = Integer.parseInt(interval); System.out.println("Timer interval is " + interval); } protected void startApp( ) throws MIDletStateChangeException { if (!started) { // First invocation. Create and start a timer. started = true; System.out.println("startApp called for the first time"); startTimer( ); } else { // Resumed after pausing. System.out.println("startApp called following pause"); } // In all cases, start a background thread. synchronized (this) { if (thread == null) { thread = new Thread( ) { public void run( ) { System.out.println("Thread running"); while (thread == this) { try { Thread.sleep(1000); System.out.println("Thread still active"); } catch (InterruptedException ex) { } } System.out.println("Thread terminating"); } }; } }; thread.start( ) } protected void pauseApp( ) { // Called from the timer task to do whatever is necessary to pause // the MIDlet. // Tell the background thread to stop. System.out.println("pauseApp called."); synchronized (this) { if (thread != null) { thread = null; } } } protected void destroyApp(boolean unconditional) throws MIDletStateChangeException { // Called to destroy the MIDlet. System.out.println("destroyApp called - unconditional = " + unconditional); if (thread != null) { Thread bgThread = thread; thread = null; // Signal thread to die try { bgThread.join( ); } catch (InterruptedException ex) { } } stopTimer( ); } // Timer code not shown here }
This simple MIDlet does two things:
The code listing shows the implementation of the MIDlet's constructor and its
startApp( ),
pauseApp( ) and
destroyApp( ) methods. A MIDlet is not required to do anything in its constructor and may instead defer initialization until the
startApp( ) method is executed. In this example, the constructor prints a message so that you can see when it is being executed. It also performs the more useful function of getting the interval for the timer that will be used to change the MIDlet's state. It is appropriate to put this code in the constructor because this value needs to be set only once. The timer value is obtained from the
Timer-Interval attribute that was specified in the settings dialog of the Wireless Toolkit and subsequently written to the JAD file. Here is what the JAD file created for this MIDlet suite actually looks like:
MIDlet-1: ExampleMIDlet, /ora/ch3/icon.png, ora.ch3.ExampleMIDlet
MIDlet-Jar-Size: 100
MIDlet-Jar-URL: Chapter3.jar
MIDlet-Name: Chapter3
MIDlet-Vendor: J2ME in a Nutshell
MIDlet-Version: 1.0
Timer-Interval: 3000
A MIDlet can read the values of its attributes using the following method from the
MIDlet class:
public final String getAppProperty(String name);
This method looks for an attribute with the given name; it looks first in the JAD file, and then, if it was not found there, in the manifest file. Attributes names are case-sensitive and scoped to the MIDlet suite, so every MIDlet in the suite has access to the same set of attributes. The
getAppProperty( ) method can be used to retrieve any attributes in the JAD file or the manifest, so the following line of code returns the name of the MIDlet's suite, in this case
Chapter3:
String suiteName = geAppProperty("MIDlet-Name");
The timer interval for this MIDlet is obtained as follows:
String interval = getAppProperty("Timer-Interval");
timerInterval = Integer.parseInt(interval);
Once the value in the form of a string has been retrieved, the next step is to convert it to an integer by calling the
Integer
parseInt( ) method. If the
Timer-Interval attribute is not included in the JAD file or manifest (or if its name is misspelled),
getAppProperty( ) returns
null, and the
parseInt( ) method throws an exception. A similar thing happens if the attribute value is not a valid integer. Notice that the constructor does not bother to catch either of these exceptions. The main reason for catching an exception is to display some meaningful information to the user and possibly allow recovery, but, strictly speaking, the MIDlet is not allowed to use the user interface in the constructor, so attempting to post a message would not necessarily work. The most appropriate thing to do in a real MIDlet is to install a default value for the timer interval and arrange to notify the user from the
startApp( ) method, when access to the user interface is possible. In this simple example, we allow the exception to be thrown out of the constructor, which causes the MIDlet to be destroyed. Additionally, the version of MIDP in the Wireless Toolkit does, in fact, display the exception on the screen, but vendor implementations are not bound to do so.. | http://www.linuxdevcenter.com/lpt/a/1268 | CC-MAIN-2015-22 | refinedweb | 1,397 | 50.97 |
rake finds & runs task, but doesn't show in --tasks list
Discussion in 'Ruby' started by Hugh J. Devlin,
changing rake task dependencies, runs in wrong orderTrans, Feb 5, 2007, in forum: Ruby
- Replies:
- 0
- Views:
- 111
- Trans
- Feb 5, 2007
Rake and rake aborted! Rake aborted! undefined method `gem' for main:Objectpeppermonkey, Feb 9, 2007, in forum: Ruby
- Replies:
- 1
- Views:
- 268
- Gregory Brown
- Feb 10, 2007
[Rake] call a task of a namespace from an other task.Stéphane Wirtel, Jun 14, 2007, in forum: Ruby
- Replies:
- 3
- Views:
- 411
- Stephane Wirtel
- Jun 15, 2007
rake published rdoc version and arity of Rake::Task#execute - wrongnumber of arguments (0 for 1)James Mead, Jan 15, 2008, in forum: Ruby
- Replies:
- 0
- Views:
- 159
- James Mead
- Jan 15, 2008
Testing Rake tasks with RSpec... and RakeJohn Feminella, Apr 25, 2010, in forum: Ruby
- Replies:
- 0
- Views:
- 256
- John Feminella
- Apr 25, 2010 | http://www.thecodingforums.com/threads/rake-finds-runs-task-but-doesnt-show-in-tasks-list.854189/ | CC-MAIN-2014-52 | refinedweb | 152 | 58.76 |
Syntactic Sugar
Importing a Module
If we don’t want to write the module name
every time we use something from that module,
an alternative is to import the module.
If we import
Protocols.HTTP,
we can use the data type
Query,
and the method
get_url,
without prefixing them with
Protocols.HTTP:
import Protocols.HTTP; void handle_url(string this_url) { write("Fetching URL '" + this_url + "'..."); Query web_page; web_page = get_url(this_url); if (web_page == 0) { write(" Failed.\n"); return; } write(" Done.\n"); } // handle_url
Although you could import lots and lots of modules
for the ease of lazy typing,
this mostly isn’t a recommended practice,
for obvious reasons of clarity and readability.
There are also some non-obvious reasons to refrain from doing imports.
If someone adds the method
write to the module
Protocols.HTTP,
we would call that method instead of the one that writes text to the user.
It also takes longer to start the program,
since Pike must search through all imported modules
to find the methods you use.
Initial Values for Variables
We can give a value to a variable when we define it, so there is no reason to write:
Query web_page; web_page = get_url(this_url);
We change it to this:
Query web_page = get_url(this_url); | http://pike.lysator.liu.se/docs/tut/browser/convenient_syntax.md | CC-MAIN-2017-30 | refinedweb | 205 | 65.01 |
This article as written assumes a sequentially consistent model. In particular, the code relies on specific order of instructions in both Consumer and Producer methods. However, without inserting proper memory barrier instructions, these instructions can be reordered with unpredictable results (see, for example, the classic Double-Checked Locking problem).
Another issue is using the standard std::list<T>. While the article mentions that it is the developer responsibility to check that the reading/writing std::list<T>::iterator is atomic, this turns out to be too restrictive. While gcc/MSVC++2003 has 4-byte iterators, the MSVC++2005 has 8-byte iterators in Release Mode and 12-byte iterators in the Debug Mode.
The solution to prevent this is to use memory barriers/volatile variables. The downloadable code featured at the top of this article has fixed that issue.
Many thanks to Herb Sutter who signaled the issue and helped me fix the code. --P.M.
Queues can be useful in a variety of systems involving data-stream processing. Typically, you have a data source producing datarequests coming to a web server, market feeds, or digital telephony packetsat a variable pace, and you need to process the data as fast as possible so there are no losses. To do this, you can push data into a queue using one thread and process it using a different threada good utilization of resources on multicore processors. One thread inserts data into the queue, and the other reads/deletes elements from the queue. Your main requirement is that a high-rate data burst does not last longer than the system's ability to accumulate data while the consumer thread handles it. The queue you use has to be threadsafe to prevent race conditions when inserting/removing data from multiple threads. For obvious reasons, it is necessary that the queue mutual exclusion mechanism add as little overhead as possible.
In this article, I present a lock-free queue (the source code for the lockfreequeue class is available online; see) in which one thread can write to the queue and another read from itat the same time without any locking.
To do this, the code implements these requirements:
- There is a single writer (Producer) and single reader (Consumer). When you have multiple producers and consumers, you can still use this queue with some external locking. You cannot have multiple producers writing at the same time (or multiple consumers consuming the data simultaneously), but you can have one producer and one consumer (2x threads) accessing the queue at the same time (Responsibility: developer).
- When inserting/erasing to/from an std::list<T>, the iterators for the existing elements must remain valid (Responsibility: library implementor).
- Only one thread modifies the queue; the producer thread both adds/erases elements in the queue (Responsibility: library implementor).
- Beside the underlying std::list<T> used as the container, the lock-free queue class also holds two iterators pointing to the not-yet-consumed range of elements; each is modified by one thread and read by the other (Responsibility: library implementor).
- Reading/writing list<T>::iterator is atomic on the machine upon which you run the application. If they are not on your implementation of STL, you should check whether the raw pointer's operations are atomic. You could easily replace the iterators to be mentioned shortly with raw pointers in the code (Responsibility: machine).
Because I use Standard C++, the code is portable under the aforementioned "machine" assumption:
template <typename T> struct LockFreeQueue { LockFreeQueue(); void Produce(const T& t); bool Consume(T& t); private: typedef std::list<T> TList; TList list; typename TList::iterator iHead, iTail; };
Considering how simple this code is, you might wonder how can it be threadsafe. The magic is due to design, not implementation. Take a look at the implementation of the Produce() and Consume() methods. The Produce() method looks like this:
void Produce(const T& t) { list.push_back(t); iTail = list.end(); list.erase(list.begin(), iHead); }
To understand how this works, mentally separate the data from LockFreeQueue<T> into two groups:
- The list and the iTail iterator, modified by the Produce() method (Producer thread).
- The iHead iterator, modified by the Consume() method (Consumer thread).
-
Produce() is the only method that changes the list (adding new elements and erasing the consumed elements), and it is essential that only one thread ever calls Produce()it's the Producer thread! The iterator (iTail) (only manipulated by the Producer thread) changes it only after a new element is added to the list. This way, when the Consumer thread is reading the iTail element, the new added element is ready to be used. The Consume() method tries to read all the elements between iHead and iTail (excluding both ends).
bool Consume(T& t) { typename TList::iterator iNext = iHead; ++iNext; if (iNext != iTail) { iHead = iNext; t = *iHead; return true; } return false; }
This method reads the elements, but doesn't remove them from the list. Nor does it access the list directly, but through the iterators. They are guaranteed to be valid after std::list<T> is modified, so no matter what the Producer thread does to the list, you are safe to use them.
The std::list<T> maintains an element (pointed to by iHead) that is considered already read. For this algorithm to work even when the queue was just created, I add an empty T() element in the constructor of the LockFreeQueue<T> (see Figure 1):
LockFreeQueue() { list.push_back(T()); iHead = list.begin(); iTail = list.end(); }
Consume() may fail to read an element (and return false). Unlike traditional lock-based queues, this queue works fast when the queue is not empty, but needs an external locking or polling method to wait for data. Sometimes you want to wait if there is no element available in the queue, and avoid returning false. A naive approach to waiting is:
T Consume() { T tmp; while (!Consume(tmp)) ; return tmp; }
This Consume() method will likely heat up one of your CPUs red-hot to 100-percent use if there are no elements in the queue. Nevertheless, this should have good performance when the queue is not empty. However, if you think of it, a queue that's almost never empty is a sign of systemic trouble: It means the consumer is unable to keep pace with the producer, and sooner or later, the system is doomed to die of memory exhaustion. Call this approach NAIVE_POLLING.
A friendlier Consume() function does some pooling and calls some sort of sleep() or yield() function available on your system:
T Consume(int wait_time = 1/*milliseconds*/) { T tmp; while (!Consume(tmp)) { Sleep(wait_time/*milliseconds*/); } return tmp; }
The DoSleep() can be implemented using nanosleep() (POSIX) or Sleep() (Windows), or even better, using boost::thread::sleep(), which abstracts away system-dependent nomenclature. Call this approach SLEEP. Instead of simple polling, you can use more advanced techniques to signal the Consumer thread that a new element is available. I illustrate this in Listing One using a boost::condition variable.
#include <boost/thread.hpp> #include <boost/thread/condition.hpp> #include <boost/thread/xtime.hpp> template <typename T> struct WaitFreeQueue { void Produce(const T& t) { queue.Produce(t); cond.notify_one(); } bool Consume(T& t) { return queue.Consume(t); } T Consume(int wait_time = 1/*milliseconds*/) { T tmp; if (Consume(tmp)) return tmp; // the queue is empty, try again (possible waiting...) boost::mutex::scoped_lock lock(mtx); while (!Consume(tmp)) // line A { boost::xtime t; boost::xtime_get(&t, boost::TIME_UTC); AddMilliseconds(t, wait_time); cond.timed_wait(lock, t); // line B } return tmp; } private: LockFreeQueue<T> queue; boost::condition cond; boost::mutex mtx; };
I used the timed_wait() instead of the simpler wait() to solve a possible deadlock when Produce() is called between line A and line B in Listing One. Then wait() will miss the notify_one() call and have to wait for the next produced element to wake up. If this element never comes (no more produced elements or if the Produce() call actually waits for Consume() to return), there's a deadlock. Call this approach TIME_WAIT.
The lock is still wait-free as long as there are elements in the queue. In this case, the Consumer() thread does no waiting and reads data as fast as possible (even with the Producer() that is inserting new elements). Only when the queue is exhausted does locking occur. | http://www.drdobbs.com/parallel/lock-free-queues/208801974?pgno=1 | CC-MAIN-2014-23 | refinedweb | 1,388 | 53.41 |
Projectile based Weapons
From Valve Developer Community
This article is a stub. You can help by adding to it ().
Introduction
This article explains some of the possibilities for projectile weapons like the crossbow or RPG, instead of the standard guns that hit or miss instantly. All tweaks in this article are based on the crossbow code. The relevant code can all be found in the server side part of the crossbow code. This is found under hl -> source files -> HL2 DLL -> weapon_crossbow.cpp in the solution explorer
Adding spread
Spread can be added rather quickly. The GetAutoAimVector used in other weapons appears to do no good here, so another approach is needed. I basically use the aiming vector converted to angles to change these angles, and then convert it back to the aiming vector. The relevant code is found in the CWeaponCrossbow::FireBolt method, the lines that need to be changed are:
594 - 595
QAngle angAiming; VectorAngles( vecAiming, angAiming );
these can be exchanged with:
QAngle angAiming; VectorAngles( vecAiming, angAiming ); angAiming.x += ((rand() % 250) / 100.0) * (rand() % 2 == 1 ? -1 : 1); angAiming.y += ((rand() % 250) / 100.0) * (rand() % 2 == 1 ? -1 : 1); AngleVectors(angAiming, &vecAiming);
This changes the x and y angles with up to 2.5 degrees in either direction. This actually results in a square spread instead of a cone, but is a good quick-and-dirty solution. To make a conical spread, take the cosine and sine of the x and y coordinates respectively. If anybody knows of a better approach, I would be happy to hear about it.
In order to simplify debugging, you may want to change line 608
m_iClip1--;"
to
//m_iClip1--;"
and line 640
m_flNextPrimaryAttack = m_flNextSecondaryAttack = gpGlobals->curtime + 0.75;
to
m_flNextPrimaryAttack = m_flNextSecondaryAttack = gpGlobals->curtime + 0.15;
This gives you unlimited ammo and a quicker "reload" time.
Obeying gravity
Crossbow bolts do not obey the sv_gravity server variable, which can be changed with the console, or I believe even on a per map basis by the mapper. In order to make this work properly, find the CCrossbowBolt::Spawn method and edit line no 163 (unmodified file):
SetGravity( 0.05f );
change it to:
SetGravity( sv_gravity.GetFloat() / 600);
Since the original value was 0.05, and gravity usually is set to 600 or 800, this means that the new gravity value will be larger, and the bolt will fly in a more pronounced arc. You will have to tweak the value as you see fit, but the bolt should now obey any server side changes in gravity.
In order to make this compile you will need to add
#include "movevars_shared.h"
to the includes in the top of the file, otherwise the sv_gravity variable is not accessible.
Firing multiple bolts (not working yet)
In order to create a shotgun-like effect, it may be desirable to fire multiple randomized bolts at once. The following is an example:
for (int i = -10; i < 11; i += 2) { Vector vecAiming = pOwner->GetAutoaimVector( 0 ); Vector vecSrc = pOwner->Weapon_ShootPosition(); QAngle angAiming; VectorAngles( vecAiming, angAiming ); angAiming.x += ((rand() % 250) / 100.0) * (rand() % 2 == 1 ? -1 : 1); vecSrc.x += i; angAiming.y += ((rand() % 250) / 100.0) * (rand() % 2 == 1 ? -1 : 1); vecSrc.y += i; /*angAiming.z += ((rand() % 350) / 100.0) * (rand() % 2 == 1 ? -1 : 1); vecSrc.z += i;*/ AngleVectors(angAiming, &vecAiming); CCrossbowBolt *pBolt = CCrossbowBolt::BoltCreate( vecSrc, angAiming, pOwner ); if ( pOwner->GetWaterLevel() == 3 ) { pBolt->SetAbsVelocity(vecAiming * BOLT_WATER_VELOCITY ); } else { pBolt->SetAbsVelocity(vecAiming * BOLT_AIR_VELOCITY ); } }
Changing the vecSrc.x and vecSrc.y based on i results in a neat line of crossbow bolts in the instant that the trigger is pressed. However, just a moment later all but the last (or first?) in the line disappear, and only the remaining bolt continues on its trajectory, and it is only this bolt that glows. Changing the the vecSrc.x and vecSrc.y based on random numbers creates a more randomized spawn position for each bolt, and when moving around the crossbow actually shoots 3-5 bolts, but still not all 10 of them, and not all of the time. For now, the solution for this is unknown.
Modding the crossbow so that it has multiple spawn points may help with getting the code to function.
Categories: Programming | Tutorials | Modding | Stubs | http://developer.valvesoftware.com/wiki/Projectile_based_Weapons | crawl-001 | refinedweb | 701 | 59.3 |
Introduction to Groovy Part 3
Join the DZone community and get the full member experience.Join For Free
In this third installment ofIntroduction to Groovy (part 1, part 2) we will continue looking at some features of the Groovy language. Some you may find them on other languages, but some are exclusive to Groovy.
Varargs
The ability for a method to work with a variable number of arguments is relatively new to Java (since Java5 to be precise). But it has been a feature other language, even before Java was born (in languages like C or LiveScript [extra points for those who actually coded with JavaScript's grand daddy]). Before the current branch for Groovy 1.1 gave us support for the three dot notation available in Java5, Groovy followed a convention if you wanted a method to have varargs: the last argument should be declared as Object[]
Let's see an example of that:
Listing 2.1
class Calculator { // follow the convention, if the argument is of type Object[] // then the method call may be used with varargs def addAllGroovy( Object[] args ){ int total = 0 args.each { total += it } total } // Java5 equivalent also available in Groovy 1.1 def addAllJava( int... args ){ int total = 0 args.each { total += it } total } } Calculator c = new Calculator() assert c.addAllGroovy(1,2,3,4,5) == 15 assert c.addAllJava(1,2,3,4,5) == 15
Before I get slammed with hate-mail I know there are better ways to sum numbers in Groovy, this example is just to show varargs in action.
Named parameters
When POGOs were discussed in the last part we saw a way to create bean instances using a combination of key/value pairs, where each key mapped to a property. The trick (if you recall) is that Groovy injects a generic constructor with a Map as parameter. A similar trick may be used to simulate named parameters in any methods. Just declare a Map as parameter and access its keys inside the method
Listing 3.1
class Math { static getSlope( Map args ){ def b = args.b ? args.b : 0 (args.y - b) / args.x } } assert 1 == Math.getSlope( x: 2, y: 2 ) assert 0.5 == Math.getSlope( x: 2, y: 2, b: 1 )
If you remember your math lessons from school, the straight line formula is
y = mx + b where
m is the slope and
b is a constant. When you solve the equation for
m you may find the value for any point in the space (
x,
y) with an optional offset (
b). This technique allows you to communicate in a better way what's the meaning of each parameter, but at the same time sacrifices any attempt to refactor and know which parameters are allowed just by looking at thye method signature. Thus makes the use of proper code comments a critical issue (we all comment our code right?).
Fixed parameters
The name of this section is just a play of words (unfortunately section 2 did not follow the * parameters as it did in the Spanish version) because what I realy want to say is known "currying". Currying is a technique that transforms a function (in our case a closure) into another one while fixing one or more values, meaning that the fixed values remain constant for each invocation of the second function. Think "constant parameters" for the second function.
Listing 4.1
def getSlope = { x, y, b = 0 -> println "x: y: b:" (y - b) / x } assert 1 == getSlope( 2, 2 ) def getSlopeX = getSlope.curry(5) assert 1 == getSlopeX(5) assert 0 == getSlopeX(2.5,2.5) // imprime // x:2 y:2 b:0 // x:5 y:5 b:0 // x:5 y:2.5 b:2.5
First we define a closure with 3 parameters (with dynamic types) and what's more, the parameter
b has a default value of 0, meaning that you may call the closure with just 2 parameters (another feature found in other languages like PHP). After asserting that the implementation of getSlope is accurate we proceed to curry it with x = 5, meaning that all calls to getSlopeX will have x = 5. So if we call getSlopeX(1) y will have 1 as value and b would have 0 as value (remember the default we assigned). And if you call getSlopeX with two parameters, then the second will be b's value.
Revisiting operators
In the second part of this series, we saw how Groovy enables operator overloading. This time we will take a look at other operators you may not see in Java. The first one is known as spaceship operator because if looks like a little UFO (<=>), it is of binary type (accepts two arguments) and executes a comparison, makeing it ideal for sorting algortithms. To enable its use with a type in particular, that class must implement
java.util.Comparable, which has the side effect of enabling the following operators too: <, <=, =>, >.
Listing 5.1
// Number is Comparable assert [1,2,3,4,5,6] == [2,4,6,1,5,3].sort {a,b -> a <=> b }
The second operator is named "spread" and has two variants. The first one allows the contents of a list to be expanded (like a reverse varargs), the second one applyes a method call to all elements in the list (used by GPath, Groovy's version of XPath but for beans, cool!). Let's see them in action:
Listing 5.2
// 1ra variant def calculate = { a, b, c -> "$a,$b,$c" } assert "1,2,3" == calculate( *[1,2,3] ) // 2da variant assert ["01","11","21"] == ["10","11","12"]*.reverse()
The third operator in our list is Elvis (?:) which works like a simplified version of the ternary operator, as it will return the conditional expression if it evaluates to true. If it's, false then it will return the other expression
Listing 5.3
assert "hello" == "hello" ?: "bye" assert "bye" == null ?: "bye"
The next operator should bring memories to those that used to work with the C language, as it allows you to get a reference to a method (in C it was a function pointer). In Groovy 1.1 its also possible to obtain a property reference. This feature is very useful to transform a method into a closure, and enables behavior mixin in a similar way as a Ruby does as you may call that closure from context of an external class (not in the hierarchy of the one that owns the method). Let's see in action with a revised version of Math and let's throw in curry just for fun
Listing 5.4
class Math { static getSlope( x, y, b = 0 ){ (y - b) / x } } // let's grab a reference to the static method def getSlope = Math.&getSlope assert getSlope instanceof Closure assert 1 == getSlope( 2, 2 ) def getSlopeX = getSlope.curry(5) assert 1 == getSlopeX(5) assert 0 == getSlopeX(2.5,2.5)
The last operator is not exactly an operator per se, it is a reserved keyword (
as) which enables Groovy casting, a better way to convert a type into another. It may be overridden/extended like an operator if your class has a method named
asType( Class ). When used along with Closures and maps you may coerce that closure or map into a concrete implementation of an interface (abstract and concrete class coercion supported since Groovy 1.1)
Listing 5.5
import java.awt.event.* import javax.swing.JButton class Person { String name int id } def person = [name: 'Duke', id:1] as Person assert person.class == Person assert person.name == 'Duke' def button = new JButton(label:'OK') button.addActionListener({ event -> assert 'OK' == event.source.label } as ActionListener ) button.doClick()
Conclusion
That's all for the introductory material, I hope the features presented in this three part series have interested you in trying Groovy. In later articles I'd like to talk about Builders and the MOP (Meta Object Protocol), two advanced features that Groovy has to make life easier this side of Java.
Opinions expressed by DZone contributors are their own. | https://dzone.com/articles/introduction-groovy-part-3 | CC-MAIN-2021-43 | refinedweb | 1,343 | 62.88 |
A number whose prime factors are either 2, 3 or 5 is called an Ugly Number. Some of the ugly numbers are: 1, 2, 3, 4, 5, 6, 8, 10, 12, 15, etc.
We have a number N and the task is to find the Nth Ugly number in the sequence of Ugly numbers.
For Example:
Input-1:
N = 5
Output:
5
Explanation:
The 5th ugly number in the sequence of ugly numbers [1, 2, 3, 4, 5, 6, 8, 10, 12, 15] is 5.
Input-2:
N = 7
Output:
8
Explanation:
The 7th ugly number in the sequence of ugly numbers [1, 2, 3, 4, 5, 6, 8, 10, 12, 15] is 8.
A simple approach to solve this problem is to check if the given number is divisible by 2 or 3 or 5 and keep track of the sequence till the given number. Now find if the number satisfies all the condition of an ugly number, then return the number as the output.
public class UglyN { public static boolean isUglyNumber(int num) { boolean x = true; while (num != 1) { if (num % 5 == 0) { num /= 5; } else if (num % 3 == 0) { num /= 3; } // To check if number is divisible by 2 or not else if (num % 2 == 0) { num /= 2; } else { x = false; break; } } return x; } public static int nthUglyNumber(int n) { int i = 1; int count = 1; while (n > count) { i++; if (isUglyNumber(i)) { count++; } } return i; } public static void main(String[] args) { int number = 100; int no = nthUglyNumber(number); System.out.println("The Ugly no. at position " + number + " is " + no); } }
The Ugly no. at position 100 is 1536. | https://www.tutorialspoint.com/find-the-nth-ugly-number-in-java | CC-MAIN-2021-25 | refinedweb | 271 | 66.27 |
Qt on Android layouts all wrong
Using Qt 5.2 or 5.3 (Beta) for exactly the same fixed position widgets, laysout all wrong on Android.
The Widgets are crammed together on the left of the screen, overlapping. The font scales up at double the expected size yet the widgets do not.
Why is this?
font size (pt) is scaled with the DPI, pixel size not, that happens on all system as far as I know.
My solution: scale everything to the DPI independent size manually..
But that defeats the purpose of being cross-platform. it would have the same UI in every environment. I dont have to have 1 UI file for Windows, 1 for Linux, 1 for Android, 1 for Mac an 1 for iOS. That is worthless. There should be a better way.
In any event, how would it ever be DPI independent? Every font is DPI dependent and every widget is DPI independent. Hence a big conflict,
No I didn't mean to create separate UI files for each platform.
You can scale all objects and fonts within the same UI to be platform independent, that is at least what I do in my App and it works great.
I use a little different approach in my App but essentially it is like this example in QML (if you don't use QML you can use the QScreen class):
@
import QtQuick 2.2
import QtQuick.Window 2.0
Rectangle {
readonly property real dip: Screen.pixelDensity / (96 / 25.4) // DPI norm in mm
width: 360*dip height: 360*dip Text { text: "foo bar" font.pixelSize: 14*dip }
}
@
Screen.pixelDensity is in millimeters (mm) and to calculate the DIP-value (device independent pixel) I use a DPI norm (96 DPI on my computer monitor). If you are familiar with Android development something like that is always used in layout files, Andoid uses 160 DPI as a norm and QML has number suffixes so I just multiply every size with the "dip" factor to have the same effect.
In the end this means if you have a 96 DPI screen the size for the Rectangle will be exactly 360 x 360 pixels, but if you have higher DPI values the size in pixels will also increase so the rect will have the same size on all devices and screens.
I hope that gives you some ideas, this is just a simple example but should clarify what I mean?
unless anybody has a better solution for platform independent sizes, I use it like this with QML apps.
This is not QML, this is C++ we are talking about. Ie real Qt Applications.
But if I read this correctly (since I never use QML I cant be sure) this is a style sheet information. Since there is no documentation on applicable styles and parameters, your infomation could be helpful, but is dip a defined value in the stylesheet system?
It seems silly to try and re-layout everything in code rather than the design mode of UI files.
Yeah sorry I was just assuming nobody uses Qt widget layouts on mobile devices, just my opinion but I think it's a lot easier and faster with QML (among other things).
But this doesn't change the fact, you can do the same with "old" widgets, but not in the designer, only if you create the UI via code because .ui form layouts can't have any code attached as far as I know they are just "static" layouts.
I haven't used widget layouts for some time now, but as far as I remember there is a setupUi method in every controller c++ file (or moc file?) and you can apply the device independent scaling there if you need to.
Actually my code above was just an example in pure QML, in my app I use c++ for that like this (excerpt):
@
...
#if defined(Q_OS_ANDROID)
m_dipNorm = 160;
#elif defined(Q_OS_IOS)
m_dipNorm = 160; // TODO test different sizes
#else
m_dipNorm = 96; // desktop norm
#endif
QScreen *screen = qApp->primaryScreen();
m_dipScaleFactor = screen->physicalDotsPerInch() / screen->devicePixelRatio() / m_dipNorm;
@
So I use the m_dipScaleFactor value to scale essentially all sizes for layouts and margins etc. For font sizes I have another value with an extra font scale factor like android apps use dp or dip for sizes and sp or sip for scaleble font sizes.
I just don't know how to best integrate a dynamic factor into widget forms, maybe you have some ideas for that?
By the way one nice effect of this scaling stuff is that you can zoom the whole layout with this, in my app i can zoom in or out as I please, so the complete app changes size and margins, spacing between elements and font size. also font size can be scaled independently without layout size changes. | https://forum.qt.io/topic/39659/qt-on-android-layouts-all-wrong | CC-MAIN-2018-39 | refinedweb | 806 | 70.43 |
Want to know notepad themes? we have a huge selection of notepad themes information on alibabacloud.com
Paip. Improve user experience ----- c ++ gcc command configuration in notepad ++ extension ..Author Attilax, EMAIL: 1466519819@qq.comSource: attilax ColumnAddress:. Create a topic File--------------------D: \ Program Files \ Notepad ++ \ themes \ gcccmd. xmlGlobal styles >>> default style and current line backgrount setting...2. Create a new custom language GCC... file.////////
Notepad ++ Top Notepad ++ is a free and open-source code editor in windows. (I) Automatic code prompt settings Notepad ++ does not enable this function by default. The setting method is as follows: Enable [Preferences]-> [backup and Automatic completion], and press set (Ii) install plug-ins Because notepad ++ is small, it gives it more space for expansion. Its rich plug-in functions make notepad ++ even more powerful. Here I will only introduce two plug-ins. The first is Zen coding an
Automatic completion], and press set (Ii) install plug-ins Because Notepad ++ is small, it gives it more space for expansion. Its rich plug-in functions make Notepad ++ even more powerful. Here I will only introduce two plug-ins. The first is Zen Coding and the second is QuickText. These two plug-ins can be used together to speed up code writing by ten times, it can be said that it is a necessary plug-in for Notepad ++ (at least I need these two plug-ins) (Iii) skin replacement 15 middle skins
Download the theme to TextMate Theme directory, open it with a text editor, copy all the code, paste it into the Theme converter page, then "Download", save; Select Settings--Import in notepad++- > Import the theme, or directly copy to the notepad++ installation directory notepad++\themes; After importing, choose Settings---language formatting, select a new theme. The following two topics are recommended:
SublimeCommon topics are:Pastels on DarkMonokaiZenburnsqueCommon plugins areAnacondaPackage ControlSide BarConvertToUTF8AutoPEP8TerminalSublimereplEmmetJediPackage Control Installation Method: #python 3import ('.
namespace. For example: Iv. applying styles and themes to the UIThere are two ways to set style: For a single view, you can add style attributes to a view element in the XML layout; Or, for the entire activity or application, you can add android:theme attributes to or elements from an android manifest file activity application .When you apply a style to a single layout View , the attributes defined in the style are applied only to t
Document directory Ashford CMS Theme Paintbox CMS MIMBO WordPress Magazine Theme The Morning After Linoluna: Magazine-style Theme for WordPress Channel-Free WordPress theme Magiting Radioactive Free Premium Worpress Theme Wordousel Lite You may also like WordPress exquisite free theme sharing series Full Set Share 20 beautiful WordPress portfolio themes 32 exquisite WordPress 3.0 free themes
Styles and themes in Android app development (Style,themes)More
Php, Notepad: php wrap in Notepad: on the webpage: use it to solve the problem. n does not work to save the data to notepad and use rn. a line break is implemented in Notepad! For ($ distance50; $ distance. $ distance... ($ distance10). lt; t On the webpage: useWe can solve this problem. \ n can't save the data to
The theme (Themes) allows you to maintain a unified style of your site. Of course, you can also specify different theme for individual pages or controls. Think about it, modify the entire site style, only need to modify the theme file can be done, that is how pleasant thing ah. ^_^ If you don't say much, test your test with a simple button Theme, and note that this is a custom theme (named Theme). Choose to add new items, select skin files, click Add,
I've seen a lot of people asking how to change the color and code color of Eclipse themes, and haven't found a good solution. In fact, the best solution I feel is to go to the official website to see it, after all, is first-hand information. But save everyone's time, I share what I see after the official website. Hope everyone just reference on, do not attack ha, after all, I am also rookie 650) this.width=650; "src="
WordPress Learning-themes-001,-themes-001 This article is mainly to record the content of WordPress theme. On why to write their own WordPress theme reasons, I believe we all have their own experience. Want to make your blog more prominent? More personal words? The compilation of WordPress theme is one of the reasons why WordPress is so popular. Because there are at least hundreds of thousands of people wh
Install the Linux Desktop theme and mouse theme-general Linux technology-Linux technology and application information. For more information, see the following. I played with my desktop for a day yesterday, so I will share some practices with you. I am talking about gnome here. You are welcome to add kde and others. Desktop topic: You can use apt-get install or aptitude to install the following two theme packages: Gnome-themes Gnome-
10 flat-style wordpress Themes and wordpress Themes Now the flat design style is really popular. You can see that it is even flat on iOS 7. This article recommends 10 WordPress Themes with very beautiful flat styles. Please note:Nemo metro wordpress theme Nemo is a Metro-style WordPress topic and can be set to a pure black background. A front-end UI framework th
Wordpress learning-themes-001,-themes-001. Wordpress learning-themes-001,-themes-001 is mainly used to record the content of wordpresstheme. I believe everyone has their own themes-001,-themes-001 This article mainly records the
在网页上面:用就可以解决了,\n行不通的把数据保存到记事本用\r\n,在记事本就实现了换行! for ($distance50$distance250$distance50) { echo" ".$distance." ".($distance10)." ";} '). addclass (' pre-numbering '). Hide (); $ (this). addclass (' has-numbering '). Parent (). append ($numbering); for (i = 1; i '). Text (i)); }; $numbering. FadeIn (1700); }); }); The above describes the PHP in Notepad to wrap the problem, including PHP, Not
This article translated from: and theme words are specialized terms that are used directly below and not translated.Styles and Themes (styles and Themes)A style is a collection of view or form (window) properties that contains the specified appearance and formatting. A style can specify a number of attributes, such as height, padding, font color, change the form style; Style is for the level of the form element, changing the s
How to quickly restore the default themes of MyEclipse and the default themes of myeclipse I have found some topic imports on the Internet. However, some topics cannot be restored using the preference option after being imported! What should we do? Is there any other way! I found some answers on the Internet, including ways to change the workspace and ways to replace. settings. You can delete. settings d
Simple and generous wordpress Themes, with themes for source code download and wordpress source code download The cu topic is designed by the crazy uncle. Its concise and generous interface is one of its biggest features. Shoufujun also prefers this topic. During the use process, the subject is optimized according to the personal habits of shoufujun.Title Optimization Center title display Add title | https://topic.alibabacloud.com/zqpop/notepad-themes_166362.html | CC-MAIN-2019-39 | refinedweb | 1,176 | 63.19 |
Ever wonder how Segways work? This tutorial will show you how to build a robot using an Arduino that balances itself — just like a Segway!
How Does Balancing Work?To keep the robot balanced, the motors must counteract the robot falling. This action requires feedback and correcting elements. The feedback element is the MPU6050 gyroscope + accelerometer, which gives both acceleration and rotation in all three axes. The Arduino uses this to know the current orientation of the robot. The correcting element is the motor and wheel combination.
Connection Diagram
Complete Fritzing diagram
Notice the Fritzing diagram above, connect the MPU6050 to the Arduino first and test the connection using the codes in this IMU interfacing tutorial. If data is now displayed on the serial monitor, you're good to go! Proceed to connect the rest of the components as shown above. The L298N module can provide the +5V needed by the Arduino as long as its input voltage is +7V or greater. However, I chose to have separate power sources for the motor and the circuit for isolation. Note that if you are planning to use a supply voltage of more than +12V for the L298N module, you need to remove the jumper just above the +12V input.
Building the Robot
Robot frame (made mostly of acrylic slab) with two geared DC motors
Main circuit board, consisting of an Arduino Nano and MPU6050
L298N motor driver module
Geared DC motor with wheel
The self-balancing robot is essentially an inverted pendulum. It can be balanced better if the center of mass is higher relative to the wheel axles. A higher center of mass means a higher mass moment of inertia, which corresponds to lower angular acceleration (slower fall). This is why I've placed the battery pack on top. The height of the robot, however, was chosen based on the availability of materials.
Completed self-balancing robot. At the top are six Ni-Cd batteries for powering the circuit board. In between the motors is a 9V battery for the motor driver.
More Self-balancing Theories:
- Make Kp, Ki, and Kd equal to zero.
- Adjust Kp. Too little Kp will make the robot fall over, because there's not enough correction. Too much Kp will make the robot go back and forth wildly. A good enough Kp will make the robot go slightly back and forth (or oscillate a little).
- Once the Kp is set, adjust Kd. A good Kd value will lessen the oscillations until the robot is almost steady. Also, the right amount of Kd will keep the robot standing, even if pushed.
- Lastly, set the Ki. The robot will oscillate when turned on, even if the Kp and Kd are set, but will stabilize in time. The correct Ki value will shorten the time it takes for the robot to stabilize.
Arduino Self-balancing Robot CodeI needed four external libraries to make this Arduino self-balancing robot work. The PID library makes it easy to calculate the P, I, and D values. The LMotorController library is used for driving the two motors with the L298N module. The I2Cdev library and MPU6050_6_Axis_MotionApps20 library are for reading data from the MPU6050. You can download the code including the libraries in this repository.
#include <PID_v1.h> #include <LMotorController.h> #include "I2Cdev.h" #include "MPU6050_6Axis_MotionApps20.h" #if I2CDEV_IMPLEMENTATION == I2CDEV_ARDUINO_WIRE #include "Wire.h" #endif #define MIN_ABS_SPEED 20 MPU6050 mpu; //Float gravity; // [x, y, z] gravity vector float ypr[3]; // [yaw, pitch, roll] yaw/pitch/roll container and gravity vector //PID double originalSetpoint = 173; double setpoint = originalSetpoint; double movingAngleOffset = 0.1; double input, output; //adjust these values to fit your own design double Kp = 50; double Kd = 1.4; double Ki = 60; PID pid(&input, &output, &setpoint, Kp, Ki, Kd, DIRECT); double motorSpeedFactorLeft = 0.6; double motorSpeedFactorRight = 0.5; //MOTOR CONTROLLER); volatile bool mpuInterrupt = false; // indicates whether MPU interrupt pin has gone high void dmpDataReady() { mpuInterrupt = true; } void setup() { // join I2C bus (I2Cdev library doesn't do this automatically) #if I2CDEV_IMPLEMENTATION == I2CDEV_ARDUINO_WIRE Wire.begin(); TWBR = 24; // 400kHz I2C clock (200kHz if CPU is 8MHz) #elif I2CDEV_IMPLEMENTATION == I2CDEV_BUILTIN_FASTWIRE Fastwire::setup(400, true); #endif mpu.initialize(); mpu.setDMPEnabled(true); // enable Arduino interrupt detection attachInterrupt(0, dmpDataReady, RISING); mpuIntStatus = mpu.getIntStatus(); // set our DMP Ready flag so the main loop() function knows it's okay to use it dmpReady = true; // get expected DMP packet size for later comparison packetSize = mpu.dmpGetFIFOPacketSize(); //setup PID pid.SetMode(AUTOMATIC); pid.SetSampleTime(10); pid.SetOutputLimits(-255, 255); } else { // ERROR! // 1 = initial memory load failed // 2 = DMP configuration updates failed // (if it's going to break, usually the code will be 1) Serial.print(F("DMP Initialization failed (code ")); Serial.print(devStatus); Serial.println(F(")")); } } void loop() { // if programming failed, don't try to do anything if (!dmpReady) return; // wait for MPU interrupt or extra packet(s) available while (!mpuInterrupt && fifoCount < packetSize) { //no mpu data - performing PID calculations and output to motors pid.Compute(); motorController.move(output, MIN_ABS_SPEED); } //; mpu.dmpGetQuaternion(&q, fifoBuffer); mpu.dmpGetGravity(&gravity, &q); mpu.dmpGetYawPitchRoll(ypr, &q, &gravity); input = ypr[1] * 180/M_PI + 180; } }My Kp, Ki, Kd values may or may not work you. If they don't, then follow the steps outlined above. Notice that the input tilt in my code is set to 173 degrees. You can change this value if you'd like, but take note that this is the tilt angle to which the robot must be maintained. Also, if your motors are too fast, you can adjust the motorSpeedFactorLeft and motorSpeedFactorRight values. | https://maker.pro/arduino/projects/build-arduino-self-balancing-robot/ | CC-MAIN-2021-39 | refinedweb | 920 | 57.87 |
.
6 thoughts on “Solve this puzzle – @memo and dynamic programming”
Hi I love the game and the puzzle+level+high score board layout of your app!
May I ask how you can achieve doing the high score+level+unlock mechanism in django? I would really like to implement something similar with a scrabble game (I have the working game in javascript already).
Could you share your code/templates privately? Or can we working something out together it will be great.
* also I find out a url hack can bypass the lock without stars/paying, please check (I Don’t Pay).
Thank you.
Hi Anzel,
Great to hear you like it, thanks!
For the number of stars each player has on the leaderboard, my current implementation actually calculates this on the fly every time, based on the games played. I save the best number of turns/stars made by each user on each board in a
PlayedGameobject (and delete old games if they are not as good as the latest game). The leaderboard API (view) code is simply:
from django.views.generic import View
....class LeaderboardApi(JSONViewMixin, View):
...."Return the JSON representation of the leaderboard"
....def get(self, request, *args, **kwargs):
........if 'board' in kwargs:
............played = PlayedGame.objects.filter(board__id=kwargs['board'])
........if 'subseries' in kwargs:
............played = PlayedGame.objects.filter(board__subseries_id=kwargs['subseries'])
........if 'series' in kwargs:
............played = PlayedGame.objects.filter(board__subseries__series_id=kwargs['series'])
........lb = played.values('user__username').annotate(stars=Sum('stars')).order_by('-stars')[:20]
........lb = [{ 'name':e['user__username'].title(),
................'stars':e['stars'],
..............} for e in lb if e['user__username']]
........return self.json_response(lb)
An earlier version updated a separate total after each game, so no on-the-fly calculation is required; if I have performance issues one day I may return to that approach. I recalculate it and re-serve every time you change screen in the game, which cannot be optimal! Also, there’s no pagination on the leaderboard yet; one day I’d like to add that. And finally, the medals each player has earned are saved at the time of earning, so they can be picked up in a simple query. I have written a “management” command that re-calculates these comprehensively in case I add or remove a class of medals.
Then there’s the access the player has – which I have not fully implemented, as you worked out (the URL scheme is not too cryptic, is it?!). My plan is each user will have an access level in their user profile, which says which “camps” they can access. This is updated after every played game. Then the API call to show the subseries in a “camp” will only serve them up if you have the right access. I will let the front-end control access to particular subseries (“RED” etc) within a camp, but in theory that could be API-controlled too.
Does that help? Happy to provide more info if you want. Also, if you have a better way, I’d love to know! And let me know where I can play your game when it’s ready…
Great write up and thanks for sharing the info!
I have read your previous posts as well so I guess you’re using django-cms framework? Mine starts with Mezzanine because I need the integration with cartridge OTB.
Alright here is my thought and idea to implement the game:
Page to hold custom content category (say Puzzles) >> grid layout for (Puzzle) probably in 5 difficulties to choose from >> Easy level is Free to play and the rest needs either (Full membership) or Pay to unlock.
You need to sign up a free account to start playing (let say, have 10 Lives to begin with), every time you lose will deduct the live (similar to candy xrush anyway).
when you win 10-15 games in Easy Level you can unlock the next level; and the AI strength becomes stronger.
You pay to add “Lives”.
I will need a global ranking system in place and the usual “highest score” + “number of bingos” etc.
And yes I am going to extend the UserProfile to store all the new parameters, and probably need to tweak a lot on the cartridge models/view to make this work together.
But hey, thanks for your tip on that leaderBoard and I will come back to you if I need help or the scrabble game is published
Best regards.
Hi Anzel,
Sounds good! I am using Django-CMS for the Second Nature Games website, but all the Panguru screens are handled by the client using Angular; Django is only providing the API. By the way, in case you haven’t come across it, you may want to take a look at Meteor too – although you can’t use Python with Meteor.
cheers
Hi Arthur,
It’s been a while, the site prototype was done about a week ago, serving a leaderboard using restframework – and do Ajax call whenever page refreshes.
I looked at both Angular & Ember however I gave up cause I didn’t want to confuse myself with the similar double curly brackets lol. But I do realise how convenient with these Js framework, so thanks for the advice.
Meanwhile CEO changed his mind and now the site will be embedded into another site as a backend and will allow anonymous game too… Meaning I have to rewrite part of the since I have things built with registered user in mind and to update user scores/level during the game… And page view permission etc.
Look forward to your new django post, pretty inspiring
I like this problem. I am curious as to the ‘span’ of the game – what’s the hardest Panguru problem (i.e. largest number of minimum moves)?
Some thoughts on how to brute-force solve this (at the expense of, in the words of George Webster, “dynamiting the trout stream”, using computers to solve a recreational mathematical problem):
There are 15! / (2^7) different positions for the markers (holding a given position of the plates constant).
We could assign a dense ID to each position as follows:
1) The hole can be in 1 of 15 different positions, and thus is numbered from 0..14. Take an accumulator and assign it this number (a = hole_pos).
2) Our first colour is arbitrarily chosen – we have 14×13/2 different configurations, having taken one position away for the hole and allowing /2 as we don’t care about which order the two markers are placed. Find a dense encoding for the 91 different choices of this color, then stick this value into our accumulator:
a += first_color_encoding * 15
3) Repeat this procedure for each of the colors, although the final color isn’t very interesting, as it’s completely determined by the positions of the previous color.
a += second_color_encoding * (15*91)
…
Each time we would have to multiply our encoded value by all the total different possibilities of the lower-order values.
Now we have a procedure to encode a game board into a dense ID. A series of divides/mod operations can reverse the procedure, going from a ID -> game board.
(note – encoding the individual pairs of positions – e.g. (14*13)/2 positions for the first color – isn’t trivial but isn’t that hard)
Given the dense IDs, we could work out from the start position and simply BFS the game to death. You’d need a data structure that can hold all 15!/(2^7) positions, but this is feasible. You probably only need a few bits at each position (a move count + a “last move” field to allow us to trace the move back).
This means about 10.2 billion positions so the important question here is: is this feasible? We would need to sweep this huge structure repeatedly, and each time we find a position at our current BFS level, we need to expand out the positions next moves, which, unfortunately, isn’t straightforward, as we have to load our data structure to see whether there’s already a better move at that position. So we’re going to have 300+ cycle latencies to main memory to do this.
This could be pipelined – we could prefetch all destinations of our current position (or current set of positions) so we’re not waiting around 300 cycles every single time we want to update a position.
Other possibly required optimizations would be to make that giant torrent of integer divide and modulo operations that need to be done to go from a dense id into a series of positions into divide-by-constant operations – which can be done via integer multiply (the compiler will get this automatically). Divides are expensive and if you have to do 7 of them every time you want to translate a dense id into a game board life will be awkward.
Note: there is probably a much smarter way of doing this. | http://racingtadpole.com/blog/puzzle-with-memo-dynamic-programming/ | CC-MAIN-2019-43 | refinedweb | 1,489 | 69.01 |
Ticket #5165 (closed Bugs: fixed)
Error in documentaiton "Using smart pointers with Boost.Intrusive containers"
Description
The example at is erroneous and misleading.
First of all, some declarations about namespaces are missing.
More importantly, the following line
typedef ip::list<shared_memory_data> shm_list_t;
defines shm_list_t in such a way that it uses standard allocators. This means that the list items are created in normal memory, which is not what the example intends.
Even worse, the "Check all the inserted nodes" test fails to detect this problem.
The example is problematic anyway, because it only deals with one process.
The attachment could serve as a starting point for a good and working example. I suspect you won't like the printf's in it....
Attachments
Change History
Changed 5 years ago by DeepThought <deeptho@…>
- attachment shm_list.cc
added
comment:1 Changed 5 years ago by steven_watanabe
- Owner changed from matias to igaztanaga
- Component changed from Documentation to interprocess
starting point for working example | https://svn.boost.org/trac/boost/ticket/5165 | CC-MAIN-2016-22 | refinedweb | 161 | 57.27 |
Components and supplies
Apps and online services
About this project
What if you don't have a Pi or beagle borne board and you still want to setup a server that can be used for Home automation projects, then this is the project for you.
In this project, we will be setting up an arduino server that is powered by the WIZ750SR serial to Ethernet module. Then we will be sending the data commands which are received from arduino(serially) to the node mcu powered home automation device via internet. We will be using a Tenda router for this project, but feel free to use any router you like.
So lets start! First of all this is the list of all the components required :1. Components Required
- Arduino Uno
- WIZ750SR serial to Ethernet module
- NodeMCU
- Tenda Router for connection setup
- Breadboard
- Jumper wires
- Cables to power the Arduino and NodeMCU.
The images of these components are shown below:
So we can now move on to setup our WIZ750SR module.2. Configuration of Module
Now, We need WIZ S2E configuration tool in our laptop or pc to setup our module. Here is the download link for the same:
This is the git link. You can also download the software from WIZnet's official website. Once you have downloaded and installed it, open the tool and set it up following the image below:
We can get our IP by simply going to our command prompt window and typing "ipconfig".
This completes the setup of our module, and hence our server. Now it will be ready for transmission once we have run our code into the arduino.
Now, lets configure our receiving end, i.e. the node MCU.3. Node MCU Configuration
For configuring this, there is a really easy way available to us now. That is using the arduino IDE. Download the additional esp8266 package using the additional package download in arduino ide.
Now, its easy to code for both arduino and node MCU. The codes are attached below. Now we will see how our System will work using a block diagram shown below.
We have used this tenda router as an access point for this project as mentioned above.
So, now you can build your own Ethernet powered Arduino Server for home automation. The code I have given is for simple LED control using this method.
You can utilize this for home automation by using relays with the node mcu. The schematic of the following is shown below,
So, the project is over here, feel free to comment or message me privately if you have any doubt with the project, Thank You!
Code
Node mcu codeC/C++
#include <ESP8266WiFi.h> const char*<button>Turn On </button></a>"); client.println("<a href=\"/LED=OFF\"\"><button>Turn Off </button></a><br />"); client.println("</html>"); delay(1); Serial.println("Client disonnected"); Serial.println(""); }
Arduino codeC/C++
#include <SoftwareSerial.h> SoftwareSerial mySerial(10, 11); //RX,TX int LEDPIN = 13; void setup() { pinMode(LEDPIN, OUTPUT); Serial.begin(9600); // communication with the host computer //while (!Serial) { ; } // Start the software serial for communication with the ESP8266 mySerial.begin(115200); Serial.println(""); Serial.println("Remember to to set Both NL & CR in the serial monitor."); Serial.println("Ready"); Serial.println(""); } void loop() { // listen for communication from the ESP8266 and then write it to the serial monitor if ( mySerial.available() ) { Serial.write( mySerial.read() ); } // listen for user input and send it to the ESP8266 if ( Serial.available() ) { mySerial.write( Serial.read() ); } }
Schematics
Author
Dhairya Parikh
- 5 projects
- 103 followers
Published onJuly 3, 2018
Members who respect this project
you might like | https://create.arduino.cc/projecthub/dhairya-parikh/an-ethernet-powered-server-for-home-automation-0f87ee?ref=tag&ref_id=relay&offset=19 | CC-MAIN-2021-49 | refinedweb | 601 | 57.37 |
With monday.com’s project management tool, you can see what everyone on your team is working in a single glance. Its intuitive dashboards are customizable, so you can create systems that work for you.
public class Person { public string Name { get; set; } public int Age { get; set; } }
public Class People : List<Person> { public static People listFromDataBase() { //Connects to DataAccess Layer to get all people } }
public static void main(string[] args) { People p = People.listFromDataBase(); p = (People) p.FindAll(FilterByAge); foreach(Person per in p) { Console.WriteLine("Name: {0}, Age: {1}", per.Name, per.Age.ToString()); } } private bool FilterByAge(Person p) { if (p.Age > 18) { return true; } else { return false; } }
Unable to cast object of type 'System.Collections.Generi
c.List`1[Pc.List`1[P erson]' to type 'People'.erson]' to type 'People'.
i.e.
Open in new window
If you want to have FindAll return a People object then you will need to override the base class implementation:
Open in new window
For you code to work you would probably need a method that converts a List<Person> to a People object.
DaTribe
Open in new window
this worked perfect!!
I hope that the .NET guys make this posible in the new frameworks | https://www.experts-exchange.com/questions/26621320/Cast-Generic-List-to-Specific-List.html | CC-MAIN-2018-09 | refinedweb | 205 | 66.94 |
On Fri, 16 Nov 2007, Anand Avati wrote: Doesn't get any simpler than this. volume client type protocol/client option transport-type tcp/client option remote-host xxx.xxx.xxx.xxx option remote-subvolume brick1 end-volume Tx's are there to protect the inocent.
Your configuration doesnt seem to be what you want. do it someting like this - volume brick1 .. end-volume volume write-behind ... subvolume brick1 end-volume volume read-ahead ... subvolume write-behind end-volume volume server ... auth.ip.read-ahead.allow * subvolume read-ahead end-volume also can you post your client config? avati 2007/11/16, Chris Johnson <address@hidden>:Ok, hi. I think I'm committing a major blunder here which may be why I'm not seeing better through put. These xlators should be stacked, is that right? I defined the following; volume brick1 type storage/posix option directory /home/sdm1 end-volume volume server type protocol/server subvolumes brick1 option transport-type tcp/server # For TCP/IP transport # option client-volume-filename /etc/glusterfs/glusterfs-client.vol option auth.ip.brick1.allow * end-volume volume writebehind type performance/write-behind option aggregate-size 131072 # in bytes subvolumes brick1 end-volume volume readahead type performance/read-ahead option page-size 65536 ### in bytes option page-count 16 ### memory cache size is page-count x page-size per file subvolumes brick1 end-volume Should I have used the 'server' volume as the subvolume for read-ahead and write-behind in the above? Or should read-ahead and write-behind be between the basic brick and the server volume? Is there a diffrence in performance? I grabbed 5 volumes from the SATA Beast. I think the best way to test this is with the real files and jobs. So it's go for broke and full bore time. If I have two front ends I need I'll need the postix lock deal, the io threader is a must or why bother. If I unify, both front ends need access to the same namespace brick so it has to have locks on it too, yes? Looking at the GlusterFS Translators v1.3 server examples. Why is the io thread xlator so high up in the stack? Would it be better farther down that stack closer to the basic bricks? If not, why not? ------------------------------------------------------------------------------- Chris Johnson |Internet: address@hidden ------------------------------------------------------------------------------- _______________________________________________ Gluster-devel mailing list address@hidden-- It always takes longer than you expect, even when you take into account Hofstadter's Law. -- Hofstadter's Law
------------------------------------------------------------------------------- Chris Johnson |Internet: address@hidden------------------------------------------------------------------------------- Chris Johnson |Internet: address@hidden
Systems Administrator |Web: NMR Center |Voice: 617.726.0949 Mass. General Hospital |FAX: 617.726.7422 149 (2301) 13th Street |For all sad words of tongue or pen, the saddestCharlestown, MA., 02129 USA |are these: "It might have been". John G. Whittier ------------------------------------------------------------------------------- | http://lists.gnu.org/archive/html/gluster-devel/2007-11/msg00090.html | CC-MAIN-2015-27 | refinedweb | 472 | 58.89 |
Wiki Team/Guide/Formatting
For a quick overview of wiki markup see the Wiki Team/Guide/Overview!
Text will show up just as you type it (provided you begin it in the first column). Multiple spaces are compressed, and line endings are ignored (except blank lines).
Use a blank line to start a new paragraph. Multiple blank lines add more vertical space.
Wiki markup code is supposed to be simple and easy to learn. You can also use most HTML markup, if you prefer it, except for links.
Contents
Fonts
Use these to change fonts:
Sections
It is often useful to divide articles into sections and subsections. The following markup can be used. You must begin these on a new line.
An article with four or more headings will automatically create a table of contents. Using HTML heading tags also creates proper section headings and a table of contents entry.
Lists
Wiki markup makes lists fairly easy:
- See also Template:Definition table
- HTML lists are also allowed
- Blank lines should be avoided, they break list numbering and sub-list. Use <br> instead.
Linking
Linking is covered in a separate page: Wiki Team/Guide/Links
Preformatted text
Text which does not begin in the first column of the wiki editing box will be shown indented, in a fixed width font.
For example, this line was preceded by a single space. It preserves whitespace formatting: A B C D E F G H I
A single paragraph of text can be entered with a space in column 1 (the first character position in the wiki editing box) of every line to preserve the raw text format of the entry.
For example, this block of lines was entered with 4 spaces before each line. A B C D E F G H I
The parser tags
<pre> and </pre> may be entered before and after blocks of preformatted text to preserve the raw format of the text. This is suitable for chat transcripts such as Design_Team/Meetings/2009-03-01.
Line wrapping
Line wrapping can be adjusted with the style property white-space (See)
- With this tag and style,
<pre style="white-space:pre-wrap">, whitespace is preserved by the browser. Text will wrap when necessary, and on line breaks.
Code Examples
Raw text is appropriate for showing computer code or command examples, such as this famous little program:
main(){ printf("Hello, World!\n"); }
- Coding format may be embedded in a line of regular text with the <code>
code text</code> wiki syntax.
- Blocks of pre-existing code (often with structured linebreaks) may be preserved by entering a space before
<nowiki>in the first wiki page column as in this example:
<nowiki</nowiki>
The result looks like the following:
Stopping Text flow
See mediawikiwiki:Help:Images#Stopping_the_text_flow.
Tables
Tables are covered in a separate page: Help:Tables
nowiki
Use <nowiki> to insert text that is not parsed by the wiki, e.g.,
- <nowiki>====Heading 4====</nowiki>
will result in ====Heading 4====, rather than the creation of an actual heading, Heading 4
Categories
An important navigational tool in the wiki is the use of categories. It is customary to include one or more category tags to the end of a page to indicate in what topical area the page belongs. For example, this page should be listed under the [[Category:General public]] and [[Category:HowTo]] categories. You can create a new category, if really necessary, simply by including a new category tag, e.g., [[Category:My new category]].
Templates
Another tool that helps make the wiki more readily navigable is the use of templates (Please select the Template namespace in the pull-down menu on the Special:Allpages page to view the current list of available templates). Templates provide a consistent look to pages as well as a shorthand for communicating things like {{Stub}} and {{Requesttranslation}}. Feel free to ask for help (here) regarding creating new templates you may need.
Translation
We have a mechanism for maintaining multilingual versions of pages in the wiki. Please see Translation Team/Wiki Translation for more information about translations. We also recommend the use of the Google machine translation for pages that have yet to be translated by a human expert. To see a Google translation, simply use the dropdown selector in the sidebar. | http://wiki.sugarlabs.org/go/Wiki_Team/Guide/Formatting | CC-MAIN-2017-39 | refinedweb | 716 | 61.77 |
Mar 29, 2011 06:43 PM|SangNaga|LINK
In my application users can write their own 'templates'. That are filled with data from other sources. I could use a simple token replacement, but I would like to be able to use the Razor syntax and engine for rendering the template.
Here is an overly simplified example:
var ViewModel = new SimpleViewModel() { Name = "Dave" }; ViewBag.MyRenderedTemplate = MyHtmlHelper.Render("Hi @Model.Name this is a simple template", ViewModel);
Before you point to the following sources I will clarify what I have learned from them:
RazorEngine (). This code compiles the string into a class, then a dll, then finally reads from a dll - all this to bypass the MVC engine. I don't want to do that because I am in MVC and don't want the additional overhead.
The following blog shows how to do this for one of the MVC 3 pre-releases:
I cannot get it to work with MVC 3 RTM. When I take the exact code (and replace the CshtmlView with a RazorView) I end up with the error: The method or operation is not implemented. It appears to have something to do with @Model.Name, and being unable to find "Name".
A comment in the blog elludes to a statement from ScottGu that the RTM would allow this functionality from the get go. I cannot find this blog.
Any help would be appreciated. Thanks!
Thanks!
render mvc
All-Star
18669 Points
Mar 29, 2011 09:43 PM|CodeHobo|LINK
Take a look at the source for Postal, which is an email library that uses the Razor engine for templating
In particular take a look at the Email Renderer code here:
render mvc
Mar 30, 2011 03:59 PM|SangNaga|LINK
Thanks for the link.. Unfortunately the Postal code IS reading the templates/views from a phyiscal file:
Quote: "Postal will look for email views in ~/Views/Emails/. So our email view is at ~/Views/Emails/Example.cshtml."
Any other ideas?
Mar 30, 2011 04:54 PM|francesco abbruzzese|LINK
I think the simplest way is to repair the code you referred...please post the details of the errors...maybe it is just a method that changed name, or that changed the namespace it is in.
Mar 30, 2011 09:02 PM|SangNaga|LINK
public class StringPathProvider : VirtualPathProvider { public StringPathProvider() : base() { } public override CacheDependency GetCacheDependency(string virtualPath, IEnumerable virtualPathDependencies, DateTime utcStart) { return null; } public override bool FileExists(string virtualPath) { if (virtualPath.StartsWith("/stringviews") || virtualPath.StartsWith("~/stringviews")) return true; return base.FileExists(virtualPath); } public override VirtualFile GetFile(string virtualPath) { if (virtualPath.StartsWith("/stringviews") || virtualPath.StartsWith("~/stringviews")) return new StringVirtualFile(virtualPath); return base.GetFile(virtualPath); } public class StringVirtualFile : System.Web.Hosting.VirtualFile { string path; public StringVirtualFile(string path) : base(path) { //deal with this later this.path = path; } public override System.IO.Stream Open() { return new System.IO.MemoryStream(System.Text.ASCIIEncoding.ASCII.GetBytes(RazorViewEngineRender.strings[System.IO.Path.GetFileName(path)])); } } }<div> </div> <div>RazorViewEngineRender(changes noted)<>
public class RazorViewEngineRender
{
internal static Dictionary<string, string> strings { get; set; }
string guid;
static RazorViewEngineRender()
{
strings = new Dictionary<string, string>();
}
public RazorViewEngineRender(string Template)
{
guid = Guid.NewGuid().ToString() + ".cshtml";
strings.Add(guid, Template);
}
public string Render(object ViewModel)
{
//Register model type
if (ViewModel == null)
{
strings[guid] = "@model System.Web.Mvc.WebViewPage\r\n" + strings[guid]; // Replaces the following line:
//strings[guid] = "@inherits System.Web.Mvc.WebViewPage\r\n" + strings[guid];
}
else
{
strings[guid] = "@model " + ViewModel.GetType().FullName + "\r\n" + strings[guid]; // Replaces the following line:
//strings[guid] = "@inherits System.Web.Mvc.WebViewPage<" + ViewModel.GetType().FullName + ">\r\n" + strings[guid];
}
System.Text.StringBuilder sb = new System.Text.StringBuilder();
System.IO.TextWriter tw = new System.IO.StringWriter(sb);
ControllerContext controller = new ControllerContext();
ViewDataDictionary ViewData = new ViewDataDictionary();
ViewData.Model = ViewModel;
RazorView view = new RazorView(controller, "/stringviews/" + guid, null, false, null); // Replaces the following line:
//CshtmlView view = new CshtmlView("/stringviews/" + guid);
view.Render(new ViewContext(controller, view, ViewData, new TempDataDictionary(), tw), tw);
//view.ExecutePageHierarchy(); // Already commented out.
strings.Remove(guid);
return sb.ToString();
}
}
string Template = "Hello, @Model.Name"; TestViewModel user = new TestViewModel() { Name = "Billy Boy" }; RazorViewEngineRender view = new RazorViewEngineRender(Template); string Results = view.Render(user); //pass in your model</div> <div> </div> <div>I also added the global.asax code to register the virtual path provider.</div> <div> </div> <div> </div> <div>With the "@model ..." line the error is :</div> <div>
Compilation Error Description: An error occurred during the compilation of a resource required to service this request. Please review the following specific error details and modify your source code appropriately. Compiler Error Message: CS0103: The name 'model' does not exist in the current context Source Error: Line 39: public override void Execute() { Line 40: Line 41: Write(model); Line 42: Line 43: WriteLiteral(" Mvc3Template.Controllers.TestViewModel\r\nHello, "); ...</div> <div> </div> <div>With the "@inhertis ..." line only I get:</div> <div></div> <div>
</div> <div></div> <div></div> <div>Thanks for the help!</div> <div></div> <div></div> <div></div> <div> </div> </div></div> <div></div> <div></div> <div>Thanks for the help!</div> <div></div> <div></div> <div></div> <div> </div> </div>
The method or operation is not implemented. Description: An unhandled exception occurred during the execution of the current web request.
Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.NotImplementedException: The method or operation is not implemented.
Source Error:
Line 48: //CshtmlView view = new CshtmlView("/stringviews/" + guid);
Line 49:
Line 50: view.Render(new ViewContext(controller, view, ViewData, new TempDataDictionary(), tw), tw);
Line 51: //view.ExecutePageHierarchy(); // Already commented out.
All-Star
46996 Points
Mar 30, 2011 09:35 PM|bruce (sqlwork.com)|LINK
the mvc engine uses the same code and does the same work, reads a template file, creates a class file, then a dll, then loads the dll, then calls the class.
one of the missing features from .net is the ability to unload a dll. if you are going to use the razor engine for templates, each template is compiled and loaded into the appdomain before running. if you are going to have a lot (100's) of these templates, then you might want to create a separate appdomain you load them into. you can use this appdomain as a cache, and unload it when it gets full.
render mvc
Mar 31, 2011 06:38 AM|imran_ku07|LINK
Not very much efficient but it work for you,
var guid=Guid.NewGuid(); var path = "~/Views/Home/" + guid + ".cshtml"; var fs = System.IO.File.Create(Server.MapPath(path)); var txtWriter=new StreamWriter(fs); var st=new StringWriter(); txtWriter.WriteLine("This is @DateTime.Now right now"); txtWriter.Close(); fs.Close(); var razor=new RazorView(ControllerContext, path, null, false, null); razor.Render(new ViewContext(ControllerContext, razor, new ViewDataDictionary(), new TempDataDictionary(), st), st); System.IO.File.Delete(Server.MapPath(path)); return Content(st.ToString());
Mar 31, 2011 07:02 PM|francesco abbruzzese|LINK
@Imran
Better to put everything in a try statement and the delete operation into a finally.
Otherwise exeptions migh cause temporary files...become permanent.
Anyway, not so bad for efficiency. Maybe the file has no time to be actually stored on disk and it remain in main memory till it is deleted.
However, I don't understand because the code posted doesn't work... it should work. I have not seen any evident error (that for sure exists...)
Apr 01, 2011 02:51 AM|imran_ku07|LINK
francesco abbruzzese
Better to put everything in a try statement and the delete operation into a finally.
Otherwise exeptions migh cause temporary files...become permanent.
You are right exeptions handling is required. This is just a sample code.
francesco abbruzzeseHowever, I don't understand because the code posted doesn't work... it should work. I have not seen any evident error (that for sure exists...)
The only reason it will not work if you have no permission for creating files in your application.
Apr 01, 2011 06:25 PM|francesco abbruzzese|LINK
Apr 02, 2011 07:37 AM|imran_ku07|LINK
francesco abbruzzesewhy the code of the original post doesnt run,
I don't tested BTW there is lot of change from Preview to RTM. This code seems work only in RTM proir release.
Apr 19, 2012 02:23 PM|Malkov|LINK
There is a nice article how to use RazorEngine for that purposes:
How to create a localizable text template engine using RazorEngine:
mvc
11 replies
Last post Apr 19, 2012 02:23 PM by Malkov | http://forums.asp.net/t/1667832.aspx | CC-MAIN-2014-15 | refinedweb | 1,410 | 50.33 |
DVRs, Cable Boxes Top List of Home Energy Hogs
timothy posted more than 3 years ago | from the devil-will-find-work-for-idle-volts dept.
."
How about heating and airconditioning? (3, Interesting)
MichaelSmith (789609) | more than 3 years ago | (#36582190)
Do STBs really use more energy than things which push heat around?
Re:How about heating and airconditioning? (0)
Anonymous Coward | more than 3 years ago | (#36582212)
My cable box doubles as a space heater, so that helps in the winter.
Re:How about heating and airconditioning? (3, Interesting)
EvilRyry (1025309) | more than 3 years ago | (#36582232)
Re:How about heating and airconditioning? (2)
MichaelSmith (789609) | more than 3 years ago | (#36582258)
My fridge uses 140 watts when drawing power. Maybe 100 watts over the course of a day, and its pretty efficient.
Re:How about heating and airconditioning? (1)
Rob the Bold (788862) | more than 3 years ago | (#36582594).
Coils (0)
Anonymous Coward | more than 3 years ago | (#36582756)
Don't forget to clean the coils everyone. At least every 6 months.
Re:How about heating and airconditioning? (5, Insightful)
Smidge204 (605297) | more than 3 years ago | (#36582540)? (1)
Anonymous Coward | more than 3 years ago | (#36582572)
Off topic, but did you ever work at Winnersh Triangle followed by TVP? If you don't know what I mean then ignore this.
Re:How about heating and airconditioning? (4, Informative)
TheThiefMaster (992038) | more than 3 years ago | (#36582890)
Actually you have calculated average watts (which is what is really relevant). Your numbers are "watt hours per hour", cancelling to watts, not watt hours.
Re:How about heating and airconditioning? (2)
Thelasko (1196535) | more than 3 years ago | (#36583002)? (4, Informative)
fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#36582242)
Air conditioning is likely a lot worse; but, because everybody knows that it is extremely energy intensive, thermostatic regulation has been standard since the mechanisms for achieving it were bimetallic, and microproccessor based scheduling systems creep in pretty quickly once you get away from the nastiest of basic window units.
By contrast, it sounds like team STB has somehow managed to miss Every Single Development in computer and embedded device power management in the last decade. Ironically, they've probably even managed to achieve an outcome where Intel muscling in with their x86 (barely) SoC designs would actually be more efficient than highly-integrated task specific media SoCs; because at least they would incorporate their laptop power management techniques more or less for free. Impressive work.
Re:How about heating and airconditioning? (1, Interesting)
chemicaldave (1776600) | more than 3 years ago | (#36582272)
the inefficiency of burning something, converting it to electricity, running that through transmission lines, just to dump it into a big resistor at the other end is a bit much.
Is it any more inefficient than using a fleet of trucks to store that something in peoples' homes and burn it there, i.e. oil?
Re:How about heating and airconditioning? (2)
Phreakiture (547094) | more than 3 years ago | (#36582318)
Yes, it is, actually, and the price reflects this.
Re:How about heating and airconditioning? (5, Informative)
fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#36582358)
Depending on the fuel in use, your heat->mechanical energy conversion will always live in the shadow of that spoil-sport Carnot, along with any engineering limitations. In practice, I'm told that you get something in the vicinity of 30-50 percent(of the fuel at the plant, it still has to be shipped there, though at least bulk shipping is easier, per unit goods, than household delivery). After that, you still have the generator that the turbine is driving, along with the power transmission apparatus.
By contrast, since heat is the desired product, the only 'waste' heat in an onsite burn is whatever goes up with the stack gasses and whatever goes to the delivery truck. At least with oil heat, in the northeast, we had about one delivery a year. Unless the truck managed to burn half its payload getting to us, I suspect that we came out ahead.
Peripheral electrical generation, with heat engines, is something you do only for backup purposes; because small heat engines pretty much inevitably suck more than huge ones; but when all you want is heat, the only real efficiency issues are the engineering problems of cooling the exhaust gasses before they leave the premises.
Re:How about heating and airconditioning? (1)
Anonymous Coward | more than 3 years ago | (#36582454)
Power transmission apparatus losses.
Improvements are on the way but,, Power line losses are very high.
Somewhere around the area of 40 to 50 % input energy is used to keep the grid balanced via to ground power dump at power stations.
Re:How about heating and airconditioning? (3, Insightful)
TheRaven64 (641858) | more than 3 years ago | (#36582404):How about heating and airconditioning? (1)
Shivani1141 (996696) | more than 3 years ago | (#36582966)
Re:How about heating and airconditioning? (1)
drolli (522659) | more than 3 years ago | (#36582424)
Yes, massively.
The Oil has also to be brought to the power plant, and transporting the oil inside the US or Europe is not the largest factor.
Re:How about heating and airconditioning? (0)
Anonymous Coward | more than 3 years ago | (#36582494)
But there is a way to have electric heat that is more efficient than gas heat. Get a heat pump, these show 3 to 4 times heat gain over the electric input. (the warmer outside the more efficient the device and vice versa. On a modern heat pump, (Specifically a lennox xp14 36k btu unit even at 20 below F you get a slight heat gain over pure electric heat, at zero its about 2x and at 55 its about 4.5x.) If you assume a modern combined cycle gas turbine plant of 60% efficency you do better than an 80% gas furnace from around 5 above up. (on an air source unit, ground source by virtue of the higher input temp does much better).
Re:How about heating and airconditioning? (1)
robbak (775424) | more than 3 years ago | (#36582516):How about heating and airconditioning? (1)
Rising Ape (1620461) | more than 3 years ago | (#36582922)
A lot less efficient, yes. Do people use oil a lot for heating in the USA? It's almost always natural gas over here (UK).
Re:How about heating and airconditioning? (1)
petes_PoV (912422) | more than 3 years ago | (#36582910)
The best way to cut your winter heating bills is simply to put on a sweater.
Re:How about heating and airconditioning? (1)
ffejie (779512) | more than 3 years ago | (#36582980):How about heating and airconditioning? (2, Insightful)
Anonymous Coward | more than 3 years ago | (#36582584) they're not the ones paying the power bill.
Just another reason for me to cancel the cable.
Not in use? (2)
NotSoHeavyD3 (1400425) | more than 3 years ago | (#36582216)
The set-top boxes are energy hogs mostly because their drives, tuners and other components are running full tilt, 24 hours a day, even when not in active use..)
Re:Not in use? (3, Insightful)
WrongSizeGlass (838941) | more than 3 years ago | (#36582234).)? (4, Insightful)
chemicaldave (1776600) | more than 3 years ago | (#36582300):Not in use? (1)
Anonymous Coward | more than 3 years ago | (#36582384)
"yeah but this linux embedded solution from 2001 we built this crap over has crappy support for that"
Re:Not in use? (0, Troll)
elrous0 (869638) | more than 3 years ago | (#36582816).
Re:Not in use? (1)
Jumpin' Jon (731892) | more than 3 years ago | (#36582312)
While.
Re:Not in use? (-1)
Anonymous Coward | more than 3 years ago | (#36582460)
Re:Not in use? (2)
delinear (991444) | more than 3 years ago | (#36582588)
Re:Not in use? (1)
petermgreen (876956) | more than 3 years ago | (#36582794) support things like automatic firmware upgrades, remote record, anytime etc. You can pu them into a deeper off mode by holding the power button (and sometimes you have to because they crash) but they take annoyingly long to start up again from that mode and of course they can't wake up from that state to record stuff which kind of defeats the object of a sky+ box.
Fuck Americans.
The fact he talks about SKY+ means he is almost certainly not an american or at least not living in american right now.
Re:Not in use? (3, Insightful)
gbjbaanb (229885) | more than 3 years ago | (#36582316):Not in use? (1)
PhrostyMcByte (589271) | more than 3 years ago | (#36582326) being very hot even when I'm not using them. So I guess the problem applies to more than just DVRs and cable boxes.
Re:Not in use? (1)
delinear (991444) | more than 3 years ago | (#36582670)
Re:Not in use? (1)
Eivind (15695) | more than 3 years ago | (#36582360) to power up only once an hour on playback.
mp3-players with spinning discs have been doing this for a decade, because the consumer actually -cares- about energy-consumption on battery-powered devices. (he cares about how long the battery holds), a old-generation ipod, for example, will read several songs into a ram-buffer, then power down the disc for something like 10-15 minutes before the buffer runs low. No reason PVR-boxes couldn't do the same.
Re:Not in use? (1)
delinear (991444) | more than 3 years ago | (#36582730)
Re:Not in use? (1)
swb (14022) | more than 3 years ago | (#36583010) the power savings would be, but you might even copy programs from flash to HDD in the background as the program is started and spin down the HDD once completed and pretty much never use the HDD.
Won't work (2)
tkrotchko (124118) | more than 3 years ago | (#36582786)
So if its 8:15 and the person turns on the TV, their expectation would be that they could go back in time 15 minutes to catch the show from the beginning.
They'd be better off designing more efficient components, particularly power supplies.
Re:Not in use? (0)
Anonymous Coward | more than 3 years ago | (#36582416)
Is your cell phone running on full all the time? Hope not if you keep your cell phone in your front pants pocket and you plan to have kids.
I think we have reached a level that devices that turn themselves on and off or go into sleep mode are a reality... cell phones, PCs, TVs...
You just need a few pieces of circuitry running a counter and the instructions to wake the rest of the thing up.
Re:Not in use? (2)
MindStalker (22827) | more than 3 years ago | (#36582488).
Probably because it makes it more complicated. (2)
Dr_Barnowl (709838) | more than 3 years ago | (#36582228):Probably because it makes it more complicated. (2)
edumacator (910819) | more than 3 years ago | (#36582336) it for the energy savings of their costumers, then maybe one of the will make the shift, and get to put a big green sticker on the front saying their box is "green". That would bump their sales enough to offset the cost.
Re:Probably because it makes it more complicated. (5, Insightful)
necro81 (917438) | more than 3 years ago | (#36582418)
Re:Probably because it makes it more complicated. (2)
frostfreek (647009) | more than 3 years ago | (#36582952)
.. wake up (yet I could do it manually!), so I had to revert to an older kernel.
After seeing this article, I am glad I went through the hassle!
Re:Probably because it makes it more complicated. (1)
TheRaven64 (641858) | more than 3 years ago | (#36582450)
Re:Probably because it makes it more complicated. (1)
MrQuacker (1938262) | more than 3 years ago | (#36582506)
In the USA we have something like that for large appliances, like fridges. The sticker on the front shows Kwh used per year, and estimated cost based on a range of electrical prices.
Re:Probably because it makes it more complicated. (0)
Anonymous Coward | more than 3 years ago | (#36582374)
Call my cynical...
I'd like to that - what's her number?
Re:Probably because it makes it more complicated. (0)
Anonymous Coward | more than 3 years ago | (#36583006)
I want my MythTV box to do this, maybe it is better in the newer versions.
The same with EyeTV. They need to be able to turn on a powered down computer, boot into the correct OS, record, and then turn the computer off.
My Mythbuntu computer uses 85W in standby, my MacBook Pro uses 17-35W.
Turn the damn thing off (1)
coinreturn (617535) | more than 3 years ago | (#36582240)
Waiting to program while you are away is not an excuse to hog power. Only a wake-up function is required when the box is not actively recording.
Re:Turn the damn thing off (1)
Dr_Barnowl (709838) | more than 3 years ago | (#36582282)
Yes, I get this. Most annoying.
Since I'm on MythTV I suppose the solution to this is to just put some cron jobs on it that cancel live TV playback during school hours.
Re:Turn the damn thing off (2)
edumacator (910819) | more than 3 years ago | (#36582366):Turn the damn thing off (1)
delinear (991444) | more than 3 years ago | (#36582790) shows if they really care about missing stuff, so it can't be the "entertainment value" of TV that's causing them to stay home).
Re:Turn the damn thing off (1)
jedidiah (1196) | more than 3 years ago | (#36582370) of unpredictable human behaivor. The STB, not so much.
Re:Turn the damn thing off (2)
Noose For A Neck (610324) | more than 3 years ago | (#36582338)
Re:Turn the damn thing off (1)
GigaHurtsMyRobot (1143329) | more than 3 years ago | (#36582472)
Re:Turn the damn thing off (3, Informative)
bzipitidoo (647217) | more than 3 years ago | (#36582888)
Don't blame the users. More than half the blame lies on those boxes. They're practically full blown computers complete with hard drives and long boot up times of over a minute--- and almost no power management, and that's definitely not the fault of the users. Linux can be booted in 5 seconds, and could be made even faster with things such as the ancient technology known as ROM. No excuse for boxes taking so long to boot, and dodging the problem by just having it always stay on. Long ago, we were introduced to the "Power" button to get around the requirement that "Off" means off, with VCRs that would lose all their programming whenever power was interrupted. The industry has completely punted on this issue.
We could have had a standard for sensing the state of connected hardware so that if the TV is off, and no recording is being made, the box will sleep. Actually, we do have that, but the boxes can just ignore it. Or perhaps we could have more integration, with set top box functionality built into the TV. There are a whole lot of things that could have been done. Lot of cabling is still carrying analog signals. Instead, a top priority in the design of things like HDMI was that users should have to burn even more power on useless anti-piracy measures, such as HDCP.
I have a very simple solution. I don't have cable TV. Saves me a bundle.
Re:Turn the damn thing off (2)
phlobus (103053) | more than 3 years ago | (#36582926) (2)
satuon (1822492) | more than 3 years ago | (#36582244):This is a hidden price (1)
drinkypoo (153816) | more than 3 years ago | (#36582280)! (2)
yoghurt (2090) | more than 3 years ago | (#36582310):This is a hidden price (1)
Eivind (15695) | more than 3 years ago | (#36582396):This is a hidden price (2)
toonces33 (841696) | more than 3 years ago | (#36582690) weeks, so it will be a while before I have good data..
Re:This is a hidden price (1)
oh_my_080980980 (773867) | more than 3 years ago | (#36582498)
Stop blaming the consumer/homer owner! Makes you look like a total douche bag.
Meanwhile near the North Pole (0)
Anonymous Coward | more than 3 years ago | (#36582248)
One of the nice side effects of living in Finland is that the use of home appliances is free 9 months of the year. My house is heated with electric radiators; it doesn't matter how the electricity is converted into heat. The officials ran some tests to see if it mattered how optimally light bulbs etc were placed for heating but it turned out it made virtually no difference.
Just keep the curtains closed to convert light into heat.
Re:Meanwhile near the North Pole (1)
AK Marc (707885) | more than 3 years ago | (#36582502)
Re:Meanwhile near the North Pole (1)
frostfreek (647009) | more than 3 years ago | (#36582982)
Perhaps it is hard to pump heat out of permafrost?
Re:Meanwhile near the North Pole (1)
Hatta (162192) | more than 3 years ago | (#36582710)
I guess that's some consolation for having to remain inside 9 months of the year, going without sun for 3 months, and alcoholism so rampant that it's the number one cause of death for Finnish men.
The only efficiency the US understands (-1)
Anonymous Coward | more than 3 years ago | (#36582268)
is the efficient transfer of tax payer dollars to private corporations.
Consumer Choice (5, Insightful)
dasdrewid (653176) | more than 3 years ago | (#36582270)
Re:Consumer Choice (1)
toonces33 (841696) | more than 3 years ago | (#36582714)
The only thing they will listen to is if people were to turn in their boxes and tell the companies why when they send the box back..
If I get a spare moment, I might put in a service call to DirecTV for fun - just to complain that the box draws too much electricity and is throwing off heat. I could act dumb and claim that I was worried about a fire or some such.
...and I think they're right (1)
jawtheshark (198669) | more than 3 years ago | (#36582278)
I have one such set-top box. I needed one because I still have a CRT (Which I turn OFF when not in use, with the big button in front) and my cable is all-digital. It's a Technisat DigitCorder K2 and it's a frigging piece of crap. They should fire all programmers that worked on it especially the UI team. Regardless, I am scared to turn it off, so I don't. Why? Because sometimes it simply doesn't want to boot up again.
The other reason is that I have to set my TV to "EXT1" to use it, which means the you shouldn't use the remote of the TV except for volume control (The digicorder only knows "silent" and "not very loud"). Now, I know this, and you probably don't have a problem with this, but expaining these technicalities to my wife doesn't work. So, I say "use this remote", which is the one of the TechniSat and use that.
So, it's on "full-power", 24/7 because I really don't want TV-support calls while I'm at work.
Re:...and I think they're right (0)
Anonymous Coward | more than 3 years ago | (#36582428)
You may want to look at what the CRT uses in standby. My new LCD-LED tv uses less than my old CRT in standby when it is on.
I was surprised at what many of the newer TV's use for power which is why I spent a little more and got the LCD-LED. It trimmed a nice chunk off my bill.
It was about 500 more than non LED but the power usage was nearly half. Then there is plasma, many of those use more than my old tv in standby. It is also fairly easy to find out. But it takes a bit of work. You get a list of TV's with the features you want. Then download all of the manuals. The power consumption is in the back. Some websites have it but are not always accurate.
The TV I had was from 1998. Not exactly 'old'. It used 75w standby about 240 on. The new one uses less than a watt standby and about 75 on, less if I use the dimming feature.
Re:...and I think they're right (1)
AK Marc (707885) | more than 3 years ago | (#36582534)
Re:...and I think they're right (1)
delinear (991444) | more than 3 years ago | (#36582478)
DVR boxes are evil (3, Insightful)
gemtech (645045) | more than 3 years ago | (#36582298)
-:DVR boxes are evil (1)
grodzix (1235802) | more than 3 years ago | (#36582368)
Trolling causes energy use. (0)
Anonymous Coward | more than 3 years ago | (#36582302)
Imagine the power consumption of trolls keeping their computers on all day looking for websites to post goatse links to
Lack of consumer pressure makes sense. (2)
geekmux (1040042) | more than 3 years ago | (#36582354)
"..? (5, Insightful)
kuhnto (1904624) | more than 3 years ago | (#36582356)
No real power button. (1)
jedidiah (1196) | more than 3 years ago | (#36582406) TV or STB in order to ensure they are not drawing power (or generating noise). Most consumers simply don't care.
The only way anything will change is if there's some sort of nanny state approach taken where the consumer doesn't have to take any responsibility at all.
Re:No real power button. (0)
Anonymous Coward | more than 3 years ago | (#36582648)
The power button on my BellTV PVR box does exactly this: turns off the green LED. It's a bastard hog.
Many devices draw lots of power. Duh! (0)
Anonymous Coward | more than 3 years ago | (#36582412)
If I were to have a thousand devices that draw 1W, that would become the single largest permanent power drain in my home.
Is this actually a story? Who cares about a constant 25-50W power drain? Sure, it would be nice if the STBs powered some stuff down, but it's not going to change the world now is it?
Another terrible piece of sensationalist writing.
45 cents per kwh (1)
Anonymous Coward | more than 3 years ago | (#36582430)
Wow, electricity prices in the US must be insane. Electric cars will never work for you.
275 / 365 / 24 = 31 Watts for the DVR in their chart. Which makes sense.
$10 / 30 / 24 = 1.39 cents per hour
(1000 / 31) * 1.39 = 44.8 cents per kw/h. Holy shit, that's insane (had to repeat it twice). I'm in Ontario and pay 6.8 cents (7.9 cents as a heavy user) per kwh.
I had heard that in some states it has gone as high as 15 cents per kwh. 45 cents, though? WOWOWOWOW!!!!!!111!!
Re:45 cents per kwh (1)
Anonymous Coward | more than 3 years ago | (#36582620)
AWesome system. Yeah, the free market is awesome. It figures everything out.
Re:45 cents per kwh (1)
Anonymous Coward | more than 3 years ago | (#36582642)
I you read the article again you will notice that $10 is for people who have many devices, and a combination of set-top box and DVR at 446 kWh/year.
So that gives a max. cost of about
$ 120 / (4*446 kWh) ~= 6.72 cents/kWh
Use a power strip (1)
Anonymous Coward | more than 3 years ago | (#36582458)
Put the cable box and the TV (and game systems) on the power strip. When you aren't there using them, turn it off.
Re:Use a power strip (1)
ThinkWeak (958195) | more than 3 years ago | (#36582904)
multi-room DVR (1)
grumling (94709) | more than 3 years ago | (#36582480).
Piracy: The Green Thing to Do (1)
Anonymous Coward | more than 3 years ago | (#36582604)
Piracy: Cheaper, more convenient, and more environmentally conscious. No packaging, no delivery to the store, no marketing materials to be printed, and saves electricity.
This is a pet peeve of mine. (1)
toonces33 (841696) | more than 3 years ago | (#36582652)
We have a satellite system, and some of the boxes use ~40W 24x7. Doesn't matter if you turn it on or off - the only thing that changes is the little light on the front goes off. My first clue that this was an energy hog was to see how much heat the thing was throwing off.
I asked/complained about this and got a number of explanations/excuses. The number one was that the box needs to keep the guide uptodate, but there has to be a way to handle this function without the whole thing running at full tilt. Many such boxes are now connected to the internet anyways, and thus could simply download the guide on-demand when powered up and not need to wait for
Some people put these things on power strips so they can power them off. Back when I had digital cable, I did this, but that box only took a minute or so to boot up. But the satellite boxes take over 5 minutes to boot up for reasons that are far from clear.
My view is fundamentally this. The cable/satellite companies aren't the ones paying the power bills, and thus they have no incentive to reduce the power consumption. The end users pay the power bill, but they get very little choices in terms of the boxes, and no ability to configure the thing to go into "deep sleep" mode. Even if a lot of people were to complain I imagine that they wouldn't do much about it - my only hope is in 2013 when the new EnergyStar standards go into effect.
Hidden? (1)
alphatel (1450715) | more than 3 years ago | (#36582682)
Central Recorder + 3 playback only devices (1)
Anonymous Coward | more than 3 years ago | (#36582692)
Instead of having 2 or 3 DVRs, setup a central DVR that does all the recording for you. Then use specific, diskless, playback devices in the rooms with TVs. These are $40-$80 ea. Turning them off when you don't need them is trivial. Even when powered on, they use 5W of power. That central system probably needs to be on 24/7, but since it runs a full OS (Windows or Linux), you can spin down disks and use a $9, low power, video card. If standby actually works for your OS of choice, you can save even more power and be under 1W in that mode.
Avoid the cable box and cable DVR. Build your own.
I have 4 physical desktops acting as servers here in a 2900 sq ft home in the south. Electricity costs are about $850/yr or $75/month. That includes running HDTVs and multiple central A/Cs. I honestly do not see the big deal. Perhaps if I lived in California or other states where government and activists have screwed with power generation, I'd be paying $5000/yr. I don't know.
Most months, the bill is around $50, but for the 4 summer months, it is significantly higher due to A/C costs. That's with 4 physical PC systems running 10 VMs each, HDTV, TiVo, laptop, A/C, fridge, microwave ovens, routers, switches, UPSes, 4 ceiling fans running 24/7 and a few diskless HDTV playback devices (WD TV Live HD+). There are external disk arrays, external USB/eSATA drives too. Lots of battery chargers constantly working on Lithium-ion batteries and clocks with laser pointers displaying time on walls in 3 bdr. Ah, and dual 24" computer monitors that are never turned off. But nobody uses a hair dryer here.
;)
I'm not completely power-use agnostic. All my computers have 80% efficient PSUs, but that is more about being cooler and having less noise than power efficiency. Also, none of my current video cards require external power. Only bus-power is used, but I don't game. A GeForce GT 430 is the most powerful GPU here.
you need the box for VOD and SDV system need add o (1)
Joe_Dragon (2206452) | more than 3 years ago | (#36582840)
you need the box for VOD and SDV cable system need a add on tuner as well.
any ways even tru2way tv uses like 40W when off vs 1w-5w when not in tru2way mode.
Tivo, this means you (0)
flibbidyfloo (451053) | more than 3 years ago | (#36582698)
I always thought it was silly that our Tivos were difficult to turn off and instead designed to run constantly, recording two shows I'm not interested in watching, 24 hours a day.
Our new DirecTV DVRs have an "off" button on the remote that puts them in standby at least, so they only wake up to record shows I've asked it to.
Low power usage is easy (1)
Synn (6288) | more than 3 years ago | (#36582742).
Re:Low power usage is easy (3, Interesting)
GlobalEcho (26240) | more than 3 years ago | (#36583008)
While true, 20W running all day every day still comes to 1226 kWH per year, which is 2.75 times as much as the set-top box discussed in the article. Your Wifi link alone, at 8 watts, draws more power per year (490 kWH).
Those numbers surprise me, and make think there must be a lot of lower-hanging fruit around the average household.
Belgian cable providers are upgrading (0)
Anonymous Coward | more than 3 years ago | (#36582862)
FYI
Last month the Belgian cable provider Telenet announced new setup boxes. One of the (many) new features is less power consumption when idle. Only new customers or customers that switch to the new "Fiber" subscriptions (same monthly fee ) get the upgraded machines.
If an upgrade is possible in a small country like Belgium why shouldn't it be possible in/with bigger countries/cable providers?
HDs won't sleep in Linux (1)
dargaud (518470) | more than 3 years ago | (#36582866)
Until such issues can be diagnosed easily and dealt with, it's going to be hard to create energy efficient appliances. | http://beta.slashdot.org/story/153874 | CC-MAIN-2014-41 | refinedweb | 4,994 | 79.6 |
QLALR - QParser example crashes at startup on Visual Studio 2008
Unfortunately I have to parse a language (JScript). I started to study how to use Flex knowing that Qt is providing a useful tool called QLALR in order to generate parser.
In \util\qlalr\examples\qparser I found a very interesting example integrating a flex generated scanner and qlalr generated parser. Also, from what I understood (but it couldn't be true...) the example introduces a useful QParser interface to avoid to rewrite parse function. In order to compile it I had to download a unistd.h for windows from. It is in the directory include. In any case is nothing more than:
@/*
This file is part of the Mingw32 package.
unistd.h maps (roughly) to io.h
*/
#ifndef STRICT_ANSI
#include <io.h>
#include <process.h>
#endif@
Using this file I was able to compile the qparser example but when I run it I get a crash before entering in main function with error message: The program '[3736] qparser.exe: Native' has exited with code 2 (0x2).
I'm getting also the following warning messages:
@1>lex.calc.c(903) : warning C4003: not enough actual parameters for macro 'calcwrap'
1>lex.calc.c(1056) : warning C4018: '<' : signed/unsigned mismatch
1>lex.calc.c(1238) : warning C4003: not enough actual parameters for macro 'calcwrap'
1>lex.calc.c(1402) : warning C4996: 'isatty': The POSIX name for this item is deprecated. Instead, use the ISO C++ conformant name: _isatty. See online help for details.
1> c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\io.h(312) : see declaration of 'isatty'
1>lex.calc.c(1402) : warning C4996: 'fileno': The POSIX name for this item is deprecated. Instead, use the ISO C++ conformant name: _fileno. See online help for details.
1> c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\stdio.h(722) : see declaration of 'fileno'
1>Linking...
1>LINK : debug
qparser.exe not found or not built by the last incremental link; performing full link@
I'm using Visual Studio 2008 on Windows 7 and Qt 4.8.0 compiled from the same package where I got QLALR.
Some suggestions?
Thanks
Guido Ranzuglia
You should not mix headers (unistd.h) from MinGW with Visual Studio. I don't know where the get a decent version for MSVS, unfortunately.
Thanks for the suggestion!
Ok I resolved the thing by myself.
The problem is that the provided .pro has not included the win32-msvc2008:CONFIG += console directive.
I suggest you, also, to add an %option nounistd at the beginning of the calc.l and an #include<io.h> in order to use the atty function provided in the default include directory. | https://forum.qt.io/topic/13047/qlalr-qparser-example-crashes-at-startup-on-visual-studio-2008 | CC-MAIN-2018-39 | refinedweb | 453 | 61.73 |
pcap − Packet Capture library
#include <pcap.h>
pcap_t *pcap_open_live(char *device, int snaplen,
int promisc, int to_ms, char *ebuf))
The Packet Capture library provides a high level
interface to packet capture systems. All packets on the
network, even those destined for other hosts, are accessible
through this mechanism.. ebuf is
used to return error text and is only set when
pcap_open_live() fails and returns NULL.
pcap_open_offline() is called to open a
‘‘save
‘‘save with an
appropriate error message.
pcap_lookupnet() is used to determine the network
number and mask associated with the network device
device. Both netp and maskp are
bpf_u_int32 pointers. A return of -1 indicates an
error in which case errbuf is filled in
‘‘savefile.’’ A return of -1
indicates an error in which case pcap_perror() or
pcap_geterr() may be used to display the error
text.
pcap_dump() outputs a packet to the
‘‘savefile’’ opened with
pcap_dump_open(). Note that its calling arguments are
suitable for use with pcap_dispatch().
pcap_compile() is used to compile the string
str into a filter program. program(). −1 is returned on
failure; 0 is returned on success._datalink() returns the link layer type, e.g.
DLT_EN10MB.
pcap_snapshot() returns the snapshot length
specified when pcap_open_live was called.
pcap_is_swapped() returns true if the current
‘‘save
‘‘save ‘‘save(1) isn’t available.
pcap_close() closes the files associated with
p and deallocates resources.
pcap_dump_close() closes the
‘‘savefile.’’
tcpdump(1), tcpslice(1)
Van Jacobson, Craig Leres and Steven McCanne, all of the
Lawrence Berkeley National Laboratory, University of
California, Berkeley, CA.
The current version is available via anonymous ftp:
Please send bug reports to libpcap@ee.lbl.gov. | https://alvinalexander.com/unix/man/man3/pcap.3.shtml | CC-MAIN-2019-09 | refinedweb | 267 | 66.94 |
How to build an AWS Lex chatbot for an iOS app
This monster-length post will discuss how to create a basic chatbot using AWS Lex and how to integrate it with an iOS app. We’ll cover everything you’ll need so lots to cover:
- AWS Lex
- AWS Lambda (using Node.js)
- AWS Cognitio & IAM
- AWS Lex iOS SDK in XCode (Swift)
What will our chatbot do?
We’ll make a chatbot, about chatbots! It’ll answer questions like “What is a chatbot”, “Explain to me about bots” and maybe even “do bots dream of electric sheep” …
Costs
Before you start, take a look at the AWS pricing models, Lex and Lambda are generally free up until a large threshold, so you’re most likely to be ok, but with great power comes great responsibility — and occasionally not-so-great bills, so always good to check before building anything!
Lex
Presuming you’ve an AWS account already, in the console select Lex:
Lex is currently only in the US East N. Virginia region aka US-East-1, so you’ll need to select that:
If it’s your first LEX bot you’ll see a Get Started button, hit that and then on the next page hit Create and you’ll see this:
It comes with three templates that that come with code already, along with AWS tutorials to assist, but we’re going to build our bot from scratch so select Custom bot.
Fill in the details as required — I’m calling it “botBot”, as it’s a bot … about bots. I’ve selected the output voice as “Joanna”, along with a completely random piece of text to test the voice. Feel free to try out the different voices with your own text.
When ready hit Create and you’ll see the screen:
We’re now ready to star building the bot, which firstly involves:
Intents and Utterances
Intents are at the core of most chatbots. An Intent is basically the “intent” of a statement or question posed to the chatbot, as oppose to just the question itself. It signifies the “meaning” of the question and you can have many different questions that actually have the same intent.
For example, we want our users to be able to ask “what is a chatbot?”.
The intent of this question is that user wants to understand what a chatbot is i.e an explanation. There are multiple different ways of asking this question:
- What’s a chatbot
- What is a chatbot
- Tell me about chatbots
- Describe chatbots
- Explain chatbots
- I’ve never heard of chatbots
- What the hell is a chatbot
- Okay. I was at a wine tasting with my cousin Ernesto. Which was mainly reds, and you know I don’t like reds, man. But there was a rosé that saved the day. It was delightful. But then he told me about chatbots and I don’t know what they are! What are they?
but they all have the same intent.
In Lex, all those different samples of stating the same thing are called utterances.
So, let’s create an intent and give it some utterances: hit the Create Intent button:
and you’ll see this:
Then hit Create new intent and you’ll be asked to give it a name, I type “ExplainChatbot” and hit Add
It’ll be created and you can start adding sample utterances i.e. questions, for that intent. Just type them in one at a time, hitting enter to add more.
At the bottom of the page, hit Save Intent.
You’ll also see this on the same page:
These options let you specify what Lambda function(s) you want to attach this intent to — you can attach a Lambda for validation (of the user input) and also for fulfillment (answering the user input).
The fulfillment is currently set to Return parameters to client — this basically means that it won’t do anything, but you can test it. First you need to build the bot — hit the Build button in the top righthand corner:
and you’ll see this:
so hit Build again. Once built, you can test the bot using the testing panel:
Where it says Chat to your bot type in a question to match the utterances we entered, like “What is a bot”:
Hit return and you should see:
Further down in the Inspect Response section, you can hit the detail radio button to see the response object, in this case:
which shows us that it’s hit the correct Intent.
Lambda function
Ok, let’s leave Lex for a few minutes and go make a Lambda function we can use to actually provide an “answer” for that intent.
If you haven’t used Lambdas before, they are an awesome way to run code without any servers i.e. serverless. In AWS, search for Lambda, then look for the Create function button:
There’s again lots of blueprints in here to help you get started, but we’re going to hit the Author from scratch button again, because that’s how we roll:
You’ll see this:
Give it a Name, here I type “myChatbotFunction”.
Leave Role as Choose an existing role (unless you want to create your own one , or use an existing one you may have — entirely up to you) and for Existing role, select lambda_basic_execution, then hit Create function.
After a few seconds, it should be created and you’ll see:
Awesome. You can test it by hitting Test, where you’ll be asked to configure a test event:
I call it myFirstTest, leave the default json input as is, and hit Create. Then back in the main Lambda page, hit Test again and you’ll see a success message; expand out to see:
You’ll note the language is Node.js (6.10):
You can change to Java or Python, but we’ll stick with node. The existing code simply returns a standard hello message:
exports.handler = (event, context, callback) => {
// TODO implement
callback(null, ‘Hello from Lambda’);
};
The event parameter is the input object — what Lex sends the function. This will actually be in this format:
{
"currentIntent": {
"name": "intent-name",
"slots": {
"slot name": "value",
"slot name": "value"
},
"slotDetails": {
"slot name": {
"resolutions" : [
{ "value": "resolved value" },
{ "value": "resolved value" }
],
"originalValue": "original text"
},
"slot name": {
"resolutions" : [
{ "value": "resolved value" },
{ "value": "resolved value" }
],
"originalValue": "original text"
}
},
"confirmationStatus": "None, Confirmed, or Denied (intent confirmation, if configured)",
},
"bot": {
"name": "bot name",
"alias": "bot alias",
"version": "bot version"
},
"userId": "User ID specified in the POST request to Amazon Lex.",
"inputTranscript": "Text used to process the request",
"invocationSource": "FulfillmentCodeHook or DialogCodeHook",
"outputDialogMode": "Text or Voice, based on ContentType request header in runtime API request",
"messageVersion": "1.0",
"sessionAttributes": {
"key": "value",
"key": "value"
},
"requestAttributes": {
"key": "value",
"key": "value"
}
}
Full details on the Lex input / output formats here.
Node.js modules
We can write straight up node in the inline-editor in the browser to parse the input object and send back a response. You could instead create a node package and upload the code as a zip file, by changing the Code entry type to Upload a .ZIP file or Upload a file from S3, but for this basic example, the inline-editor is fine.
Change the code to this:
'use strict';//main handler
exports.handler = (event, context, callback) => {//log out the input from Lxx
console.log("event: " + JSON.stringify(event));checkIntent(event, (response) => callback(null, response));
};//take action based on what intent
function checkIntent(event, callback) {
//get current intent
const name = event.currentIntent.name;
const outputSessionAttributes = event.sessionAttributes || {};//if ExplainBot intentswitch(name){
case 'ExplainChatbot':
callback(close(outputSessionAttributes, 'Fulfilled', 'A chatbot is an automated way to respond to human queries'));
break;
default:
callback(close(outputSessionAttributes, 'Fulfilled', 'I"m afraid I did not understand the question'));
}
}function close(sessionAttributes, fulfillmentState, messageContent) {
return {
sessionAttributes,
dialogAction: {
type: 'Close',
fulfillmentState: fulfillmentState,
message: { contentType: 'PlainText', content: messageContent }
},
};
}
and hit Save.
Ok, what’s going on? We changed our main handler to:
//main handler
exports.handler = (event, context, callback) => {//log out the input from Lxx
console.log("event: " + JSON.stringify(event));checkIntent(event, (response) => callback(null, response));
};
This logs out the event in a format we can read (to CloudWatch in AWS, accessible via the Monitoring tab), then calls a new function checkIntent:
//take action based on what intent
function checkIntent(event, callback) {
//get current intent
const name = event.currentIntent.name;
const outputSessionAttributes = event.sessionAttributes || {};//if ExplainBot intentswitch(name){
case 'ExplainChatbot':
callback(close(outputSessionAttributes, 'Fulfilled', 'v'));
break;
default:
callback(close(outputSessionAttributes, 'Fulfilled', 'I"m afraid I did not understand the question'));
}
}
This gets the name of the intent fired by Lex, along with the sessionAttributes object. This can be used to store values we want to persist between interactions, though we won’t be doing that in this tutorial.
We then do a simple check against the name — if it’s ExplainChatbot i.e. matching the Intent we created earlier in Lex, then we call the close function:
function close(sessionAttributes, fulfillmentState, messageContent) {
return {
sessionAttributes,
dialogAction: {
type: 'Close',
fulfillmentState: fulfillmentState,
message: { contentType: 'PlainText', content: messageContent }
},
};
}
which send the “answer” we specified “A chatbot is an automated way to respond to human queries” back to Lex.
Note that we are specifying a type of Close and fulfillmentState as we now consider the “question” answered or “closed”. There are other options — EllicitIntent, EllicitSlot and Delegate that we won’t go into today — more info here.
Testing the Lambda
Before we hook up the Lambda to our Lex bot, we can test the Lambda code by changing the test event — hit Configure test events:
and change the text input code to:
{
"currentIntent": {
"name": "ExplainBot",
"slots": {},
"confirmationStatus": "None"
},
"bot": {
"name": "bot name",
"alias": "bot alias",
"version": "bot version"
},
"userId": "0001",
"inputTranscript": "What is a chatbot",
"invocationSource": "FulfillmentCodeHook",
"outputDialogMode": "text/plain; charset=utf-8",
"messageVersion": "1.0",
"sessionAttributes": {},
"requestAttributes": {}
}
or in its screenshot glory:
This will simulate an input object from Lex. Hit Save and then Test and you should get a successful result:
Tremendous. Ok, back to Lex.
Down the end of the page, change Fulfillment to AWS Lambda function and then in the dropdown select your Lambda function:
This message will pop up:
Just say OK, then hit Save Intent again, then Build once more. Once it’s built, type in “What is a bot” into the test plan and you should get the correct answer:
Boom! This means that our Lex chatbot is now sucessfully linked with our Lambda function. It only has one Intent and gives one answer — but you can build it up from there to have many more.
iOS Integration
Ok, let’s move onto the iOS integration — we want to be able to interact with the Lex bot via a native iOS app.
Before anyone asks what about Android or React Native, we’ll probably cover them off in a later tutorial! Well, Android anyway!
Before we can use the bot in iOS we need to Publish it. Hit that button and you’ll see this:
I gave mine an alias of “first”, then I hit Publish again. You’ll see:
then in a couple of minutes:
Great. There’s an interesting looking section there — How to connect to the your mobile app. Go ahead and hit Download connection info and it’ll take you to this page:
A better page however is here which goes through in much better detail how to setup the SDK in iOS. We’ll cover it off now ourselves anyway.
Adding LEX SDK into iOS
There are various ways of adding an SDK to an iOS app; we’re going to use Cocoapods.
First create an XCode project, and then in the directory on your Mac which has the project’s .xcodeproj file, run, via the terminal:
gem install cocoapodspod setuppod init
If all went well with those, there’ll now be a new file Podfile in the project directory. Edit and enter this text:
source ''platform :ios, '0.0'
use_frameworks!target :'BotBot' do
pod 'AWSCognito'
pod 'AWSLex'end
Where BotBot is the name of your XCode project. Finally, run:
pod install
You’ll see something like this as it installs:
Once done, close Xcode if its open, and reopen your project by clicking on the .xcworkspace file in the project directory i.e not the .xcodeproj file like you usually would.
AppDelegate.swift changes
The first thing we’ll do in our Xcode project is add the following lines, highlighted in bold, to the AppDelegate.swift file:
import UIKit
import AWSCore
import AWSCognito
import AWSLex@UIApplicationMain
Then, in the didFinishLaunchingWithOptions function:
func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {// Override point for customization after application launch.//replace the XXXXXs with your own id
let credentialProvider = AWSCognitoCredentialsProvider(regionType: .USEast1, identityPoolId: "us-east-1:XXXXXXXXXXXXXXXXXXXXXXXX")let configuration = AWSServiceConfiguration(region: .USEast1, credentialsProvider: credentialProvider)AWSServiceManager.default().defaultServiceConfiguration = configuration//change "botBot" to the name of your Lex bot
let chatConfig = AWSLexInteractionKitConfig.defaultInteractionKitConfig(withBotName: "botBot", botAlias: "$LATEST")AWSLexInteractionKit.register(with: configuration!, interactionKitConfiguration: chatConfig, forKey: "chatConfig")return true}
The only bit of that code you need to change is the name of the bot from “botBot” to the name of your own, and also the “XXXXXs” of the Cognitio Identity Pool ID to your own. Let’s go get one of those.
Cognito
In AWS go to Cognito:
and then Manage Federated Identities:
Then Create new Identity pool:
Give it an Identity pool name and tick Enable access to unauthenticated identities:
Note — this tutorial doesn’t cover off how to fully secure your app or bot, so what we show may not be secure. Ideally you would not use unauthenticated identities, but it suffices for this example.
When you hit Create Pool you’ll be asked to allow the creation of new roles, hit Allow.
You’ll now be taken to a “Getting started with Amazon Cognito” page. If you change the platform dropdown:
the code samples will change — and you’ll see the same code we used earlier in the AppDelegate.swift, but this time it’ll have the Identity Pool ID that you need:
// Initialize the Amazon Cognito credentials provider
let credentialsProvider = AWSCognitoCredentialsProvider(regionType:.USEast1, identityPoolId:"us-east-1:XXXXXXXXXXXXXXXXXXXXXX")let configuration = AWSServiceConfiguration(region:.USEast1, credentialsProvider:credentialsProvider)AWSServiceManager.defaultServiceManager().defaultServiceConfiguration = configuration
Before we go back to the app and use that ID, we need to give the roles created by Cognito, access to Lex.
Go to IAM in AWS, click on Roles, then search for your Cognito role — it’ll be called something like Cognito_LEX_ID_POOLUnauth_… depending on what you called your federated identity.
Click on it, then hit Attach policy, then search for AmazonLexFullAccess:
Select it and hit Attach policy again.
Ok, we should be good to go back to our app. In the AppDelegate.swift file, copy in the identity pool id from AWS, into where I have the XXXXs:
let credentialsProvider = AWSCognitoCredentialsProvider(regionType:.USEast1, identityPoolId:”us-east-1:XXXXXXXXXXXXXXXXXXXXXX”)
ViewController code
Now we’re going to add code to allow the user to enter in a question and see the answer from the bot.
We’re going to do it as simply as possible — we’ll have one View Controller, on which we’ll have one TextField to get the question from the user and a Label to display the answer:
In the ViewController.swift file I have two outlets for the TextField and Label, and at the top of the file I include:
import AWSLex
If it doesn’t recognise it, try cleaning the project.
Now add these delegates to the class signature:
class ViewController: UIViewController, AWSLexInteractionDelegate, UITextFieldDelegate {
along with this variable:
var interactionKit: AWSLexInteractionKit?
This interactionKit is a class from the AWS SDK that we will use to communicate to the Lex Bot.
For the Lex implemenation add a setup function:
func setUpLex(){
self.interactionKit = AWSLexInteractionKit.init(forKey: "chatConfig")self.interactionKit?.interactionDelegate = self
}
Note that the “chatConfig” matches the key we specified in the AppDelegate.swift earlier:
AWSLexInteractionKit.register(with: configuration!, interactionKitConfiguration: chatConfig, forKey: "chatConfig")
and for the TextField, add this setup function:
func setUpTextField(){
questionTextField.delegate = self
}
These are then both called in the viewDidLoad function:
override func viewDidLoad() {
super.viewDidLoad()// Do any additional setup after loading the view, typically from a nib.setUpTextField()
setUpLex()
}
For Lex error handling, include this delegate function:
func interactionKit(_ interactionKit: AWSLexInteractionKit, onError error: Error) {
print("interactionKit error: \(error)")
}
and add this function to handle when the user hits the Return key on the keyboard while editing the TextField, to signify they have finished entering their question:
func textFieldShouldReturn(_ textField: UITextField) -> Bool {
textField.resignFirstResponder()
if (questionTextField.text?.characters.count)! > 0 {
sendToLex(text: questionTextField.text!)
}
return true
}
This dismisses the keyboard, checks if the TextField is not empty and then calls the sendToLex function, using the user-entered text in the TextField as a parameter.
Now write that sendToLex function:
func sendToLex(text : String){
self.interactionKit?.text(inTextOut: text, sessionAttributes: nil)
}
That line:
self.interactionKit?.text(inTextOut: text, sessionAttributes: nil)
is where we actually send the question to Lex. It takes two paramaters — a String input aka the question and “sessionAttributes” which we set to nil as we don’t need to send any.
Add this function to handle the response:
//handle responsefunc interactionKit(_ interactionKit: AWSLexInteractionKit, switchModeInput: AWSLexSwitchModeInput, completionSource: AWSTaskCompletionSource<AWSLexSwitchModeResponse>?) {
guard let response = switchModeInput.outputText else {
let response = "No reply from bot"
print("Response: \(response)")
return
}//show response on screen
DispatchQueue.main.async{
self.answerLabel.text = response
}
}
This basically takes the response from Lex and updates the Label in the ViewContoller.
Ok, that should be everything!! Run the app and you should see:
Go ahead and type “What is a bot?”, hit Enter and you should see:
Yes! It works! Try typing a question it won’t get correct:
and Lex’s standard response for not recognising any intent will be returned.
That’s it! We hope you found this useful, please comment below if you have any questions and feel free to hit the little clap button if you think it worth it! Andy | https://medium.com/libertyit/how-to-build-an-aws-lex-chatbot-for-an-ios-app-9fd7693353b?source=post_internal_links---------3---------------------------- | CC-MAIN-2022-40 | refinedweb | 3,003 | 51.58 |
> module Test.HUnit.Text > ( > PutText(..), > putTextToHandle, putTextToShowS, > runTestText, > showPath, showCounts, > runTestTT > ) > where
> import Test.HUnit.Base
> import Control.Monad (when) > import System.IO (Handle, stderr, hPutStr, hPutStrLn`.
> data PutText st = PutText (String -> Bool -> st -> IO st) stTwooHandle :: Handle -> Bool -> PutText Int > putTextToHandle handle showProgress = PutText put initCnt > where > initCnt = if showProgress then 0 else -1 > put line pers (-1) = do when pers (hPutStrLn handle line); return (-1) > put line True cnt = do hPutStrLn handle (erase cnt ++ line); return 0 > put line False cnt = do hPutStr handle ('\r' : line); return (length line) > -- The "erasing" strategy with a single '\r' relies on the fact that the > -- lengths of successive summary lines are monotonically nondecreasing. > erase cnt = if cnt == 0 then "" else "\r" ++ replicate cnt ' ' ++ "\r"`putTextToShowS` accumulates persistent lines (dropping progess lines) for return by `runTestText`. The accumulated lines are represented by a `ShowS` (`String -> String`) function whose first argument is the string to be appended to the accumulated report lines.
> putTextToShowS :: PutText ShowS > putTextToShowS = PutText put id > where put line pers f = return (if pers then acc f line else f) > acc f line tail = f (line ++ '\n' : tail)`runTestText` executes a test, processing each report line according to the given reporting scheme. The reporting scheme's state is threaded through calls to the reporting scheme's function and finally returned, along with final count values.
> runTestText :: PutText st -> Test -> IO (Counts, st) > runTestText (PutText put us) t = do > (counts, us') <- performTest reportStart reportError reportFailure us t > us'' <- put (showCounts counts) True us' > return (counts, us'') > where > reportStart ss us = put (showCounts (counts ss)) False us > reportError = reportProblem "Error:" "Error in: " > reportFailure = reportProblem "Failure:" "Failure in: " > reportProblem p0 p1 msg ss us = put line True us > where line = "### " ++ kind ++ path' ++ '\n' : msg > kind = if null path' then p0 else p1 > path' = showPath (path ss)`showCounts` converts test execution counts to a string.
> showCounts :: Counts -> String > showCounts Counts{ cases = cases, tried = tried, > errors = errors, failures = failures } = > "Cases: " ++ show cases ++ " Tried: " ++ show tried ++ > " Errors: " ++ show errors ++ " Failures: " ++ show failures`showPath` converts a test case path to a string, separating adjacent elements by ':'. An element of the path is quoted (as with `show`) when there is potential ambiguity.
> showPath :: Path -> String > showPath [] = "" > showPath nodes = foldl1 f (map showNode nodes) > where f b a = a ++ ":" ++ b > showNode (ListItem n) = show n > showNode (Label label) = safe label (show label) > safe s ss = if ':' `elem` s || "\"" ++ s ++ "\"" /= ss then ss else s`runTestTT` provides the "standard" text-based test controller. Reporting is made to standard error, and progress reports are included. For possible programmatic use, the final counts are returned. The "TT" in the name suggests "Text-based reporting to the Terminal".
> runTestTT :: Test -> IO Counts > runTestTT t = do (counts, 0) <- runTestText (putTextToHandle stderr True) t > return counts | http://hackage.haskell.org/packages/archive/HUnit/1.2.0.3/doc/html/src/Test-HUnit-Text.html | crawl-003 | refinedweb | 461 | 53.34 |
You are viewing
igorius_maximus
's journal
Create a LiveJournal Account
Learn more
Explore LJ:
Life
Entertainment
Music
Culture
News & Politics
Technology
Interest
Region
Site & User
IM Info
Advertisement
Customize
Perhaps not so crazy anymore...
[
entries
|
archive
|
friends
|
userinfo
]
Igor
[
website
|
]
[
userinfo
|
livejournal userinfo
]
[
archive
|
journal archive
]
Woah
[Apr. 22nd, 2009|
02:39 am
]
Last entry in 2007? For real? Bahahaha.
Well erm lets see. 1st year in Toronto went by, now 2nd year is 3 days away from finishing. During that time:
2007-2008 year
Bought a Nikon D60 using my unspent student loans. thx Canada <3
Went to Montreal for 3 days and walked about, during summer right after end of school year with a friend, good times.
Did survey camp... a mandatory 2-week course up in northern Ontario, where alleged bears roam free. Typical work day: Wake up at 6am, learn how to do and then do lots of laborious work that's typically involved in site surveying, eat, do work, eat, do work, sleep (usually around midnight - 2am). 1 weekend of fun was had involving canoeing and the like.
Attempted to get a research internship in Germany... failed due to 800 applicants trying to grab 100 spots or so. :/
Instead got an IT job at Legal Aid Ontario in Toronto... which is completely unrelated to civil engineering, but it let me stay in Toronto and earn some cash. The manager is an alleged former Israeli army officer of notable rank. Also, he is Russian... gotta love Toronto. Had a couple of busy weeks, and the rest involved me sitting at my desk playing java games on my work PC.
Onwards to 2008-2009...
Continued to hover just above class average (73-75%) during both years... regardless of level of effort. :S
Moved places at start of 2008-2009 year. The new place is on Bloor/St George, a much more studenty/cozy/non-million-dollar-condo area in the northern edge of downtwon. Bloor is all crowded with people in the evenings which is awesome.
Mid-school year: got an innterview for a one-year internship for a civil engineering software company in Alicante, Spain. Didn't get it :(
Late-school year: got an interview for a one-year internship in Boston/Seattle/Toronto. I applied to all three locations, but ranking Boston/Seattle higher in preference. I got an offer from the Seattle office. ^^ The company is IBI Group, a city planning/architectural consulting firm mostly specializing in ITS (intelligent transportation systems) in urban areas... definitely something I am considering specializing in, so it's a good fit. They pay quite nicely, but it's not like I'll be swimming in cash as I will be forced to pay off my loans as soon as I start working, even though it's an internship. :/ I'll be in the Seattle office May 09 - May 2010, then do 4 months in the Toronto office, then finish my last year at UofT. I'll be graduating with the younger class, not my current one... but that's ok since most of the guys from my class are annoying anyway. Except for a few, most of whom are also doing 1-year internships, woot.
I'll probably be updating this thing more often once I start the whole transition thing since I'll probably have oodles of time on my hands. It's hard to imagine actually coming home after 5pm and not having to study or solve problem sets til bed time. :/ I'll probably spend the majority of my time taking pics with my beloved D60 during the first months.
Hm, still undecided about what to do regarding housing. I'd definitely prefer to be near the downtown area, but that's so very costly :/ My current place in Toronto costs me 510 CAD, whereas most of the decent places in downtown Seattle start at 800-900 USD, which is more like 1000-1125 CAD. It's just crazy when I think how much extra money I will be spending on housing, so I just don't think about it too much other than that I will be able to afford it.
Granted, my place here is ridiculously small, and the kitchen and bathroom serves an entire floor, and most of the places I find in Seattle are some polished condo-like apartment building, with full studio apartments with kitchen/bathroom/workout area/pool/etc. That's actually part of the problem, I think. Toronto has all these old (=cheap), cozy places in awesome locations, but Seattle is too clean and polished for its own good, at least from what I have been able to see so far without actually having been there.
K so it's 3:40am, and I'm still awake - but the fire alarm just started to ring, so I kind of win. Take that, moron who is cooking at 4am.
As for the reason why I'm awake at 4am - no idea, but it might have something to do with the fact that I just wrote 3 exams in the past 24 hours or so, and just 1 more left on friday.
link
post comment
comfort vs freedom
[Dec. 30th, 2007|
04:11 pm
]
It seems I've discovered a contrast.
In Toronto I have a place that's cramped-tiny with barely enough room for a bed and a desk, with papers lying about everywhere like casualties after a major battle, without the time to clean up.
In Ottawa I have a huge room in a huge house that's overlooked by a huge park, all of which are clean and well-kept.
In Toronto I take orders from no one and have my own important matters to take care of, namely school.
In Ottawa I sit with mostly nothing to do and must always comply with dinner times, be careful not to wake people up at night, etc.
All things considered, it would make sense if my time in Ottawa was more enjoyable. It is not. I much prefer to be in the densely-populated Toronto despite all its dirtiness and crime and homework and stress. I even find it easier to be social over there.
It is strange. Why would I care about that last bit? I'm supposed to have an introverted personality. Maybe I was born to be extroverted, but something happened along the way? Funny, I actually remember being the centre of attention back in early elementary years, until we started moving from place to place.
Oh yeah, as to the topic at hand, I prefer freedom.
link
|
post comment
Newness
[Aug. 10th, 2007|
11:08 pm
]
Is where I'm at as of September 1st. Air conditioning (free electricity :D), squeaky clean, spacious room, ~20 min from UofT, only 2 bedrooms in the apartment. There will be 3 people living there in total including me (the third one will be living in the living room, that was converted into a bedroom).
aside:
If you are an "internet predator" and are thinking to yourself,
woohoo, I have a target!
, all I can say is: bring it on.
/aside
Pictures will have to wait till September 1st, as I did not bring my camera with me when I visited.
So yes, I am moving to Toronto. The city with the tallest man-made structure in the world, longest street in the world, largest Russian community in the world (outside of Russia), 2nd most vibrant theatre community (after Broadway), will be attending university with the 3rd most extensive library network (after Harvard and Yale), and will be living in the middle of it all in downtown Toronto.
The thing I'm looking forwards to the most, though, is change. I've been going through change all my life, and it seems recently has been the first time I've stayed in the same place for... 6 years? Wow. Shattered my old record twofold.
And yes, I DID get the job at Black's Photography, and it seems to be going well. I sold three cameras today, heh. I love that job because of it's central location and the people that come there to develop photos... during the FIFA U-20 World Cup, a Nigerian player came to develop some photos of him playing the match, as well as some team group shots. The thing I don't like about it, though, is that I get teased everyday by the most beautiful cameras in existence (and I have to sell them to other people, too)... and I know that with the recent development, I will most definitely not be able to afford the camera that I want. Perhaps I should sacrifice 3 months' worth of food and buy the D40x.
Food... D40x... food... D40x... hmm. Maybe if I get the camera, I can take some shots of food and admire it from afar. Maybe not.
Yes, I will need to count my dollars to make sure I have enough for food and clothes this year. Bloody expensive downtown Toronto. But it'll be awesome.
link
|
post comment
RAWR!
[Jul. 5th, 2007|
11:21 pm
]
Today I have received my admission to 2nd year engineering at U of T! I have full credit to all of first year, even though I haven't taken two of U of T's 1st year engineering courses. I was, however, advised to do some personal reading over the summer so as not to be disadvantaged in my 2nd year.
And as always... you solve one problem; three more pop up. Since I am a 2nd year student, I will not get guaranteed residence at U of T, so I must go and seek out my own place to live. Although, I hope with some convincing the right people I can still get into residence, but I fear it is too late for that (damn you high schoolers).
AS IF this wasn't good enough news... today I also got a call for an interview at Black's Photography... if I will also get this job, my life WILL be complete (at least for the summer). *crosses fingers* MUST NOT SCREW UP INTERVIEW
link
|
post comment
Rain
[Jun. 9th, 2007|
12:11 am
]
This afternoon saw some heavy downpouring rain and a thunderstorm. I stood under it. It was fun. There was also a tornado watch; the clouds were interesting and rather pretty.
Playing tennis in this weather would have been more fun, but Jason wasn't around. That bugger.
Still waiting for a response from U of T.
I am THIS || close to buying a digital SLR.
Federer will beat Nadal.
The Sens got their asses kicked.
I want to go to Europe. Nothing new.
French girls are pretty.
Yeah, I couldn't follow that line of thought either. How did I write it down, then? Life is full of mysteries.
Mine in particular. One day I will sit down and write it all down, classified stuff and all. Hah. Maybe when I'm eighty. Or tomorrow? Tomorrow would certainly be easier - being only one quarter of the age of eighty and all - but it also wouldn't be as fun. It seems as if a compromise is in order; I will write it sometime between tomorrow and sixty years from now. Enjoy the wait.
I know I will.
link
post comment
stolen from Amanda
[Apr. 25th, 2007|
01:11 pm
]
Interesting:
Read my VisualDNA
™
Get your own VisualDNA™
link
|
post comment
In preparation for today's programming exam... (and also outlining my present thoughts)
[Apr. 16th, 2007|
12:48 pm
]
#include
[
Error:
Irreparable invalid markup ('<stdio.h>') in entry. Owner must fix manually. Raw contents below.]
#include <stdio.h>
#define linear_algebra 7
#define calculus_I 9
#define chemistry 4
#define vector_mechanics 7
#define technical_writing 8
float overall_grade(float marks[5]);
int main(){
int i;
float marks[5];
char *array[5]={"programming","calculus_II","physics_II","drafting","environmental_sci"};
for(i=0;i<5;i++){
printf("\nEnter %s final grade:",array[i]);
scanf("%f",&marks[i]);
}
printf("\nAfter all the exams, my CGPA is %f\n",overall_grade(marks));
if (overall_grade(marks)>=8.5)
printf("Woohoo! $4,500 scholarship here I come.\n");
else if (overall_grade(marks)>8)
printf("Damn, good grade but no scholarship :(\n");
else if (overall_grade(marks)>7)
printf("Grade is good enough to have some hope to transfer to U of T...\n");
else if (overall_grade(marks) > 6)
printf("Grade is just barely good enough to have any hope of transferring...\n");
else if (overall_grade(marks) <=6)
printf("I might as well drop out and take a general arts degree now.\n");
else
printf("You screwed up your program, have fun failing today's exam.\n");
system("pause");
}
float overall_grade(float marks[5]){
int i;
for(i=0;i<5;i++){
marks[i]=(marks[i])*3;
}
float CGPA = (marks[0]+marks[1]+marks[2]+marks[3]+marks[4]+3*linear_algebra + 3*calculus_I + 3*chemistry + 3*vector_mechanics + 3*technical_writing)/30;
return CGPA;
}
/*------------------------------------END------------------------------------*/
link
post comment
Stuff to do this week
[Feb. 20th, 2007|
09:50 pm
]
[
Current Location
|
Milky Way
]
[
music
|
various frequencies ranging from 20 to 20,000 Hz
]
Woo, it's reading week. Time to
relax
do work at a pace of my own chosing, for a change.
Things that need to be done:
1.
Five
two drafting assignments in AutoCAD
2. Send various documents to the U of T
3. Huge assignment in physics
4. Assignment in calculus
5. Figure out what the heck the programming prof has been talking about these past two months
6. Buy a good table tennis paddle
7. Buy a watch
Yeah, I'm in the uOttawa table tennis club, and I've been owning people. A tournament is coming up, so a good paddle is a must. I heard it will also involve people from the Chinese Student Association, though... so winning will be hard :P
Walking around watchless is getting really annoying.
link
post comment
stuff
[Jan. 20th, 2007|
10:05 pm
]
[
music
|
Ron Van Den Beuken
]
I used to write a lot about my past in here... but here's a change. I have plans and I'd like to write them down before forgetting about them.
1. Find out more about what it is I actually need in order to get into a prestigious engineering post-grad program. If it means transferring to a more prestigious undergrad school, then...
2. Work my ass off like I never have before on the second semester and through the next year, hoping that it will somehow increase my chances of a successful transfer to a better university (that is, if that is actually possible to do). It's too bad cause I actually have friends at uOttawa, but if it must be done... then oh well.
3. I have a guaranteed job in a construction site... for a company that specializes in foundation building. Approx 10 hours a day, 12 bucks/hour, this should cover the extra cost of living elsewhere. This could be done every summer to cover the year's expense (unless there will be co-op of course).
I'm still not sure about the summer job... since I had plans to go work part-time in Holland (through a program of the Netherlands Embassy). Living in Europe for a few months is very tempting... but the money is tempting as well. I guess I will decide as I go through the documentation process and maybe something will go wrong and I won't have a choice :D
Okay, time to start with item #1. Maybe the uOttawa site will provide me with some info.
link
|
post comment
...What?
[Oct. 16th, 2006|
06:58 pm
]
I really don't feel like elaborating on my life... but I'll just mention some stats.
Calculus mid-term: 90%
Linear Algebra mid-term: 97%
Chemistry mid-term: 74%
English mid-term: 80%
Chemistry has never been my thing... but I'm only taking one semester of it so I don't care. Yeah, I just decided to post that before thursday's second lin. algebra and engineering mechanics mid-terms... so they don't spoil the nice numbers.
I went to an information lecture about international experiences today. If all turns out as planned, within a year or two I will go study in Europe, and later will hopefully do co-op in developing countries for Engneers Without Borders.
But we all know that things never go as planned...
link
|
post comment
What the...
[Aug. 12th, 2006|
11:36 am
]
Hey look, I have a livejournal!
Oh wait...
Hey, who wrote all these entries in my name?
link
|
post comment
Commencement
[Jun. 29th, 2006|
12:14 am
]
[
music
|
In the Jungle (Lion King Theme)
]
So yes, grad ceremonies were held today, which I think went quite smoothly (or rather, smoothly enough). Something told me (it must be my KGB instinct) not to dress up all fancy for today, and it seems it has done me well. It was mighty hot in that auditorium, under all that intense light, in that black grad gown... but I was wearing a t-shirt and shorts under it, so it wasn't that bad. Unlike some other people. hehehe :D They were even nice enough to leave us some water bottles under our seats. Which would be very helpful and refreshing... had I not knocked it over with my feet. The damned thing rolled away beyond my reach, unless I was willing to reach for it and knock over all the nearby grads, creating a domino effect. Tumbling grads, yessssss! I can just imagine what Novak would have to say about that.
I got some weird music award, which is just basically an envelope with a $275.00 cheque in it... yay! I can use it to buy myself a clarinet. w00tness. Very fitting, to use music award money to buy a musical instrument. Yes.
And as to the previously mentioned air vent idea, well... Seb didn't show up. Damn him! It would have been so much fun. But it's ok... We'll come back to the school soon to try it for sure :D
Oh yes, a bunch of us helped Andrew move, which was fun, as it involved lots of barefoot soccer on the way from old home to new home. I haven't played soccer for AGES, I feel my skills thinning... noooooo... but it's ok. I think Seb, Victor, Mauran, etc are playing at the Greenboro fields, so I might join them there.
Now if you guys will excuse me, I will go back to listening to the best song ever...
aaaaeeeeeEEEEEEEooooeeoooooeeeemomamoeee
ee...
link
|
post comment
last day of school!
[Jun. 16th, 2006|
06:13 pm
]
So, today was the last day of school, and i'm done for sure as even my volunteer hours are complete. I'm more or less prepared for prom... all I need is a tie, shoes, and a haircut.
and a girl
... kidding, lol
On a less happy note, to add to my broken bone, I am also sick :S I hope I will feel well enough to study for my math exam soon.
The Netherlands aren't doing as well as they should, given the quality of their players... but they are still winning, so I am not complaining. Go Netherlands!
Seb and I discovered, while playing wall ball, that the air shaft at the corner can be opened and climbed into. We're thinking of sneaking into the school and changing some answers to our exams... cause we all know they keep em in a secret room, which can only be accessed by a fingerprint scanner, or by the air vent. The only problem is that the air shaft is about 1.5 meters high, so climbing out of it with only one working hand might prove troublesome. Getting
into
it however will not be a problem, which is all that matters.
link
|
post comment
randomness
[Jun. 9th, 2006|
11:16 pm
]
Yesterday, Jason and I woke up at 6:00 am in order to go for a jog. Nothing exciting... except for the fact that I can go jog! Yay! Running felt soooooo good, oh man. I'm still going to walk around with the sling at school, just in case of a sudden movement or something.
Soccer World Cup is under way. Even though they beat Costa Rica with a score of 4-2, in my view Germany has embarrassed themselves by letting 2 goals in. Costa Rica is not the kind of team that should be able to win against Germany. On the other hand, Poland was playing exceptionally well, even though they lost 2-0 to Equador, who aren't usually any good at all. They got lucky way too many times. But it matters not; if Poland continues to play in the same manner, except with a bit more luck, they will easily overcome Germany and secure a spot in the next round. That is if they beat Costa Rica, which of course they will.
I've decided that I'll bet my imaginary money on the Netherlands to win gold this time. I was going for England at first, but everyone else seems to be cheering for them, including Henson. I envy his jerseys though... damn things must be so expensive. Anyway, it won't be as fun if a team wins and there's no one to go "I told you so!" in their face, so I'll take my chances with a less popular team. Besides, they looked really promising in a friendly game against Mexico.
In other news, I've been playing Minesweeper, but with disappointing results:
Beginner: 13s
Intermediate: 96s
I blame it on my collar bone. I shall keep trying.
Hey, if I can jog without my sling, maybe I can start playing tennis soon? *super hopeful* I've been playin ping pong without much pain, so I predict full functionality in the near future. w00t.
link
|
post comment
World Cup Soccer!!!!one1eleven
[Jun. 6th, 2006|
09:24 pm
]
I forsee a whole lot of skipping in the near future... starting with Friday
(nobody ever read that)
(or else)
My predictions:
Brazil owns. Ronaldinho, Kaka, Roberto Carlos, Robinho, Ronaldo, sooooooooooo many superstars, it's just not funny.
However, there are other teams I'd like to win, cause Brazil just has too many wins already.
-England: Beckham is expected to work some magic
-Spain: Torres and Raul... damn, so good.
-Ukraine: Shevchenko, only the best soccer player in Europe...
-France: Zidane, who used to be the world's best soccer player is getting a bit old, but his soccer instinct is as good as ever.
It's pretty impossible for me to narrow down as to what my favourite team would be... England, Ukraine, or Spain.
link
|
post comment
the clavicle
[May. 26th, 2006|
11:04 pm
]
[
music
|
Bush - Glycerine
]
I can now move around, yay. As in, walk around... my shoulder is still out of service.
Must... get... mind... off... tennis... Almost two weeks done, four more to go! Bone, go go go! You can do it!
By the end of this month I will have taken the bus more than I ever did in my life... seeing how walking hurts (it requires slight movements of the shoulder, even when it's strapped in) and biking... well I haven't tried that yet. Don't think I will.
Not doing anything at all feels so crappy... I hope I can start jogging soon. Getting out of shape in the summer is super lame.
I think I messed up my calculus test and got zeros for a number of english assignments.
Oh yeah, when I went to the hospital, they were obviously completely useless. They seem to have got this new X-ray machine; the X-ray operator couldn't operate it. He had to ask all these questions... "How do I get it to focus on this area?" and he points to my broken bone, I was about to kill him with my other arm for poking it... but he didn't. "You can move it manually" replies the girl that was helping him. "But... how..." he looks at the buttons behind the X-ray camera. The girl just goes up there, grabs the camera, and moves it to point at my broken bone. I also had to
ask
if I was supposed to walk out of the X-ray room in my armored skirt. That sounded wrong...
Anyway, I think he was an intern, so I forgive him.
"We can't do anything to treat your broken bone other than give... I mean sell you this sling, and perscribe some pain-killers." I didn't take the pain-killers, cause I figured it hurts when it moves... and moving it is bad, therefore pain is good.
Now, who's willing to let me borrow their left arm? It would be useful, as I came to realise that it really is a two-handed world out there.
link
|
post comment
auuuuuugh
[May. 17th, 2006|
02:44 pm
]
For those that still don't know, I've been completely owned in rugby. That is, my left collar bone now looks like and will heal as:
Yup, my shoulder will now be ~1 cm shorter, but should still be 100% strong once healed. It's pretty painful and all, so I will probably miss a day or two of school.
Obviously, the whole Judo deal will have to wait.
Anyhow... typing with one hand is veeeeeery slow and annoying, so i will now stop.
Edit: I can still feel the two parts rubbing against one another when i move... it's actually pretty interesting.
Caaaaaan't wait till i can stretch that shoulder again.
link
|
post comment
Hmm
[May. 15th, 2006|
08:40 pm
]
[
music
|
106.9 The Bear
]
Summer is coming up, and I will likely have no competitive sport to play now that school tennis is over forever. My tennis evaluation with the OAC was totally ruined by the volunteering event; the coach doesn't seem to think that it's worth calling back to reschedule... and I can't blame him, I only told him one day in advance. I will still be playing lots of tennis with Jason of course, and possibly some games with Mr. Novotny, but nothing competitive.
Sooooo... I've been thinking. My dad claims I'd be very good in Judo wrestling, Mr. Laggis thinks I'd be a good wrestler (from watching me play rugby), and I think it'd be SUPER fun... so I think I will try and join a competitive Judo club. After some searching, I found Judo Canada that is located at the intersection of St-Laurent and Smyth, which shouldn't take more than 20 min by bike.
The best thing about wrestling in general: weight categories! It feels like I'd have an advantage in my height-weight ratio.
The not so best thing about Judo: awkward situations such as these:
.......
Yeah, that's wrestling for you. So yes... tomorrow is the rugby tournament, so I wouldn't be able to do anything in that regard... but Wednesday will be when I will go bike after school to this Judo Canada place to find out what I can about joining.
Speaking of rugby, the team we're playing against are said to have tied with Osgood... and Osgood OWNED us even back when we had our full set of players. But that's ok... the Spartans fought back thousands with only an army of 300. Yeah, never mind :P
link
|
post comment
rugby/band trip
[Apr. 26th, 2006|
06:24 pm
]
[
music
|
Green Day - Jesus of Suburbia
]
Monday's rugby tournament: amazing fun! Nothing beats drilling an ex-best friend into the field (not tha I have a grudge against him or anything). Got a few bumps and bruises (and a cut), but it was all worth it.
Interesting development in the tennis department: I may be able to get into the OAC Tennis school for cheap or free, because my dad works there. When I was playing there with my dad, one of their senior coaches checked me out, and said he would see if he could arrange something... which would be super cool. If all goes well, I could participate in tournaments and all. *fingers crossed* I would have found out this Thursday, but because of the band trip I will know on Sunday.
Speaking of which, that should be pretty fun. Messed up principal said we can't use the pool cause Guy doesn't work for the board (right, like the board hires freaking lifeguards), but I don't expect there to be much of a problem using the pool anyway. Even if some sneaking in would be involved maybe... Some excitement is always nice. Maybe my KGB skills will come in handy to pick some locks or something :D
Now I just need to figure out what it is I need to bring with me... other than ping pong rackets of course. There BETTER be a tennis table this time...
Oh, and look at this:
ets/Articles/Boy+or+Girl+Cat.htm
<--- my cat's a celebrity!!
...either that, or somebody's made a clone of my cat. Hmm...
link
post comment
YAAAAAY
[Apr. 21st, 2006|
03:32 pm
]
I am just going to save myself the trouble and post last year's entry:
om/2005/04/21/
yesyesyesyesyesyesyesyesyesyesyesyesyesy
esyesyesyesyesyesyesyesyesyesyesyesyesye
syesyesyesyesyesyesyes (to emphasize just how much this is good news, I typed the above without copy+pasting).
Yesterday in bio, the class was able to convince the student teacher to let us go do our work outside, since the weather was so awesome. Amanda, Andrew, Cam, and I went to climb a tree in hopes of doing our work from above, but since I got there first, I got the most comfortable spot and eventually was the only one who remained up there. That is where I got my lab done that was due on the same day. It is amazing how productive it is to do homework on top of a tree... I wish there was a tree in my backyard. Maybe I should just go to the park and find a nice tree to do homework on (it's only 30sec away)...
OR MAAAAAYBE...
I will just go practice some serves 'till Jason arrives. Yeah I think I'll do that.
link
|
post comment
navigation
[
viewing
|
most recent entries
]
[
go
|
earlier
]
Advertisement
Customize | http://igorius-maximus.livejournal.com/ | crawl-002 | refinedweb | 5,124 | 82.75 |
Artifact 110a8c4d356e0aa464ca8730375608a9a0b61ae1:
- File src/test_multiplex.h
- 2013-08-07 14:18:45 — part of check-in [0ad83ceb] on branch trunk — Add a guard #ifndef to test_intarray.h to prevent harm if it is #included more than once. Add a comment on the closing #endif of the guards on sqlite3.h and test_multiplex.h. (user: drh size: 3034)
- 2013-08-07 18:07:05 — part of check-in [c78b0d30] on branch uri-enhancement — Merge in the latest changes from trunk. (user: drh size: 3034)
- 2013-08-07 18:42:27 — part of check-in [08f74c45] on branch sqlite_stat4 — Merge latest trunk changes with this branch. (user: dan size: 3034)
- 2013-08-19 12:49:06 — part of check-in [67587a33] on branch sessions — Merge in all the latest updates and enhancements from trunk. (user: drh size: 3034)
- 2013-09-12 00:40:54 — part of check-in [fca799f0] on branch vsix2013 — Merge updates from trunk. (user: mistachkin size: 3034)
- 2014-03-13 15:41:09 — part of check-in [d17231b6] on branch threads — Merge latest trunk changes into this branch. (user: dan size: 3034)
-: 3034) | https://sqlite.org/src/whatis/110a8c4d356e0aa4 | CC-MAIN-2019-39 | refinedweb | 184 | 83.15 |
import data from one database ta another for same schemasdba Jan 29, 2014 4:53 AM
I have two dbs A and B.
both have User U.
User U have two tables:
Parent(Colums Id primary key,Name ,Attributes).
Child (columns id primary key,Name,attributes,PID foreign key refering id column of parent table).
Now I want to import data for these two tables from Database A to B.
Only new data should be added or old data should be updated.
no delete are permitted and sequence of id's should be consecutive.
Please suggest some ways.
1. Re: import data from one database ta another for same schemaCloudDB Jan 29, 2014 6:08 AM (in response to sdba)
Hi
You can use below workaround.
sync the latest data from production to staging server using oracle datapump utility
impdp username/password@netstr directory=DUMP_DIR TABLES=Parent,Child network_link=prd_racdb table_exists_action=replace
2. Re: import data from one database ta another for same schemaKarK Jan 29, 2014 6:13 AM (in response to sdba)
Hi,
You can INCLUDE =CONSTRAINT option while exporting to include constraints and TABLE_EXISTS_ACTION=REPLACE while importing ,if the table already exists then it replaces the table with the exported table along with the constraints
expdp username/password@dbnameA TABLES=emp,dept DIRECTORY=DMPDIR DUMPFILE=expdp_1.dmp LOGFILE=expdp_1.log INCLUDE=constraint
impdp username/password@dbnameB DIRECTORY=DMPDIR DUMPFILE=expdp_1.dmp LOGFILE=impdp_1.log TABLE_EXISTS_ACTION=REPLACE
3. Re: import data from one database ta another for same schemasdba Jan 29, 2014 7:21 AM (in response to KarK)
Replace will delete the data in database B.
It just need update and insert not delete.
and it should work for parent-child relationship.
4. Re: import data from one database ta another for same schemaKarK Jan 29, 2014 10:39 AM (in response to sdba)
Since you having PRIMARY key in the parent and chile table, it checks for unique values when you are importing data from one database to another database. If there exists any duplicate values, then it will "ORA-00001: unique constraint" erorr.
To Avoid this behaviour:
1)Disable the constraint on the target tables before import and then enable the constraint with novalidate option after import
2)Truncate or delete the table by using TABLE_EXISTS_ACTION option during import
5. Re: import data from one database ta another for same schemaCloudDB Jan 29, 2014 10:45 AM (in response to sdba)
imp help=y
CONSTRAINTS import constraints (Y)
impdp help=y
CONTENT
Specifies data to load.
Valid keywords are: [ALL], DATA_ONLY and METADATA_ONLY.
DATA_OPTIONS
Data layer option flags.
Valid keywords are: SKIP_CONSTRAINT_ERRORS.
TABLE_EXISTS_ACTION
Action to take if imported object already exists.
Valid keywords are: APPEND, REPLACE, [SKIP] and TRUNCATE.
QUERY
Predicate clause used to import a subset of a table.
For example, QUERY=employees:"WHERE department_id > 10".
check more options on help main page ...
6. Re: import data from one database ta another for same schemasdba Jan 29, 2014 12:09 PM (in response to CloudDB)
impdp will replace the table or append the data.
but i need to update the current data as well.
Also child tables needs to updated.
I can update the parent table using merge.
but merging child tables is a problem.
7. Re: import data from one database ta another for same schemaRichard Harrison . Jan 29, 2014 10:48 PM (in response to sdba)
Hi,
If the merge works for the parent - why is it an issue for the child?
Also - you can maybe use DBMS_COMPARISON for this
Perform a Compare with DBMS_COMPARISON - Oracle - Oracle - Toad World
But i think realistically you are going to have do some manual coding to do exactly what you want. A lot depends on how you are generating id's and if you are just able to use pk/fk values from the original data source.
Cheers,
Rich
8. Re: import data from one database ta another for same schemasdba Jan 30, 2014 8:51 AM (in response to Richard Harrison .)
Yes ..
Manual coding is the only option feasible here. | https://community.oracle.com/message/11343924 | CC-MAIN-2017-39 | refinedweb | 676 | 55.64 |
In this section, we are going to separate the word that begins with vowels from the specified text. As the specified text consist of sentences that are terminated by either '.' or '?' or !. So first of all we have replaced all these special characters using replaceAll() method. Then we have used StringTokenizer class that breaks the whole string into tokens. After that using the method startsWith() with the tokens, we have checked whether the token is started with vowel, if it is, then we have displayed that word.
Here is the code:
import java.util.*; import java.util.regex.*; public class Vowels { public static void main(String[] args) { String st = "hello! how are you? when are you coming? hope to see u soon."; String str = st.replaceAll("[?!.]", ""); StringTokenizer tokenizer = new StringTokenizer(str); String s = ""; while (tokenizer.hasMoreTokens()) { s = tokenizer.nextToken(); if ((s.startsWith("a")) || (s.startsWith("e")) || (s.startsWith("i")) || (s.startsWith("o")) || (s.startsWith("u"))) { System.out.println(s); } } } }
Output
are are | http://www.roseindia.net/tutorial/java/core/vowels.html | CC-MAIN-2017-17 | refinedweb | 162 | 70.39 |
Selection
December 11, 2009.
My solution in Ruby:
def select(k,array)
elem = array[array.size / 2]
smaller = array.find_all { |e| e < elem }
larger = array.find_all { |e| e >= elem }
if(smaller.size == k)
return elem
end
if (smaller.size > k)
select(k,smaller)
else
select(k – smaller.size,larger)
end
end
Python solution:
Unefficient Common Lisp, but very direct…
1 (defun select (k li)
2 (when (>= k 0)
3 (nth k (sort li #’<))))
Jos Koot pointed out a bug:
(select 3 '(1 1 2 2 3 3 4 4 5 5 6 6 7 7)) results in an infinite loop. The bug manifests itself whenever the kth element has a duplicate.
Both
selectand
partitionhave to change to fix the bug. Here is
partition, where the
(cdr xs)in the initialization of the loop ensures that at least one element is removed from the list at each iteration, thus forcing progress toward termination:
(define (partition xs)
(let ((x (car xs)))
(let loop ((xs (cdr xs)) (lt '()) (ge '()))
(cond ((null? xs) (values lt x ge))
((< (car xs) x)
(loop (cdr xs) (cons (car xs) lt) ge))
(else (loop (cdr xs) lt (cons (car xs) ge)))))))
Selectalso has to change, because the partition element is no longer included in the "greater-than" partition:
(define (select k xs)
(if (<= (length xs) k)
(error 'select "out of range")
(let loop ((k k) (xs (shuffle xs)))
(let-values (((lt x ge) (partition xs)))
(cond ((< k (length lt)) (loop k lt))
((< (length lt) k) (loop (- k (length lt) 1) ge))
(else x))))))
The variable name
gtis changed to
geto reflect that equal elements are possible, and are (arbitrarily) placed in the "greater-than" partition. The order of the return elements from
partitionis also changed, so that when reading debug output from
partitionthe elements are physically displayed in their proper order, a small point that nonetheless improves the program.
My implementation in C
Here is a java method to accomplish the same. I think there is a better solution feasible. Please suggest/comment.
public static int select(int[] a, int k)
{
int length = a.length;
int randomElementIndex = (new Long(Math.round((length-1)*Math.random()))).intValue();
int randomElement = a[randomElementIndex];
//Lower and Upper Limits of k are handled in the following loop.
if (k == 0 || k==length)
{
Arrays.sort(a);
return k==0?a[0]:a[length-1];
}
int[] leftTransientArray = new int[length];
int[] rightTransientArray = new int[length];
int laIndex = 0, raIndex = 0;
for(int i=0; i<length; i++)
{
if (a[i] randomElement)
{
rightTransientArray[raIndex]=a[i];
raIndex++;
}
}
int[] leftArray = Arrays.copyOfRange(leftTransientArray, 0, laIndex);
int[] rightArray = Arrays.copyOfRange(rightTransientArray, 0, raIndex);
if (leftArray.length == k+1)
return randomElement;
else if (leftArray.length <= k )
return select(rightArray, k-leftArray.length);
else
return select(leftArray, k);
} | http://programmingpraxis.com/2009/12/11/selection/?like=1&_wpnonce=cb13afa090 | CC-MAIN-2014-15 | refinedweb | 458 | 55.84 |
Mypy syntax cheat sheet (Python 2)¶
This document is a quick cheat sheet showing how the PEP 484 type language represents various common types in Python 2. # For simple built-in types, just use the name of the type. x = 1 # type: int x = 1.0 # type: float x = True # type: bool x = "test" # type: str x = u"test" # type: unicode # For collections, the name of the type is capitalized, and the # name of the type inside the collection is in brackets. x = [1] # type: List[int] x = set([6, 7]) # type: Set[int] # For mappings, we need the types of both keys and values. x = dict
from typing import Callable, Iterable # This is how you annotate a function definition. def stringify(num): # type: (int) -> str """Your function docstring goes here after the type definition.""" return str(num) # This function has no parameters and also returns nothing. Annotations # can also be placed on the same line as their function headers. def greet_world(): # type: () -> None print "Hello, world!" # And here's how you specify multiple arguments. def plus(num1, num2): # type: (int, int) -> int return num1 + num2 # Add type annotations for kwargs as though they were positional args. def f(num1, my_float=3.5): # type: (int, float) -> float return num1 + my_float # An argument can be declared positional-only by giving it a name # starting with two underscores: def quux(__x): #): # type: (int) -> Iterable[int] i = 0 while i < n: yield i i += 1 # There's alternative syntax for functions with many arguments. def send_email(address, # type: Union[str, List[str]] sender, # type: str cc, # type: Optional[List[str]] bcc, # type: Optional[List[str]] subject='', body=None # type: List[str] ): # type: (...) -> bool <code>
When you’re puzzled or when things are complicated¶
from typing import Union, Any,, **kwargs): # type: (*str, * # Use Iterable for generic iterables (anything usable in `for`), # and Sequence where a sequence (supporting `len` and `__getitem__`) is required. def f(iterable_of_ints): # type: (Iterable[int]) -> List[str] return [str(x) for x in iterator_of_ints] f(range(1, 3)) # Mapping describes a dict-like object (with `__getitem__`) that we won't mutate, # and MutableMapping one (with `__setitem__`) that we might. def f(my_dict): # type: (Mapping[int, str]) -> List[int] return list(my_dict.keys()) f({3: 'yes', 4: 'no'}) def f(my_mapping): # type: (MutableMapping[int, str]) -> Set[str] my_dict[5] = 'maybe' return set(my_dict.values()) f({3: 'yes', 4: 'no'})
Classes¶
class MyClass(object): # For instance methods, omit `self`. def my_method(self, num, str1): # type: (int, str) -> str return num * str1 # The __init__ method doesn't return anything, so it gets return # type None just like any other method that doesn't return anything. def __init__(self): # type: () -> None pass # User-defined classes are written with just their own names. x = MyClass() # type: MyClass
Other stuff¶
import sys # typing.Match describes regex matches from the re module. from typing import Match, AnyStr, IO x = re.match(r'[0-9]+', "15") # type: Match[str] # Use AnyStr for functions that should accept any kind of string # without allowing different kinds of strings to mix. def concat(a: AnyStr, b: AnyStr) -> AnyStr: return a + b concat(u"foo", u"bar") # type: unicode concat(b"foo", b"bar") # type: bytes # # TODO: add TypeVar and a simple generic function | http://mypy.readthedocs.io/en/stable/cheat_sheet.html | CC-MAIN-2017-26 | refinedweb | 541 | 62.07 |
Originally posted by Camilo Morales: 13). Given an excerpt of a Book entity (it is just missing the import statements): 10. @Entity 11. public class Book implements Serializable { 12. @Id 13. @GeneratedValue(strategy = GenerationType.AUTO) 14. private Integer id; 15. String bookName; 16. protected int price; 17. enum Status {IN, OUT}; 18. @Enumerated( EnumType.ORDINAL ) 19. Status status; 20. transient int bar; 21. java.util.Map<Integer, String> comments; 22. protected Book() {}; 23. } No descriptors are used. Which statement is correct about this entity? A) There is an error on line 11. It must NOT implement Serializable. B) Adding a single @Transient annotation makes this entity valid. C) The visibility declarations on some of the variables causes an exception. D) The enumeration or its field definition on lines 17, 18, or 19 is NOT valid. E) There is an error in the identity definition on lines 12, 13, or 14. After the check list I can discard A and C but not sure about the rest.] ... [4] Portable applications should not expect the order of lists to be maintained across persistence contexts unless the OrderBy construct is used and the modifications to the list observe the specified ordering. The order is not otherwise persistent. [5] The implementation type may be used by the application to initialize fields or properties before the entity is made persistent; subsequent access must be through the interface type once the entity becomes managed (or detached)., user-defined serializable types, byte[], Byte[], char[], and Character[]) * enums * entity types and/or collections of entity types * embeddable classes
Anyway I didnt FIND that explanation in the Specs or in the MZ notes ... I guess that if we are right ... they assumed we will never use Collection types for non-relationships persistent fields. | http://www.coderanch.com/t/163098/java-EJB-SCBCD/certification/EJB-Entity | CC-MAIN-2015-22 | refinedweb | 294 | 51.85 |
Last edited: March 22nd 2018Last edited: March 22nd 2018
Within this example, we'll consider a resistor network consisting of $N$ repetitive units. Each unit has two resistors of magnitude $R$ and $12R$, respectively. The units are connected to a battery of voltage $V_0$, as shown in the figure below.
The goal of this example is to calculate the toal current propagating through the cicuit, provided by the battery. The following image explains the applied notations and variables.
When solving a problem like this, it is often useful to look at special scenarios, if available, before one tries to solve the general problem. We will now look at two such simplified problems.
First, consider the special case when $N=1$. The circuit then looks like the following,
This reduced problem is easily solved by defining $R_{eff}$, the effective resistance of the circuit connected to the battery. This implies that we may represent the circuit by the following diagram
where $R_{eff} = R+12R=13R$. Then, by Ohm's law,$$ I_{1,1} = \frac{V_0}{13R} = \frac{1}{13} \frac{\textrm{V}}{\Omega} \approx 0.0769 \textrm{A} $$
As another special scenario, we will consider the case when the number of units $N$ goes to infinity. This scenario is not so trivial. Obviously, we are providing more options for the current to flow as $N\to\infty$. Hence, we expect the resistance to decrease in this limit. Take a minute to think about how you would solve it before you read on.
Again, we can consider an effective resistance $R_{eff}$ and represent the whole circuit as before (last image). Now, take the circuit from the previuos page with infinitely many repetitive units and add one more unit to it, so that it can be represented by the following circuit diagram
Since $N$ is infinitely large, adding one more unit should not change the effective resistance $R_{eff}$ of the whole circuit. In other words, $12R$ and $R_{eff}$ are in parallel in the above sketch and the resistance of the entire circuit $R_{total}$ is still $R_{eff}$. By this argument, we can relate $R_{eff}$ to itself by$$ R_{total} = R_{eff} = R + \frac{1}{\frac{1}{12R} + \frac{1}{R_{eff}}} $$
By solving this quadric equation for $R_{eff}$ and demanding that $R_{eff}$ be positive, we obtain$$ R_{eff} = 4R = 4\Omega $$
Such that by Ohm's law, we find $$ I_{1,1} = \frac{V_0}{4R} = \frac{1}{4} \frac{\textrm{V}}{\Omega} \approx 0.25 \textrm{A} $$
Now, we turn to the more general case when $1<N<\infty$. To solve it, we will set up a system of $N$ equations and $N$ unknowns that we will formulate as a matrix problem. It will then be solven using Python. The unknowns will be the $N$ voltages $V_i$, $i=1,\ldots,N;$.
To obtain the $N$ equations and $N$ unknowns, we first apply Ohm's law to all resistors in the circuit. This yields$$ I_{i,1} = \frac{V_{i-1} - V_i}{R}, \quad i=1,\ldots,N; $$
for the $N$ resistors of resistance $R$ and$$ I_{i,2} = \frac{V_i}{12R}, \quad i=1,\ldots,N; $$
for the $N$ resistors with resistance 12R.
The next step is to eliminate the currents $I_{i,1}$ and $I_{i,2}$ for $i=1,\ldots,N;$ in the equations above. To do so, we turn to the principle of conservation of current, namely that the sum of all currents flowing into a node in a circuit diagram is equal to the sum of all currents flowing out of it. This statement equates to saying that no charges are created or destroyed in a node and is often reffered to as Krichoff's current law.
For the nodes labelled $V_i$, where $i=1,\ldots,N-1;$ we get$$ I_{i,1} = I_{i,2} + I_{i+1,1} $$
For the last node, labelled $V_N$, we get$$ I_{N,1} = I_{N,2} $$
Substituting the earliest expressions for $I_{i,1}$ and $I_{i,2}$ into the each of these two last equations separately (the upper one first), we find for the first node $i=1$,$$ \frac{25}{12R} V_1 - \frac{1}{R}V_2 = V_0 $$
and for the nodes labelled $V_i$, with $i=2,\ldots,N-1;$$$ -\frac{1}{R}V_{i-1} + \frac{25}{12R}V_i - \frac{1}{R} V_{i+1} = 0 $$
and then for the lower equation,$$ -\frac{1}{R}V_{N-1} + \frac{13}{12R}V_N = 0 $$
Counting the number of equations, we see that these three last expressions contain $N$ equations in total. This is exactly the amount we need to determine all $N$ voltages $V_i$ for $i=2,\ldots,N-1;$ uniquely.
Moreover, these equations can be formulated as a matrix problem $\mathcal{A}\boldsymbol{V}=\boldsymbol{b}$$$ \begin{bmatrix} 25/12R & -1/R & 0 & \dots & 0 \\ -1/R & 25/12R & -1/R & \dots & 0 \\ \vdots& \ddots &\ddots& \ddots& \vdots \\ 0 & \dots &-1/R & 25/12R & -1/R \\ 0 & \dots & 0 & -1/R & 25/12R \end{bmatrix} \cdot \begin{bmatrix} V_1 \\ V_2 \\ \vdots \\ V_{N-1} \\ V_N \end{bmatrix} = \begin{bmatrix} V_0/R \\ 0 \\ \vdots \\ 0 \\ 0 \end{bmatrix} $$
Now, finding the unknown voltages amounts to solving this matrix equation $\mathcal{A}\boldsymbol{V}=\boldsymbol{b}$, for the unknown voltage vector $\boldsymbol{V}$. The matrix $\mathcal{A}$ and the vector $\boldsymbol{b}$ are given by the resistances within the circuit and the voltage $V_0$. Subsequently, we can calculate the total current by Ohm's law,$$ I_{1,1} = \frac{V_0 - V_1}{R} $$
In Python, the first thing we need to do is defineing $V_0$, $R$ and $N$.
R = 1.0 # Resistance [Ohm] V0 = 1.0 # Applied voltage [V] N = 10 # Number of repitative units [dimless]
Then, we need to set up the matrix $\mathcal{A}$ and the vector $\boldsymbol{b}$ of the matrix equation. We start by initializing them both with zeros before filling them and also define two variables, $a=25/12R$ and $c=-1/R$ to simplify their filling.
# We use numpy arrays for increased efficiency # of matrix operations and functionality. import numpy as np A = np.zeros((N,N)) # Matrix of dimension NxN b = np.zeros(N) # Vector of length N a = 25.0/(12*R) # Scalar (constant) c = -1.0/R # Scalar (constant)
Next, we set up $\boldsymbol{b}$, and $\mathcal{A}$ row by row.
# the b-vector is all zeros except for first entry. Thus, b[0] = V0/R # Set up first row A[0,0] = a A[0,1] = c # Set up last row A[N-1,N-1] = a A[N-1,N-2] = c # Set up all other rows # OBS: if you're running Python 2.7 (or lower) you might want to use # the 'xrange()' instead of 'range()' depending on the size of 'N'. for row in range(1,N-1): A[row,row-1] = c A[row,row ] = a A[row,row+1] = c # You may want to print A,b to see if it was initialized correctly: # print(A,b)
Then we can solve the system of equations by using the built-in Numerical Python Linear Algebra Solver
Voltages = np.linalg.solve(A
We see that when $N$ becomes large (or actually already at $N\geq15$), $I_{1,1}$ approaches the limit we found analytically as $N\to\infty$. That is $I_{1,1}\to 1/4$.
You should note that, even though the built-in solver we use in this example is implemented to be effective, it is a general solver which do not take advantage of the fact that $\mathcal{A}$ is a sparse matrix for this problem.
If $\mathcal{A}$ becomes really large, then this solver would eventually become very slow as it unnessecarily iterates over all the zeros. As an alternative, one can use the built-in functionality of the Scientific Python Sparse Linear Algebra package. However, this module requires $\mathcal{A}$ to be stored in a specific (sparse) manner.
from scipy import sparse import scipy.sparse.linalg as ssl from scipy.sparse.linalg import spsolve # Since our matrix only have non-zero values on # THE diagonal, the UPPER(sup) diagonal and the LOWER(sub) diagonal, # this is in fact all we need to tell Python. #Create create a sparse matrix sup_diag = np.ones(N)*c sub_diag = np.ones(N)*c the_diag = np.ones(N)*a # Offset: -1 0 1 all_diags = [sub_diag, the_diag, sup_diag] offsets = np.array([-1, 0, 1]) csc="csc" # Computer format of which the matrix is stored # Define the SPARSE matrix A_sparse = sparse.spdiags(all_diags, offsets, N,N, format=csc) # print(A_sparse.todense()) # prints the SPARSE matrix in a DENSE (normal NxN) format. Voltages = spsolve(A_sparse
Now, try to compare running times of the sparse solver, with the standard solver as you increase $N$ to $10, 100, 1000, \ldots$. | https://nbviewer.jupyter.org/urls/www.numfys.net/media/notebooks/resistor_network.ipynb | CC-MAIN-2019-43 | refinedweb | 1,473 | 60.75 |
tag:blogger.com,1999:blog-63304662009-01-12T05:19:34.222-05:00sam bot dot comYou don't care about me. You don't. So don't even pretend to. But, if you did, I would write something like this: Hi. I'm Sam and this is my blog. It's a place for me to unload some of the crud that floats around in the ol' gray matter. I might blog about design, technology, the Mac... or quite possibly, what I had for lunch. In any case, welcome.Sam the rare chance that anyone is looking here for WWDC info... umm... that's so unlikely that I'm not even going to finish the thought. I am at WWDC though. But I, along with the rest of the developers here, will be concentrating all of my efforts on breaking twitter. You can see all of the action here: <a href=""></a>Sam 8 Us<a href=""><img src="" align="left" class="pic"></a>Yeah... I'm on a bit of a hiatus right now (<i>could you tell?</i>). I'm just trying to get some job/life/post-graduation stuff going in the forward/upward/non-wayward direction (<i>and it's going pretty well... thanks for asking!</i>). But fear not, when I return (<i>and this doesn't count</i>), sam bot dot com will be bigger/better/stronger than ever before! <br /><br />'Til then, brothers and sisters, blog on!<br /><br />P.S. I want an iPhone.Sam Fear ChangeWhoa. Apple.com. It's all... <a href="">different</a>.Sam, the Disappointment!I've been really struggling with a good way to say this, and I've decided redundancy is probably the best approach. Here goes: Today's WWDC keynote was a...<br /><br /><b><i>big</i></b> <br /><i>adjective</i> <br /><b>1.</b> large, sizable, substantial, great, huge, immense, enormous, extensive, colossal, massive, mammoth, vast, tremendous, gigantic, giant, monumental, mighty, gargantuan, elephantine, titanic, mountainous, Brobdingnagian; towering, tall, high, lofty; outsize, oversized; goodly; capacious, voluminous, spacious; king-size(d), man-size, family-size(d), economy-size(d); informal jumbo, whopping, mega, humongous, monster, astronomical, ginormous*<br /><br />...disappointment. Honestly, the highlight of the keynote (for me) was the announcement of the future absence of the brushed metal look (which I've said good-bye to long ago, thanks to <a href="">Uno</a>). Yeah, aside from being a great sub-genre of Heavy Metal, brushed metal is getting a bit tired! (<i>Ha!</i>)<br /><br />*<i>Special shout-outs to the OS X thesaurus for this one!</i>Sam's Get It Together!<a href=""><img src="" align="left" class="pic"></a>Less than two hours to showtime, people. Let's get it together! Set hammock to <i>light sway</i>. Coffee, hot! Point all browsers to <a href="">Mac Rumors Live</a> (or <a href="">Engadget</a>). Keep a "new blog entry" window at the ready. And please, let's make sure that "i" key is all polished and ready to go. I have a feeling that it's going to get a thorough workout today.Sam<a href=""><img src="" align="left" class="pic"></a>Well what do you know... this year's WWDC kicks off next Monday... <i>er... tomorrow</i>. I guess it sort of snuck up on me. And so now the question is, "<i>what's it gonna be, folks?</i>" More iPhone fun? Leopard stuff? Something brand new? Only the Steve knows for sure.<br /><br />(And in other news, I'm blogging from a hammock. Yes, I know, my socks are very stylish, thank you.)Sam... Kudos Onslaught<img src="" align="left" class="pic">Well, after being disconnected just as I was giving my phone number to the customer service rep (<i>you know... just in case we get disconnected</i>), and after trudging through the automated help interface twice, being put on hold while forced to endure a never-ending Kenny G. medley, and then finally being connected to a real live person, and having them finally connect me to the <i>correct</i> real live person, I have successfully accessed my home wifi network. Yay!<br /><br />The solution was as simple as putting a "$" in front of the WEP password. Wow. Wouldn't it have been much simpler and easier (not to mention, more cost effective for SBC/AT&T/Yahoo!) if that minute, yet critical, nugget of information was included in the li'l install booklet? Or perhaps if it was available on their help site? (<i>"How did you query their help site without internet access?"</i> you may ask. Thankfully, the upstairs neighbors have been allowing me to <s>steal</s> borrow internet from their unsecured wifi network for about a month now. Thanks guys! (That's for dominating the communal storage space with your heaping mounds of junk.)) <br /><br />Anyway, the true reason that I'm posting this is twofold. Of course my main motivation for publicizing this gripe is to... well, gripe (really though, isn't that the primary usage of most blogs?). But my secondary reason, and the one that makes me seem less whiny and quite possibly even selfless (and therefore, the one that we will be focusing on here), is simply to get this solution, the "$" before the WEP password, out there into the boundless ether of internetdom. Come on, I can't be the only one experiencing this problem. <br /><br />So hopefully, Google will index this solution, make it findable, I'll be a great boon to many, and there will be much rejoicing. See, I do my part to give back to society. Let the onslaught of rightfully-deserved kudos commence! And just for the record, I will gladly accept all forms of kudos... <s>even</s> especially those in <a href="">bar form</a>.Sam, You've Done It Again!<i>Congrats Palm, you've done it again! You've added another unimpressive and uninspiring piece of hardware to your already lacking product line.</i><br /><br /><a href=""><img src="" align="left" class="pic"></a>Those creative and forward-thinking guys and gals over at Palm Inc. (or Palm Pilot, or PalmOne, or Access OS, or whatever their name is this month), have just released a spectacular new product. Something so spectacular and new that you're likely to curl up in the nearest corner and cry. Yes, cry. Cry till your tear ducts have nothing left to cry and they poof out vapor clouds of dried tear powder. And when someone asks why you're crying in the corner (<i>because inevitably, someone will</i>), you can tell them that you're crying because now, thanks to the innovative innovations emanating from Palm's headquarters, humanity's long stint of suffering and agony is finally over. Brothers and sisters, these are not tears of sorrow. Nay! They are tears of joy! For today, Palm released a laptop. But wait! Not just any ol' laptop. It's a <i>little</i> laptop. Hurray! <br /><br />See, not all that impressive, is it? What really gets me though is the mounds and mounds of <a href="">media attention</a> devoted to this <a href="">product's</a> release. Maybe I'm missing something... and perhaps you can help. Okay okay, fun time! Yay! <a href="">Complete</a> the following sentence: Palm's new laptop thingy is great because <br />____________________________________________<br />____________________________________________<br />____________________________________________.<br /><br />(I'll even start the <a href="">first one</a>.)Sam!<img src="" align="left" class="pic">Hot! I just <a href="">read</a> that a portion of the filming of Indiana Jones 4 will be taking place in The Have (okay. Not have. <i>Hayv</i>... with a long A. Like HAVEn. Get it? That's what all the cool kids call New Haven. Are you a cool kid? I'm a cool kid.)! Apparently, there's going to be "some kind of car chase on Chapel between College and High streets." June 28-30. Yeah. That's hot.Sam Filth<a href=""><img src="" align="left" class="pic"></a>Sometimes I find the most difficult time to write is when I have too much to write about... hence the severe lack of updateage. But yes, I'm still alive. And no, my thesis didn't kill me (though it tried its damnedest... and proved to be a more-than-worthy adversary. Good for you, thesis!). <br /><br />So yeah, I'm all graduated and stuff. The graduation ceremony was spectacular. It was a beautiful day, I got an award, there was free beer... needless to say, fun was had by all. And now, into the real world, to experience bigger and better things. Like debt and unemployment. <br /><br />It's funny though: grad school, Quinnipiac, the endless amounts of research, writing, and general academic toil... for the past two years, these have all been monstrously strenuous and mentally exhausting parts of my life. And despite my relentless complaining, I've loved every excruciating moment. In fact, I've reveled in those moments. I've rolled in them, gleefully, like a pig in its own filth. But now, without any real preparation, I've been yoinked, remorselessly, from my sheltering filth. Sadly, I've discovered that my filth was keeping me warm... sane... content. My filth was keeping me filled with purpose. <br /><br />Sigh... it's cold and strange out here without my filth to slog through. Perhaps I should get a doctorate.* Yes! Back to the cozy, comfortable, filthy world of academia! <br /><br />Sallie Mae, baby, I'm coming home! Now where's that loan deferment form?<br /><br />*<i>Doctorate? Umm... no way. Not for all the coddling filth in the world.</i>Sam Bar!<a href=""><img src="" align="left" class="pic"></a>Yeah. I saw it in the store and I just had to. I couldn't resist. I needed to know what the seven foods of Deuteronomy 8:8 taste like... in bar form. And the result? <i>Well <s>goddamn</s> goshdarn!</i> That's one divine <a href="">Bible Bar</a>. <br /><br />(Actually, it was kinda bland.)<br /> Sam Wildebeest<img src="" align="left" class="pic">Staying in line with my <a href="">most</a> <a href="">current</a> <a href="">theme</a> (and my most passionate hobby), I'd like to bring to everyone's attention the fact that there is FREE COFFEE being distributed (for this, and most of next week) in the Quinnipiac University Law Library. All you have to do to participate in this spectacular event is simply say that you're a law student. So, if anyone asks, I'm a law student. Heck... I'd declare myself a <i>tremendous wildebeest*</i> if it results in free coffee. <br /><br />*Yeah, I'll admit, "tremendous wildebeest" is an odd thing to declare oneself. But I'm trying really hard to work it into a blog post... cuz it's sort of an inside joke... but not a very funny one... and that's all I got... and I'm gonna go now... <i>ZINGGG!!!**</i><br /><br />**And just so we're all on the same page here, "<i>ZINGGG!!!</i>" is the sound it makes when I leave really quickly.Sam <a href="">either</a>:<br /><br />A) I've finished my thesis,<br />B) I haven't left the library since Saturday, or<br />C) I'm dead (and blogging from the nether-realm).<br /><br />The correct answer is A... and a little bit of C thrown in for flava. <br /><br />The truth is that I'm 95% done with my thesis. And 5% dead. So really, it all balances out. But I'll be 100% done really soon. And then 100% graduated. And then 100% a master. <br /><br />And then 100% unemployed. <br /><br />Yay!Sam My Heart Doesn't Explode First...<a href=""><img src="" align="left" class="pic"></a>I am only allowed to leave the library tonight under one of the two following conditions:<br /><br />1) I am finished with my graduate thesis, <i>or</i><br />2) I am dead. <br /><br />(<i>I'm hoping for the first scenario... but only marginally so</i>). <br /><br />And thus, in a weak attempt to accelerate the completion of my thesis (and also, to advance the incursion of death), I have prepared the requisite dinner of power bars and energy drinks... a perfect compliment to a fine evening of academic toil. <br /><br />Ahh... good times.Sam Choice<a href=""><img src="" align="left" class="pic"></a>Concerning web browsers, there was a time (a dark time) when I used to do the Safari thing. Then I migrated to Firefox (and then back to Safari... and then back to Firefox... which is not what we're here to talk about). But concerning mail apps, the Moz (Mozilla, the makers of Firefox) just released <a href="">Thunderbird 2</a>, a free and powerful email app. <br /><br />Currently, I use OS X's Mail and I'm pretty happy with it. Though, it does have its annoying quirks. Now the T'bird... well, she's already making some <a href="">peeps</a> pretty excited. <i>Dare I?</i><br /><br />And so, as I do with all of the important and life-altering decisions that I'm faced with, I turn to the internet. My dear readers, should I make the switch from Mail to T'bird?<br /><br />A) Yes.<br />B) No.<br />C) I really don't care. And by the way, this will be the last time that I read this crappy-ass blog.Sam?<a href=""><img src="" align="left" class="pic"></a>Being somewhat of a self-proclaimed sewing dork, and equally as much a <i>cool machines</i> dork, I was delighted (<i>simply delighted!</i>) to see <a href="">this graphic</a> explain, using four-color animated gif awesomeness, the stitching technique of the commonplace sewing machine. Up until this point, I was fully willing to accept satan, magic, or gnomes as logical answers to the <i>mystery of the sewing machine</i> question... that can be fully and succinctly articulated in the following syllable: <i>wha?</i>Sam Have Influence<a href="">It</a> <a href="">worked</a>!Sam on Your Blogs and Ride!<a href=""><img src="" align="left" class="pic"></a>Something weird is going on with Blogger and/or Blogspot and/or Bloglines: the blogs of some of my peeps that have not updated their blogs in a <a href="">very</a> <a href="">long</a> <a href="">time</a>, are spontaneously showing up as unread in my feed reader. Which is weird, but also kinda nice. It's like an unexpected dose of nostalgia... a reminder of a simpler time... a time when bloggers may still have referred to their craft as <i>we</i>blogging. Weblogging!? Ha! Those truly were foolish times.<br /><br />And so, the following is a plea... no, let's make it a petition (to be signed in the <a href="">comments</a> section of this post). It's a call to all of my pals who once proudly donned the titled "blogger" but have since fallen, hard, from the top deck of the blogwagon. (And this points especially pointedly at The Dark Lord Derfla, whose blog, <a href="">From the Depths of the Tepid Inferno</a>, demonstrates a mastery of hand drawn, Sharpie and Post-It note illustrations which showcase the hilarious exploits (often featuring <a href="">yours</a> <a href="">truly</a>) of the trials and tribulations of The Dark Lord's daily routine.)<br /><br />And so, from one who knows the pain of time spent <a href="">away from the blog</a>, I reach an outstretched hand from the driver's seat of the blogwagon to those unfortunate bloggers who have fallen to the cold, wet, and just miserable on all accounts, ground. Come'on back. We'll party like it's 2004. <br /><br /><b>The Petition:</b><br /><br />The internet is a cold, wet, and just miserable on all accounts, place without the blogs we once loved. Therefore, we, the undersigned, hereby invite those who may have fallen, to get back on their blogs and ride (ummm... sing that last part in the tune of <i>Fat Bottom Girls</i> by Queen... you know, that part where Freddie Mercury shouts, "Get on your bikes and ride!" for no particular reason whatsoever. God... I love that song). <br /><br />Signed,<br /><br /><a href="">(<i>sign petition by leaving a comment</i>)</a>Sam<a href=""><img src="" align="left" class="pic"></a><i><a href="">Top 10 Coolest Doormats</a>!? What!?</i> No... I refuse.<br /><br />Okay. This is it. I've had enough. I'm drawing the line. The buck stops here. The camel's back just broke. Et cetera. Et cetera. Et cetera.<br /><br />Can someone please tell me what the blogoshere's fascination with <i>top ten lists</i> is?<br /><br /><b><i>Play along at home! What fun!</i></b><br />1. Go to <a href="">digg.com</a>. <br />2. Search for "<a href="§ion=news&search-buried=1&type=title&area=all&sort=score">top ten</a>."*<br />3. Be appalled by the ensuing result. <br /><br />Anyway, this has got to end. So, even though <a href="">I've written one</a>, I declare an all out boycott of the invasive top ten list. Starting... <i>now!</i><br /><br />*<i>The actual search term that I used was "ten top." I dunno. It just worked better that way. But I'm sure someone was going to call me out on it.</i>Sam Hate Old Navy<a href=""><img src="" align="left" class="pic"></a>I never was, nor do I aspire to be a bicycle messenger. However, I spent some time living in the East Bay, rooming with a San Franciscan messenger. And as a result, I was unwittingly plunged headfirst into their culture... which was accurately described to me as being "the rock star lifestyle of the cycling world," in that bicycle messengers are feared by grandmas, idolized by youth, and guilty of trashing hotel rooms... all of which I can't personally verify. But I've heard stories.<br /><br />I do enjoy <i>the bicycle</i> though, in all its forms (especially the <a href="">purest</a>), cultures, and subcultures... including, of course, bike messenger culture. Clearly, this stems (almost entirely) from my west coast inundation. And even though I have long since moved back to the east coast, I've maintained a sort of passive interest in the goings-on within the bicycle messenger genre of bikedom. <i>Why?</i> Well, I guess it makes me feel ever-so-slightly less removed from the west coast and my <a href="">bike messenger friend</a>. <br /><br /><i>And that is why I hate Old Navy. Good night.</i><br /><br />Wait. I think I left something out. Oh right... In my passive interest in the goings-on of the bike messenger community, I stumbled upon <a href="">this</a>: "Can't think of a sub-culture that hates to be co-opted more than bike messengers. Nothing worse than seeing your lifestyle turned into an Old Navy tshirt." Yep. Old Navy has taken the <i>bike messenger rock star lifestyle</i>, condensed it, mainstreamed it, and printed it on a faux-vintage t-shirt. <br /><br />And <i>that</i> is why I hate Old Navy. Good night. <br /><br />(Ahh... see? My hatred makes so much more sense now. And for the record, I don't really hate Old Navy. It's not their fault. I think <i>Murphy's Law of Cool Things</i> states that, "All cool things will, eventually and unfortunately, be exploited by large corporations (that just don't get it) for the specific intent of mainstream consumption.)Sam"Kurt is up in heaven now."<a href="">Kurt Vonnegut</a> died yesterday. He was 84. This makes me so sad. <br /><br />The following is a <a href="">quote</a> from Kurt's last book, <i><a href="">A Man Without a Country</a></i> (2005). It seems oddly appropriate:<br /><br /><img src="" align="left" class="pic">."<br /><br />Anyway, R.I.P. Kurt Vonnegut. And if you haven't already, you should read Vonnegut's <i><a href="">The Sirens of Titan</a></i>, followed by <i><a href="">Slapstick</a></i>... two of my favorites.<br /><br />More:<br /><a href="">Boing Boing</a><br /><a href="">New York Times</a><br /><a href="">Gaurdian Unlimited</a>Sam. Sausage or Chain... You DecideJust some linkage that I'd like to share:<br /><br /><a href=""><img src="" align="left" class="pic"></a>1) The <a href="">Otis</a>, by <a href="">Swobo Bikes</a>. I'm going to call it the world's most perfect city bike. What sets this bike above some of the <a href="">others</a> in the urban cycling genre is, in order of least to most important: front disc brake, a 3-speed internal hub, matt black styling, and finally and most importantly, there is a bottle opener embedded in the bottom of the saddle! Yes, a bottle opener. See... perfect. (What is questionable about the Otis, is the inclusion of a coaster brake. <i>Huh?</i> Yeah, I don't get that one either. It would seem, however, that one should be able to simply spin off the fixed cog and replace it with a freewheel. But don't quote me on that.)<br /><br />2) And the second link... <i>darn... what was the second one?</i> Well, I guess this will have to do. All you need to know is <a href="">blood is truth</a>. Many, <i>many</i> good things for your aural amusement. Enjoy.Sam'm Not Sorry<a href=""><img src="" align="left" class="pic"></a>I enjoyed FREE COFFEE (<i>yes, FREE COFFEE is deserving of all caps</i>) this morning, just for driving my car to school. Encouragement for being lazy. Ha! In your face, beautiful spring-time weather!<br /><br />Hmm... this blog is quickly transforming into a <i>where to get <a href="">FREE</a> <a href="">COFFEE</a></i> blog. I can't honestly say that it's entirely unexpected though... considering my fondness for all that is <i>free</i> and all that is <i>coffee</i>. <br /><br />I do, however, feel compelled to apologize for the lack of warning. Though, in all truthfulness, I'm not repentant... in any way whatsoever. But having said that, here's a hollow apology: Sorry, jerks.Sam the Founder of Quinnipiac University is...<a href=""><img src="" align="left" class="pic"></a>Right... so I don't want to jinx it, but as of right now (<i>actually, as of <a href="">February 26, 2007</a></i>), I am, according to the all-knowing brain-in-a-jar that is Wikipedia, the founder of <a href="">Quinnipiac University</a>.<br /><br />Yep. It's right there in the <a href="">History</a> section, second sentence in. It reads, "Quinnipiac University is a private, coeducational, nonsectarian institution of higher education. Originally known as the Connecticut College of Commerce, it was founded in 1929 by <b>Samuel H. Cohen</b> as a small business college..." <br /><br />1929. Damn. I'm looking pretty good despite being almost 80!<br /><br /><i>and</i><br /><br />I guess I should update my resume to reflect my former position as University Founder. Unfortunately, I will be unable to provide references... because they're all probably dead.<br /><br />I don't know who made the actual edit (I do have my <a href="">suspicions</a>, though). But anyway, I have to go now. I'm off to the bursar's office. Being the founder, I assume that I'm entitled to a tuition reimbursement... or at the very least, a free <a href="">bobble-head</a>. <br /><br />(<i>What makes this inaccuracy all the more ironic, is that last semester I wrote a paper commending the Wikipedia community for its accuracy... promoting the notion that, by the collaborative authoring of worldly knowledge, accuracy will prevail! Clearly, I was wrong. But I'm not giving back my A. Oh... and just in case Wikipedia ever catches up with itself, and reverts to a previous iteration, <a href="">here</a> is a .pdf of the Quinnipiac University Wikipedia entry as it exists today.</i>)Sam... More Free (as in Beer) CoffeeTomorrow. All day long. FREE ICED COFFEE. <a href="">This time</a>, from Dunkin' Donuts.Sam | http://feeds.feedburner.com/sambot | crawl-002 | refinedweb | 4,183 | 76.52 |
Welcome back for the second installment in this series. This installment serves as an introduction to the world of convolution filters. It is also the first version of our program that offers one level of undo. We'll build on that later, but for now I thought it mandatory that you be able to undo your experiments without having to reload the image every time.
So what is a convolution filter ? Essentially, it's a matrix, as follows:
The idea is that the pixel we are processing, and the eight that surround it, are each given a weight. The total value of the matrix is divided by a factor, and optionally an offset is added to the end value. The matrix above is called an identity matrix, because the image is not changed by passing through it. Usually the factor is the value derived from adding all the values in the matrix together, which ensures the end value will be in the range 0-255. Where this is not the case, for example, in an embossing filter where the values add up to 0, an offet of 127 is common. I should also mention that convolution filters come in a variety of sizes, 7x7 is not unheard of, and edge detection filters in particular are not symmetrical. Also, the bigger the filter, the more pixels we cannot process, as we cannot process pixels that do not have the number of surrounding pixels our matrix requires. In our case, the outer edges of the image to a depth of one pixel will go unprocessed.
First of all we need to establish a framework from which to write these filters, otherwise we'll find ourselves writing the same code over and again. As our filter now relies on surrounding values to get a result, we are going to need a source and a destination bitmap. I tend to create a copy of the bitmap coming in and use the copy as the source, as it is the one getting discarded in the end. To facilitate this, I define a matrix class as follows:
public class ConvMatrix
{
public int TopLeft = 0, TopMid = 0, TopRight = 0;
public int MidLeft = 0, Pixel = 1, MidRight = 0;
public int BottomLeft = 0, BottomMid = 0, BottomRight = 0;
public int Factor = 1;
public int Offset = 0;
public void SetAll(int nVal)
{
TopLeft = TopMid = TopRight = MidLeft = Pixel = MidRight =
BottomLeft = BottomMid = BottomRight = nVal;
}
}
I'm sure you noticed that it is an identity matrix by default. I also define a method that sets all the elements of the matrix to the same value.
The pixel processing code is more complex than our last article, because we need to access nine pixels, and two bitmaps. I do this by defining constants for jumping one and two rows ( because we want to avoid calculations as much as possible in the main loop, we define both instead of adding one to itself, or multiplying it by 2 ). We can then use these values to write our code. As our initial offset into the different color is 0, 1, and 2, we end up with 3 and 6 added to each of those values to create indices for three pixels across, and use our constants to add the rows. In order to ensure we don't have any values jumping from the bottom of the image to the top, we need to create one int, which is used to calculate each pixel value, then clamped and stored. Here is the entire function:
public static bool Conv3x3(Bitmap b, ConvMatrix m)
{
// Avoid divide by zero errors
if (0 == m.Factor)
return false; Bitmap
// GDI+ still lies to us - the return format is BGR, NOT RGB.
bSrc = (Bitmap)b.Clone();
BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
ImageLockMode.ReadWrite,
PixelFormat.Format24bppRgb);
BitmapData bmSrc = bSrc.LockBits(new Rectangle(0, 0, bSrc.Width, bSrc.Height),
ImageLockMode.ReadWrite,
PixelFormat.Format24bppRgb);
int stride = bmData.Stride;
int stride2 = stride * 2;
System.IntPtr Scan0 = bmData.Scan0;
System.IntPtr SrcScan0 = bmSrc.Scan0;
unsafe {
byte * p = (byte *)(void *)Scan0;
byte * pSrc = (byte *)(void *)SrcScan0;
int nOffset = stride - b.Width*3;
int nWidth = b.Width - 2;
int nHeight = b.Height - 2;
int nPixel;
for(int y=0;y < nHeight;++y)
{
for(int x=0; x < nWidth; ++x )
{
nPixel = ( ( ( (pSrc[2] * m.TopLeft) +
(pSrc[5] * m.TopMid) +
(pSrc[8] * m.TopRight) +
(pSrc[2 + stride] * m.MidLeft) +
(pSrc[5 + stride] * m.Pixel) +
(pSrc[8 + stride] * m.MidRight) +
(pSrc[2 + stride2] * m.BottomLeft) +
(pSrc[5 + stride2] * m.BottomMid) +
(pSrc[8 + stride2] * m.BottomRight))
/ m.Factor) + m.Offset);
if (nPixel < 0) nPixel = 0;
if (nPixel > 255) nPixel = 255;
p[5 + stride]= (byte)nPixel;
nPixel = ( ( ( (pSrc[1] * m.TopLeft) +
(pSrc[4] * m.TopMid) +
(pSrc[7] * m.TopRight) +
(pSrc[1 + stride] * m.MidLeft) +
(pSrc[4 + stride] * m.Pixel) +
(pSrc[7 + stride] * m.MidRight) +
(pSrc[1 + stride2] * m.BottomLeft) +
(pSrc[4 + stride2] * m.BottomMid) +
(pSrc[7 + stride2] * m.BottomRight))
/ m.Factor) + m.Offset);
if (nPixel < 0) nPixel = 0;
if (nPixel > 255) nPixel = 255;
p[4 + stride] = (byte)nPixel;
nPixel = ( ( ( (pSrc[0] * m.TopLeft) +
(pSrc[3] * m.TopMid) +
(pSrc[6] * m.TopRight) +
(pSrc[0 + stride] * m.MidLeft) +
(pSrc[3 + stride] * m.Pixel) +
(pSrc[6 + stride] * m.MidRight) +
(pSrc[0 + stride2] * m.BottomLeft) +
(pSrc[3 + stride2] * m.BottomMid) +
(pSrc[6 + stride2] * m.BottomRight))
/ m.Factor) + m.Offset);
if (nPixel < 0) nPixel = 0;
if (nPixel > 255) nPixel = 255;
p[3 + stride] = (byte)nPixel;
p += 3;
pSrc += 3;
}
p += nOffset;
pSrc += nOffset;
}
}
b.UnlockBits(bmData);
bSrc.UnlockBits(bmSrc);
return true;
}
Not the sort of thing you want to have to write over and over, is it ? Now we can use our ConvMatrix class to define filters, and just pass them into this function, which does all the gruesome stuff for us.
Given what I've told you about the mechanics of this filter, it is obvious how we create a smoothing effect. We ascribe values to all our pixels, so that the weight of each pixel is spread over the surrounding area. The code looks like this:
public static bool Smooth(Bitmap b, int nWeight /* default to 1 */)
{
ConvMatrix m = new ConvMatrix();
m.SetAll(1);
m.Pixel = nWeight;
m.Factor = nWeight + 8;
return BitmapFilter.Conv3x3(b, m);
}
As you can see, it's simple to write the filters in the context of our framework. Most of these filters have at least one parameter, unfortunately C# does not have default values, so I put them in a comment for you. The net result of apply this filter several times is as follows:
Gaussian Blur filters locate significant color transitions in an image, then create intermediary colors to soften the edges. The filter looks like this:
The middle value is the one you can alter with the filter provided, you can see that the default value especially makes for a circular effect, with pixels given less weight the further they go from the edge. In fact, this sort of smoothing generates an image not unlike an out of focus lens.
On the other end of the scale, a sharpen filter looks like this:
If you compare this to the gaussian blur you'll note it's almost an exact opposite. It sharpens an image by enhancing the difference between pixels. The greater the difference between the pixels that are given a negative weight and the pixel being modified, the greater the change in the main pixel value. The degree of sharpening can be adjusted by changing the centre weight. To show the effect better I have started with a blurred picture for this example.
The Mean Removal filter is also a sharpen filter, it looks like this:
Unlike the previous filter, which only worked in the horizontal and vertical directions, this one spreads it's influence diagonally as well, with the following result on the same source image. Once again, the central value is the one to change in order to change the degree of the effect.
Probably the most spectacular filter you can do with a convolution filter is embossing. Embossing is really just an edge detection filter. I'll cover another simple edge detection filter after this and you'll notice it's quite similar. Edge detection generally works by offsetting a positive and a negative value across an axis, so that the greater the difference between the two pixels, the higher the value returned. With an emboss filter, because our filter values add to 0 instead of 1, we use an offset of 127 to brighten the image, otherwise much of it would clamp to black.
The filter I have implemented looks like this:
and it looks like this:
As you might have noticed, this emboss works in both diagonal directions. I've also included a custom dialog where you can enter your own filters, you might like to try some of these for embossing:
The horizontal and vertical only filters differ for no other reason than to show two variations. You can actually rotate these filters as well, by rotating the values around the central point. You'll notice the filter I have used is the horz/vertical filter rotated by one degree, for example.
Although this is kinda cool, you will notice if you run Photoshop that it offers a lot more functionality than the emboss I've shown you here. Photoshop creates an emboss using a more specifically written filter, and only part of that functionality can be simulated using convolution filters. I have spent some time writing a more flexible emboss filter, once we've covered bilinear filtering and the like, I may write an article on a more complete emboss filter down the track.
Finally, just a simple edge detection filter, as a foretaste of the next article, which will explore a number of ways to detect edges. The filter looks like this:
Like all edge detection filters, this filter is not concerned with the value of the pixel being examined, but rather in the difference between the pixels surrounding it. As it stands it will detect a horizontal edge, and, like the embossing filters, can be rotated. As I said before, the embossing filters are essentially doing edge detection, this one just heightens the effect.
The next article will be covering a variety of edge detection methods. I'd also encourage you to search the web for convolution filters. The comp.graphics.algorithms newsgroup tends to lean towards 3D graphics, but if you search an archive like google news for 'convolution' you'll find plenty more ideas to try in the custom dialog.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
General News Suggestion Question Bug Answer Joke Rant Admin
Math Primers for Programmers | http://www.codeproject.com/Articles/2008/Image-Processing-for-Dummies-with-C-and-GDI-Part-2?fid=3511&df=90&mpp=10&sort=Position&spc=None&tid=4162557 | CC-MAIN-2013-20 | refinedweb | 1,789 | 63.59 |
Java 16 From the vector API to records to elastic metaspace, there’s a lot packed into Java 16 The Java Champions say these are a few of their favorite things. by Alan Zeichick March 12, 2021 Download a PDF of this article What’s the best, most significant feature of the forthcoming Java 16, set for general availability on March 16? There’s so much packed into this semiannual update, with 17 JDK Enhancement Proposals (JEPs), some of which are previews and incubators. Everyone, it seems, has a favorite when it comes to the latest JEPs. Personally, I’m intrigued by JEP 387: Elastic metaspace, which improves the allocation and deallocation of metaspace memory in the HotSpot JVM, thereby reducing class-loader overhead and memory fragmentation. For long-running server-side applications, this could really improve software performance. What do other developers think? For Java 16, Java Magazine reached out to several Java Champions, that is, technical luminaries from a broad cross-section of the Java community. (Candidates for joining the Java Champions program are nominated and selected by existing Java Champions themselves through a peer review process. They don’t work for Oracle and are not chosen by Oracle.) The question was straightforward: “What’s the most significant part of JDK 16—to you?” Seven Java Champions replied and explained their 10 favorite JEPs. Here’s what they said, in their own words. JEP 338 (vector API) By Gunnar Morling, Java Champion The Java 16 feature I’m most excited about is the JEP 338: Vector API, which is now incubating. It really was about time to update all those ancient collection types…just kidding, of course. The Vector API is about making the vector calculation capabilities of the x64 and AArch64 CPU architectures usable for Java developers. Vectorization allows a CPU to apply the same operation to multiple input values at once (single instruction, multiple data—SIMD), resulting in drastic performance improvements, if your problem lends itself towards such parallel processing. While the HotSpot JVM’s C2 just-in-time compiler supports autovectorization of specific scalar operations, the dedicated API allows developers to benefit from vectorization in many more cases, providing explicit and fine-grained control over the executed vector calculations. I am expecting JEP 338 to open up the door for Java for many interesting use cases, providing excellent performance in areas such as image and signal processing, encryption, text processing, bioinformatics, and many others. JEP 357 (migrate from Mercurial to Git) and JEP 369 (migrate to GitHub) By Ian Darwin, Java Champion While the garbage collector improvements in Java 16 are most welcome, as always (thanks, garbage collection team!), my favorite change is probably the migration to Git and GitHub—that’s JEP 357: Migrate from Mercurial to Git and JEP 369: Migrate to GitHub. I know I, and I imagine many others, have been hesitant to contribute to OpenJDK on the grounds of having to learn yet another single-use tooling. “I’m sure I’ll get around to it one of these days.” Git and GitHub are so widely used it’s hard to imagine a developer with more than a few years’ experience who hasn’t used one or both. I predict (and hope) that this move will lead to an uptick in active contributors and help realize the potential that a long-ago Sun Microsystems infused into OpenJDK as an open source project. JEP 376 (ZGC concurrent thread-stack processing) By Monica Beckwith, Java Champion While working with Oracle’s Z Garbage Collector (ZGC) team to help enable large pages support on Windows, I learned about the concurrent thread-stack processing JEP that the Oracle ZGC team was working on: JEP 376. This JEP also happened to be one of the bigger JEPs in the candidate state, and then by September 2020, the JEP targeted JDK 16. The Z Garbage Collector is one of the newer low-latency collectors in the HotSpot JVM. The design goal for ZGC is to provide near real-time, predictable, scalable garbage collection. Before JEP 376, ZGC would scan the application’s thread stack during two stop-the-world (STW) phases, Pause Mark Start and Pause Relocate Start. This meant that the application’s root set size would gate the pause time in these two STW phases. In the quest to achieve pauses shorter than one millisecond, the ZGC developers decided to concurrently (with GC threads working at the same time as application threads [aka mutators]) process the thread stack. As mentioned above, the thread stack is no longer scanned during the STW phases, thus reducing any dependency on the application’s root set size. Therefore, end users can use ZGC for heap sizes as small as 8 MB and as large as 16 TB and still expect the pause times to be less than one millisecond! JEP 386 (Alpine Linux port) By Mohamed Taman, Java Champion JDK 16 is bringing many performance enhancements and memory management out of the box. Besides, developers will get many language features and JVM enhancements. In my opinion, the most significant part of this JDK release is JEP 386: Alpine Linux port. The Alpine Linux distribution is widely adopted in cloud deployments, microservices, and container environments due to its small image size, which is less than 6 MB. That applies to embedded system deployments as well, which are constrained by size. JEP 386 could revolutionize the running of Java applications in such an environment as native applications. Additionally, using the jlink linker, developers can cut down the Java application runtime environment size with only the required modules to run their application. Thus, the user could create a very small image targeted to run a specific application. JEP 388 (Windows/AArch64 port), JEP 394 (pattern matching for instanceof), and JEP 395 (records) By Josh Juneau, Java Champion I feel that a couple of the most significant features for the upcoming JDK 16 are those that have been in preview mode for the past couple of releases. Specifically, JEP 395: Records and JEP 394: Pattern matching for instanceof provide two features that will likely be used by a large number of developers, thereby changing the way that Java applications are developed moving forward. These two features are graduating to be fully supported features in this release. Records help the language to evolve by allowing constructs to become less verbose and easier to maintain. In a similar way, pattern matching enables developers to write code more concisely using patterns, which will not only make code easier to write but also more maintainable. I also have my eye on the capability to port JDK 16 to ARM64 on Windows with JEP 388: Windows/AArch64 port, because this means that we can now install the JDK on even more devices. JEP 394 (pattern matching for instanceof) By Ben Evans, Java Champion The most significant part of Java 16 is probably JEP 394: Pattern matching for instanceof. At first glance, it doesn’t seem like it. All this JEP seems to do is get rid of some ugly casts. For example, this if (o instanceof String) { String s = (String)o; ... } becomes this if (o instanceof String s) { // s is now in scope ... } Useful, but not exactly groundbreaking, you might think. However, sometimes big shifts in the direction of a language grow from seemingly tiny changes. What is actually being introduced here is the very first step towards a language feature called pattern matching—but note that this does not mean the string-matching capabilities that are delivered via regular expressions. Instead, a pattern is a combination of a predicate to be applied to a target expression and some local variables, which are brought into scope only if the predicate tests true. In this example, the predicate is “Is o an instance of String?” But it is now expressed directly in new Java language syntax. This simple, almost trivial, case of patterns is not, by itself, all that significant. If this were the Marvel Cinematic Universe, then JEP 394 is the original Iron Man movie. Sure, it’s fun, but the real significance is what it heralds in the larger reality of new language features that are coming. For example, we might see patterns that can be used in switch expressions; patterns that can deconstruct (aka destructure) records; patterns that combine with sealed classes; patterns with guards; and much more besides. JEP 395 (records) By Heinz Kabutz, Java Champion Records are one of Java’s most desired new features. Finally, a way to deserialize objects by calling the canonical constructor. No more need for ObjectInputValidation to check that no one has tampered with a serialized object. And so many other great features. But something far more interesting caught my eye whilst reading JEP 395: Records.. Yes, that long-standing restriction has often annoyed me. Java has four different types of nested classes. Here they are: public class Outer { static class Nested { } // 1. class Inner { } // 2. public void method() { new Object() { // 3. anonymous }; class Local { // 4. } } } Of these four, only the static nested class could contain static methods. Inner, anonymous, and local classes were out of luck. The restriction never made sense, and it seemed like Sun Microsystems was trying to micromanage code structure (see the section entitled “Members that can be marked static”). From Java 16, we can now have static methods inside nonstatic nested classes. For example, this now compiles and runs: public class AVeryGoodMorning { public class ToYou { public static void main(String... args) { System.out.println("How do you do?"); } } } Run it with java AVeryGoodMorning\$ToYou and you get a friendly How do you do? A word of warning: This should also work with anonymous and local types. However, it currently crashes the JVM on MacOS. It runs on Linux and Windows. Oh, another nice side effect of JEP 395 is that records can be defined locally inside a method. Since the Java architects had to refactor the language specification to support this, they at the same time lifted other restrictions, allowing local interfaces and enums. These might not seem particularly useful, but I have at least once wished that local interfaces were allowed. It sometimes makes sense when working with complex Java 8 streams. Dig deeper The role of preview features in Java 14, Java 15, Java 16, and beyond Records come to Java Pattern matching for instanceof in Java 14 Understanding the JDK’s new superfast garbage collectors Java on Arm processors: Understanding AArch64 vs. x86 | https://blogs.oracle.com/javamagazine/java-champion-favorite-java16-records-vector-arm64-github?source=:em:nw:mt::::RC_WWMK200429P00043:NSL400139911&elq_mid=187945&sh=162609181316181313222609291604350235&cmid=WWMK200429P00043C0022 | CC-MAIN-2021-21 | refinedweb | 1,753 | 52.09 |
IP(4) BSD Programmer's Manual IP(4)
ip - Internet Protocol
#include <sys/socket.h> #include <netinet/in.h> int socket(AF_INET, SOCK_RAW, proto);
IP is the network ad- dresses for Source Route options must include the first-hop gateway at the beginning of the list of gateways. The first-hop gateway address will be extracted from the option list and the size adjusted accordingly be- fore and SOCK_DGRAM sockets. For exam- ple, da- tagram. The msg_control field in the msghdr structure points to a buffer that contains a cmsghdr structure followed by the IP address. The cmsghdr fields have the following values: cmsg_len = CMSG_LEN(sizeof(struct in_addr)) cmsg_level = IPPROTO_IP cmsg_type = IP_RECVDSTADDR- */ } imr_interface should be INADDR_ANY to choose the default multicast inter- face,) member- ships may be added on a single socket. To drop a membership, use: struct ip_mreq mreq; setsockopt(s, IPPROTO_IP, IP_DROP_MEMBERSHIP, &mreq, sizeof(mreq)); where mreq contains the same values as used to add the membership. Memberships are dropped when the socket is closed or the process exits. re- ceived. If proto is non-zero, that protocol number will be used on outgo- ing = htons(offset); ip->ip_len = htons(len); Additionally note that starting with OpenBSD 2.1, the ip_off and ip_len fields are in network byte order. If the header source address is set to INADDR_ANY, the kernel will choose an appropriate address..
getsockopt(2), recv(2), send(2), icmp(4), inet(4), netintro(4)
The ip protocol appeared in 4.2BSD. MirOS BSD #10-current November 30, 1993. | http://www.mirbsd.org/htman/sparc/man4/ip.htm | CC-MAIN-2014-49 | refinedweb | 253 | 55.13 |
Welcome to my MVC Java Tutorial. I have been asked for this tutorial many times in the last few weeks.
To understand the Model View Controller you just need to know that it separates the Calculations and Data from the interface. The Model is the class that contains the data and the methods needed to use the data. The View is the interface. The Controller coordinates interactions between the Model and View.
The video and code below will make it very easy to understand.
If you like videos like this it helps to tell Google+ [googleplusone]
Code From the Video
CalculatorModel.java
// The Model performs all the calculations needed // and that is it. It doesn't know the View // exists public class CalculatorModel { // Holds the value of the sum of the numbers // entered in the view private int calculationValue; public void addTwoNumbers(int firstNumber, int secondNumber){ calculationValue = firstNumber + secondNumber; } public int getCalculationValue(){ return calculationValue; } }
CalculatorView.java
// This is the View // Its only job is to display what the user sees // It performs no calculations, but instead passes // information entered by the user to whomever needs // it. import java.awt.event.ActionListener; import javax.swing.*; public class CalculatorView extends JFrame{ private JTextField firstNumber = new JTextField(10); private JLabel additionLabel = new JLabel("+"); private JTextField secondNumber = new JTextField(10); private JButton calculateButton = new JButton("Calculate"); private JTextField calcSolution = new JTextField(10); CalculatorView(){ // Sets up the view and adds the components JPanel calcPanel = new JPanel(); this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); this.setSize(600, 200); calcPanel.add(firstNumber); calcPanel.add(additionLabel); calcPanel.add(secondNumber); calcPanel.add(calculateButton); calcPanel.add(calcSolution); this.add(calcPanel); // End of setting up the components -------- } public int getFirstNumber(){ return Integer.parseInt(firstNumber.getText()); } public int getSecondNumber(){ return Integer.parseInt(secondNumber.getText()); } public int getCalcSolution(){ return Integer.parseInt(calcSolution.getText()); } public void setCalcSolution(int solution){ calcSolution.setText(Integer.toString(solution)); } // If the calculateButton is clicked execute a method // in the Controller named actionPerformed void addCalculateListener(ActionListener listenForCalcButton){ calculateButton.addActionListener(listenForCalcButton); } // Open a popup that contains the error message passed void displayErrorMessage(String errorMessage){ JOptionPane.showMessageDialog(this, errorMessage); } }
CalculatorController.java
import java.awt.event.ActionEvent; import java.awt.event.ActionListener; // The Controller coordinates interactions // between the View and Model public class CalculatorController { private CalculatorView theView; private CalculatorModel theModel; public CalculatorController(CalculatorView theView, CalculatorModel theModel) { this.theView = theView; this.theModel = theModel; // Tell the View that when ever the calculate button // is clicked to execute the actionPerformed method // in the CalculateListener inner class this.theView.addCalculateListener(new CalculateListener()); } class CalculateListener implements ActionListener{ public void actionPerformed(ActionEvent e) { int firstNumber, secondNumber = 0; // Surround interactions with the view with // a try block in case numbers weren't // properly entered try{ firstNumber = theView.getFirstNumber(); secondNumber = theView.getSecondNumber(); theModel.addTwoNumbers(firstNumber, secondNumber); theView.setCalcSolution(theModel.getCalculationValue()); } catch(NumberFormatException ex){ System.out.println(ex); theView.displayErrorMessage("You Need to Enter 2 Integers"); } } } }
MVCCalculator.java
public class MVCCalculator { public static void main(String[] args) { CalculatorView theView = new CalculatorView(); CalculatorModel theModel = new CalculatorModel(); CalculatorController theController = new CalculatorController(theView,theModel); theView.setVisible(true); } }
kool man.
also waiting for your game development tutorial.
They are technically going on right now. I’m going to make professional grade games which are going to require things like object oriented design and design patterns. When the proper game tutorials starts it will go on for at least 6 months. I hope you like them
yeah off course i will like them.
Thank you 🙂
Thanks for the very useful Videos! I have always wonder shouldn’t the model store ALL the data that is going to be displayed in view like first_number and second_number as well? Assume an example where the first_number is the score of a student out of total marks which is the 2nd number. The sample function is going to just calculate the % of the student in the test. Assume the view wants to print the whole report of student later, it could just ask the model for that information? What do you think? Thanks again!
There are many ways to implement an MVC. Yes you are correct that model should contain all of the data, but when I am implementing a system I tend to decouple the model completely from the view if possible. I could have kept them decoupled and still store the numbers in the model with a FocusListener, but I was also trying to keep the code as uncomplicated as possible. I hope that clears some things up. You made a good point 🙂
Thanks for the very useful Videos!
You’re very welcome 🙂 Thank you for stopping by my site
Thank you so much for this awesome tutorial. I was wondering if you could clarify the MVC design pattern with regards to Servlets, JSPs and Java classes. my main issue is the communication between the Controller(Servlet) and the view(JSP). and is it the best way to implement MVC if you have your model create certain parts of the view EG:a Table, then pass it on as a String value to the Controller which in-turn sends it to the view to display.To communicate(Controller and VIEW) I’m currently using JQUERY,AJAX AND JSON
I’ll cover JEE concepts as soon as possible. I go out of my way to avoid having the model do anything but handle data and process data. I try to have the controller handle all communications between the Model and View.
This isn’t the only MVC pattern by any means though. Many allow the model and view to interact. If the code is understandable I have no problem with that.
ok cool, Thanks
Perfect video!
Thank you 🙂 I do my best
Hello! Incredible tutorial, but I have one question, if you can help me. How you work in the controller class if you have a model made by 2 classes for example?
Thank you very much 🙂 The controller would just work with both of them. Not much would change here. It is all about the action listeners and triggering the right methods when needed. Sorry, I can’t really say anything else because I’m not sure what may be hanging you up
Excellent tutorial. I am just started learning about MVC and this is the best job I have seen explaining it.
I would tweak one tiny thing to the actual program… I added:
calcSolution.setEditable(false);
because it seemed odd that I could type a value into the solution field.
Love the site. Just found it and I will be spending a lot of time here!
Thank you very much 🙂 I’m glad you liked the tutorial. Yes, that was silly that I allowed that to be editable
Hi Derek!
I’m from Queretaro Mexico, dude I really love your site, your work is just amazing.
I’ve a question, hopefully you can guide me, for example, if I have two or more buttons in my main window and each one has a different action, should I create one inner class in my controller for each different button ? Is that the right way to do it ?
I can’t thank you enough!
You’re very welcome 🙂
Basically you need to define what method to call in the controller when the button is clicked in the view like this
// If the calculateButton is clicked execute a method
// in the Controller named actionPerformed
void addCalculateListener(ActionListener listenForCalcButton){
calculateButton.addActionListener(listenForCalcButton);
}
Then everything is handled in the controller like this
class CalculateListener implements ActionListener{
public void actionPerformed(ActionEvent e) {…
Does that help?
Easy does it… Nice tutorial.
Thank you very much 🙂 I’m glad I was able to make it understandable
Awesome tutorial! I’ve been out of the programming realm for a while now, but I want back in, and recently I saw MVC as preferred knowledge for a job opening. Google got me to your tutorial, and I must say I have a great understanding of what’s involved in MVC. Now I just have to reinvent my programming methods, which are pretty rusty and outdated anyway.
To tell the truth though, I don’t understand 100% why the view and controller are separate…
Thank you 🙂 A major step for a programmer is investing some time into understanding object oriented programming completely. That is probably what you are struggling with? The whole idea behind MVC is to separate the components so that they can be changed without breaking, or even effecting the rest of the system. Just like how we can put new windshield whippers on our car with worrying about the whole car breaking.
I explain Object Oriented Design in detail here. It should help you better understand the process
Fantastic explanation
pleas can send you assignment need to develop by MVC pattern please?
Thank you 🙂 I’m glad it was able to help you on your assignment
Awesome tutorial. I just started learning MVC and I have couple of questions:
1) So, the whole concept of MVC is to break down the code into seperate programs to make coding more clear, is that true? Meaning, instead of writing 1 program with thousand lines of code, we break down into 3 different programs (1 for model, 1 for view, and for controller). Is that correct?
2) I want to practice using Eclipse and copy your code and play with it. Can you tell me what I need to do? Do I download Eclipse? How do I set it up? What else do I need to download in order to follow your Calculator code?
3) Can you give me another tutorial like this but using SQL database? Like an address book scenario. Can you please explain to me?
1. MVC is used for many reasons. It helps break pieces of your system into real world objects. It also makes it easy for you to change the model and view without breaking your program because the controller handles communications between the two.
2. I cover how to install eclipse for java here.
3. Right now I’m making Android tutorials. I will make a tutorial using SQLite very soon and I’ll use the MVC with it
I hope that helps
Awesome tutorial,
but i have small question.
So the Main function can directly access to theView?
Isn’t it should be theController that set theView to be visible?
Thanks for the answer,,
Thank you 🙂 Main could access the view, but the controller does all of the work. I don’t want to tightly couple the view and controller by having the view created by the controller. Does that help?
This is the simplest, easiest and the clearest example of MVC I’ve ever found. Thanks.
Thank you for the nice compliment 🙂 You’re very welcome.
Hi
I find your tutorial very useful. I have created a view made up of three input components. Two of these provide the filling for sql stored procedures executed by the controller.
I have created a model for each of these components. There is only one controller. Is this the best way to do it?
What is the best way of identifying which button was clicked, there are three. Would you use the onFormEvent or just use an action event on the buttons involved. How would you update the model and view from a controller that ran a sql stored procedure.
Thanks
Michael
Sounds like you set up everything well. For something like that it makes sense to have just one controller.
I’d monitor action events like you said to make updates for the view. Great job!
Derek
Hi, great tutorial. If I had watched it a year ago, I could had done a better job 😛
I have some questions by the way.
1. If I have difernt views (forms) I need a different controller for each one?
2. How do I stablish communication between different controllers?
3. Do you have a DOA pattern example?
Thank you very much for such a great example.
Ricardo.
Hi Ricardo,
Thank you very much 🙂 I’m glad you enjoyed it.
1. You should have 1 controller in most all situations no matter how many views you have. You can have many models and many views however.
2. Not Applicable
3. Sorry I haven’t covered that pattern yet, but I will when I cover JEE topics.
Derek
Is MVC a design pattern? If not what type of deign pattern ? Thanks
Yes it is considered to be a pattern
Thank you very much for you article!
You’re very welcome 🙂
very goooood tutorial
when one start with your tutorial ..can’t stop
Thank you 🙂 I’m glad you enjoyed it!
Hello Derek,
Thanks for the wonderful tutorial.
I have a question: How exactly is this different from MVP, Model- View-Presenter? I thought that in MVC the model updates the view, whereas in MVP the controller liaises between the View and the Model (which is what you’re doing in this tutorial).
Hello, You’re very welcome 🙂 I’m glad you enjoyed the video.
This is the best explanation I ever came across for your question StackOverflow Answer
A very good example , thank U .
Thank you 🙂
Great tutorial, nice and straight forward 🙂
The MVC pattern shown here demonstrates changing the model data by interacting with the view which is fine, but how would one change the model data by having the two numbers initially stored in a file and then reading those numbers so that the Model and View are updated? Lets say we have defined class that reads the file and stores the two numbers in instance variables of that class. How does that class fit into the MVC pattern?
Thank you 🙂 That is why the MVC is so brilliant. If you need to add a feature and the user asks for that feature everything changes accordingly, but they still don’t need to know how the task is performed.
1. The view updates to allow the new feature if it must.
2. The controller catches the request from the view and passes it to the model.
3. The model performs the needed calculations and then sends them to the controller.
4. The controller provides an update to the view.
5. Anything can change and as long as we can agree on a solid way to communicate we don’t lose flexibility.
I hope that answered your question.
Hey Derek,
If we try to implement this Calculator in Business Delegate along with MVC(in single code), so could you tell me ‘who will do what?’
I mean
Business Delegate’ service = MVC’s Model
but what component of Business delegate will play role of controller
and similarly where we can put VIEW component in business delegate…
thank you so much in advance!!!
Hi Derek,
Thanks for all these tutorials, they are great.
I still do have a probleme with the MVC pattern. How can we implement it with multiple views and models ? I’ve been googling around
in advance, thanks for your response.
Hi,
You may want to take a look at my Android tutorials because everything is based on MVC. You basically have one controller that handles all communication between the views and the models. If the view needs to change it tells the controller and it serves up a new view. If a new calculation is needed in the model the class is added and the controller just needs a way to communicate with that new class.
Thanks for your reply.
Cheers.
A very helpful tutorial indeed!!! go ahead!!!
Thank you very much 🙂
Very nice example, just I have a question, how can I do to use some Controllers not only one? I need to create some class that would be working like a “Dispatcher”??
I hope you can answer to me!
Thanks in advance!
Thank you 🙂 The idea behind the MVC is to only have one controller. I may not have explained that well in the tutorial. I’ll make a new MVC tutorial to cover the issues that have been brought up.
Great stuff, thanks. Nice and simple and to the point
You’re very welcome 🙂
Wow, so clear and easy. Thank you so much, this is good stuff.
Thank you 🙂 I’m glad you liked it.
Thank you !
You’re very welcome 🙂
Thank you man… 🙂 you explained it well. now its easy for me to manipulate the codes. once again thanks a lot.
You’re very welcome 🙂
hey dude i saw your many tutorials and they are great
and i also decided to run YouTube channel for java programming
but i don’t know where to start ,technically i ran out of
ideas for it because there are lot of tutorials in YouTube .
so if you have any suggestion about where to kickoff it will helpful to me
well you can also suggest some software development books on java
thanks and keep making tutorials they are really fun to watch
The Java book that everyone seems to like is the Head First Java Book. I know a lot of people want Java enterprise tutorials and I’ve never made one. I think NetTuts has one you could use as a guide.
If you make a lot of videos eventually you’ll find your way. It took me about 2 years to get comfortable with my style. Just make tutorials that you enjoy and you’ll do great. I wish you all the best 🙂
is it possible to create a social networking website in java?
Yes, but it would be very complex from a security perspective.
i am actually creating it. so it is OK to do it?
what do you mean by
security perspective ?
and thanks for reply
I’m sorry, but I don’t know what you are referring to in regards to security?
Thank you!!!!!!!!!!!!
You’re very welcome 🙂
Hi Derek, Sorry that I’m bit off the topic, but do you use any autocomplete plugin for eclipse or just Ctrl+space?
Hi, I used to just use Eclipse code completion. Lately i have been using Android Studio and intelliJ.
Can anyone help me? I’m new to Java as well as MVC. I made the program work by following the tutorial, but is there any more models for MVC instead of a Calculator model? or does it depend on what we put inside the java to make it look and work like a calculator?
Thankyou!
MVC is all about separating the interface (view) from the backend code (model). There are many different types of views and models, but they way they interact is basically the same.
The tutorials are nice , the way it is covered.
Thank you 🙂
Fantastic explanation of rationale and implementation. Stream of consciousness coding works well. We want more!
Thank you 🙂 Yes that is one of the things that I think helps to make my videos original. I’m glad you like that style.
Very concise illustrative example, much appreciated.
Thank you 🙂
This was very useful in learning about the MVC.
Can u please show or point me to a simple java application(code) that is developed according to the MVC which uses a database.
Thank you 🙂 Android follows the MVC format. I have a bunch of tutorials on it.
This is amazing! I never thought we can MVC in swing!
Thank you 🙂 I’m glad it helped
Thank you for this tutorial!! It definitely cleared things up about the basic structure of the MVC model.
I do have a question, though: You mentioned in a response to another commenter that a project could have many models and views but should only have one controller. Could you explain or demo how to have this program call and display the solution in a different view when the user clicks ‘Calculate’?
I’m trying to understand how the controller gets any other view or model than the ones you passed to it from your main class.
You’re very welcome 🙂 You could definitely have multiple MVCs in one project. It is very common though for a controller to implement an ActionListener and contain a List of ActionListeners to maintain multiple models and views for example. | http://www.newthinktank.com/2013/02/mvc-java-tutorial/?replytocom=18628 | CC-MAIN-2021-31 | refinedweb | 3,355 | 65.73 |
Building effective logs in Python
Tracking the chain of actions from a specific point in time can be crucial when debugging and monitoring software operations. That’s why logs were created to record and track events or log information.
Logs are necessary to track errors in software system modules and, if necessary, restore data integrity in information systems. Data logs can be thought of as a method of recording to allow you to find the places where software is running slowly and identify the reason for specific system behavior.
Many programming languages have their own libraries to write logs. The main task of these libraries is a simple, convenient, and effective system for outputting information to a specific source like a file. It’s important to have certain levels of tracking logs based on unique characteristics to process large amounts of information in output logs.
In this post, we are going to walk you through working with logs using the Python language as an example. Let’s get to it!
Why do you need to create logs?
Let’s begin by examining why logs are important in the development and maintenance of information systems, hardware systems, and software systems:
- Help you track and detect random or rare errors, recovering action sequences that led to a specific error.
- Monitor system operations to improve security and identify unusual user behavior of trying to gain unauthorized access to the system.
- Identify weaknesses in infrastructure and cloud services for a stable system operation and to upgrade to more powerful instances, if necessary.
- Analyze the main functionality that is most often used and plan improvements for the most popular functions to make them more convenient.
- Help recover information if the system’s integrity is violated or jeopardized for individual users.
- Help analyze user behavior and improve UI/UX and business logic.
Why shouldn't you use the normal output to display this information, say with print? Because in this case, it will not be possible to write to various files or set the correct levels of log values, which can lead the console output of the program to be confused with the output of logs.
In general, the construction logic for your system logs may look like this.
Types of log messages
Each log can refer to specific levels (there can be more or fewer levels, depending on your system’s needs). Here are the types of log messages:
- Fatal
- Critical
- Error
- Warning
- Notice
- Info
- Debug
- Trace
These log types help you analyze the level of output of service information, which is necessary to solve specific tasks. Without log types, finding the required entry in the output file would be very difficult.
Creating logs in Python
Overall, logging in Python is very simple. To do so, you need to connect a logging library such as logging and draw an output using the logging.warning () operator.
import logging logging.warning('This is the warning logging example')
As a result, you will receive a message like the following one in the console:
WARNING:root:This is the warning logging example
In Python, documentation logs can be created with the following levels:
Oftentimes, the size of logs will be larger than the size of your text console and it’s not always convenient to scroll through the logs. In this scenario, you could write logs to a file.
import logging logging.basicConfig(filename='test.log', level=logging.DEBUG) logging.debug('This message will be in the log file')
In the file test.log you will have a new line like this:
DEBUG:root:This message will be in the log file
Every time you run this Python script log, text will be added to the existing file, i.e. previous logs lines will be kept.
In the configuration function, you can set another level=logging.DEBUG for the necessary level of logs (INFO, WARNING, ERROR, CRITICAL).
Exception logging
Typically, you need to have additional information when a handled exception is thrown. It’s not easy to track the output when exceptions are thrown in large projects. Here, it’s inappropriate to build a separate output of additional information. Instead, you can successfully use existing loggers in the messages to display information about an exception that has occurred.
import logging logging.basicConfig(filename='test.log', level=logging.ERROR) logging.debug('This message will be in the log file') try: print ("Test") a = 4 / 0 except ZeroDivisionError as e: print ("Division by zero detected") logging.error(e, exc_info=True)
Next, the test.log file will look like this
ERROR:root:division by zero Traceback (most recent call last): File "log.py", line 8, in <module> a = 4 / 0 ZeroDivisionError: division by zero
Set exc_info=True to show traceback, which is a very helpful option.
Using JSON format as a log output
When your system writes hundreds of thousands of logs per hour, analyzing the necessary entries in the log file becomes nearly impossible. Fortunately, if you organize log outputs in JSON format, you can easily analyze them using special tools to read the values from a specifically generated system log into the required fields.
For example, you can use a library such as json-logging. You can install this library with a simple command using pip:
pip install json-logging
Moreover, as indicated in the documentation for the json_logging library, you can connect the JSON logger using the line logger = logging.getLogger ("test-logger").
Before doing so, be sure to enable the JSON logger using one line: json_logging.init_non_web(enable_json=True).
This is particularly helpful since you can switch between the regular and JSON logger without changing anything else in the whole code for log outputs.
Also, this is a very good way to switch the log format as it preserves the universality principle of the Python code.
import json_logging, logging, sys # log is initialized without a web framework name json_logging.init_non_web(enable_json=True) logger = logging.getLogger("test-logger") logger.setLevel(logging.DEBUG) logger.addHandler(logging.StreamHandler(sys.stdout)) logger.info("test logging statement")
Since you now have enable_json = True, the output will be as follows
{"written_at": "2021-02-06T12:15:08.824Z", "written_ts": 1612613708824719000, "msg": "test logging statement", "type": "log", "logger": "test-logger", "thread": "MainThread", "level": "INFO", "module": "jsonlogs", "line_no": 10}
Comment one line json_logging.init_non_web(enable_json=True) and you will switch to simple log format like this:
test logging statement
In addition, there are methods on how to add custom formatters and loggers. You can learn more from the json-logging-python and logging documentation.
Conclusion
In summary, logging is a very useful feature that doesn’t require a lot of effort to implement. With the help of simple actions, very early on in the process of writing code, you can insert the necessary lines to log critical parameters. That way, during system operation, developers of new functionalities and system admins will be very grateful that you implemented logging.
Very rarely does one really look through logs in a corporate system with their eyes. That’s why there are graphical systems to process logs so you can analyze a specifically identified item in text mode.
With minimal effort when writing code, our Svitla System developers employ logging systems to improve the quality of information systems that are developed for the customer. You can always contact Svitla Systems for advice and place orders to develop. | https://svitla.com/blog/building-effective-logs-in-python | CC-MAIN-2022-21 | refinedweb | 1,226 | 52.7 |
Finance: MUTUAL FUNDS
MUTUAL FUNDS: TANNED, RESTED, AND RARING
Mutual funds are likely to make their best quarterly showings in more than a year
If the wild gyrations of the stock market make your stomach queasy, try this remedy: Take a look at the recent returns on mutual funds. Even if your funds are only average performers, they've been trouncing the blue chips since midsummer and doing it with less volatility, too.
In the third quarter, mutual funds are likely to make their best showing in more than a year. U.S. diversified equity funds earned an 11.87% total return--including reinvestment of dividends and capital gains (through Sept. 22)--according to Morningstar Inc., which prepares fund data for BUSINESS WEEK. That's more than 3.5 percentage points higher than the 8.25% return of the Standard & Poor's 500-stock index for the same period. The all-equity average, which includes the negative returns from some overseas funds, was only a hairbreadth behind the S&P. And thanks to an improving bond market, bond funds are showing some pep as well. Taxable bond funds gained on average 2.81%, and tax-free funds 2.5%.
THINKING SMALL. No, those fund managers didn't get a whole lot smarter over the summer. What happened is that Wall Streeters, wary of the high valuations and weakening earnings of multinational giants like Coca-Cola, Gillette, and Eastman Kodak, began unloading their S&P 500 stocks and buying long-ignored mid-cap and small-cap shares. That played right into the hands of mutual-fund managers, who typically invest more in those secondary stocks than in the blue chips.
The outlook for the smaller companies--and for the funds that invest in them--continues to be bright. Valuations are still reasonable. "There's no shortage of good, solid earnings improvement stories out there," says Bill Newman, who runs Phoenix Small Cap A, up 28.04% for the quarter. And there's not much threat from sharply higher interest rates or recession, either of which would send smaller companies reeling. The new capital-gains tax cuts gives investors added incentives to seek small-company funds: They pay little if any dividends, and most of their returns are in the form of capital gains. All told, the market's turn to smaller companies should allow the average fund to beat the S&P 500 index for the entire year. That has not happened since 1993.
Certainly, the cash pouring into funds is heading toward the ones that invest in smaller companies. Robert Adler of AMG Data Services says that such funds took in more than $1 billion a week in the first three weeks of September. "That's an awfully large weekly flow, considering these funds have assets of $163 billion," says Adler. Equity funds as a whole are now taking in $4 billion per week, a neat bounceback from August, when a market downdraft caused some investors to close their checkbooks.
UNGLAMOROUS. Among U.S. diversified equity funds in the third quarter, the small-cap funds beat the mid-caps, and the mid-caps beat the large (table, page 58). Small-cap growth funds, those that invest in the less seasoned but fastest-growing companies, raced to the fore with a 16.72% return. Small-cap value funds, which generally work in the less glamorous industrial and financial sectors of the small-company universe, showed an impressive 13.95% total return.
Managers of large-cap funds still found ways to stand out. Nimble large-cap managers who were able to overweight technology and underweight food, tobacco, cosmetics, and beverages found they could beat the S&P 500 index as well. Large-cap growth funds earned a 10.8% return. Even managers of large-cap blend funds, which compete head-on with computer-run S&P 500 index funds, squeaked past the index with a 9% gain.
Funds big in assets also fared nicely. After several years of market-trailing performance, the $60.1 billion Fidelity Magellan Fund delivered a plump 10.42% return (table). Magellan was set to close to new investors on Sept. 30, but the $28.9 billion Fidelity Contrafund, up 11.84% for the quarter, is still open for business. Out of the 10 largest equity funds, 8 beat the index, including Vanguard Index 500 Fund, which topped its bogey by 0.13 percentage points.
Each bull-market surge in small caps also produces a slew of new funds to lure investors. Among them are Bjurman Micro-Cap Growth Fund, the first mutual-fund foray of George D. Bjurman & Associates, a $2.3 billion Los Angeles institutional investment firm. The fund is up 32% for the quarter and 58% since its Mar. 31 launch. It screens for companies with market caps between $30 million and $300 million, choosing those with strong earnings growth and moderate price-earnings and price-to-cash-flow ratios. The fund is also micro in size--less than $2 million. But if it continues to post numbers of that sort, it won't be for long.
CLOSED DOOR. Indeed, the biggest problem facing investors is a dearth of good small-cap fund opportunities. The best ones either get so big that their performance suffers or they move into mid-cap stocks--which become a different sort of investment vehicle.
Others fight to maintain performance and their investment style by closing to new investors. That's what has happened at ni Growth and ni Micro Cap, the two quantitatively driven funds run by John C. Bogle Jr., son of Vanguard Group's founder and chairman. Bogle launched the funds 16 months ago and shut the doors on Aug. 8. "That's the only way to prevent capitalization creep," he says. Portfolio manager John Montgomery of Bridgeway Ultra-Small Company Fund said "no thanks" to new investors when it reached $27 million in June. The fund invests in companies from the bottom 10% in market capitalization, a difficult area in which to invest large sums. The bottom line: If you find an attractive new small-cap offering for your portfolio, don't wait too long.
Besides small caps, funds specializing in technology and in natural resources dominated the top performers. In the No.1 slot, Fidelity Select Computers Portfolio may not be much of a surprise. But just 0.4 percentage points behind is Fidelity Select Energy Service Portfolio, up nearly 36% for the quarter. The $1 billion fund focuses on companies that supply equipment and services to energy companies. "We're running out of production capacity, and incremental demand will have to be met by new production," says portfolio manager Robert Ewing. "We're only in year 3 or 4 of a 15-year energy cycle."
ASIAN LOSERS. A broader-based approach to energy investing is Invesco Strategic Portfolio Energy fund, up 26.32% for the quarter. John S. Segner, who took over the fund last winter, put one-third of its $350 million in oil services, but he also invests in the big international oils and domestic exploration and production companies.
The quarter's big losers all invest in the Far East, where market turmoil and currency devaluation proved devastating, especially for those investing in Malaysia and Thailand. Richard Farrell, who runs Guinness Flight Asia Blue Chip Fund, doesn't think the crises are over, but he is willing to step gingerly into these battered markets. Figuring the domestic economies will be sluggish for some time, he's buying stocks of Malaysian agricultural producers and Thai exporters. Both sectors, he says, should benefit from their now-cheaper currencies. The more diversified of the emerging-markets funds weathered the crises better, down only 4%, compared with a 13.5% loss for diversified Asian funds. One big player, Fidelity Emerging Markets Fund, apparently got caught in the wrong places. It's down 22.54% for the quarter.
The quarter has been a good one for the bond funds. Long-term interest rates came down--a big plus for bond- fund net asset values. Not surprisingly, the top-performing taxable funds were four zero-coupon funds run by American Century-Benham (table). Since they pay no current interest, zeros are highly sensitive to changes in rates.
Of course, bond funds have had some strong quarters in recent years that investors ignored. But now they're starting to pay attention. AMG's Adler says taxable bond funds took in more than $1 billion a week during half of the quarter's weeks, a rate not seen since before 1994's bond-fund debacle. And the money is going not only to high-yield funds but also to investment-grade portfolios. Certainly, it's not the yields on these funds that are drawing the cash. Perhaps market volatility is reminding investors that diversification is just good preventive medicine.By Jeffrey M. Laderman in New YorkReturn to top | http://www.bloomberg.com/bw/stories/1997-10-05/mutual-funds-tanned-rested-and-raring | CC-MAIN-2015-22 | refinedweb | 1,488 | 65.52 |
User talk:Iritscen
From Uncyclopedia, the content-free encyclopedia.
[edit] You've won the Poo Lit Surprise
Congratulations! Your article The Enigmatic Mr. Grumpikins has won the Poo Lit Surprise for Best]].
Thanks for participating in the Poo Lit Surprise writing competition! Mr. Grumpikins was one of my favorite submissions, personally. Here is how I think we will go about awarding your $10 Amazon e-certificate: first, send me an email through Uncyclopedia (using the sidebar "E-mail this user" link on my userpage) so that I can get your email address without making you post it publicly, unless you really don't care about posting it publicly. Include a secret word or something. Then, post a response here on your talk page giving the secret word from the email. As soon as I receive that confirmation, I'll send you your prize. Congrats, and thanks again for an excellent article! —rc (t) 00:36, 23 April 2006 (UTC)
Great article..., 23 April 2006 (UTC)
- Congratulations on a deserved win. -- Sir Mhaille
(talk to me)
[edit] Very good work
Very well done on your winning of The Poo Lit Surprise, you deserve it: this, to me, is by far the best article of the lot (except my entry, obviously...). Disturbingly, I see elements of The Enigmatic Mr. Grumpikins in me. --Hindleyite
Talk 11:39, 23 April 2006 (UTC)
[edit] Thank You, Thank You All
I'd like to thank everyone who made this possible: first my team of writers, next my dedicated art team equipped with the latest version of Paint....
I seriously would like to thank Elvis for actually posting my article, seeing as I couldn't, seeing as I'm banned. He took some time out of his doubtlessly busy day hanging out on IRC to upload my images, move my article to the proper namespace, and enter it on the PLS page. Thanks, man. You're my favorite admin. Maybe I'll make you an award when I can actually edit stuff again.
“reblocking range, your ISP is infected with a sh*thead, sorry”
~ Splaka on why I'm banned
This just might be the first time someone has won an award, VFH, etc. on this site while being banned, huh?
Oh yeah, WOOLLY EARMUFFS! --Iritscen 17:28, 25 April 2006 (UTC)
- E-certificate sent! Amazon says they "Arrive within hours," so email me if you don't get it soon. Sorry about the range block, but it kind of gives you a John Bunyan, writing-from-prison flair, no? Congrats again! —rc (t) 17:44, 25 April 2006 (UTC) | http://uncyclopedia.wikia.com/wiki/User_talk:Iritscen | crawl-002 | refinedweb | 433 | 73.47 |
Wednesday, April 6, 2016¶
Cannot see participants of course 039¶
Alexa reported the problem that Lino did not show the content of the EnrolmentsByCourse table for a given course. The server log disclosed the reason:
File "repositories/voga/lino_voga/projects/roger/lib/courses/models.py", line 102, in get_enrolment_info s += " " + self.section TypeError: coercing to Unicode: need string or buffer, Choice found
That is, there was one participant with a non-empty section field in this course, and Lino did not yet manage this case correctly. Fixed.
This problem took only 20 minutes, including voice chat with Alexa, code change, commit, release to their production site and explanation email to Alexa.
Managing user permissions¶
I had a phone meeting with Gerd about managing user permissions. The existing system uses class-based “user roles” and is the best we can imagine. We had two ideas for optimizing it: #856 and #857.
#856 is clear (no discussions) but will require some work and is not vital.
For #857 here an introduction. They face the problem that even though they currently have only about a dozen of user profiles, they sometimes have difficulties to figure out which one is needed for a new user.
My idea is to solve this by providing the following table:
The real table should of course (1) have more columns (configurable as a list of “key functionalities”) and (2) be accessible via the web interface.
The table has been generated using the following script:
from lino import startup startup('lino_welfare.projects.eupen.settings.demo') from atelier.rstgen import table from lino.api.shell import * from lino_welfare.modlib.welfare.roles import * roles = [OfficeOperator, OfficeUser, SepaUser, DebtsUser, LedgerStaff, Supervisor, SiteAdmin] rows = [] headers = ["Profile"] for r in roles: headers.append(r.__name__) for p in users.UserTypes.objects(): row = [p.text] for r in roles: row.append("X" if p.has_required_roles([r]) else "") rows.append(row) print(table(headers, rows)) | http://luc.lino-framework.org/blog/2016/0406.html | CC-MAIN-2018-05 | refinedweb | 319 | 50.53 |
DEBSOURCES
Skip Quicknav
Patches / linux /3.2.78-1
Makefile |
15 14 + 1 - 0 !
arch/arm/kernel/process.c |
6 4 + 2 - 0 !
arch/ia64/kernel/process.c |
5 3 + 2 - 0 !
arch/powerpc/kernel/process.c |
6 4 + 2 - 0 !
arch/s390/kernel/traps.c |
11 7 + 4 - 0 !
arch/sh/kernel/process_32.c |
6 4 + 2 - 0 !
arch/x86/kernel/dumpstack.c |
6 4 + 2 - 0 !
arch/x86/kernel/process.c |
6 4 + 2 - 0 !
arch/x86/um/sysrq_64.c |
6 4 + 2 - 0 !
9 files changed, 48 insertions(+), 19 deletions(-)
include package version along with kernel release in stack traces
Date: Tue, 24 Jul 2012 03:13:10 +0100
For distribution binary packages we assume
$DISTRIBUTION_OFFICIAL_BUILD, $DISTRIBUTOR and $DISTRIBUTION_VERSION
are set.
Makefile |
78 38 + 40 - 0 !
1 file changed, 38 insertions(+), 40 deletions(-)
kbuild: make the toolchain variables easily overwritable
Date: Sun, 22 Feb 2009 15:39:35 +0100
Documentation/DocBook/Makefile |
2 1 + 1 - 0 !
1 file changed, 1 insertion(+), 1 deletion(-)
docbook: make documentation/docbook -j clean
Date: Tue, 14 Jun 2006 00:05:06 +0200
Two concurrent calls to cmd_db2man may attempt to compress manual
pages generated by each other. gzip can then fail due to an input
file having already been compressed and removed.
Since dh_compress will compress manual pages later, we don't need
to run gzip here at all.
.gitignore |
18 7 + 11 - 0 !
1 file changed, 7 insertions(+), 11 deletions(-)
tweak gitignore for debian pkg-kernel using git svn.
[bwh: Tweak further for pure git]
drivers/media/dvb/dvb-usb/Kconfig |
2 1 + 1 - 0 !
drivers/media/dvb/dvb-usb/af9005-fe.c |
66 53 + 13 - 0 !
2 files changed, 54 insertions(+), 14 deletions(-)
[patch] af9005: use request_firmware() to load register init script
Read the register init script from the Windows driver. This is sick
but should avoid the potential copyright infringement in distributing
a version of the script which is directly derived from the driver.
sound/pci/Kconfig |
3 2 + 1 - 0 !
sound/pci/cs46xx/cs46xx_lib.c |
86 72 + 14 - 0 !
sound/pci/cs46xx/cs46xx_lib.h |
4 2 + 2 - 0 !
3 files changed, 76 insertions(+), 17 deletions(-)
cs46xx: use request_firmware() for old dsp code
Based on work by Kalle Olavi Niemitalo <kon@iki.fi>.
Tested by Antonio Ospite <ospite@studenti.unina.it>.
Unfortunately we cannot currently distribute the firmware.
fs/namei.c |
2 1 + 1 - 0 !
fs/splice.c |
10 5 + 5 - 0 !
include/linux/namei.h |
1 1 + 0 - 0 !
include/linux/splice.h |
6 6 + 0 - 0 !
4 files changed, 13 insertions(+), 6 deletions(-)
---
fs/file_table.c |
2 2 + 0 - 0 !
fs/inode.c |
1 1 + 0 - 0 !
fs/namei.c |
1 1 + 0 - 0 !
fs/namespace.c |
1 1 + 0 - 0 !
fs/notify/group.c |
3 3 + 0 - 0 !
fs/notify/mark.c |
4 4 + 0 - 0 !
fs/open.c |
1 1 + 0 - 0 !
fs/splice.c |
2 2 + 0 - 0 !
security/commoncap.c |
1 1 + 0 - 0 !
security/device_cgroup.c |
2 2 + 0 - 0 !
security/security.c |
10 10 + 0 - 0 !
11 files changed, 28 insertions(+)
fs/Kconfig |
1 1 + 0 - 0 !
fs/Makefile |
1 1 + 0 - 0 !
include/linux/Kbuild |
1 1 + 0 - 0 !
3 files changed, 3 insertions(+)
Documentation/ABI/testing/debugfs-aufs |
37 37 + 0 - 0 !
Documentation/ABI/testing/sysfs-aufs |
24 24 + 0 - 0 !
Documentation/filesystems/aufs/README |
333 333 + 0 - 0 !
Documentation/filesystems/aufs/design/01intro.txt |
162 162 + 0 - 0 !
Documentation/filesystems/aufs/design/02struct.txt |
226 226 + 0 - 0 !
Documentation/filesystems/aufs/design/03lookup.txt |
106 106 + 0 - 0 !
Documentation/filesystems/aufs/design/04branch.txt |
76 76 + 0 - 0 !
Documentation/filesystems/aufs/design/05wbr_policy.txt |
65 65 + 0 - 0 !
Documentation/filesystems/aufs/design/06mmap.txt |
47 47 + 0 - 0 !
Documentation/filesystems/aufs/design/07export.txt |
59 59 + 0 - 0 !
Documentation/filesystems/aufs/design/08shwh.txt |
53 53 + 0 - 0 !
Documentation/filesystems/aufs/design/10dynop.txt |
47 47 + 0 - 0 !
Documentation/filesystems/aufs/design/99plan.txt |
96 96 + 0 - 0 !
fs/aufs/Kconfig |
203 203 + 0 - 0 !
fs/aufs/Makefile |
42 42 + 0 - 0 !
fs/aufs/aufs.h |
60 60 + 0 - 0 !
fs/aufs/branch.c |
1172 1172 + 0 - 0 !
fs/aufs/branch.h |
230 230 + 0 - 0 !
fs/aufs/conf.mk |
38 38 + 0 - 0 !
fs/aufs/cpup.c |
1079 1079 + 0 - 0 !
fs/aufs/cpup.h |
81 81 + 0 - 0 !
fs/aufs/dbgaufs.c |
334 334 + 0 - 0 !
fs/aufs/dbgaufs.h |
49 49 + 0 - 0 !
fs/aufs/dcsub.c |
243 243 + 0 - 0 !
fs/aufs/dcsub.h |
94 94 + 0 - 0 !
fs/aufs/debug.c |
489 489 + 0 - 0 !
fs/aufs/debug.h |
243 243 + 0 - 0 !
fs/aufs/dentry.c |
1140 1140 + 0 - 0 !
fs/aufs/dentry.h |
237 237 + 0 - 0 !
fs/aufs/dinfo.c |
543 543 + 0 - 0 !
fs/aufs/dir.c |
634 634 + 0 - 0 !
fs/aufs/dir.h |
137 137 + 0 - 0 !
fs/aufs/dynop.c |
377 377 + 0 - 0 !
fs/aufs/dynop.h |
76 76 + 0 - 0 !
fs/aufs/export.c |
805 805 + 0 - 0 !
fs/aufs/f_op.c |
729 729 + 0 - 0 !
fs/aufs/f_op_sp.c |
298 298 + 0 - 0 !
fs/aufs/file.c |
673 673 + 0 - 0 !
fs/aufs/file.h |
298 298 + 0 - 0 !
fs/aufs/finfo.c |
157 157 + 0 - 0 !
fs/aufs/fstype.h |
496 496 + 0 - 0 !
fs/aufs/hfsnotify.c |
260 260 + 0 - 0 !
fs/aufs/hfsplus.c |
57 57 + 0 - 0 !
fs/aufs/hnotify.c |
712 712 + 0 - 0 !
fs/aufs/i_op.c |
1004 1004 + 0 - 0 !
fs/aufs/i_op_add.c |
711 711 + 0 - 0 !
fs/aufs/i_op_del.c |
478 478 + 0 - 0 !
fs/aufs/i_op_ren.c |
1026 1026 + 0 - 0 !
fs/aufs/iinfo.c |
276 276 + 0 - 0 !
fs/aufs/inode.c |
492 492 + 0 - 0 !
fs/aufs/inode.h |
587 587 + 0 - 0 !
fs/aufs/ioctl.c |
196 196 + 0 - 0 !
fs/aufs/loop.c |
133 133 + 0 - 0 !
fs/aufs/loop.h |
50 50 + 0 - 0 !
fs/aufs/magic.mk |
54 54 + 0 - 0 !
fs/aufs/module.c |
196 196 + 0 - 0 !
fs/aufs/module.h |
105 105 + 0 - 0 !
fs/aufs/opts.c |
1677 1677 + 0 - 0 !
fs/aufs/opts.h |
209 209 + 0 - 0 !
fs/aufs/plink.c |
515 515 + 0 - 0 !
fs/aufs/poll.c |
56 56 + 0 - 0 !
fs/aufs/procfs.c |
170 170 + 0 - 0 !
fs/aufs/rdu.c |
383 383 + 0 - 0 !
fs/aufs/rwsem.h |
188 188 + 0 - 0 !
fs/aufs/sbinfo.c |
343 343 + 0 - 0 !
fs/aufs/spl.h |
62 62 + 0 - 0 !
fs/aufs/super.c |
999 999 + 0 - 0 !
fs/aufs/super.h |
546 546 + 0 - 0 !
fs/aufs/sysaufs.c |
105 105 + 0 - 0 !
fs/aufs/sysaufs.h |
104 104 + 0 - 0 !
fs/aufs/sysfs.c |
257 257 + 0 - 0 !
fs/aufs/sysrq.c |
148 148 + 0 - 0 !
fs/aufs/vdir.c |
885 885 + 0 - 0 !
fs/aufs/vfsub.c |
835 835 + 0 - 0 !
fs/aufs/vfsub.h |
232 232 + 0 - 0 !
fs/aufs/wbr_policy.c |
700 700 + 0 - 0 !
fs/aufs/whout.c |
1049 1049 + 0 - 0 !
fs/aufs/whout.h |
88 88 + 0 - 0 !
fs/aufs/wkq.c |
214 214 + 0 - 0 !
fs/aufs/wkq.h |
92 92 + 0 - 0 !
fs/aufs/xino.c |
1265 1265 + 0 - 0 !
include/linux/aufs_type.h |
233 233 + 0 - 0 !
82 files changed, 29980 insertions(+)
fs/aufs/module.c |
1 1 + 0 - 0 !
1 file changed, 1 insertion(+)
[patch] aufs: mark as staging
I really don't want to support this.
security/device_cgroup.c |
1 1 + 0 - 0 !
1 file changed, 1 insertion(+)
arch/ia64/Makefile |
17 2 + 15 - 0 !
1 file changed, 2 insertions(+), 15 deletions(-)
hardcode arch script output
Date: Mon, 26 Mar 2007 16:30:51 -0600
Bug-Debian:
drivers/scsi/Kconfig |
1 1 + 0 - 0 !
1 file changed, 1 insertion(+)
[arm, mips] disable advansys
Florian Lohoff <flo@rfc822.org> reports the following build failure on IP32:
MODPOST 552 modules
ERROR: "free_dma" [drivers/scsi/advansys.ko] undefined!
make[5]: *** [__modpost] Error 1
But report:
arch/mips/Kbuild |
5 0 + 5 - 0 !
1 file changed, 5 deletions(-)
[patch] partially revert "mips: add -werror to arch/mips/kbuild"
drivers/tty/hvc/hvc_vio.c |
5 4 + 1 - 0 !
1 file changed, 4 insertions(+), 1 deletion(-)
fix console selection in powerpc lpar environment
Date: Tue, 27 Sep 2011 06:04:39 +0100
Bug-Debian:
Do not override the preferred console set through the kernel parameter.
Original version by Bastian Blank <waldi@debian.org>.
include/linux/sysrq.h |
2 1 + 1 - 0 !
lib/Kconfig.debug |
8 8 + 0 - 0 !
2 files changed, 9 insertions(+), 1 deletion(-)
allow access to sensitive sysrq keys to be restricted by default
Date: Sun, 14 Feb 2010 16:11:35 +0100
Add a Kconfig variable to set the initial value of the Magic
SysRq mask (sysctl: kernel.sysrq).
arch/sh/Makefile |
1 0 + 1 - 0 !
1 file changed, 1 deletion(-)
net/ieee802154/af_ieee802154.c |
2 1 + 1 - 0 !
1 file changed, 1 insertion(+), 1 deletion(-)
[patch 2/3] af_802154: disable auto-loading as mitigation against local exploits
net/rds/af_rds.c |
2 1 + 1 - 0 !
1 file changed, 1 insertion(+), 1 deletion(-)
[patch 1/3] rds: disable auto-loading as mitigation against local exploits
net/decnet/af_decnet.c |
2 1 + 1 - 0 !
1 file changed, 1 insertion(+), 1 deletion(-)
[patch] decnet: disable auto-loading as mitigation against local exploits
drivers/md/dm-table.c |
14 8 + 6 - 0 !
1 file changed, 8 insertions(+), 6 deletions(-)
[patch] dm: deal with merge_bvec_fn in component devices better
Bug-Debian:
This is analogous to commit 627a2d3c29427637f4c5d31ccc7fcbd8d312cd71,
which does the same for md-devices at the top of the stack. The
following explanation is taken from that commit. Thanks to Neil Brown
<neilb@suse.de> for the advice.
If a component device has a merge_bvec_fn then as we never call it
we must ensure we never need to. Currently this is done by setting
max_sector to 1 PAGE, however this does not stop a bio being created
with several sub-page iovecs that would violate the merge_bvec_fn.
So instead set max_segments to 1 and set the segment boundary to the
same as a page boundary to ensure there is only ever one single-page
segment of IO requested at a time.
This can particularly be an issue when 'xen' is used as it is
known to submit multiple small buffers in a single bio.
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
scripts/kconfig/conf.c |
42 32 + 10 - 0 !
scripts/kconfig/confdata.c |
9 9 + 0 - 0 !
scripts/kconfig/expr.h |
2 2 + 0 - 0 !
scripts/kconfig/lkc_proto.h |
1 1 + 0 - 0 !
4 files changed, 44 insertions(+), 10 deletions(-)
[patch] kbuild: kconfig: verbose version of --listnewconfig
If the KBUILD_VERBOSE environment variable is set to non-zero, show
the default values of new symbols and not just their names.
Based on work by Bastian Blank <waldi@debian.org> and
maximilian attems <max@stro.at>. Simplified by Michal Marek
<mmarek@suse.cz>.
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
kernel/sched_autogroup.c |
2 1 + 1 - 0 !
1 file changed, 1 insertion(+), 1 deletion(-)
sched: do not enable autogrouping by default
Date: Wed, 16 Mar 2011 03:17:06 +0000
Documentation/kernel-parameters.txt |
4 2 + 2 - 0 !
init/Kconfig |
8 8 + 0 - 0 !
kernel/cgroup.c |
20 16 + 4 - 0 !
mm/memcontrol.c |
3 3 + 0 - 0 !
4 files changed, 29 insertions(+), 6 deletions(-)
[patch 1/2] cgroups: allow memory cgroup support to be included but
disabled
Memory cgroup support has some run-time overhead, so it's useful to
include it in a distribution kernel without enabling it by default.
Add a kernel config option to disable it by default and a kernel
parameter 'cgroup_enable' as the opposite to 'cgroup_disable'.
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Documentation/cgroups/memory.txt |
4 4 + 0 - 0 !
1 file changed, 4 insertions(+)
[patch 2/2] cgroups: document the debian memory resource controller
config change
drivers/platform/x86/Kconfig |
7 7 + 0 - 0 !
drivers/platform/x86/Makefile |
1 1 + 0 - 0 !
drivers/platform/x86/amilo-rfkill.c |
173 173 + 0 - 0 !
3 files changed, 181 insertions(+)
[patch] x86: add amilo-rfkill driver for some fujitsu-siemens amilo
laptops
commit c215ab9a7530d415707430de8d51a58ca6a41808 upstream. function to write to the i8042 safely.
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
arch/arm/mach-ixp4xx/include/mach/io.h |
57 57 + 0 - 0 !
1 file changed, 57 insertions(+)
ixp4xx: add io{read,write}{16,32}be functions
Date: 2011-11-13 19:27:56 +0000
Some driver are now requiring some be io functions, add noted in
commit (06901bd83412db5a31de7526e637101ed0c2c472). Otherwise, it may lead
to build errors like this one :
drivers/net/mlx4/en_tx.c: In function ‘mlx4_en_xmit’:
drivers/net/mlx4/en_tx.c:815: error: implicit declaration of function ‘iowrite32be’
make[3]: *** [drivers/net/mlx4/en_tx.o] Error 1
make[2]: *** [drivers/net/mlx4] Error 2
make[1]: *** [drivers/net] Error 2
Signed-off-by: Arnaud Patard <arnaud.patard@rtp-net.org>
drivers/bcma/host_pci.c |
4 3 + 1 - 0 !
drivers/net/wireless/brcm80211/Kconfig |
1 0 + 1 - 0 !
2 files changed, 3 insertions(+), 2 deletions(-)
[patch] bcma: do not claim pci device ids also claimed by brcmsmac
drivers/staging/media/lirc/lirc_serial.c |
23 12 + 11 - 0 !
1 file changed, 12 insertions(+), 11 deletions(-)
[patch 4/5] [media] staging: lirc_serial: fix bogus error codes
commit 9b98d60679711753e548be15c6bef5239db6ed64 upstream.
Device not found? ENODEV, not EINVAL.
Write to read-only device? EPERM, not EBADF.
Invalid argument? EINVAL, not ENOSYS.
Unsupported ioctl? ENOIOCTLCMD, not ENOSYS.
Another function returned an error code? Use that, don't replace it.
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
arch/x86/mm/memtest.c |
2 2 + 0 - 0 !
1 file changed, 2 insertions(+)
[patch] x86: memtest: warn if bad ram found
Since this is not a particularly thorough test, if we find any bad
bits of RAM then there is a fair chance that there are other bad bits
we fail to detect.
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
kernel/power/user.c |
64 64 + 0 - 0 !
1 file changed, 64 insertions(+)
[patch] pm / hibernate: implement compat_ioctl for /dev/snapshot
commit c336078bf65c4d38caa9a4b8b7b7261c778e622c upstream.
This allows uswsusp built for i386 to run on an x86_64 kernel (tested
with Debian package version 1.0+20110509-2).
References:
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
arch/arm/include/asm/bug.h |
1 0 + 1 - 0 !
1 file changed, 1 deletion(-)
[patch] arm: remove use of possibly-undefined build_bug_on in
<asm/bug.h>
<asm/bug.h> may be included by either <linux/bug.h> or
<linux/kernel.h> but only the latter will define BUILD_BUG_ON.
Bodge it until there is a proper upstream fix.
arch/arm/include/asm/pgtable.h |
1 1 + 0 - 0 !
arch/arm/include/asm/processor.h |
2 2 + 0 - 0 !
arch/arm/mm/mmap.c |
173 168 + 5 - 0 !
3 files changed, 171 insertions(+), 5 deletions(-)
[patch] arm: 7169/1: topdown mmap support
commit 7dbaa466780a754154531b44c2086f6618cee3a8 upstream.
Similar to other architectures, this adds topdown mmap support in user
process address space allocation policy. This allows mmap sizes greater
than 2GB. This support is largely copied from MIPS and the generic
implementations.
The address space randomization is moved into arch_pick_mmap_layout.
Tested on V-Express with ubuntu and a mmap test from here:
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
arch/arm/mach-kirkwood/common.c |
2 2 + 0 - 0 !
arch/arm/mach-kirkwood/include/mach/kirkwood.h |
1 1 + 0 - 0 !
2 files changed, 3 insertions(+)
arm: kirkwood: recognize a1 revision of 6282 chip
commit a87d89e74f0a4b56eaee8c3ef74bce69277b780f upstream.
Recognize the Kirkwood 6282 revision A1 chip since products using
this chip are shipping now, such as the QNAP TS-x19P II devices.
Signed-off-by: Martin Michlmayr <tbm@cyrius.com>
Documentation/input/alps.txt |
75 75 + 0 - 0 !
drivers/input/mouse/alps.c |
37 1 + 36 - 0 !
2 files changed, 76 insertions(+), 36 deletions(-)
[patch 1/5] input: alps - move protocol information to documentation
commit d4b347b29b4d14647c7394f7167bf6785dc98e50 upstream.
In preparation for new protocol support, move the protocol
information currently documented in alps.c to
Documentation/input/alps.txt, where it can be expanded without
cluttering up the driver.
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
drivers/input/mouse/alps.c |
47 23 + 24 - 0 !
drivers/input/mouse/alps.h |
4 4 + 0 - 0 !
2 files changed, 27 insertions(+), 24 deletions(-)
[patch 2/5] input: alps - add protocol version field in
alps_model_info
commit fa629ef5222193214da9a2b3c94369f79353bec9 upstream.
In preparation for adding support for more ALPS protocol versions,
add a field for the protocol version to the model info instead of
using a field in the flags. OLDPROTO and !OLDPROTO are now called
version 1 and version 2, repsectively.
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
drivers/input/mouse/alps.c |
10 5 + 5 - 0 !
1 file changed, 5 insertions(+), 5 deletions(-)
[patch 3/5] input: alps - remove assumptions about packet size
commit b46615fe9215214ac00e26d35fc54dbe1c510803 upstream.
In preparation for version 4 protocol support, which has 8-byte
data packets, remove all hard-coded assumptions about packet size
and use psmouse->pktsize instead.
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
drivers/input/mouse/alps.c |
791 753 + 38 - 0 !
drivers/input/mouse/alps.h |
14 14 + 0 - 0 !
drivers/input/mouse/psmouse.h |
1 1 + 0 - 0 !
3 files changed, 768 insertions(+), 38 deletions(-)
[patch 4/5] input: alps - add support for protocol versions 3 and 4
commit 25bded7cd60fa460e520e9f819bd06f4c5cb53f0 upstream.
This patch adds support for two ALPS touchpad protocols not
supported currently by the driver, which I am arbitrarily naming
version 3 and version 4. Support is single-touch only at this time,
although both protocols are capable of limited multitouch support.
Thanks to Andrew Skalski, who did the initial reverse-engineering
of the v3 protocol.
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
drivers/input/mouse/alps.c |
233 214 + 19 - 0 !
drivers/input/mouse/alps.h |
1 1 + 0 - 0 !
2 files changed, 215 insertions(+), 19 deletions(-)
[patch 5/5] input: alps - add semi-mt support for v3 protocol
commit 01ce661fc83005947dc958a5739c153843af8a73 upstream.
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
arch/x86/kvm/vmx.c |
11 7 + 4 - 0 !
arch/x86/kvm/x86.c |
7 6 + 1 - 0 !
include/linux/kvm_host.h |
1 1 + 0 - 0 !
3 files changed, 14 insertions(+), 5 deletions(-)
[patch 1/2] kvm: nvmx: add kvm_req_immediate_exit
commit d6185f20a0efbf175e12831d0de330e4f21725aa upstream.
This patch adds a new vcpu->requests bit, KVM_REQ_IMMEDIATE_EXIT.
This bit requests that when next entering the guest, we should run it only
for as little as possible, and exit again.
We use this new option in nested VMX: When L1 launches L2, but L0 wishes L1
to continue running so it can inject an event to it, we unfortunately cannot
just pretend to have run L2 for a little while - We must really launch L2,
otherwise certain one-off vmcs12 parameters (namely, L1 injection into L2)
will be lost. So the existing code runs L2 in this case.
But L2 could potentially run for a long time until it exits, and the
injection into L1 will be delayed. The new KVM_REQ_IMMEDIATE_EXIT allows us
to request that L2 will be entered, as necessary, but will exit as soon as
possible after entry.
Our implementation of this request uses smp_send_reschedule() to send a
self-IPI, with interrupts disabled. The interrupts remain disabled until the
guest is entered, and then, after the entry is complete (often including
processing an injection and jumping to the relevant handler), the physical
interrupt is noticed and causes an exit.
On recent Intel processors, we could have achieved the same goal by using
MTF instead of a self-IPI. Another technique worth considering in the future
is to use VM_EXIT_ACK_INTR_ON_EXIT and a highest-priority vector IPI - to
slightly improve performance by avoiding the useless interrupt handler
which ends up being called when smp_send_reschedule() is used.
Signed-off-by: Nadav Har'El <nyh@il.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
arch/x86/kvm/vmx.c |
7 4 + 3 - 0 !
1 file changed, 4 insertions(+), 3 deletions(-)
[patch 2/2] kvm: nvmx: fix warning-causing idt-vectoring-info
behavior
commit 51cfe38ea50aa631f58ed8c340ed6f0143c325a8 upstream.
When L0 wishes to inject an interrupt while L2 is running, it emulates an exit
to L1 with EXIT_REASON_EXTERNAL_INTERRUPT. This was explained in the original
nVMX patch 23, titled "Correct handling of interrupt injection".
Unfortunately, it is possible (though rare) that at this point there is valid
idt_vectoring_info in vmcs02. For example, L1 injected some interrupt to L2,
and when L2 tried to run this interrupt's handler, it got a page fault - so
it returns the original interrupt vector in idt_vectoring_info. The problem
is that if this is the case, we cannot exit to L1 with EXTERNAL_INTERRUPT
like we wished to, because the VMX spec guarantees that idt_vectoring_info
and exit_reason_external_interrupt can never happen together. This is not
just specified in the spec - a KVM L1 actually prints a kernel warning
"unexpected, valid vectoring info" if we violate this guarantee, and some
users noticed these warnings in L1's logs.
In order to better emulate a processor, which would never return the external
interrupt and the idt-vectoring-info together, we need to separate the two
injection steps: First, complete L1's injection into L2 (i.e., enter L2,
injecting to it the idt-vectoring-info); Second, after entry into L2 succeeds
and it exits back to L0, exit to L1 with the EXIT_REASON_EXTERNAL_INTERRUPT.
Most of this is already in the code - the only change we need is to remain
in L2 (and not exit to L1) in this case.
Note that the previous patch ensures (by using KVM_REQ_IMMEDIATE_EXIT) that
although we do enter L2 first, it will exit immediately after processing its
injection, allowing us to promptly inject to L1.
Note how we test vmcs12->idt_vectoring_info_field; This isn't really the
vmcs12 value (we haven't exited to L1 yet, so vmcs12 hasn't been updated),
but rather the place we save, at the end of vmx_vcpu_run, the vmcs02 value
of this field. This was explained in patch 25 ("Correct handling of idt
vectoring info") of the original nVMX patch series.
Thanks to Dave Allan and to Federico Simoncelli for reporting this bug,
to Abel Gordon for helping me figure out the solution, and to Avi Kivity
for helping to improve it.
Signed-off-by: Nadav Har'El <nyh@il.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com> | https://sources.debian.org/patches/linux/3.2.78-1/ | CC-MAIN-2019-09 | refinedweb | 3,756 | 61.43 |
From: Doug Gregor (dgregor_at_[hidden])
Date: 2006-06-28 16:00:30
On Jun 28, 2006, at 1:56 PM, Cromwell Enage wrote:
>> Thanks! (And, sorry for the late response)
>>
>> Doug
>
> No problem, except that in
> <boost/graph/graph_concepts.hpp>, some of the
> constructor declarations are now misplaced (i.e.
> residing in the definitions of other classes), because
> I created my patch before you moved the classes to the
> boost::concepts namespace.
Gaaa! Fixed in CVS.
Doug
Boost list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | http://lists.boost.org/Archives/boost/2006/06/106977.php | crawl-001 | refinedweb | 104 | 71.31 |
How to implement Navie String Searching Algorithm in Python
In this post, we will study about finding a pattern in the text. There will be the main text a substring. The goal is to find how many times at what positions the substring occurs in the text. This technique of pattern-finding helps when there is a huge text and we have to find the occurrences of some keywords or especially given words. Here, we will talk about the most basic ‘Naive String Matching Algorithm in Python’ and will further improvise it through better and shorter codes.
Naive Algorithms as the word ‘naive’ itself suggest algorithms that are very basic and simple to implement. These algorithms perform the most simple and obvious techniques to perform work just like how a child would. These methods are good to start with for the newbies before proceeding towards more efficient and complicated algorithms. A naive string searching algorithm is also one of them. It is the simplest method among other string matching/pattern-finding algorithms.
The method starts by matching the string letter by letter. It checks for the first character in the main text and the first character in the substring. If it matches it moves ahead checking the next character of both the strings. If at any place the characters don’t match the loop breaks and it starts again from the next character of the main text string.
Python code for Naive String matching algorithm
Below is the code for the Naive String matching algorithm.
def naive(txt,wrd): lt=len(txt)#length of the string lw=len(wrd)/3length of the substring(pattern) for i in range(lt-lw+1): j=0 while(j<lw): if txt[i+j]==wrd[j]: j+=1 else: break else: print('found at position',i)
In the code above the function, ‘naive’ takes two arguments txt(the main string from which the pattern is to searched) and ward(the pattern to be searched). A loop is taken from 0 to (length of string-length of substring+1) since at least the substring’s length should be left to be matched towards the end. Each character is extracted from the string through the ‘for’ loop(txt[i]). Then there is an inner while loop that matches that character with the subsequent character of the substring unless the entire substring is matched. If it is not found the loop breaks and the next iteration as in the very next character is taken out for the process. As soon as the entire substring is found the while condition gets false and the else part is executed and the position is printed. It is to be noted carefully that one else is inside the loop which gets executed only when the if the condition is false whereas the other else gets executed when the while loop condition becomes False.
Let’s try the code for multiple inputs-
- String- “”AABAACAADAABAABA”
Substring- “AABA”
naive("AABAACAADAABAABA","AABA")
Output-
found at position 0 found at position 9 found at position 12
- String- “1011101110”
Substring- “111”
naive("1011101110","111")
Output-
found at position 2 found at position 6
Best Case- The best case of this method occurs when the first character of the pattern doesn’t match and hence the entire string is rejected there.
Worst Case- When all the characters or only the last character of the string and substring are different. eg-
String-“AAAAAAAAAA” & Substring-“AAA” or “AAAB”
- Using the ‘find’ in-built function in Python. This feature finds the position of the substring in the string. The second parameter given here denotes the index position it will begin its search from. Every time the string is found at a position ‘j’ is incremented and the matching starts from the next position till the entire length of the string.
def optimized_naive2(txt,wrd): if wrd in txt: j=txt.find(wrd) while(j!=-1): print("found at position",j) j=txt.find(wrd,j+1)
Both of the programs generate the same output as the previous one.
Also read: Using binarytree module in Python for Binary Tree | https://www.codespeedy.com/how-to-implement-navie-string-searching-algorithm-in-python/ | CC-MAIN-2020-29 | refinedweb | 687 | 68.4 |
IRC log of xproc on 2009-05-07
Timestamps are in UTC.
15:06:22 [RRSAgent]
RRSAgent has joined #xproc
15:06:22 [RRSAgent]
logging to
15:06:47 [Norm]
Meeting: XML Processing Model WG
15:06:47 [Norm]
Date: 7 May 2009
15:06:47 [Norm]
Agenda:
15:06:47 [Norm]
Meeting: 143
15:06:47 [Norm]
Chair: Norm
15:06:49 [Norm]
Scribe: Norm
15:06:51 [Norm]
ScribeNick: Norm
15:06:56 [Norm]
Zakim, who's here?
15:06:56 [Zakim]
On the phone I see Vojtech, Norm, Ht, Murray_Maloney
15:06:57 [Zakim]
On IRC I see RRSAgent, Zakim, MoZ, Norm, ht_home, ht
15:07:37 [Norm]
Present: Vojtech, Norm, Henry, Murray
15:07:46 [Norm]
Topic: Accept this agenda?
15:07:46 [Norm]
->
15:07:52 [Norm]
Accepted
15:07:58 [Norm]
Topic: Accept minutes from the previous meeting?
15:07:58 [Norm]
->
15:08:01 [Norm]
Accepted.
15:08:12 [Norm]
Topic: Next meeting: telcon 14 May 2009
15:08:18 [Norm]
Norm must give regrets.
15:08:34 [Norm]
Henry to chair
15:08:49 [Zakim]
+MoZ
15:08:56 [Norm]
Present: Vojtech, Norm, Henry, Murray, Mohamed
15:09:04 [Norm]
Topic: The default XML processing model
15:10:18 [Norm]
Henry reminds us of the two deliverables from our charter.
15:11:05 [ht]
15:11:24 [Norm]
Henry: Those are Tim's thoughts on this issue.
15:12:38 [Norm]
Henry: Tim's ideas are driven largely by the combination of XML elements from different namespaces in vocabularies like HTML
15:13:08 [Norm]
...Tim frames the question in terms of "what is the meaning of this document"? He likes to think about it in a way that I'll characterize informally as "compositional semantics"
15:13:37 [Norm]
...If you have a tree structured thing, you can talk about its semantics as being compositional in so far as the meaning of a bit of the tree is a function of the label on that node and the meaning of its children.
15:13:50 [Norm]
...So a simple recursive descent process will allow you to compute the meaning of the whole tree.
15:14:29 [Norm]
...It's not surprising that we switch between a processing view and more abstract statements about the meaning of nodes
15:14:55 [Norm]
...Tim's perspective is that if the fully qualified names of elements can be thought of as functions, then the recursive descent story is straightforward.
15:15:13 [Norm]
...That is the "XML functions" story about the document's meaning.
15:15:57 [Norm]
Henry: Another bit of the background is the kind of issue that crops up repeatedly "now that we have specs like XML encryption and XInclude, when another spec that's supposed to deal with generic XML
15:16:54 [Norm]
...is created, just what infoset is it that those documents work with if you hand them a URI. Is it the result you get from parsing with a minimal parser, or a maximal parser, or one of those followed by ... name your favorite set of specs ...
15:17:18 [Norm]
...until nothing changes". People keep bumping up against this and deciding that there should be some generic spec to answer this question.
15:17:25 [Norm]
Henry: That's where the requirement in our charter comes from
15:18:03 [Norm]
So GRDDL could have said "You start with a blurt infoset" where a spec such as ours could define what blurt is.
15:18:12 [Norm]
Henry: I ran with this ball in the TAG for a while.
15:18:17 [ht]
15:18:18 [Norm]
Henry: This document is as far as I got.
15:18:49 [Norm]
...It uses the phrase "elaborated infoset" to describe a richer infoset that you get from performing some number of operations.
15:19:00 [Norm]
...That document doesn't read like a TAG finding, it reads like a spec.
15:19:23 [Norm]
...So one way of looking at this document is as input into this discussion in XProc.
15:19:35 [Norm]
Henry: I think the reasons this stalled in the TAG are at least to some extent outside the scope of our deliverable.
15:20:03 [Norm]
...They have to do with the distinction between some kind of elaborated infoset and the more general question of application meaning of XML documents.
15:20:11 [Norm]
...Fortunately, we don't have to deal with the latter.
15:20:36 [Norm]
Henry: Hopefully, folks can review these documents before next week.
15:20:44 [Norm]
Norm: They make sense to me
15:22:37 [Norm]
15:24:11 [Norm]
Norm: My concern with my chair's hat on is that there are lots of ways to approach this. Perhaps the right thing to do is start by trying to create a requirements and use cases document?
15:24:26 [Norm]
Murray: Can we do that by pointing to all the existing documents?
15:24:51 [Norm]
Henry: I think we could benefit from a pointer to Murray's discussion on the GRDDL working group mailing list.
15:25:13 [Norm]
Henry: I'm thinking that what this is primarily is a way of defining a vocabulary that other specs can use.
15:25:31 [Norm]
Murray: Would I be way off base thinking that a net result of this process will be a pipeline written in XProc?
15:26:21 [Norm]
Henry: The process I described above requires you to repeat the process until nothing changes.
15:26:50 [Norm]
Norm: You could do it, you'd end up doing it n+1 times, but you could do it with p:compare.
15:27:13 [Norm]
Henry: It'd have to be recursive, it'd be a little tricky, but I guess you could do it.
15:27:39 [Norm]
Murray: It seems to me that if we can't answer part 2 by relying on part 1, then we didn't complete part 1 properly.
15:28:01 [Norm]
...If you can't produce an elaborated infoset by running it through a pipeline, then you did something wrong in the pipeline.
15:28:12 [Norm]
...Otherwise, what hope does anyone else have in accomplishing this.
15:29:07 [Norm].
15:29:18 [Norm]
...I'm not yet convinced that there aren't things in that category that you need.
15:29:32 [Norm]
Murray: Maybe that'll provide the imputus for XProc v.next
15:30:15 [Norm]
...If I have a document that purports some truth, but I have to go through lots of machinations to get there, but there's a formulaic way then we should be able to use the tools to do it.
15:30:51 [Norm]
Henry: I believe, thoughI I'm not sure, that you could implement XInclude by writing an XProc pipeline that didn't use the XInclude step. Doing so might reveal something about the complexity of XInclude.
15:31:11 [Norm]
...If someone said you don't need to do that, you can always write a pipeline, I'd say "No, wrong, you could but you wouldn't want to."
15:32:15 [Norm]
Norm: If we get to the point where we think we could do it, but we needed a few extra atomic steps, I think we could call that victory.
15:32:52 [Norm]
Henry: Let me introduce one other aspect, in attempting to do this in a way that doesn't require a putative spec to be rewritten every time some new bit of XML processing gets defined,
15:33:32 [Norm]
...the elaborated infoset proposal has this notion of "elaboration cues" and attempts to define the process independent of a concreate list of these cues.
15:34:02 [Norm]
Henry: I'm not sure how valuable that attempt to be generic is.
15:34:56 [Norm]
Norm: I think one possibility is to define a concrete pipeline that does just have a limited set of steps.
15:35:51 [Norm]
Henry: Doesn't that mean that if we add a new obfuscation step, that de-obfuscation requires us to revisit the elaborated infoset spec?
15:35:54 [Norm]
Norm: Yes.
15:36:17 [Norm]
Murray: Right, we're talking about the default processing model. Henry's talking about the obfuscated processing model which would be different.
15:36:27 [Norm]
...You can petition later to become part of the default.
15:36:46 [Norm]
Henry: Another way of putting it is, should the elaboration spec be extensible?
15:36:52 [Norm]
Norm: Right. And one answer is "no".
15:37:25 [MoZ]
Scribe : MoZ
15:38:13 [MoZ]
Murray : it looks like it has been done in other spec. What we need to do is to define the processing model for the most common cases
15:39:00 [MoZ]
Henry : my experience is that it will be easier to have agreement on elaboration to allow people to control what is elaboration and what isn't
15:39:21 [MoZ]
...The default is to have XInclude but not external stylesheet
15:39:41 [MoZ]
...Some want to have XInclude, some wants external stylesheet and other wants both
15:40:16 [MoZ]
Norm : I agree, but will it make any progress on the problem ?
15:40:30 [MoZ]
Henry : it will depends on the conformance story
15:41:24 [MoZ]
...what I had in mind was : if GRRDL is coming out, and has the ability to say that the input of the processing is a GRDDL elaborated infoset
15:42:42 [MoZ]
Murray : what happen to XML document that is not anymore an XML Document (Encryption, Zipping, etc...)
15:43:23 [MoZ]
Henry : I agree, that's all we talking about
15:43:39 [Norm]
scribe: Norm
15:43:39 [MoZ]
Scribe : Norm
15:44:16 [Norm]
Some discussion about which technologies preserve the "XMLness" of a document.
15:45:42 [Norm]
Encryption and Henry's obfuscation example both produce XML documents
15:47:05 [Norm]
Mohamed: It's an interesting discussion. There is, I think, a common base on which we can at least agree.
15:47:33 [Norm]
...These are more technical than logical, for example, XInclude, encryption, where the behavior is clearly defined.
15:47:57 [Norm]
...On top of that, there's a layer of user behavior. I think we'll have a hard time at that layer.
15:48:23 [Norm]
...Defining the use caes and requirements is probably the only place we can start.
15:49:19 [Norm]
Norm: Yeah, I don't want to make us do work that's already been done. But I think there would be value in collecting the use cases together
15:49:39 [Norm]
...to see if we have agreement that some, all, or none of them are things we think we could reasonably be expected to achieve.
15:49:46 [Norm]
Murray: Hopefully more than noen.
15:49:49 [Norm]
s/noen/none/
15:49:52 [Norm]
Norm: Indeed.
15:50:46 [Norm]
Norm: Ok, for next week, let's plan to have reviewed the documents that Henry pointed to and spend some time on use cases and requirements.
15:50:48 [Norm]
Henry. Agreed.
15:50:54 [Norm]
Topic: Any other business?
15:50:55 [Norm]
None heard.
15:51:04 [Norm]
Adjourned.
15:51:05 [Zakim]
-Norm
15:51:08 [Zakim]
-Murray_Maloney
15:51:09 [Zakim]
-Vojtech
15:51:09 [Zakim]
-MoZ
15:51:10 [Zakim]
-Ht
15:51:12 [Zakim]
XML_PMWG()11:00AM has ended
15:51:13 [Zakim]
Attendees were Norm, Vojtech, Ht, Murray_Maloney, MoZ
15:51:14 [Norm]
RRSAgent, set logs world-visible
15:51:17 [Norm]
RRSAgent, draft minutes
15:51:17 [RRSAgent]
I have made the request to generate
Norm
17:33:48 [Zakim]
Zakim has left #xproc | http://www.w3.org/2009/05/07-xproc-irc | CC-MAIN-2016-36 | refinedweb | 1,985 | 68.91 |
Parametric Polymorphism is a well-established programming language feature. Generics offers this feature to C#.
The best way to understand generics is to study some C# code that would benefit from generics. The code stated below is about a simple Stack class with two methods: Push () and Pop (). First, without using generics example you can get a clear idea about two issues: a) Boxing and unboxing overhead and b) No strong type information at compile type. After that the same Stack class with the use of generics explains how these two issues are solved.
Stack
Push ()
Pop ().
public class Stack
{
object[] store;
int size;
public void Push(object x) {...}
public object Pop() {...}
}
You can push a value of any type onto a stack. To retrieve, the result of the Pop method must be explicitly cast back. For example if an integer passed to the Push method, it is automatically boxed. While retrieving, it must be unboxed with an explicit type cast.
In C# with generics, you declare class Stack <T> {...}, where T is the type parameter. Within class Stack <T> you can use T as if it were a type. You can create a Stack as Integer by declaring Stack <int> or Stack as Customer object by declaring Stack<Customer>. Simply your type arguments get substituted for the type parameter. All of the Ts become ints or Customers, you don't have to downcast, and there is strong type checking everywhere.
Stack >,
Implementation of parametric polymorphism can be done in two ways 1. Code Specialization: Specializing the code for each instantiation 2. Code sharing: Generating common code for all instantiations. The C# implementation of generics uses both code specialization and code sharing as explained below.
At runtime, when your application makes its first reference to Stack <int>, the system looks to see if anyone already asked for Stack <int>. If not, it feeds into the JIT the IL and metadata for Stack <T> and the type argument int. The .NET Common Language Runtime creates a specialized copy of the native code for each generic type instantiation with a value type, but shares a single copy of the native code for all reference types (since, at the native code level, references are just pointers with the same representation).
Stack <int>.
Stack <T>
In other words, for instantiations those are value types: such as Stack <int>, Stack <long>, Stack<double>, Stack<float> CLR creates a unique copy of the executable native code. So Stack<int> gets its own code. Stack<long> gets its own code. Stack <float> gets its own code. Stack <int> uses 32 bits and Stack <long> uses 64 bits. While reference types, Stack <dog> is different from Stack <cat>, but they actually share all the same method code and both are 32-bit pointers. This code sharing avoids code bloat and gives better performance.<T1,T2>
private DataController _controller;
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/Articles/6207/Generics-C | CC-MAIN-2017-30 | refinedweb | 510 | 65.22 |
pthread_detach - detach a thread
#include <pthread.h> int pthread_detach(pthread_t thread);
The pthread_detach() function is used to indicate to the implementation that storage for the thread thread can be reclaimed when that thread terminates. If thread has not terminated, pthread_detach() will not cause it to terminate. The effect of multiple pthread_detach() calls on the same target thread is unspecified.
If the call succeeds, pthread_detach() returns 0. Otherwise, an error number is returned to indicate the error.
The pthread_detach() function will fail if:
- [EINVAL]
- The implementation has detected that the value specified by thread does not refer to a joinable thread.
- [ESRCH]
- No thread could be found corresponding to that specified by the given thread ID.
The pthread_detach() function will not return an error code of [EINTR].
None.
None.
None.
pthread_join(), <pthread.h>.
Derived from the POSIX Threads Extension (1003.1c-1995) | http://pubs.opengroup.org/onlinepubs/7990989775/xsh/pthread_detach.html | crawl-003 | refinedweb | 141 | 57.67 |
A continuation from previous Module. The source code for this module is: C/C++ pointers program source codes. The lab worksheets for your practice are: C/C++ pointers part 1 and C/C++ pointers part 2. Also the exercises in the Indirection Operator lab worksheet 1, lab worksheet 2 and lab worksheet 3.
Element of an array are stored in sequential memory locations with the first element in the lowest address.
Subsequent array elements, those with an index greater than 0 are stored at higher addresses.
As mentioned before, array of type int occupies 2 byte of memory and a type float occupies 4 byte. Hence the size is depend on the type and the platform (e.g. 32, 64 bits system).
So, for float type, each element is located 4 bytes higher than the preceding element, and the address of each array element is 4 higher than the address of the preceding element.
For example, relationship between array storage and addresses for a 6-elements int array and a 3-elements float array is illustrated below.
Figure 8.9
The x variable without the array brackets is the address of the first element of the array, x[0].
The element is at address of 1000; the second element is 1002 and so on.
As conclusion, to access successive elements of an array of a particular data type, a pointer must be increased by the sizeof(data_type). sizeof() function returns the size in bytes of a C/C++ data type. Let take a look at the following example:
// demonstrates the relationship between addresses
// and elements of arrays of different data type
#include <stdio.h>
void main()
{
// declare three arrays and a counter variable
int i[10], x;
float f[10];
double d[10];
// print the table heading
printf("\nArray's el. add of i[x] add of f[x] add of d[x]");
printf("\n|================================");
printf("======================|");
// print the addresses of each array element
for(x=0; x<10; x++)
printf("\nElement %d:\t%p\t%p\t%p",x,&i[x],&f[x],&d[x]);
printf("\n|================================");
printf("======================|\n");
printf("\nLegends:");
printf("\nel.- element, add - address\n");
printf("\ndifferent pc, shows different addresses\n");
}
Notice the difference between the element addresses.
12FEB4 – 12FEB0 = 4 bytes for int
12FE78 – 12FE74 = 4 bytes float
12FE24 – 12FE1C = 8 bytes double
The size of the data type depends on the specification of your compiler, whether your target is 16, 32 or 64 bits systems, the output of the program may be different for different PC. The addresses also may different.
Try another program example.
// demonstrates the use of pointer arithmetic to access
// array elements with pointer notation
#include <stdio.h>
#define MAX 10
void main()
{
// declare and initialize an integer array
int array1[MAX] = {0,1,2,3,4,5,6,7,8,9};
// declare a pointer to int and an int variable
int *ptr1, count;
// declare and initialize a float array
float array2[MAX] = {0.0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9};
// declare a pointer to float
float *ptr2;
// initialize the pointers
// just an array name is the pointer to the
// 1st array element, both left value and right value
// of the expression are pointers types...
ptr1 = array1;
ptr2 = array2;
// print the array elements
printf("\narray1 values array2 values");
printf("\n-------------------------");
// iterate or loop the arrays and display the content...
for(count = 0; count < MAX; count++)
printf("\n%d\t\t%f", *ptr1++, *ptr2++);
printf("\n-------------------------\n");
}
Let make it clear, if an array named list[ ] is a declared array, the expression *list is the array’s first element, *(list + 1) is the array’s second element, and so on.
Generally, the relationship is as follows:
*(list) == list[0] // first element
*(list + 1) == list[1] // second element
*(list + 2) == list[2] // third element
...
...
*(array + n) == list[n] // the nth element
So, you can see the equivalence of array subscript notation and array pointer notation.
Pointers may be arrayed like any other data type. The declaration for an int pointer array of size 20 is:
int *arrayPtr[20];
To assign the address of an integer variables called var to the first element of the array, we could write something like this:
// assign the address of variable var to the first arrayPtr element
arrayPtr[0] = &var;
Graphically can be depicted as follows:
Figure 8.10
To find the value stored in var, we could write something like this:
*arrayPtr[0]
To pass an array of pointers to a function, we simply call the function with the array’s name without any index/subscript, because this is an automatically a pointer to the first element of the array, as explained before.
For example, to pass the array named arrayPtr to viewArray function, we write the following statement:
viewArray(arrayPtr);
The following program example demonstrates the passing of a pointer array to a function. It first declares and initializes the array variable var (not a pointer array).
Then it assigns the address of each element (var[i]) to the corresponding pointer element (arrayPtr[i]).
Next, the array arrayPtr is passed to the array’s parameter q in the function viewArray(). The function displays the elements pointed to by q (that is the values of the elements in array var) and then passes control back to main().
// a program that passes a pointer array to a function
#include <iostream>
using namespace std;
// a function prototype for viewArray
void viewArray(int *[ ]);
void main()
{
// declare and initialize the array variables...
int i,*arrayPtr[7], var[7]={3,4,4,2,1,3,1};
// loop through the array...
for(i=0; i<7; i++)
// arrayPtr[i] is assigned with the address of var[i]
arrayPtr[i] = &var[i];
// a call to function viewArray,
// pass along the pointer to the
//1st array element
viewArray(arrayPtr);
cout<<endl;
}
// an arrayPtr is now passed to parameter q,
// q[i] now points to var[i]
void viewArray(int *q[ ])
{
int j;
// displays the element var[i] pointed to by q[j]
// followed by a space. No value is returned
// and control reverts to main()
for(j = 0; j < 7; j++)
cout<<*q[j]<<" ";
}
-----------------------------------------------------------------------------------------------
Graphically, the construct of a pointer to pointer can be depicted as shown below. pointer_one is the first pointer, pointing to the second pointer, pointer_two and finally pointer_two is pointing to a normal variable num that hold integer 10.
Figure 8.11
Another explanation, from the following figure, a pointer to a variable (first figure) is a single indirection but if a pointer points to another pointer (the second figure), then we have a double or multiple indirections.
Figure 8.12
For the second figure, the second pointer is not a pointer to an ordinary variable, but rather, a pointer to another pointer that points to an ordinary pointer.
In other words, the second pointer points to the first pointer, which in turn points to the variable that contains the data value.
In order to indirectly access the target value pointed to by a pointer to a pointer, the asterisk operator must be applied twice. For example, the following declaration:
int **SecondPtr;
The code tells the compiler that SecondPtr is a pointer to a pointer of type integer. Pointer to pointer is rarely used but you will find it regularly in programs that accept argument(s) from command line.
Consider the following declarations:
char chs; /* a normal character variable */
char *ptchs; /* a pointer to a character */
char **ptptchs; /* a pointer to a pointer to a character */
If the variables are related as shown below:
Figure 8.13
We can do some assignment like this:
chs = 'A';
ptpch = &chs;
ptptpch = ptchs;
Recall that char * refers to a NULL terminated string. So one common way is to declare a pointer to a pointer to a string something like this:
Figure 8.14
Taking this one stage further we can have several strings being pointed to by the integer pointers (instead of char) as shown below.
Figure 8.15
Then, we can refer to the individual string by using ptptchs[0], ptptchs[1],…. and generally, this is identical to declaring:
char *ptptchs[ ] /* an array of pointer */
Or from Figure 8.15:
char **ptptchs
Thus, programs that accept argument(s) through command line, the main() parameter list is declared as follows:
int main(int argc, char **argv)
Or something like this:
int main(int argc, char *argv[ ])
Where the argc (argument counter) and argv (argument vector) are equivalent to ptchs and ptptchs respectively.
For example, program that accept command line argument(s) such as echo:
C:\>echo This is command line argument
This is command line argument
Here we have:
Figure 8.16
/* a program to print arguments from command line */
/* run this program at the command prompt */
#include <stdio.h>
/*or int main(int argc, *argv[ ])*/
int main(int argc, char **argv)
{
int i;
printf("argc = %d\n\n", argc);
for (i=0; i<argc; ++i)
printf("argv[%d]: %s\n", i, argv[i]);
return 0;
}
Another silly program example :o):
// pointer to pointer...
#include <stdio.h>
int main(void)
{
int **theptr;
int *anotherptr;
int data = 200;
anotherptr = &data;
// assign the second pointer address to the first pointer...
theptr = &anotherptr;
printf("The actual data, **theptr = %d\n", **theptr);
printf("\nThe actual data, *anotherptr = %d\n", *anotherptr);
printf("\nThe first pointer pointing to an address, theptr = %p\n", theptr);
printf("\nThis should be the second pointer address, &anotherptr = %p\n", &anotherptr);
printf("\nThe second pointer pointing to address(= hold data),\nanotherptr = %p\n", anotherptr);
printf("\nThen, its own address, &anotherptr = %p\n", &anotherptr);
printf("\nThe address of the actual data, &data = %p\n", &data);
printf("\nNormal variable, the data = %d\n", data);
return 0;
}
Because of C functions have addresses we can use pointers to point to C functions. If we know the function’s address then we can point to it, which provides another way to invoke it.
Function pointers are pointer variables which point to functions. Function pointers can be declared, assigned values and then used to access the functions they point to. The declaration is as the following:
int (*funptr)();
Here, funptr is declared as a pointer to a function that returns int data type.
The interpretation is the de-referenced value of funptr, that is (*funptr) followed by () which indicates a function, which returns integer data type.
The parentheses are essential in the declarations because of the operators’ precedence. The declaration without the parentheses as the following:
int *funptr();
Will declare a function funptr that returns an integer pointer that is not our intention in this case. In C, the name of a function, used in an expression by itself, is a pointer to that function. For example, if a function, testfun() is declared as follows:
int testfun(int x);
The name of this function, testfun is a pointer to that function. Then, we can assign them to pointer variable funptr, something like this:
funptr = testfun;
The function can now be accessed or called, by dereferencing the function pointer:
/* calls testfun() with x as an argument then assign to the variable y */
y = (*funptr)(x);
Function pointers can be passed as parameters in function calls and can be returned as function values.
Use of function pointers as parameters makes for flexible functions and programs. It’s common to use typedefs with complex types such as function pointers. You can use this typedef name to hide the cumbersome syntax of function pointers. For example, after defining:
typedef int (*funptr)();
The identifier funptr is now a synonym for the type of ‘a pointer to function takes no arguments, returning int type’. This typedef would make declaring pointers such as testvar as shown below, considerably easier:
funptr testvar;
Another example, you can use this type in a sizeof() expression or as a function parameter as shown below:
/* get the size of a function pointer */
unsigned ptrsize = sizeof (int (*funptr)());
/* used as a function parameter */
void signal(int (*funptr)());
Let try a simple program example using function pointer.
/* invoking function using function pointer */
#include <stdio.h>
int somedisplay();
int main()
{
int (*func_ptr)();
/* assigning a function to function pointer
as normal variable assignment */
func_ptr = somedisplay;
/* checking the address of function */
printf("\nAddress of function somedisplay() is %p", func_ptr);
/* invokes the function somedisplay() */
(*func_ptr)() ;
return 0;
}
int somedisplay()
{
printf("\n--Displaying some texts--\n");
return 0;
}
Another example with an argument.
#include <stdio.h>
/* function prototypes */
void funct1(int);
void funct2(int);
/* making FuncType an alias for the type
'function with one int argument and no return value'.
This means the type of func_ptr is 'pointer to function
with one int argument and no return value'. */
typedef void FuncType(int);
int main(void)
{
FuncType *func_ptr;
/* put the address of funct1 into func_ptr */
func_ptr = funct1;
/* call the function pointed to by func_ptr with an argument of 100 */
(*func_ptr)(100);
/* put the address of funct2 into func_ptr */
func_ptr = funct2;
/* call the function pointed to by func_ptr with an argument of 200 */
(*func_ptr)(200);
return 0;
}
/* function definitions */
void funct1 (testarg)
{printf("funct1 got an argument of %d\n", testarg);}
void funct2 (testarg)
{printf("funct2 got an argument of %d\n", testarg);}
The following codes in the program example:
func_ptr = funct1;
(*func_ptr)(100);
Can also be written as:
func_ptr = &funct1;
(*func_ptr)(100);
Or
func_ptr = &funct1;
func_ptr(100);
Or
func_ptr = funct1;
func_ptr(100);
As we have discussed before, we can have an array of pointers to an int, float and string. Similarly we can have an array of pointers to a function. It is illustrated in the following program example.
/* an array of pointers to function */
#include <stdio.h>
/* functions' prototypes */
int fun1(int, double);
int fun2(int, double);
int fun3(int, double);
/* an array of a function pointers */
int (*p[3]) (int, double);
int main()
{
int i;
/* assigning address of functions to array pointers */
p[0] = fun1;
p[1] = fun2;
p[2] = fun3;
/* calling an array of function pointers with arguments */
for(i = 0; i <= 2; i++)
(*p[i]) (100, 1.234);
return 0;
}
/* functions' definition */
int fun1(int a, double b)
{
printf("a = %d b = %f", a, b);
return 0;
}
int fun2(int c, double d)
{
printf("\nc = %d d = %f", c, d);
return 0;
}
int fun3(int e, double f)
{
printf("\ne = %d f = %f\n", e, f);
return 0;
}
----------------------------------------------------------------------------
In the above program we take an array of pointers to function int (*p[3]) (int, double). Then, we store the addresses of three function fun1(), fun2(), fun3() in array (int *p[ ]). In the for loop we consecutively call each function using their addresses stored in array.
For function and array, the only way an array can be passed to a function is by means of a pointer.
Before this, an argument is a value that the calling program passes to a function. It can be int, a float or any other simple data type, but it has to be a single numerical value.
The argument can, therefore, be a single array element, but it cannot be an entire array.
If an entire array needs to be passed to a function, then you must use a pointer.
As said before, a pointer to an array is a single numeric value (the address of the array’s first element).
Once the value of the pointer (memory address) is passed to the function, the function knows the address of the array and can access the array elements using pointer notation.
Then how does the function know the size of the array whose address it was passed?
Remember! The value passed to a function is a pointer to the first array element. It could be the first of 10 elements or the first of 10000 or what ever the array size.
The method used for letting a function knows an array’s size, is by passing the function the array size as a simple int type argument.
Thus the function receives two arguments:
-
A pointer to the first array element and
-
An integer specifying the number of elements in the array, the array size.
The following program example illustrates the use of a pointer to a function. It uses the function prototype float (*ptr) (float, float) to specify the number and types of arguments. The statement ptr = &minimum assigns the address of minimum() to ptr.
The statement small = (*ptr)(x1, x2); calls the function pointed to by (*ptr), that is the function minimum() which then returns the smaller of the two values.
// pointer to a function
#include <iostream>
using namespace std;
// function prototypes...
float minimum(float, float);
// (*ptr) is a pointer to function of type float
float (*ptr)(float, float);
void main()
{
float x1, x2, small;
// assigning address of minimum() function to ptr
ptr = minimum;
cout<<"\nEnter two numbers, separated by space: ";
cin>>x1>>x2;
// call the function pointed by ptr small has the return value
small = (*ptr)(x1, x2);
cout<<"\smaller number is "<<small<<endl;
}
float minimum(float y1, float y2)
{
if (y1 < y2)
return y1;
else
return y2;
}
Study the program's source code and the output.
C & C++ programming tutorials
Also the exercises in the Indirection Operator lab worksheet 1, lab worksheet 2 and lab worksheet 3. | http://www.tenouk.com/Module8a.html | crawl-001 | refinedweb | 2,860 | 54.76 |
#!/usr/bin/env python
from datetime import date
def todo(what):
print(what)
while True: # Yes, infinite feeling
todo('Don\'t search in LP. Search in Google by Ubuntu Trusty bugs and not by Ubuntu 14.04 bugs')
todo('Just click in Report a bug')
todo('Read an awesome page over 4.000 words about how to fill bugs')
if date.today() == date(2014, 04, 17): # It was not really infinite :)
todo('
Do something else!')
break
I'm sorry, but it is frustrating not being able to help by an infinite loop :)
To avoid the redirect:
or
ubuntu-bug package
Otherwise if one joins bug squad or some such team, redirect is also deactivated.
Thanks Dmitrijs for your comment, but I don't see how "ubuntu-bug" (ubuntu-bugcontrol-tools) is the same as Ubuntu Trusty bugs. Best regards :)
ubuntu-bug is the best way to file bugs. If you just run that command, it allows you to choose from various options what the issue you're getting might relate to. Otherwise, you can specify the name of a package as you run ubuntu-bug (like Dmitrijs pointed out).
The reason we say it's best, and why there's a redirect on the Report a Bug link, is that this makes sure there is more information on the bug which developers are likely to ask for anyway. So in this case, already having this included in the bug report hopefully avoids you getting asked to give more information, and tends to make the time from bug reporting to bug being fixed shorter.
Oh! I understand now, ubuntu-bug is an app, not a LP package! Thanks Mathieu and Dmitrijs.
One question more please, is there any way to include a submitted bug into the Trusty bugs?
Thanks in advance! | http://thinkonbytes.blogspot.com/2013/12/how-can-i-send-bug-for-ubuntu-1404.html | CC-MAIN-2015-11 | refinedweb | 300 | 70.84 |
While emulators are a great way to test apps, there are a number of reasons why deploying to the web and having the ability to run on several devices simultaneously is important.
So let’s explore AWS Amplify with Ionic 4 and React. In this post I’ll just scratch the surface by showing you how to deploy to AWS Amplify using the Amplify CLI, but stay tuned for possible future posts about more on how to use and build on Amplify.
Creating Our Ionic React App
Assuming you have the Ionic CLI installed, let’s create our app:
$ ionic start ionic-cra blank --type=react --capacitor
The standard Ionic React app template comes with the Ionic Router, some bare bones TypeScript, and ready for us to start coding. After navigating to the
pages folder, let’s open up
Home.tsx and begin with something simple.
import React from "react"; import { IonContent, IonCard, IonCardContent, IonImg, IonPage, IonToolbar, IonTitle } from "@ionic/react"; import Waves from "../images/waves.jpg"; const Home: React.FC = () => ( <IonPage> <IonContent color="primary"> <IonToolbar> <IonTitle>The Ocean</IonTitle> </IonToolbar> <IonCard color="light"> <IonCardContent> <IonImg src={Waves} /> </IonCardContent> </IonCard> </IonContent> </IonPage> ); export default Home;
Getting Started with AWS Amplify
To begin, we’ll install the Amplify CLI globally and configure our project.
$ npm install -g @aws-amplify/cli $ amplify configure
The
configure command will open up your browser and prompt you to login to your AWS console. We can configure Amplify any number of ways, but the defaults seem to do the trick for at least getting started.
However you see fit, define the following:
- Your region, username, access type (I chose Programmatic), permissions (Administrator Access), and optional tags. Next, create your user, save your credentials somewhere safe, and login with the default profile.
Adding Amplify to Your Project
This next part is a bit involved, but fairly simple nonetheless. Again, I used the defaults that I could.
$ amplify init
Project name: default Environment name: dev Editor: Visual Studio Code Type of App: javascript Framework: ionic Source directory: src Distribution directory: build Build command: ionic build Start command: ionic serve AWS Profile? Yes, default
$ amplify hosting add
For my environment, I went with
DEV (S3 only with HTTP) and the default bucket name, as well as index.html for the website and error docs.
Finally, we can publish our app with the publish command.
$ amplify publish
With a bit of patience, your browser should open up to your new app. Save your new link so you can test away on any device with a web browser and get a better feel for your new app.
| https://alligator.io/ionic/ionic-4-react-aws-amplify/ | CC-MAIN-2020-34 | refinedweb | 436 | 51.38 |
Advanced Namespace Tools blog 24 December 2016
Security Implications of Writable /proc/pid/ns?
A Sample Use-case
While investigating aspects of the 2nd edition of Plan 9 released back in 1995, I happened to have a good use for the ANTS kernel modification which allows process namespace to be modified by writing namespace operations to the /proc/pid/ns file. It was relatively trivial, but still a decent practical example.
The Plan9 dossrv utility is a server which runs in the background and gives you an entry in /srv, usually as /srv/dos. To use it to read a filesystem, you enter a command like:
mount /srv/dos /n/flop /usr/glenda/2nded.disk1
Which should result in you being able to read the files from 2nded.disk1 at /n/flop. Because I had started the dossrv in early boot to access files in the 9fat partition, it was not running in the full standard namespace, so I got this error:
mount: mount /n/flop: '/usr/glenda/2nd_ed.disk1' does not exist
The standard thing to do would probably be to start a new instance of dossrv within the standard namespace, so there would be two copies of it running with different pipes available in /srv. Rather than run an extra server, I did:
ps -a |grep dossrv echo 'mount /srv/boot /net.alt' >/proc/120/ns mount /srv/dos /n/flop /net.alt/usr/glenda/2nd_ed.disk1
In this example, net.alt is just a random unused mountpoint for dossrv to put the main filesystem into its namespace. This is one of the simplest ways that writable proc/ns is useful: it lets you modify the namespace of long-running background processes and services to access data that they couldn't reach otherwise.
Discussing Possible Security Implications
I mentioned this in IRC, and it provoked a discussion about whether or not this mechanism changes or violates any of the assumptions that underlie the Plan 9 security model. Many people have an instinctive feeling that it does - after all, usually a process is only permitted to modify its own namespace (and any other processes within the same namespace group as determined by how rfork is used). You can even import /proc from a remote machine and modify the namespace of remote processes, if they are owned by your user. Doesn't that open new holes in the security model?
I don't believe it does, because the Plan 9 security model already allows you to attach a debugger such as Acid to the /proc of remote machines, and directly control/modify running code and memory of processes you own. This is inherently more powerful and invasive than what the new proc/ns control interface allows. Assuming that things are coded correctly to follow the standard permissions model and special cases such as RFNOMNT and private memory setting, there shouldn't be any exploitable loopholes created by writing namespace commands to proc/pid/ns.
Special Treatment of 'None' User and a Tiny Patch
The discussion did lead to one change, triggered by the question of whether or not the kernel treats the user 'none' as a special case. I had a recollection of seeing some code in the kernel which did check for whether or not the user was 'none', and a quick grep showed this was the case for some things in devproc.c:
/* * none can't read or write state on other * processes. This is to contain access of * servers running as none should they be * subverted by, for example, a stack attack. */ static void nonone(Proc *p) { if(p == up) return; if(strcmp(up->user, "none") != 0) return; if(iseve()) return; error(Eperm); }
I had not included this special case handling of user 'none' in my code for ns operations. Fixing this required adding a single line to my modified devproc.c:
case Qns: // print("procnsreq on p, %s, %ld\n", a, n); if(p->debug.locked == 1) qunlock(&p->debug); + nonone(p); procnsreq(p, va, n); break;
And with that change in place, user none cannot modify the namespace of other processes owned by user none. Thanks to hiro for the valuable discussion and directing my attention to possible issues with the 'none' user! | http://doc.9gridchan.org/blog/161224.procns.security | CC-MAIN-2017-22 | refinedweb | 708 | 58.62 |
I have this code, where the dictionary keys are functions and the values are lists which contains keywords. It searches through a listed
t
NowThis
count
t
'hello hi'
dct = {doThis:['hi','hello'],
doThat:['bye','goodbye']}
t = 'hi there' # or 'test'|'goodbye'|'hello hi'
count=0
for listValue in t.split():
if count > 1 or count < 0:
break
elif listValue in [n for v in dct.values() for n in v]:
for key,vl in dct.iteritems():
if listValue in vl:
key()
count+=1
elif count==0:
nowThis()
count -=1
[key() for listValue in t.split() if listValue in [n for v in dct.values() for n in v] for key,vl in dct.iteritems() if listValue in vl]
As has been explained in the comments there is no benefit in using a list comprehension here. It would actually be slower, since you'd be creating a list you don't actually want. Also, it is considered poor design to use a list comp purely for its side effects.
As I mentioned in the comments, your dictionary design is the wrong way around, so you aren't getting the benefits of using a dictionary, i.e., a dict can quickly test if a key is present, and it can quickly retrieve the value associated with a key.
Assuming your current code does what you want, here's a better way to write it.
def do_this(): print 'Do This!\n' def do_that(): print 'Do That!\n' def now_this(): print 'Now This!\n' dct = { 'hi': do_this, 'hello': do_this, 'bye': do_that, 'goodbye': do_that, } data = ('hi there', 'python', 'goodbye hello', '') for t in data: v = t.split(None, 1)[0] if t else '' print [t, v] dct.get(v, now_this)()
output
['hi there', 'hi'] Do This! ['python', 'python'] Now This! ['goodbye hello', 'goodbye'] Do That! ['', ''] Now This!
Here's a more compact version of that
for loop without the
for t in data: dct.get(t.split(None, 1)[0] if t else '', now_this)()
We use a conditional expression (
t.split(None, 1)[0] if t else '') so we can handle when
t is an empty string. Here's an alternative version that's less readable. :)
for t in data: dct.get((t.split(None, 1) or [''])[0], now_this)()
If you can guarantee that
t will never be an empty string, then you can use the simple version:
for t in data: dct.get(t.split(None, 1)[0], now_this)() | https://codedump.io/share/OUNbPafgt3NH/1/shorten-loop-to-list-comprehension-with-multiple-for39s-and-ifselifs | CC-MAIN-2017-17 | refinedweb | 406 | 75.5 |
.
4.5.1. NDArray¶
In its simplest form, we can directly use the
save and
load
functions to store and read NDArrays separately. This works just as
expected.
In [1]:
from mxnet import nd from mxnet.gluon import nn x = nd.arange(4) nd.save('x-file', x)
Then, we read the data from the stored file back into memory.
In [2]:
x2 = nd.load('x-file') x2
Out[2]:
[ [0. 1. 2. 3.] <NDArray 4 @cpu(0)>]
We can also store a list of NDArrays and read them back into memory.
In [3]:
y = nd.zeros(4) nd.save('x-files', [x, y]) x2, y2 = nd.load('x-files') (x2, y2)
Out[3]:
( .
In [4]:
mydict = {'x': x, 'y': y} nd.save('mydict', mydict) mydict2 = nd.load('mydict') mydict2
Out[4]:
{'x': [0. 1. 2. 3.] <NDArray 4 @cpu(0)>, 'y': [0. 0. 0. 0.] <NDArray 4 @cpu(0)>} is quite advantageous here since we can simply define a model without the need to put actual values in place. Let’s start with our favorite MLP.
In [5]:’.
In [6]:
# Use save_parameters in MXNet of later versions, such as 1.2.1. net.save_params('mlp.params')
To check whether we are able to recover the model we instantiate a clone of the original MLP model. Unlike the random initialization of model parameters, here we read the parameters stored in the file directly.
In [7]:
clone = MLP() # Use load_parameters in MXNet of later versions, such as 1.2.1. clone.load_params('mlp.params')
Since both instances have the same model parameters, the computation
result of the same input
x should be the same. Let’s verify this.
In [8]:
yclone = clone(x) yclone == y
Out[8]:
[[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.] [1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]] <NDArray 2x10 @cpu(0)>
4.5.3. Summary¶
- The
saveand
loadfunctions can be used to perform File I/O for NDArray objects.
- The
load_paramsand
save_paramsfunctions allow us to save entire sets of parameters for a network in Gluon.
- Saving the architecture has to be done in code rather than in parameters.
4.5.4. Problems? | http://gluon.ai/chapter_deep-learning-computation/read-write.html | CC-MAIN-2019-04 | refinedweb | 361 | 68.67 |
Subject: Re: [boost] [type_traits] Rewrite and dependency free version
From: Niall Douglas (s_sourceforge_at_[hidden])
Date: 2015-02-03 19:14:46
On 3 Feb 2015 at 14:20, Matt Calabrese wrote:
> > For the record, you can simulate clang's modules using any compiler
> > by simply compiling in everything as a single translation unit. A
> > simple shell script can create a file which includes all the source
> > files at once.
> >
> > If it works as a single translation unit, it'll work under clang
> > modules. If it doesn't, well then you've got some ODR violation going
> > on (very common in source files which assume they own their
> > translation unit) and it may or may not work under clang modules
> > depending.
>
> Wait, I'm confused... is this actually true? Maybe I'm missing something,
> but I can imagine a simple example of an anonymous namespace in two
> translation units that define similar functions (i.e. each would have been
> defined and used in their own cpp in the multi-translation-unit version).
> This is not an ODR violation but it would fail compilation if done as you
> suggest. I'm not very familiar with modules as they are currently, but it
> seems like the assertion must be erroneous.
That is a very good point.
Ok, as I said before, but excluding anonymous namespaces. I never use
anonymous namespaces in my own code, and so never encountered that
problem. I'd imagine a macro concatenating __COUNTER__ solution with
inline namespaces could let you work around anonymous namespace
collision. Or else just remove and name them explicitly, or make them
named and inline.
(BTW I am actually fairly sure clang's C++ Modules support currently
doesn't understand anonymous namespaces, and in fact I am right
unintentionally. But if so I am right through accident, not design).
Niall
-- ned Productions Limited Consulting
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2015/02/219882.php | CC-MAIN-2021-39 | refinedweb | 331 | 65.12 |
Introduction
If things don’t go your way in predictive modeling, use XGboost. XGBoost algorithm has become the ultimate weapon of many data scientist. It’s a highly sophisticated algorithm, powerful enough to deal with all sorts of irregularities of data.
Building a model using XGBoost is easy. But, improving the model using XGBoost is difficult (at least I struggled a lot). This algorithm uses multiple parameters. To improve the model, parameter tuning is must. It is very difficult to get answers to practical questions like – Which set of parameters you should tune ? What is the ideal value of these parameters to obtain optimal output ?
This article is best suited to people who are new to XGBoost. In this article, we’ll learn the art of parameter tuning along with some useful information about XGBoost. Also, we’ll practice this algorithm using a data set in Python.
What should you know ?
XGBoost (eXtreme Gradient Boosting) is an advanced implementation of gradient boosting algorithm. Since I covered Gradient Boosting Machine in detail in my previous article – Complete Guide to Parameter Tuning in Gradient Boosting (GBM) in Python, I highly recommend going through that before reading further. It will help you bolster your understanding of boosting in general and parameter tuning for GBM.
Special Thanks: Personally, I would like to acknowledge the timeless support provided by Mr. Sudalai Rajkumar (aka SRK), currently AV Rank 2. This article wouldn’t be possible without his help. He is helping us guide thousands of data scientists. A big thanks to SRK!
Table of Contents
- The XGBoost Advantage
- Understanding XGBoost Parameters
- Tuning Parameters (with Example)
1. The XGBoost Advantage
I’ve always admired the boosting capabilities that this algorithm infuses in a predictive model. When I explored more about its performance and science behind its high accuracy, I discovered many advantages:
-
- User can start training an XGBoost model from its last iteration of previous run. This can be of significant advantage in certain specific applications.
- GBM implementation of sklearn also has this feature so they are even on this point.
I hope now you understand the sheer power XGBoost algorithm. Note that these are the points which I could muster. You know a few more? Feel free to drop a comment below and I will update the list.
Did I whet your appetite ? Good. You can refer to following web-pages for a deeper understanding:
2. XGBoost Parameters
The overall parameters have been divided into 3 categories by XGBoost authors:
- General Parameters: Guide the overall functioning
- Booster Parameters: Guide the individual booster (tree/regression) at each step
- Learning Task Parameters: Guide the optimization performed
I will give analogies to GBM here and highly recommend to read this article to learn from the very basics.
General Parameters
These define the overall functionality of XGBoost.
- booster [default=gbtree]
- Select the type of model to run at each iteration. It has 2 options:
- gbtree: tree-based models
- gblinear: linear models
- silent [default=0]:
- Silent mode is activated is set to 1, i.e. no running messages will be printed.
- It’s generally good to keep it 0 as the messages might help in understanding the model.
- nthread [default to maximum number of threads available if not set]
- This is used for parallel processing and number of cores in the system should be entered
- If you wish to run on all cores, value should not be entered and algorithm will detect automatically
There are 2 more parameters which are set automatically by XGBoost and you need not worry about them. Lets move on to Booster parameters.
Booster Parameters
Though there are 2 types of boosters, I’ll consider only tree booster here because it always outperforms the linear booster and thus the later is rarely used.
- eta [default=0.3]
- Analogous to learning rate in GBM
- Makes the model more robust by shrinking the weights on each step
- Typical final values to be used: 0.01-0.2
- min_child_weight [default=1]
- Defines the minimum sum of weights of all observations required in a child.
- This is similar to min_child_leaf in GBM but not exactly. This refers to min “sum of weights” of observations while GBM has min “number of observations”.
- Used to control over-fitting. Higher values prevent a model from learning relations which might be highly specific to the particular sample selected for a tree.
- Too high values can lead to under-fitting hence, it should be tuned using CV.
- max_depth [default=6]
- The maximum depth of a tree, same as GBM.
- Used to control over-fitting as higher depth will allow model to learn relations very specific to a particular sample.
- Should be tuned using CV.
- Typical values: 3-10
- [default=0]
- A node is split only when the resulting split gives a positive reduction in the loss function. Gamma specifies the minimum loss reduction required to make a split.
- Makes the algorithm conservative. The values can vary depending on the loss function and should be tuned.
- max_delta_step [default=0]
- In.
- This is generally not used but you can explore further if you wish.
- subsample [default=1]
- Same as the subsample of GBM. Denotes the fraction of observations to be randomly samples for each tree.
- Lower values make the algorithm more conservative and prevents overfitting but too small values might lead to under-fitting.
- Typical values: 0.5-1
- colsample_bytree [default=1]
- Similar to max_features in GBM. Denotes the fraction of columns to be randomly samples for each tree.
- Typical values: 0.5-1
- colsample_bylevel [default=1]
- Denotes the subsample ratio of columns for each split, in each level.
- I don’t use this often because subsample and colsample_bytree will do the job for you. but you can explore further if you feel so.
- lambda [default=1]
- L2 regularization term on weights (analogous to Ridge regression)
- This used to handle the regularization part of XGBoost. Though many data scientists don’t use it often, it should be explored to reduce overfitting.
- alpha [default=0]
- L1 regularization term on weight (analogous to Lasso regression)
- Can be used in case of very high dimensionality so that the algorithm runs faster when implemented
- scale_pos_weight [default=1]
- A value greater than 0 should be used in case of high class imbalance as it helps in faster convergence.
Learning Task Parameters
These parameters are used to define the optimization objective the metric to be calculated at each step.
-
- seed [default=0]
- The random number seed.
- Can be used for generating reproducible results and also for parameter tuning.
If you’ve been using Scikit-Learn till now, these parameter names might not look familiar. A good news is that xgboost module in python has an sklearn wrapper called XGBClassifier. It uses sklearn style naming convention. The parameters names which will change are:
- eta –> learning_rate
- lambda –> reg_lambda
- alpha –> reg_alpha
You must be wondering that we have defined everything except something similar to the “n_estimators” parameter in GBM. Well this exists as a parameter in XGBClassifier. However, it has to be passed as “num_boosting_rounds” while calling the fit function in the standard xgboost implementation.
I recommend you to go through the following parts of xgboost guide to better understand the parameters and codes:
- XGBoost Parameters (official guide)
- XGBoost Demo Codes (xgboost GitHub repository)
- Python API Reference (official guide)
3. Parameter Tuning with Example
We will take the data set from Data Hackathon 3.x AV hackathon, same as that taken in the GBM article. import xgboost as xgb from xgboost.sklearn import XGBClassifier'
Note that I have imported 2 forms of XGBoost:
- xgb – this is the direct xgboost library. I will use a specific function “cv” from this library
- XGBClassifier – this is an sklearn wrapper for XGBoost. This allows us to use sklearn’s Grid Search with parallel processing in the same way we did for GBM
Before proceeding further, lets define a function which will help us create XGBoost models and perform cross-validation. The best part is that you can take this function as it is and use it later for your own models.
def modelfit(alg, dtrain, predictors,useTrainCV=True, cv_folds=5, early_stopping_rounds=50): if useTrainCV: xgb_param = alg.get_xgb_params() xgtrain = xgb.DMatrix(dtrain[predictors].values, label=dtrain[target].values) cvresult = xgb.cv(xgb_param, xgtrain, num_boost_round=alg.get_params()['n_estimators'], nfold=cv_folds, metrics='auc', early_stopping_rounds=early_stopping_rounds, show_progress=False) alg.set_params(n_estimators=cvresult.shape[0]) #Fit the algorithm on the data alg.fit(dtrain[predictors], dtrain['Disbursed'],eval_metric='auc') #Predict training set: dtrain_predictions = alg.predict(dtrain[predictors]) dtrain_predprob = alg.predict_proba(dtrain[predictors])[:,1] #Print model report: print "\nModel Report" print "Accuracy : %.4g" % metrics.accuracy_score(dtrain['Disbursed'].values, dtrain_predictions) print "AUC Score (Train): %f" % metrics.roc_auc_score(dtrain['Disbursed'], dtrain_predprob) feat_imp = pd.Series(alg.booster().get_fscore()).sort_values(ascending=False) feat_imp.plot(kind='bar', title='Feature Importances') plt.ylabel('Feature Importance Score')
This code is slightly different from what I used for GBM. The focus of this article is to cover the concepts and not coding. Please feel free to drop a note in the comments if you find any challenges in understanding any part of it. Note that xgboost’s sklearn wrapper doesn’t have a “feature_importances” metric but a get_fscore() function which does the same job.
General Approach for Parameter Tuning
We will use an approach similar to that of GBM here. The various steps to be performed are:
- Choose a relatively high learning rate. Generally a learning rate of 0.1 works but somewhere between 0.05 to 0.3 should work for different problems. Determine the optimum number of trees for this learning rate. XGBoost has a very useful function called as “cv” which performs cross-validation at each boosting iteration and thus returns the optimum number of trees required.
- Tune tree-specific parameters ( max_depth, min_child_weight, gamma, subsample, colsample_bytree) for decided learning rate and number of trees. Note that we can choose different parameters to define a tree and I’ll take up an example here.
- Tune regularization parameters (lambda, alpha) for xgboost which can help reduce model complexity and enhance performance.
- Lower the learning rate and decide the optimal parameters .
Let us look at a more detailed step by step approach.
Step 1: Fix learning rate and number of estimators for tuning tree-based parameters
In order to decide on boosting parameters, we need to set some initial values of other parameters. Lets take the following values:
- max_depth = 5 : This should be between 3-10. I’ve started with 5 but you can choose a different number as well. 4-6 can be good starting points.
- min_child_weight = 1 : A smaller value is chosen because it is a highly imbalanced class problem and leaf nodes can have smaller size groups.
- gamma = 0 : A smaller value like 0.1-0.2 can also be chosen for starting. This will anyways be tuned later.
- subsample, colsample_bytree = 0.8 : This is a commonly used used start value. Typical values range between 0.5-0.9.
- scale_pos_weight = 1: Because of high class imbalance.
Please note that all the above are just initial estimates and will be tuned later. Lets take the default learning rate of 0.1 here and check the optimum number of trees using cv function of xgboost. The function defined above will do it for us.
#Choose all predictors except target & IDcols predictors = [x for x in train.columns if x not in [target, IDcol]] xgb1 = XGBClassifier( learning_rate =0.1, n_estimators=1000, max_depth=5, min_child_weight=1, gamma=0, subsample=0.8, colsample_bytree=0.8, objective= 'binary:logistic', nthread=4, scale_pos_weight=1, seed=27) modelfit(xgb1, train, predictors)
As you can see that here we got 140 as the optimal estimators for 0.1 learning rate. Note that this value might be too high for you depending on the power of your system. In that case you can increase the learning rate and re-run the command to get the reduced number of estimators.
Note: You will see the test AUC as “AUC Score (Test)” in the outputs here. But this would not appear if you try to run the command on your system as the data is not made public. It’s provided here just for reference. The part of the code which generates this output has been removed here.
Step 2: Tune max_depth and min_child_weight
We tune these first as they will have the highest impact on model outcome. To start with, let’s set wider ranges and then we will perform another iteration for smaller ranges.
Important Note: I’ll be doing some heavy-duty grid searched in this section which can take 15-30 mins or even more time to run depending on your system. You can vary the number of values you are testing based on what your system can handle.
param_test1 = { 'max_depth':range(3,10,2), 'min_child_weight':range(1,6,2) } gsearch1 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=140, max_depth=5, min_child_weight=1, gamma=0, subsample=0.8, colsample_bytree=0.8, objective= 'binary:logistic', nthread=4, scale_pos_weight=1, seed=27), param_grid = param_test1, scoring='roc_auc',n_jobs=4,iid=False, cv=5) gsearch1.fit(train[predictors],train[target]) gsearch1.grid_scores_, gsearch1.best_params_, gsearch1.best_score_
Here, we have run 12 combinations with wider intervals between values. The ideal values are 5 for max_depth and 5 for min_child_weight. Lets go one step deeper and look for optimum values. We’ll search for values 1 above and below the optimum values because we took an interval of two.
param_test2 = { 'max_depth':[4,5,6], 'min_child_weight':[4,5,6] } gsearch2 = GridSearchCV(estimator = XGBClassifier( learning_rate=0.1, n_estimators=140, max_depth=5, min_child_weight=2, gamma=0, subsample=0.8, colsample_bytree=0.8, objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27), param_grid = param_test2, scoring='roc_auc',n_jobs=4,iid=False, cv=5) gsearch2.fit(train[predictors],train[target]) gsearch2.grid_scores_, gsearch2.best_params_, gsearch2.best_score_
Here, we get the optimum values as 4 for max_depth and 6 for min_child_weight. Also, we can see the CV score increasing slightly. Note that as the model performance increases, it becomes exponentially difficult to achieve even marginal gains in performance. You would have noticed that here we got 6 as optimum value for min_child_weight but we haven’t tried values more than 6. We can do that as follow:.
param_test2b = { 'min_child_weight':[6,8,10,12] } gsearch2b = GridSearchCV(estimator = XGBClassifier( learning_rate=0.1, n_estimators=140, max_depth=4, min_child_weight=2, gamma=0, subsample=0.8, colsample_bytree=0.8, objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27), param_grid = param_test2b, scoring='roc_auc',n_jobs=4,iid=False, cv=5) gsearch2b.fit(train[predictors],train[target])
modelfit(gsearch3.best_estimator_, train, predictors) gsearch2b.grid_scores_, gsearch2b.best_params_, gsearch2b.best_score_
We see 6 as the optimal value.
Step 3: Tune gamma
Now lets tune gamma value using the parameters already tuned above. Gamma can take various values but I’ll check for 5 values here. You can go into more precise values as.
param_test3 = { 'gamma':[i/10.0 for i in range(0,5)] } gsearch3 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=140, max_depth=4, min_child_weight=6, gamma=0, subsample=0.8, colsample_bytree=0.8, objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27), param_grid = param_test3, scoring='roc_auc',n_jobs=4,iid=False, cv=5) gsearch3.fit(train[predictors],train[target]) gsearch3.grid_scores_, gsearch3.best_params_, gsearch3.best_score_
This shows that our original value of gamma, i.e. 0 is the optimum one. Before proceeding, a good idea would be to re-calibrate the number of boosting rounds for the updated parameters.
xgb2 = XGBClassifier( learning_rate =0.1, n_estimators=1000, max_depth=4, min_child_weight=6, gamma=0, subsample=0.8, colsample_bytree=0.8, objective= 'binary:logistic', nthread=4, scale_pos_weight=1, seed=27) modelfit(xgb2, train, predictors)
Here, we can see the improvement in score. So the final parameters are:
- max_depth: 4
- min_child_weight: 6
- gamma: 0
Step 4: Tune subsample and colsample_bytree
The next step would be try different subsample and colsample_bytree values. Lets do this in 2 stages as well and take values 0.6,0.7,0.8,0.9 for both to start with.
param_test4 = { 'subsample':[i/10.0 for i in range(6,10)], 'colsample_bytree':[i/10.0 for i in range(6,10)] } gsearch4, scoring='roc_auc',n_jobs=4,iid=False, cv=5) gsearch4.fit(train[predictors],train[target]) gsearch4.grid_scores_, gsearch4.best_params_, gsearch4.best_score_
Here, we found 0.8 as the optimum value for both subsample and colsample_bytree. Now we should try values in 0.05 interval around these.
param_test5 = { 'subsample':[i/100.0 for i in range(75,90,5)], 'colsample_bytree':[i/100.0 for i in range(75,90,5)] } gsearch5, scoring='roc_auc',n_jobs=4,iid=False, cv=5) gsearch5.fit(train[predictors],train[target])
Again we got the same values as before. Thus the optimum values are:
- subsample: 0.8
- colsample_bytree: 0.8
Step 5: Tuning Regularization Parameters
Next step is to apply regularization to reduce overfitting. Though many people don’t use this parameters much as gamma provides a substantial way of controlling complexity. But we should always try it. I’ll tune ‘reg_alpha’ value here and leave it upto you to try different values of ‘reg_lambda’.
param_test6 = { 'reg_alpha':[1e-5, 1e-2, 0.1, 1, 100] } gsearch6, scoring='roc_auc',n_jobs=4,iid=False, cv=5) gsearch6.fit(train[predictors],train[target]) gsearch6.grid_scores_, gsearch6.best_params_, gsearch6.best_score_
We can see that the CV score is less than the previous case. But the values tried are very widespread, we should try values closer to the optimum here (0.01) to see if we get something better.
param_test7 = { 'reg_alpha':[0, 0.001, 0.005, 0.01, 0.05] } gsearch7, scoring='roc_auc',n_jobs=4,iid=False, cv=5) gsearch7.fit(train[predictors],train[target]) gsearch7.grid_scores_, gsearch7.best_params_, gsearch7.best_score_
You can see that we got a better CV. Now we can apply this regularization in the model and look at the impact:
xgb3 = XGBClassifier( learning_rate =0.1, n_estimators=1000, max_depth=4, min_child_weight=6, gamma=0, subsample=0.8, colsample_bytree=0.8, reg_alpha=0.005, objective= 'binary:logistic', nthread=4, scale_pos_weight=1, seed=27) modelfit(xgb3, train, predictors)
Again we can see slight improvement in the score.
Step 6: Reducing Learning Rate
Lastly, we should lower the learning rate and add more trees. Lets use the cv function of XGBoost to do the job again.
xgb4 = XGBClassifier( learning_rate =0.01, n_estimators=5000, max_depth=4, min_child_weight=6, gamma=0, subsample=0.8, colsample_bytree=0.8, reg_alpha=0.005, objective= 'binary:logistic', nthread=4, scale_pos_weight=1, seed=27) modelfit(xgb4, train, predictors)
Now we can see a significant boost in performance and the effect of parameter tuning is clearer.
As we come to the end, I would like to share 2 key thoughts:
- It is difficult to get a very big leap in performance by just using parameter tuning or slightly better models. The max score for GBM was 0.8487 while XGBoost gave 0.8494. This is a decent improvement but not something very substantial.
- A significant jump can be obtained by other methods like feature engineering, creating ensemble of models, stacking, etc
You can also download the iPython notebook with all these model codes from my GitHub account. For codes in R, you can refer to this article.
End Notes
This article was based on developing a XGBoost model end-to-end. We started with discussing why XGBoost has superior performance over GBM which was followed by detailed discussion on the various parameters involved. We also defined a generic function which you can re-use for making models.
Finally, we discussed the general approach towards tackling a problem with XGBoost and also worked out the AV Data Hackathon 3.x problem through that approach.
I hope you found this useful and now you feel more confident to apply XGBoost in solving a data science problem. You can try this out in out upcoming hackathons.
Did you like this article? Would you like to share some other hacks which you implement while making XGBoost models? Please feel free to drop a note in the comments below and I’ll be glad to discuss.
94 Comments | https://www.analyticsvidhya.com/blog/2016/03/complete-guide-parameter-tuning-xgboost-with-codes-python/ | CC-MAIN-2017-22 | refinedweb | 3,333 | 50.23 |
Created on 2014-04-22 23:35 by raylu, last changed 2018-10-20 00:22 by vstinner. This issue is now closed. says
.
It looks like a bug in the subprocess module e.g., if child process does:
sys.stdout.write(sys.stdin.readline())
sys.stdout.flush()
and the parent:
p.stdin.write(line) #NOTE: no flush
line = p.stdout.readline()
then a deadlock may happen with bufsize=1 (because it is equivalent to bufsize=-1 in the current code)
Surprisingly, it works if universal_newlines=True but only for C implementation of io i.e., if C extension is disabled:
import io, _pyio
io.TextIOWrapper = _pyio.TextIOWrapper
then it blocks forever even if universal_newlines=True with bufsize=1 that is clearly wrong (<-- bug) e.g., `test_universal_newlines` deadlocks with _pyio.TextIOWrapper
C implementation works because it sets `needflush` flag even if only `write_through` is provided [1]:
if (self->write_through)
needflush = 1;
else if (self->line_buffering &&
(haslf ||
PyUnicode_FindChar(text, '\r', 0, PyUnicode_GET_LENGTH(text), 1) != -1))
needflush = 1;
[1]:
Python io implementation doesn't flush with only `write_through` flag.
It doesn't deadlock if bufsize=0 whether universal_newlines=True or not.
Note: Python 2.7 doesn't deadlock with bufsize=0 and bufsize=1 in this case as expected.
What is not clear is whether it should work with universal_newline=False and bufsize=1: both current docs and Python 2.7 behaviour say that there should not be a deadlock.
I've updated the docs to mention that bufsize=1 works only with
universal_newlines=True and added corresponding tests. I've also updated the subprocess' code to pass line_buffering explicitly.
Patch is uploaded.
Related issue #21396
Thanks for the report, diagnosis and patch! Your change looks good to me. I'll commit it soon.
I've changed test_newlines to work also with Python io implementation.
I've updated the patch.
Note: tests use 10 seconds timeouts: I don't know how long it should take to read back a line from a subprocess so that the timeout would indicate a deadlock.
Perhaps.
Sorry, it seems I was wrong on the second point. Looking closer, it seems write-through mode only flushes the TextIOWrapper layer, not the underlying binary file object, whereas line-buffering mode flushes everything to the OS, so the extra line_buffering=True flag would be needed.
yes,..
to.
I've updated the patch to remove changes to test_universal_newlines
test that was fixed in revision 37d0c41ed8ad that closes #21396 issue
I've asked about thread-safety of tests on python-dev mailing list:
> The.
I'm fairly sure this hasn't been fixed in tip so I think we're still waiting on patch review. Is there an update here?
The latest patch looks good to me.
New changeset 38867f90f1d9 by Antoine Pitrou in branch '3.4':
Issue #21332: Ensure that ``bufsize=1`` in subprocess.Popen() selects line buffering, rather than block buffering.
New changeset 763d565e5840 by Antoine Pitrou in branch 'default':
Issue #21332: Ensure that ``bufsize=1`` in subprocess.Popen() selects line buffering, rather than block buffering.
Pushed! Thank you!
New changeset a2670565d8f5c502388378aba1fe73023fd8c8d4 by Victor Stinner (Alexey Izbyshev) in branch 'master':
bpo-32236: open() emits RuntimeWarning if buffering=1 for binary mode (GH-4842) | https://bugs.python.org/issue21332 | CC-MAIN-2020-50 | refinedweb | 534 | 67.55 |
When I first considered the idea of a blog, an embarrassingly long time ago, I had decided to hit things of with an introductory article outlining some of the bigger insights I have gleaned up until this point in my career. As I have been preparing that article for my second post, it has quickly become apparent that each segment is becoming too long for a convenient read! Either that or my waffle is strong!
In this passage I want to detail what I currently see as the more important lessons I have been exposed to during the start of my career in the software industry. It is my hope that this may serve 1) as a primer for early career software apprentices and career change hopefuls and 2) as a reflective log for my own selfish purposes!
The following is a distilment of, what I think are, the important and potentially time saving (for you!) lessons I have acquired to date, and it is these that shall be covered in the next few articles on this blog. These include:
1) A broad appreciation of what a “full stack developer” is 2) Bugs! Particularly where they come from (Spoiler alert: you) 3) The ritual of git 4) Abstractions 5) Continuous learning
As a primer, so that you can appreciate I am not hailing from a formal background in computing, and thus most of my perspectives are home grown, I will introduce myself:
Fortunate to have been given a job straight out of graduation by a remarkable fellow and founder of and co-founder of, I’ve had a whirlwind intro to raw, well guided, professional software craftsmanship.
Despite diving into a software development career, I did not come from an undergrad degree in computer science. Instead I spent the last year of a bioscience degree force feeding myself the knowledge I needed to pursue a career in software development. This was always a consideration of mine. A toss up between science and computing. After a gap year doing research with a with a science organisation, I knew it would, at the very least, want to move towards a computational biology field, if not full blown software development.
import python
With my self taught notions of software development I ventured down this career path. The first thing I was grateful to be exposed to was the full breadth of technologies and responsibilities at the hands of a full stack developer.
Backend to fronted, inception to deployment, refactoring and testing, databases and hosting. Naturally, at this point, I am fascinated, confused and excited by the disparate technologies and techniques I get to employ each day. It has been a brain grinding experience, even until this day. Trying to learn so many new tools, and have them speak to one another harmoniously, is a true challenge for the apprentice!
My first (and largest to date) assignment was a data analytics and mapping system built with Django and Angular. Gratefully I had a little exposure to Angular (thanks to working with these guys, but Django was all new. A little tinkering with Flask and CherryPy meant that Django was an approachable beast. Making things doubly acceptable were a designer and spec ops guru who handled the majority of deployment and visual issues.
Regardless I still had to learn and be better on all fronts. AWS for hosting, Shippable for continuous integration, MongoDB, Posgresql, Redis, Javascript/JQuery, testing and documentation. It is not all technological and code related though. Scrum, agile methodology, working with other people, planning, and accountability are all important considerations that have to be given their rightful mind share.
Meanwhile, I had the joy and freedom (and danger!) of working from home! Though we shall leave those trials, and the likes of agility, personal kanban, TDD, accountability, code review, time keeping and goal setting, for future posts.
Needless to say, the initiation for the new full stack developer is hard. Though just like starting to learn coding, a bit of determination, persistence, and a ton of reading/watching videos, pays off with growth of an employable skill set, and, importantly, the ability to begin translating your ideas to code!
My take away message would be - be daunted, be excited and focus on the long road. Let the daunting breadth of what you need to learn humble you, and continuously seek out advice and lessons from those around you, maintain your excitement by working on your own toy projects (and put them on github!). Finally, try to appreciate early on that the pursuit of software mastery is not, for most of us, a quick journey, Granted this is something I still try to challenge continuously through self improvement and study, but I recognise my position as the aspiring apprentice! | http://aaronmyatt.github.io/thestack/ | CC-MAIN-2019-09 | refinedweb | 796 | 57.61 |
This is an extrapolation of the formatting instructions in GNU's standards documentation.
New Java code for GNU Crypto should be written in "GNU style", with slight modifications for Java. The usual rules for Java code are:
Put each class into its own file, and put that file in a sequence
of directories that match that class's package. So put a class
`foo.bar.Baz' into the file
`foo/bar/Baz.java'.
Package names should be lower case, and should be a single word or abbreviation.
The basic unit of indentation is two spaces. Only spaces should be used to indent, not tabs.
Import each class used and never write the package name before the class name, unless it is needed to avoid conflicts.
List your imports alphabetically, and split large lists by package. Don't use the wildcard syntax unless you need to import a large number of classes from one package.
Write variable and class names in mixed capitals, and capitalize the first letter if it is a class name. For example, a class name would look like ``ClassName'' and a variable ``variableName''.
Section your code into logical pieces, like "Constants and fields", "Constructors", "Class methods", and "Instance methods".
Delimit each section with a form feed and a comment like the following:
// Constants and fields. // -------------------------------------------------------------------------
Keep lines under 80 columns wide.
All top-level class and interface declarations should begin at column zero, and the opening brace should be on the next line, in column zero.
Method declarations and member class declarations should begin at the next level of indentation. That is, a method declaration for the top-level class would be indented two spaces, and that method's opening and closing braces are on their own lines indented two spaces.
List modifiers in this order: access modifier, static or abstract, final, synchronized, then anything else.
Put a space between a keyword or method name and the opening parentesis, but not if the method takes no arguments:
if (condition) ... method (parameters); method();
Indent the braces for control structures like this:
for (int i = 0; i < limit; i++) { statements; }
Omit the braces for control structures if there is only a single statement in the body:
while (condition) statement;
If a parameter list or list of conditionals will not fit on a single line, break the list and indent under the opening parenthesis:
method(parameterOne, parameterTwo + 42, (Cast) parameterThree, parameterFour, parameterFive); if (conditionOne < limitOne && conditionTwo != limitTwo && conditionThree)
If the opening parenthesis starts at column 40 or greater, however, break the line and indent two spaes.
If the ``throws'' clause of a method, if any, will not fit on
the same line as the method declaration, break the line before the
throws keyword and indent two spaces:
someLongMethodName(SomeClass someParameter, AnotherClass anotherParameter) throws SomeException { }
If the list of exceptions will not fit on a single line, break the
line and indent past the
throws:
someLongMethodName(SomeClass someParameter, AnotherClass anotherParameter) throws SomeException, AnotherException, AThirdException, AFourthException { }
Chain your exceptions! It greatly aids debugging if you can get a complete stack trace:
someExceptionMethod() throws SomeException { try { someOtherExceptionMethod(); } catch (SomeOtherException soe) { SomeException se = new SomeExceptino (soe.getMessage()); se.initCause (soe); throw se; } }
Method parameters that are not changed should be marked
final, as should any member fields in an immutable
class.
The following is a basic example of how to format your source code:
/* One line to describe the class. Copyright (C) 2004. */ package ...; import ...; public class ClassName extends SuperClass implements Interface { // Constants and fields. // ------------------------------------------------------------------------- public static final Object CONSTANT; private final Object field; ^L// Constructors. // ------------------------------------------------------------------------- public ClassName(final Object field) { this.field = field; } public ClassName() { this(CONSTANT); } ^L// Class methods. // ------------------------------------------------------------------------- public static Object classMethod() throws SomeException { ...; } ^L// Instance methods. // ------------------------------------------------------------------------- public Object getField() { return field; } } 2004 $ | http://www.gnu.org/software/gnu-crypto/code-formatting.html | CC-MAIN-2015-40 | refinedweb | 622 | 54.02 |
SYNTAX
#include <slurm/slurm.h>
int slurm_kill_job (
uint32_t job_id,
uint16_t signal,
uint16_t batch_flag
);
int slurm_kill_job_step (
uint32_t job_id,
uint32_t job_step_id,
uint16_t signal
);
int slurm_signal_job (
uint32_t job_id,
uint16_t signal
);
int slurm_signal_job_step (
uint32_t job_id,
uint32_t job_step_id,
uint16_t signal
);
int slurm_terminate_job_step (
uint32_t job_id,
uint32_t job_step_id,
);
ARGUMENTS
batch_flag If non-zero then signal only the batch job shell.
- job_id
- Slurm job id number.
- job_step_id
- Slurm job step id number.
- signal
- Signal to be sent to the job or job step.
DESCRIPTION
slurm_kill_job Request that a signal be sent to either the batch job shell (if batch_flag is non-zero) or all steps of the specified job. If the job is pending and the signal is SIGKILL, the job will be terminated immediately. This function may only be successfully executed by the job's owner or user root.
slurm_kill_job_step Request that a signal be sent to a specific job step. This function may only be successfully executed by the job's owner or user root.
slurm_signal_job Request that the specified signal be sent to all steps of an existing job.
slurm_signal_job_step Request that the specified signal be sent to an existing job step.
slurm_terminate_job_step Request that terminates a job step by sending a REQUEST_TERMINATE_TASKS rpc to all slurmd of a job step.
RETURN VALUE
On success, zero is returned. On error, -1 is returned, and Slurm error code is set appropriately.
ERRORS
SLURM_PROTOCOL_VERSION_ERROR Protocol version has changed, re-link your code.
ESLURM_DEFAULT_PARTITION_NOT_SET the system lacks a valid default partition.
ESLURM_INVALID_JOB_ID the requested job id does not exist.
ESLURM_JOB_SCRIPT_MISSING the batch_flag was set for a non-batch job.
ESLURM_ALREADY_DONE the specified job has already completed and can not be modified.
ESLURM_ACCESS_DENIED the requesting user lacks authorization for the requested action (e.g. trying to delete or modify another user's job).
ESLURM_INTERCONNECT_FAILURE failed to configure the node interconnect.
SLURM_PROTOCOL_SOCKET_IMPL_TIMEOUT Timeout in communicating with Slurm controller.. | https://manpages.org/slurm_kill_job/3 | CC-MAIN-2022-21 | refinedweb | 308 | 50.43 |
Flow of Control in C++
- Relational Operators
- Loops
- Decisions
- Logical Operators
- Precedence Summary
- Other Control Statements
- Summary
- Questions
- Exercises
In This Chapter
Relational Operators
Loops
Decisions
Logical Operators
Precedence Summary
Other Control Statements
Not many programs execute all their statements in strict order from beginning to end. Most programs (like many humans) decide what to do in response to changing circumstances. The flow of control jumps from one part of the program to another, depending on calculations performed in the program. Program statements that cause such jumps are called control statements. There are two major categories: loops and decisions.
How many times a loop is executed, or whether a decision results in the execution of a section of code, depends on whether certain expressions are true or false. These expressions typically involve a kind of operator called a relational operator, which compares two values. Since the operation of loops and decisions is so closely involved with these operators, we'll examine them first.
Relational Operators
A relational operator compares two values. The values can be any built-in C++ data type, such as char, int, and float, oras we'll see laterthey can be user-defined classes. The comparison involves such relationships as equal to, less than, and greater than. The result of the comparison is true or false; for example, either two values are equal (true), or they're not (false).
Our first program, relat, demonstrates relational operators in a comparison of integer variables and constants.
// relat.cpp // demonstrates relational operators #include <iostream> using namespace std; int main() { int numb; cout << "Enter a number: "; cin >> numb; cout << "numb<10 is " << (numb < 10) << endl; cout << "numb>10 is " << (numb > 10) << endl; cout << "numb==10 is " << (numb == 10) << endl; return 0; }
This program performs three kinds of comparisons between 10 and a number entered by the user. Here's the output when the user enters 20:
Enter a number: 20 numb<10 is 0 numb>10 is 1 numb==10 is 0
The first expression is true if numb is less than 10. The second expression is true if numb is greater than 10, and the third is true if numb is equal to 10. As you can see from the output, the C++ compiler considers that a true expression has the value 1, while a false expression has the value 0.
As we mentioned in the last chapter, Standard C++ includes a type bool, which can hold one of two constant values, true or false. You might think that results of relational expressions like numb<10 would be of type bool, and that the program would print false instead of 0 and true instead of 1. In fact, C++ is rather schizophrenic on this point. Displaying the results of relational operations, or even the values of type bool variables, with cout<< yields 0 or 1, not false or true. Historically this is because C++ started out with no bool type. Before the advent of Standard C++, the only way to express false and true was with 0 and 1. Now false can be represented by either a bool value of false, or by an integer value of 0; and true can be represented by either a bool value of true or an integer value of 1.
In most simple situations the difference isn't apparent because we don't need to display true/false values; we just use them in loops and decisions to influence what the program will do next.
Here's the complete list of C++ relational operators:
Now let's look at some expressions that use relational operators, and also look at the value of each expression. The first two lines are assignment statements that set the values of the variables harry and jane. You might want to hide the comments with your old Jose Canseco baseball card and see whether you can predict which expressions evaluate to true and which to false.
jane = 44; //assignment statement harry = 12; //assignment statement (jane == harry) //false (harry <= 12) //true (jane > harry) //true (jane >= 44) //true (harry != 12) // false (7 < harry) //true (0) //false (by definition) (44) //true (since it's not 0)
Note that the equal operator, ==, uses two equal signs. A common mistake is to use a single equal signthe assignment operatoras a relational operator. This is a nasty bug, since the compiler may not notice anything wrong. However, your program won't do what you want (unless you're very lucky).
Although C++ generates a 1 to indicate true, it assumes that any value other than 0 (such as 7 or 44) is true; only 0 is false. Thus, the last expression in the list is true.
Now let's see how these operators are used in typical situations. We'll examine loops first, then decisions. | http://www.informit.com/articles/article.aspx?p=27834&seqNum=5 | CC-MAIN-2017-13 | refinedweb | 799 | 58.72 |
Forum:Exciting Advertising Opportunities!
From Uncyclopedia, the content-free encyclopedia
I have an interesting proposal which I think you all might enjoy. I recently watched the show with ze frank for the first time and have been hooked ever since. What's great is that below every episode are hoverable images which display messages. They range from just observations to ads, and they are all purchased by fans.
I think it would be a good idea for us to advertise at this site for a few reasons. One, Ze Frank and his followers seem to have a very similar sense of humor to Uncyclopedia: random, topical, and intelligent all at the same time. Plus, Ze Frank's mantra is essentially the same as ours. He, too, hates people who are dicks. Two, his fan base has proven with past contests and submissions that they are more than willing to do creative stuff with little to no payoff.
I say we aim to purchase a $50 duck by teaming up and getting, say, 5 dollars from 10 people. Then we could use the alloted 50 characters to advertise our site. We could hold a contest to see which slogan gets chosen. For example, "Uncyclopedia.org: A wiki. Come, laugh, contribute!"
The show gets about a billion viewers a minute, and I think we could get a good boost in readers and positive contributors if we go through with this what say you all? --
» Sir Savethemooses Grand Commanding Officer ... holla atcha boy» 01:31, 21 January 2007 (UTC)
- The problem with advertising is that maybe it would boost the people who come to our site. Yah that's right. Because I estimate that 80% of edits are vandalisim/not really all that funny, but not vandalisim, and so if we got 1,000,000,000,000 more people a minute, we would soon have whatever 80% of a billion is more vandals.
While I am not against us advertising Uncyclopedia, I think that once we reach a pinicle of humor, we will have an overflow of people just begging to join. Instead of focusing our efforts on bringing people to us, we should spend more time making sure our greatest users don't leave. We should tell our friends and those we respect about Uncyclopedia, and that should be enough to bring them to us, and then to tell their friends. --Brigadier General Sir Zombiebaron 02:01, 21 January 2007 (UTC)
- ...did you even read why I think it would be a good idea? I don't think we'd get a lot of vandals or crapbutts because Ze's humor is very intelligent and anti-being-a-jerk. He has the same demo that I think we shoot for, and that's smart/funny people. This is the opposite of advertising at, say, ED because ED is conducive to vandalism and gets 1/9,000,000 of the readers that we do. --
» Sir Savethemooses Grand Commanding Officer ... holla atcha boy» 02:27, 21 January 2007 (UTC)
- Yes STM, I read your entire note. I honestly don't know who Ze Frank is or what his show is. However, just because you watch a show, doesn't mean that the show is aimed at you. I sometimes watch BET (Black Entertainment Network), and I'm not black. We can't determe who will see the ad, and of those who see it who will come to Uncyclopedia. However, by all means go ahead. Just go ahead with the knowlage that when the Ze Frank page becomes a hot-bed of vandals, we will have someone to point the finger at ;) --Brigadier General Sir Zombiebaron 02:32, 21 January 2007 (UTC)
- Are you saying a bunch of black people are going to vandalize our site? Racist. --
» Sir Savethemooses Grand Commanding Officer ... holla atcha boy» 02:36, 21 January 2007 (UTC)
- No...I said that I'm not black. The link is that not all people who watch a show are the way that the show is (ie. black, funny, old, &c), or are how the show sees its viewers being. --Brigadier General Sir Zombiebaron 02:57, 21 January 2007 (UTC)
- I acctaully just watched todays zefrank show thing. It was pretty funny. Not laugh out loud funny, but funny. Although after only seeing that one, I'll have to take your word about him being "anti-vandal" STM. --Brigadier General Sir Zombiebaron 03:05, 21 January 2007 (UTC)
We need a Targeted Campaign Response Feedback System
I'd say STM is spot-on - zefrank does have the right demographic. But before Uncyclopedia starts spending actual money on promoting itself, we should at the very least change Template:Welcome to direct new users to a page where they can introduce themselves and let us all know how they heard of us. We should have been doing this all along, but it becomes especially crucial when you're trying to figure out which of your marketing efforts are working and which aren't. c • > • cunwapquc? 02:54, 21 January 2007 (UTC)
- Great idea. Let's get on this. Quick, though, because the show ends March 17th.--
» Sir Savethemooses Grand Commanding Officer ... holla atcha boy» 06:44, 21 January 2007 (UTC)
- How about a new members' forum, specifically for new users to introduce themselves? There's currently no place for that really. Also, we don't have nearly enough forums yet. Perhaps it would also hook them in quicker, making more people stick around longer. And if the show ends then, surely the highest viewing figures will be for then, and so would be the perfect time to advertise. • Spang • ☃ • talk • 08:25, 21 Jan 2007
- I seem to recall making an entry to a "How did you get to Uncyclopedia" topic, which is now in the bowels of the forum somewhere. Most new users wouldn't find it, and a similar new one will get buried again in no time. Why not dig it up and c/p it to it's own page(s) in the Uncyclopedia: namespace, as a more permanent record of arrivals? Uncyclopedia:Customs Station, perhaps? Then we get to confiscate any goodies they try to smuggle in!
- User:Tooltroll/sig 11:43, 21 January 2007 (UTC)
- Here's your man. --Brigadier General Sir Zombiebaron 13:31, 21 January 2007 (UTC)
- Ok, I c/p'd it like I suggested. Gotta go to work now, but I thought I'd pretty it up later, maybe split it into a-n/m-z or suchlike. . .
- User:Tooltroll/sig 17:58, 21 January 2007 (UTC)
- We do know where we get the users from, internet forums, wikipedia, humor sites, etc. That's what we can learn from a quick look at Forum:How_Did_You_Get_to_Uncyclopedia?. The question would be what other potential places to get users from. What do uncyclopedians watch, read, listen to, etc. Also age, general interests, hobbys and the like. We could make a survey, the wikia marketing guys could help us on this.---Asteroid B612
(aka Rataube) - Ñ 16:55, 21 January 2007 (UTC)
- I suggested a while back that we use our own google analytics account to track where people come from to find us, and even where they live mainly, which would help in any future targetted advertising effort. I have an account on it already which means we wouldnt need have the usual wait time that people have when applying for an account there. ~Sir Rangeley
GUN WotM UotM EGA +S (talk) 17:03, 21 January 2007 (UTC)
- I remember you suggesting that, Rang. Didn't someone say that Wikia already had that for us? Or am I remembering wrong?--<<
>> 13:31, 22 January 2007 (UTC)
- Wikia do have that, but I doubt very much that they'd make the data available. I'm sure it would be possible to add a second counter in via javascript, which wikipedia does, and as far as I can see is not disallowed by Wikia's terms of service, assuming the community wants it. I'm all for adding this in - perhaps a topic specifically about this would be better? • Spang • ☃ • talk • 13:51, 22 Jan 2007
- We should be able to make it available. We just need to set up an account for someone from here to access the Uncyclopedia data. Michael has added it to his to-do list, and thinks he should be able to do this quickly. So who would have the password? An admin I guess, or a couple of admins... who can then pass on the data to whoever needs it. (of course, this is subject to Michael getting it set up as easily as we anticipate). It's not possible for you to add your own analytics code, as I understand it, you don't have access to the place it has to go. We can add it for you, but at the moment we are having problems with getting two codes to work at the same time (it's supposed to be possible, but just won't work). So giving access to our copy seems the best way for now -- sannse (talk) 20:46, 22 January 2007 (UTC)
- I've got the password... who am I giving it to? -- sannse (talk) 21:07, 23 January 2007 (UTC)
- You could always give it to Range since he's the one who keeps suggesting we get it, but that's just the random ramblings of an old man. Pay me no mind.--<<
>> 02:06, 24 January 2007 (UTC)
- Hmmm, my bad, I assumed that would be top-secret wikia data. Indeed, Rangely would probably be a good choice, seeing as he already has an account with google analytics, I assume he knows how to use it. • Spang • ☃ • talk • 05:43, 24 Jan 2007
- Typical... I set this up, and the devs find the bug that was preventing two analytics codes being run on the same wiki. So we could set up a separate account now. But it's all the same data, so might as well just give Rangely the password. Range, I need an email address... ta -- sannse (talk) 08:25, 24 January 2007 (UTC)
In case you're still not sure about Zefrank
Today has proof that he is very much in tune with Uncyclopedia's sensibilities. Watch today's show and look really hard starting when he first says "beans." --
» Sir Savethemooses Grand Commanding Officer ... holla atcha boy» 00:25, 23 January 2007 (UTC)
Them thar mooses
- You crazy moose you. --Sir ENeGMA (talk) GUN WotM PLS 02:56, 23 January 2007 (UTC)
- Well I'm convinced... let's buy an ad! I'll contribute a bit... -- sannse (talk) 08:20, 23 January 2007 (UTC)
- OK a £10 from me, contact me for where I send the cash to.--The Right Honourable Maj Sir Elvis UmP KUN FIC MDA VFH Bur. CM and bars UGM F@H (Petition) 17:27, 25 January 2007 (UTC)
- I can do a fiver, if I can master the verbal gymnastics required to explain why I want my dad's credit card details. That nice man who said he was a Nigerian prince still isn't returning my calls. --
21:29, 25 January 2007 (UTC)
- You are talking about the "Let's Hunt Some Pants." thing at the end right? --Brigadier General Sir Zombiebaron 09:02, 26 January 2007 (UTC) | http://uncyclopedia.wikia.com/wiki/Forum:Exciting_Advertising_Opportunities! | CC-MAIN-2016-18 | refinedweb | 1,903 | 72.46 |
I am creating a game in C# Windows form and I will have 9 different characters inside the game for my player to choose from (using radio buttons). I have a variable called PlayerChar and Whenever the player chooses a character I will store the character's name inside that variable. Also I have 9 different classes for each characters. What I am trying to do is, to get the program to create an object from the class of the character chosen. but I don't know how to use variable PlayerChar as my class name.
PlayerChar obj = new PlayerChar();
PlayerChar myObj = Activator.CreateInstance(PlayerChar);
Have
PlayerChar be the base class for all of your character classes.
public class PlayerChar { ... } public class WarriorChar : PlayerChar { ... } public class RangerChar : PlayerChar { ... } public class ClericChar : PlayerChar { ... }
Then use a switch block or sequential if blocks to instantiate your player object with the right character subclass.
PlayerChar myObj = null; if (warriorButton.IsSelected) myObj = new WarriorChar(); else if (rangerButtom.IsChecked) myObj = new RangerChar(); else if (clericButton.IsChecked) myObj = new ClericChar(); ...
(Don't use the
Activator class to instantiate your objects unless you fully understand what
Activator does and have a well-thought-out reason for using it. Nine times out of ten it is used by people trying to get fancy with their code and end up just overcomplicating things.) | https://codedump.io/share/E4ebcjlT80Pj/1/how-to-construct-an-object-using-a-variable-as-the-class-name | CC-MAIN-2017-17 | refinedweb | 224 | 59.4 |
IPHostEntry Class
Provides a container class for Internet host address information.
For a list of all members of this type, see IPHostEntry Members.
System.Object
System.Net.IPHostEntry
[Visual Basic] Public Class IPHostEntry [C#] public class IPHostEntry [C++] public __gc class IPHostEntry [JScript] public class IPHostEntry
Thread Safety
Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
Remarks
The IPHostEntry class associates a Domain Name System (DNS) host name with an array of aliases and an array of matching IP addresses.
The IPHostEntry class is used as a helper class with the Dns class.
Example
[Visual Basic, C#, C++] The following example queries the DNS database for information on the host and returns the information in an IPHostEntry instance.
IPHostEntry Members | System.Net Namespace | Dns | http://msdn.microsoft.com/en-us/library/system.net.iphostentry(v=vs.71).aspx | CC-MAIN-2014-52 | refinedweb | 140 | 56.15 |
of it is that regulatory constraints force the software team to maintain multiple long-lived branches as separate streams of development. That is, they can’t just merge to development daily.
Infrastructure Needs
What do we need to test this branch properly? Well, we definitely need one copy of the target environment so we can load a build onto it and run tests. A hardware copy, it can’t be virtualized. Builds still come in fast, and we have to cross-compile to produce a working build for this environment. That means unit tests aren’t running on the Jenkins server, they’re running in the target environment. Combined, this means we’ll need an extra target environment to support rapid feedback unit testing. Yes, we could undertake the development and build infrastructure effort to produce unit test executables that could be run on an x86-64 Linux server. An extra hardware unit is decidedly cheaper.
The count doesn’t stop there though. Automated functional tests also need somewhere to run. We could run them on the first target we were reserving for deployment and smoke testing, but even after automation, the tests might take an hour to run. All incoming builds will be held up waiting on a previous build’s functional tests to finish when those new builds might fail at any earlier point. It’s better to ask for an extra hardware unit so these builds can be failed as soon as possible and reduce cycle time for the team.
Our current tally is at three hardware units: one continuous integration box for deployment and smoke testing, one unit testing box, and one functional testing box. We can argue about the necessity of all three of these targets but there are a few things that make that discussion unproductive. First, these devices are manufactured in house. If there’s enough extra supply, this isn’t a huge deal. On the other hand, this is not the case for every team. In that scenario, you can still look at the cost of purchasing sufficient hardware versus the cost incurred by the development team trying to maintain and release defect laden code. It shouldn’t be hard to justify a ROI for the hardware. There are real productivity gains to be had here. If the development team finds out that a build makes it to functional testing they can frequently just jump ahead and continue working while the functional tests run. Conversely, finding out immediately that your build fails unit tests is similarly useful. Everything we can do to get the development team information about the current health of the build improves the team’s ability to get work done.
And there are of course other kinds of testing we may want to support that take much longer to run: performance testing, security testing, durability testing, reliability testing, and the list goes on. These things need to be done before the organization can release the software to production but there’s no dedicated place to go get them done. There’s no way to discover potential problems when they’re easy to fix. It’s hard to fail quickly.
But… this is idealistic. In reality, the team is supporting multiple different streams of work. And these workstreams are decidedly independent. For one thing, they, like many other teams, have multiple older releases that need support. These older releases work with different versions of the firmware, and in one case, with different hardware requirements. But work doesn’t happen in these branches often. So maybe we can save a little here by not trying to keep a whole pipeline available for these maintenance activities.
That’s not all though. There’s also the work that goes on to build the various features the team is committing to in each sprint. By and large, these features, aggregated into epics, do not have regulatory approval. They cannot be merged back to development, and so they can’t rely on the development infrastructure. Imagine having to wait to get a build for your branch done because a branch belonging to another team member has a build in progress. When there are 3-4 branches like this, it doesn’t take long to convince the team that each branch needs it’s own infrastructure. Again, this is a basic productivity and cycle time issue. Work can’t get done while the developers or testers are waiting on a build/deploy/test cycle to finish.
Let’s say the team decides 3 independent branches + development is what they’d like to be able to support. If the team commits to more work than that, some of those branches will just have to share. No big deal. That brings us up to 14 targets just to support the day to day work of the development team. We haven’t even talked about the deeper kinds of testing the Verification and Validation team wants to do before a release and what the Test Automation team will want to have running regularly.
Fail Early, Fail Often
For one thing, this is a medical device, it needs to work without issues for an extended period of time (say, 1-2 weeks) at a normal level of use (say, buttons pressed every
n number of seconds). Functionality can’t degrade and the box can’t crash. This is called durability testing and it took maybe a day for one of the new automated testers to put a script together to do this. Shouldn’t we be running this test on some regular schedule against whatever latest stable build is available in a branch? I’d say just development, but these branches last for an entire release cycle. If a new feature is causing durability problems, the team needs to know and needs to be able to do something about it. So that’s an additional box for each branch.
Then there’s more intense testing, like performance testing. In our previous test, we were looking for reliability over a long period of time. Now we want to measure stability and functionality under different kinds of application-specific loads. Maybe we don’t need to do this for as long, but we want to really stress the hell out of the system. We need a place to run these tests. Let’s say the total time, for all different kinds of loads, adds up to about the same length of time as the durability test. Then we only need one additional box for each branch.
Oh, and there’s more periodic testing, maybe once per sprint, in the form of tests that are mostly automated, but at particular junctures need some manipulation of the hardware. For example, one module may need to be disconnected and swapped for another so the test can be repeated. It turns out there are a of a lot of tests like this when developing automated tests for a hardware device. These tests need to be performed to validate the software the development team has produced for a sprint. Stories can’t be closed until these tests are run. Some of the testing load can fall on the testers individual test units, but that means the automated testers can’t continue writing tests while these tests are running. It would be far better if they can lean on the delivery pipeline to run their tests, so we’ll need some boxes set aside for partially automated tests as well. Let’s again say one per branch will suffice for now, and reallocate depending on what’s in use/not in use when the need arises through the year we’re planning for.
What about security testing? In our case, business thinks it’s not that important. Everyone using the boxes in the real world is a trained medical professional so the assumption is that there are no bad actors with physical access to the boxes, and we can just tell the staff not to ever put these boxes on the network. That’s a business risk. We’ll want to cover this eventually, but we don’t need it right now. (N.B. this is not a recommendation.)
We’ll still need a few more devices. For one thing, the DevOps team needs a box they can point at to test various things, like changes to the deployment process, and they shouldn’t be directly testing those on the main build boxes, possibly interrupting the production delivery pipeline. The test automation team will need a place to test changes to the underlying test framework that the automated tests are being developed on. Framework tests shouldn’t be incorporated into the main application build without passing some kind of vetting, so that’s another allocated unit.
That’s 28 units for just one scrum team, a gigantic internal purchase order. Sure, these weren’t all immediately needed, but when we did this analysis at the end of last year, we were pretty sure we’d need them all by the end of Q1, just based on what the software team wanted to be able to support. 28 boxes is a pretty sizable hardware farm. Yes, we could reduce the number necessary by doing some intelligent things. None of the ideas required to reduce hardware requirements were even planned, let alone developed or tested. Using an expensive resource like a developer to manually allocate these resources winds up being less cost effective than just putting in the purchase order. Put in those terms to the business, it was quickly decided to go ahead and do it. The team needed to grow rapidly, and even if we have gotten by with less then, we would likely have started to fall short long before the year was out.
Managing Locks
Resources will always be limited though. The business may have come back and said that we could only have 20 units. Or 15. What could we have done to get by without hurting the team’s productivity?
More to the point, there was an implication previously about stuff running concurrently. It’s definitely not safe to try and run the automated tests while a new build is being pushed to the target! Nor is it safe to deploy a new build while the last build is still being unpacked!
So first things first, we needed some kind of locking strategy. We tried a few things that didn’t work (doing the locking on the Jenkins side was just a no-go), and eventually settled on an old trick that’s used in the Linux world – drop a lock file to let the resource-requester know there’s work already happening. Putting a lock file into persistent storage is easy and removing the lock file is even easier. There’s no need to fail if the lock isn’t present, just rm -f it to make sure it’s definitely gone.
Putting these things together is more interesting. We were scripting our deployment jobs with Jenkins Pipeline, so a little bit of groovy gets us a nice little function that acquires a lock, runs a block of code, and then releases the lock even in the face of errors executing the block.
def locking(box, block) { acquireLock(box) try { block() } finally { clearLock(box) } }
Assuming acquireLock is a blocking function and clearLock just ensures the lock is cleaned up whether it was successfully acquired or not, it’s a nice little piece of syntax in actual use:
downloadTests(latest) locking(1.2.3.4) { deploy(1.2.3.4, myBuild) runTests(1.2.3.4, myBuild, testTags) }
As long as all environment accesses are wrapped in locking() blocks, and clearLock cannot fail, we have a runtime guarantee that we won’t accidentally brick a console.
Full Blown Resource Management
Still, there’s still more we could do. We can be smarter about how we allocate the available targets, and so reduce the number we need in practice. We basically need a pool of boxes.
If you’re familiar with the thread pool concept from the concurrency world, the idea is exactly the same, swapping operating system threads for physical hardware units. Each target runs a service to phone home every minute or so and maintain a queue of targets on the central resource manager. Even better yet, we can keep separate queues of boxes depending on the configuration of the boxes, for incompatible configurations. With that done, we can put up a REST API in front of the manager that let’s clients (particular runs of any number of Jenkins jobs, for instance) phone in to request a box, with a database behind it so it doesn’t lose its state and so we can track statistics about capacity, utilization, normal length of time a box is in use, etc.. Now we have a resource manager that’s convenient to use and intelligently tracks our ever-growing collection of hardware and ensures its in use.
We can even do away with the lock files on the consoles. In fact, there’s a race condition, so we might want to. If there’s only one console available, a job requests it, and before the job acquires the lock, the console phones home and announces itself as available. Then another job requests a console. The first job has taken the lock, so this one is stuck waiting for the first job to finish. As far as race conditions go, it’s not a bad one, but it does mean that a job sometimes has to wait longer than necessary. One way to solve it is to use the non-blocking lock and ask for a new console instead. The other is to do away with the locks and have the manager turn off the “report home” service on the console, then return its address to the requesting client. When the client finishes, instead of releasing the lock, it sends a release request to the manager with the address so the manager can turn reporting back on. In fact, we can use the same try/finally construct we used before to make sure this works properly. Pick your favorite and try it at home.
One of the nice things about this kind of manager is the ability to easily generalize it to handle entirely different kinds of resources. So there’s no doubt this development effort would pay dividends in the future. Moreover, there’s opportunity to build out the interface further. There are plenty of Javascript libraries that provide tty emulators in the web browser, and it wouldn’t be all that hard to have the front-end serve one up that drops you onto a console already logged onto a target. And the phone-home service could become a health check, reporting the status of the operating system and software. “I’m in use but all my services are running” and “I was in use but I crashed because X service went down and here’s the appropriate log”. That would be an incredibly helpful diagnostic resource for the team.
Unfortunately, we never had time to build it on this project, but maybe these thoughts and suggestions can help you build it for your organization.
That’s everything, folks. Leave a comment if you have any questions or feedback, I’d love to hear from any of you. In particular, I’d really like to hear if there’s anything I didn’t delve into that you’re curious about. And thanks so much for reading this far! | https://www.coveros.com/devops-in-a-regulated-and-embedded-environment-scalability-and-resource-concerns/ | CC-MAIN-2021-31 | refinedweb | 2,604 | 69.82 |
I have been able to have my external commands use subprocess to call commands because not all modules exist in the splunk environment and that has worked fine. I now need to call a perl script because the data I am retrieving is only accessible through a perl API. I need to call a perl script from within and external python command and I can't get it to work. It works great for shell scripts and python scripts but not for perl. The script works outside of splunk but not from within splunk. I must be not getting something from the environment within splunk but don't know what it is.
This works fine outside of splunk but does note even execute within splunk. (I didn't include the splunk imports and processing, just the function that it is calling.
#!/usr/bin/python
import subprocess
import os
def getemp(host):
cmd = ("/usr/bin/perl","/opt/splunk/etc/apps/KtN/bin/mrilookup.pl",host)
pseudohandle = subprocess.Popen(cmd,stdin=subprocess.PIPE, stdout=subprocess.PIPE,stderr=subprocess.PIPE,shell=False)
pseudohandle.stdin.write(host)
stdout, stderr = pseudohandle.communicate()
pseudohandle.stdin.close()
return stdout.rstrip()
print getemp("bugs")
I found if I take out the added per libs it works. How do I get the perl libs added? I tried both use lib xxx and push @INC xxx within the perl script and they appear to be ignored. If I run the script as the splunk user either way they work. It is only when the external command using subprocess call the script that it fails.
Finally figured it out. I added the PER5LIB path to the subproccess command via env={"PERL5LIB":"/pathtolib"}
It works!!!
View solution in original post | https://community.splunk.com/t5/Archive/External-Commands-calling-perl-script-fails/td-p/72994 | CC-MAIN-2020-34 | refinedweb | 288 | 67.25 |
Important: Please read the Qt Code of Conduct -
Setting min-width/max-width in stylesheet (.qss) does not work after setting layout size constraint of QDialog to QLayout::SetFixedSize
Two requirements for dialog:
- Want the dialog can resize to suitable size when any child widget changes size, so I set layout size constraint of dialog to QLayout::SetFixedSize.
- Want the width of the dialog can be customized by min-width/max-width to fixed value in stylesheet.
Issue:
min-width/max-width in stylesheet (.qss) does not work.
Here is the code:
#include <qlayout.h> MyDialog::MyDialog(QWidget *parent) : QDialog(parent), ui(new Ui::MyDialog) { ui->setupUi(this); layout()->setSizeConstraint(QLayout::SetFixedSize); setStyleSheet("QDialog { min-width: 800px; max-width: 800px }"); // just a simple example, actually I want to customize in .qss file. }
Questions:
- Is it correct behavior that width can not be customized after setting size constraint to QLayout::SetFixedSize?
- How can I achieve the requirements?
- Chris Kawa Moderators last edited by
Want the dialog can resize to suitable size when any child widget changes size, so I set layout size constraint of dialog to QLayout::SetFixedSize.
SetFixedSize sets, well, a fixed size. It means it is set to the size hint value and can't be changed anymore by anything. So in your case this is definitely not what you want.
Leave the size constraint at default and just specify the min and max values in the stylesheet.
My dialog has vertical layout and there are several QFrame widgets on the dialog, but the dialog won't resize to a smaller size when the height of its children becomes smaller.
- Chris Kawa Moderators last edited by
How do you change the size of the children frames?
By making some widgets in QFrame visible/invisible when the user does some interaction on dialog.
Currently, I can set the direct children to fixed width instead of the dialog after setting size constraint of dialog to QLayout::SetFixedSize, then everything is ok. | https://forum.qt.io/topic/58869/setting-min-width-max-width-in-stylesheet-qss-does-not-work-after-setting-layout-size-constraint-of-qdialog-to-qlayout-setfixedsize | CC-MAIN-2020-40 | refinedweb | 330 | 54.63 |
:
Today Crystal Qian and Justin Kotalik, interns with the ASP.NET Core team for the past 12 weeks joined the standup to talk about their experience and to demonstrate some of the cool stuff they’ve been working on with the team. But first…
Community Links
Radu Matei shows how to register a list of services from a JSON file – handy for testing or runtime DI configuration.
Jon Hilton continues a series on building a .NET Core application completely from the command line with a look at publishing.
Andrew Lock shows how to set the hosting environment for ASP.NET Core from Visual Studio, Visual Studio Code, operating system settings, and command line.
Dmitry Sikorsky introduces Platformus CMS, a new CMS written on ASP.NET Core and ExtCore.
Nice, thorough walkthrough by Bipin Joshi explaining how to configure ASP.NET Core Identity.
Damien Bowden shows how to log to Elasticsearch from an ASP.NET Core application using NLog.
Muhammad Rehan Saeed explains NGINX principles and configuration in the context of ASP.NET Core deployment.
Taiseer Joudeh begins a series explaining how he used Azure Active Directory B2C in a large web / service / mobile application, including ASP.NET Web API 2 and ASP.NET MVC.
Benjamin Fistein demonstrates how the Peachpie compiler can be used to compile PHP applications and run them in an ASP.NET Core application.
Anuraj Parameswaran shows how to configure NancyFx in an empty ASP.NET Core application.
Andrew Lock takes a look at how JWT Bearer Auth Middleware is implemented in ASP.NET Core as a means to understanding authentication in the framework in general.
Bill Boga points out an important gotcha if you’re writing middleware that modifies the response content.
Damien Bowden shows how to use MySQL with ASP.NET Core and EF Core.
Nicolas Bello Camilletti explains the components included in JavaScriptServices.
Mads Kristensen announces a new project template pack for ASP.NET Core applications.
Donovan Brown points out an important environment variable to speed up .NET Core installation on build servers.
Bill Boga shows off a configuration provider for GPS location tracking. Fun post, and great way to learn about configuration providers.
Damien Bowden shows how to implement undo, redo functionality in an ASP.NET Core application using EF and SQL Server.
Maher Jendoubi explores upcoming changes to DI integration in ASP.NET Core 1.1, showing how some popular 3rd party continers will be registered.
Scott Allen shows how he’s been creating extension methods to move code out of his Startup.cs.
Hisham Bin Ateya shows how to use custom middleware to prevent external sites from displaying your site’s images.
Norm Johanson announces ASP.NET Core support on AWS Elastic Beanstalk and the AWS Toolkit for Visual Studio.
Chris Sells describes the existing support for ASP.NET in Google Cloud Platform and lays out the plans for future ASP.NET Core support.
Here’s a useful library from Federico Daniel Colombo that makes it easy to add auditing to your ASP.NET applications.
Intern Accomplishments
Justin lead off by showing his work on writing a URL rewrite middleware module for ASP.NET Core. He found that there are three common approaches that are used to rewrite URLs:
- Syntax from Apache mod-rewrite
- Syntax from the IIS rewrite module
- Translate using regular expressions to identify and replace content in the URL
You can find his work on GitHub at
Crystal then showed her work in making ViewComponents into TagHelpers. She demonstrated an initial ViewComponent that turned a photo of our program manager Dan Roth into an ANSI image. Along the way, she showed the shortcomings of using a ViewComponent such as a lack of intellisense and parameter hints. With a Crystal’s enhancement, you can reference your ViewComponents using a tag with a “vc” namespace and translated to lower-kabob case.
You can track Crystal’s work as part of addressing MVC issue 1051.
The work featured in this video will be shipping with the next release version of ASP.NET Core.
Join the conversationAdd Comment
Wow, it’s amazing that both of the features the interns worked on were part of the highlight features of v1.1 during Connect(); //2016. Makes me all the more eager to try and get an internship at Microsoft. | https://blogs.msdn.microsoft.com/webdev/2016/09/06/notes-from-the-asp-net-community-standup-august-30-2016/ | CC-MAIN-2017-30 | refinedweb | 715 | 59.09 |
Creation of some data processors times out
Hi there,
I'm currently trying to somehow squeeze out a
no_motion signal using the currently available c++ api (as it's not available) through the python bindings, but I have noticed that some data processors work fine (e.g.
highpass) whereas some of them time out upon creation and the callback is never called (e.g.
counter). I assume each data processor supports certain types of
MblMwDataSignals on the chip? If so could you please point me somewhere to read about those? I couldn't find anything about them in the docs.
Example case of a callback that is never called:
def processor_created(context, pointer): print("created") self.processor = pointer self.context = context e.set() fn_wrapper = cbindings.FnVoid_VoidP_VoidP(processor_created) libmetawear.mbl_mw_dataprocessor_counter_create(acc_signal, None, fn_wrapper) e.wait() # This is never reached libmetawear.mbl_mw_datasignal_subscribe(self.processor, self.context, self.callback)
Changing the counter to a highpass will make things work as expected:
libmetawear.mbl_mw_dataprocessor_highpass_create(acc_signal, 16, None, fn_wrapper)
Thanks.
This is a bug with the counter processor.
Also, the method returns a non-zero status so you can use that to escape the indefinite wait.
Hi Eric,
Thanks, I have observed this in a number (majority?) of processors.
packerfor example.
For anyone else who might be interested, I ended up calculating "any motion" from the acc signals on my own side, and would then alternate between (acc on, motion off) and (acc off, motion on) to preserve battery.
Provide code that demonstrates theses issues.
Are you cleaning up board resources every time you run your script?
@Eric Thanks, here's the code. Also what is considered a correct way of wiping the configuration on the device upon startup in a scenario where we connect to/disconnect from the device a lot? Is calling teardown enough? Is a sleep period recommended after calling it?
libmetawear.mbl_mw_acc_get_packed_acceleration_data_signalworks though.
You can't pack that much acceleration
data in one packet.
Is there any particular reason why I might use
mbl_mw_dataprocessor_packer_create(which can pack 2 values) over
mbl_mw_acc_get_packed_acceleration_data_signalwhich can do 3?
Also I expected the arrival time of every 2-batch of data be equal in the former, similar to how it is in the latter, as I assume the data points are arriving at the client as a single pack and therefore at the same time? (similar to the case in the latter). Am I missing something?
Could you please comment on the clean-up question too?
Thanks | https://mbientlab.com/community/discussion/comment/8853 | CC-MAIN-2020-40 | refinedweb | 411 | 58.18 |
Naïve Processing (NLP). NLP is a field that has been much related to machine learning, since many of its problems can be formulated as a classification task.
In this section, we will use Naïve Bayes for text classification; we will have a set of text documents with their corresponding categories, and we will train a Naïve Bayes algorithm to learn to predict the categories of new unseen instances.
Importing our pylab environment
%pylab inline
Populating the interactive namespace from numpy and matplotlib
Import the newsgroup Dataset, explore its structure and data
from sklearn.datasets import fetch_20newsgroups news = fetch_20newsgroups(subset='all') print (news.keys()) print (type(news.data), type(news.target), type(news.target_names)) print (news.target_names) print (len(news.data)) print (len(news.target))
dict_keys(['target_names', 'data', 'description', 'target', 'DESCR', 'filenames']) <class 'list'> <class 'numpy.ndarray'> <class 'list'> ['] 18846 18846
print (news.data[0]) print (news.target[0], news.target_names[news.target[0]])
From: Mamatha Devineni Ratnam <mr47+@andrew.cmu.edu> Subject: Pens fans reactions Organization: Post Office, Carnegie Mellon, Pittsburgh, PA Lines: 12 NNTP-Posting-Host: po4.andrew.cmu.edu I am sure some bashers of Pens fans are pretty confused about the lack of any kind of posts about the recent Pens massacre of the Devils. Actually, I am bit puzzled too and a bit relieved. However, I am going to put an end to non-PIttsburghers' relief with a bit of praise for the Pens. Man, they are killing those Devils worse than I thought. Jagr just showed you why he is much better than his regular season stats. He is also a lot fo fun to watch in the playoffs. Bowman should let JAgr have a lot of fun in the next couple of games since the Pens are going to beat the pulp out of Jersey anyway. I was very disappointed not to see the Islanders lose the final regular season game. PENS RULE!!! 10 rec.sport.hockey
Preprocessing the data
We have to partition our data into training and testing set. The loaded data is already in a random order, so we only have to split the data into, for example, 75 percent for training and the rest 25 percent for testing
SPLIT_PERC = 0.75 split_size = int(len(news.data)*SPLIT_PERC) X_train = news.data[:split_size] X_test = news.data[split_size:] y_train = news.target[:split_size] y_test = news.target[split_size:]
This function will serve to perform and evaluate a cross validation:
from sklearn.cross_validation import cross_val_score, KFold from scipy.stats import sem def evaluate_cross_validation(clf, X, y, K): # create a k-fold croos validation iterator of k=5 folds)))
Our machine learning algorithms can work only on numeric data.Currently we only have one feature, the text content of the message; we need some function that transforms a text into a meaningful set of numeric features.
The sklearn. feature_extraction.text module has some useful utilities to build numeric feature vectors from text documents.
You will find three different classes that can transform text into numeric features: CountVectorizer, HashingVectorizer, and TfidfVectorizer. The difference between them resides in the calculations they perform to obtain the numeric features. CountVectorizer basically creates a dictionary of words from the text corpus. Then, each instance is converted to a vector of numeric features where each element will be the count of the number of times a particular word appears in the document.
HashingVectorizer, instead of constricting and maintaining the dictionary in memory, implements a hashing function that maps tokens into feature indexes, and then computes the count as in CountVectorizer.
TfidfVectorizer works like the CountVectorizer, but with a more advanced calculation called Term Frequency Inverse Document Frequency (TF-IDF). This is a statistic for measuring the importance of a word in a document or corpus. Intuitively, it looks for words that are more frequent in the current document, compared with their frequency in the whole corpus of documents. You can see this as a way to normalize the results and avoid words that are too frequent, and thus not useful to characterize the instances.
Training a Naïve Bayes classifier
We will create a Naïve Bayes classifier that is composed of a feature vectorizer and the actual Bayes classifier. We will use the MultinomialNB class from the sklearn.naive_bayes module.
In order to compose the classifier with the vectorizer, scikitlearn has a very useful class called Pipeline (available in the sklearn.pipeline module) that eases the construction of a compound classifier, which consists of several vectorizers and classifiers.
Evaluate three models with the same Naive Bayes classifier, but with different vectorizers
from sklearn.naive_bayes import MultinomialNB from sklearn.pipeline import Pipeline from sklearn.feature_extraction.text import TfidfVectorizer, HashingVectorizer, CountVectorizer clf_1 = Pipeline([ ('vect', CountVectorizer()), ('clf', MultinomialNB()), ]) clf_2 = Pipeline([ ('vect', HashingVectorizer(non_negative=True)), ('clf', MultinomialNB()), ]) clf_3 = Pipeline([ ('vect', TfidfVectorizer()), ('clf', MultinomialNB()), ])
Perform a five-fold cross-validation by using each one of the classifiers
clfs = [clf_1, clf_2, clf_3] for clf in clfs: evaluate_cross_validation(clf, news.data, news.target, 5)
[ 0.85782493 0.85725657 0.84664367 0.85911382 0.8458477 ] Mean score: 0.853 (+/-0.003) [ 0.75543767 0.77659857 0.77049615 0.78508888 0.76200584] Mean score: 0.770 (+/-0.005) [ 0.84482759 0.85990979 0.84558238 0.85990979 0.84213319] Mean score: 0.850 (+/-0.004)
CountVectorizer and TfidfVectorizer had similar performances, and much better than HashingVectorizer
Let’s continue with TfidfVectorizer; we could try to improve the results by trying to parse the text documents into tokens with a different regular expression
The default regular expression: ur”\b\w\w+\b” considers alphanumeric characters and the underscore. Perhaps also considering the slash and the dot could improve the tokenization, and begin considering tokens as Wi-Fi and site.com. The new regular expression could be: ur”\b[a-z0-9-.]+[a-z][a-z0-9-.]+\b”.
clf_4 = Pipeline([ ('vect', TfidfVectorizer( token_pattern=r"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b", )), ('clf', MultinomialNB()), ])
evaluate_cross_validation(clf_4, news.data, news.target, 5)
[ 0.86100796 0.8718493 0.86203237 0.87291059 0.8588485 ] Mean score: 0.865 (+/-0.003)
We have a slight improvement from 0.86 to 0.87. Another parameter that we can use is stop_words: this argument allows us to pass a list of words we do not want to take into account, such as too frequent words, or words we do not a priori expect to provide information about the particular topic.
We will define a function to load the stop words from a text file
def get_stop_words(): result = set() for line in open('stopwords_en.txt', 'r').readlines(): result.add(line.strip()) return result
stop_words = get_stop_words()
Create a new classifier with this new parameter
clf_5 = Pipeline([ ('vect', TfidfVectorizer( stop_words=stop_words, token_pattern=r"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b", )), ('clf', MultinomialNB()), ])
evaluate_cross_validation(clf_5, news.data, news.target, 5)
[ 0.88116711 0.89519767 0.88325816 0.89227912 0.88113558] Mean score: 0.887 (+/-0.003)
Let us look at MultinomialNB parameters.
Try to improve by adjusting the alpha parameter on the MultinomialNB classifier
clf_7 = Pipeline([ ('vect', TfidfVectorizer( stop_words=stop_words, token_pattern=r"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b", )), ('clf', MultinomialNB(alpha=0.01)), ])
evaluate_cross_validation(clf_7, news.data, news.target, 5)
[ 0.9204244 0.91960732 0.91828071 0.92677103 0.91854603] Mean score: 0.921 (+/-0.002)
Evaluating the performance
If we decide that we have made enough improvements in our model, we are ready to evaluate its performance on the testing set.))
train_and_evaluate(clf_7, X_train, X_test, y_train, y_test)
Accuracy on training set: 0.996957690675 Accuracy on testing set: 0.917869269949 Classification Report: precision recall f1-score support 0 0.95 0.88 0.91 216 1 0.85 0.85 0.85 246 2 0.91 0.84 0.87 274 3 0.81 0.86 0.83 235 4 0.88 0.90 0.89 231 5 0.89 0.91 0.90 225 6 0.88 0.80 0.84 248 7 0.92 0.93 0.93 275 8 0.96 0.98 0.97 226 9 0.97 0.94 0.96 250 10 0.97 1.00 0.98 257 11 0.97 0.97 0.97 261 12 0.90 0.91 0.91 216 13 0.94 0.95 0.95 257 14 0.94 0.97 0.95 246 15 0.90 0.96 0.93 234 16 0.91 0.97 0.94 218 17 0.97 0.99 0.98 236 18 0.95 0.91 0.93 213 19 0.86 0.78 0.82 148 avg / total 0.92 0.92 0.92 4712 Confusion Matrix: [[190 0 0 0 1 0 0 0 0 1 0 0 0 1 0 9 2 0 0 12] [ 0 208 5 3 3 13 4 0 0 0 0 1 3 2 3 0 0 1 0 0] [ 0 11 230 22 1 5 1 0 1 0 0 0 0 0 1 0 1 0 1 0] [ 0 6 6 202 9 3 4 0 0 0 0 0 4 0 1 0 0 0 0 0] [ 0 2 3 4 208 1 5 0 0 0 2 0 5 0 1 0 0 0 0 0] [ 0 9 2 2 1 205 0 1 1 0 0 0 0 2 1 0 0 1 0 0] [ 0 2 3 10 6 0 199 14 1 2 0 1 5 2 2 0 0 1 0 0] [ 0 1 1 1 1 0 6 257 4 1 0 0 0 1 0 0 2 0 0 0] [ 0 0 0 0 0 1 1 2 221 0 0 0 0 1 0 0 0 0 0 0] [ 0 0 0 0 0 0 1 0 2 236 5 0 1 3 0 1 1 0 0 0] [ 0 0 0 1 0 0 0 0 0 0 256 0 0 0 0 0 0 0 0 0] [ 0 0 0 0 0 1 0 1 0 0 0 254 0 1 0 0 3 0 1 0] [ 0 1 0 1 5 1 3 1 0 2 1 1 197 1 2 0 0 0 0 0] [ 0 1 0 1 1 0 0 0 0 0 0 2 2 245 3 0 1 0 0 1] [ 0 2 0 0 1 0 0 1 0 0 0 0 0 1 238 0 1 0 1 1] [ 1 0 1 2 0 0 0 1 0 0 0 1 1 0 1 225 0 1 0 0] [ 0 0 1 0 0 0 1 0 1 0 0 1 0 0 0 0 212 0 2 0] [ 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 234 1 0] [ 0 0 0 0 0 0 1 0 0 0 0 2 1 1 0 1 7 3 193 4] [ 9 0 0 0 0 1 0 0 0 1 0 0 0 0 0 13 4 1 4 115]]
An accuracy of around 0.91.
If we look inside the vectorizer, we can see which tokens have been used to create our dictionary:
print (len(clf_7.named_steps['vect'].get_feature_names()))
145767
This Post Has One Comment
Very complex and informative
Thank you | https://adataanalyst.com/machine-learning/naiive-bayes-scikit-learn/ | CC-MAIN-2022-05 | refinedweb | 1,873 | 64 |
RuntimeHelpers.GetHashCode Method (Object)
Serves as a hash function for a particular object, and is suitable for use in algorithms and data structures that use hash codes, such as a hash table.
Assembly: mscorlib (in mscorlib.dll)
Parameters
- o
- Type: System.Object
An object to retrieve the hash code for.
Return ValueType: System.Int32
A hash code for the object identified by the o parameter.
The RuntimeHelpers.GetHashCode method always calls the Object.GetHashCode method non-virtually, even if the object's type has overridden the Object.GetHashCode method. Therefore, using RuntimeHelpers.GetHashCode might differ from calling GetHashCode directly on the object with the Object.GetHashCode method.
The Object.GetHashCode and RuntimeHelpers.GetHashCode methods differ as follows:
Object.GetHashCode returns a hash code that is based on the object's definition of equality. For example, two strings with identical contents will return the same value for Object.GetHashCode.
RuntimeHelpers.GetHashCode returns a hash code that indicates object identity. That is, two string variables whose contents are identical and that represent a string that is interned (see the String Interning section) or that represent a single string in memory return identical hash codes.
This method is used by compilers.
The common language runtime (CLR) maintains an internal pool of strings and stores literals in the pool. If two strings (for example, str1 and str2) are formed from an identical string literal, the CLR will set str1 and str2 to point to the same location on the managed heap to conserve memory. Calling RuntimeHelpers.GetHashCode on these two string objects will produce the same hash code, contrary to the second bulleted item in the previous section.
The CLR adds only literals to the pool. Results of string operations such as concatenation are not added to the pool, unless the compiler resolves the string concatenation as a single string literal. Therefore, if str2 was created as the result of a concatenation operation, and str2 is identical to str1, using RuntimeHelpers.GetHashCode on these two string objects will not produce the same hash code.
If you want to add a concatenated string to the pool explicitly, use the String.Intern method.
You can also use the String.IsInterned method to check whether a string has an interned reference.
The following example demonstrates the difference between the Object.GetHashCode and RuntimeHelpers.GetHashCode methods. The output from the example illustrates the following:
Both sets of hash codes for the first set of strings passed to the ShowHashCodes method are different, because the strings are completely different.
Object.GetHashCode generates the same hash code for the second set of strings passed to the ShowHashCodes method, because the strings are equal. However, the RuntimeHelpers.GetHashCode method does not. The first string is defined by using a string literal and so is interned. Although the value of the second string is the same, it is not interned, because it is returned by a call to the String.Format method.
In the case of the third string, the hash codes produced by Object.GetHashCode for both strings are identical, as are the hash codes produced by RuntimeHelpers.GetHashCode. This is because the compiler has treated the value assigned to both strings as a single string literal, and so the string variables refer to the same interned string.
using System; using System.Runtime.CompilerServices; public class Example { public static void Main() { Console.WriteLine("{0,-18} {1,6} {2,18:N0} {3,6} {4,18:N0}\n", "", "Var 1", "Hash Code", "Var 2", "Hash Code"); // Get hash codes of two different strings. String sc1 = "String #1"; String sc2 = "String #2"; ShowHashCodes("sc1", sc1, "sc2", sc2); // Get hash codes of two identical non-interned strings. String s1 = "This string"; String s2 = String.Format("{0} {1}", "This", "string"); ShowHashCodes("s1", s1, "s2", s2); // Get hash codes of two (evidently concatenated) strings. String si1 = "This is a string!"; String si2 = "This " + "is " + "a " + "string!"; ShowHashCodes("si1", si1, "si2", si2); } private static void ShowHashCodes(String var1, Object value1, String var2, Object value2) { Console.WriteLine("{0,-18} {1,6} {2,18:X8} {3,6} {4,18:X8}", "Obj.GetHashCode", var1, value1.GetHashCode(), var2, value2.GetHashCode()); Console.WriteLine("{0,-18} {1,6} {2,18:X8} {3,6} {4,18:X8}\n", "RTH.GetHashCode", var1, RuntimeHelpers.GetHashCode(value1), var2, RuntimeHelpers.GetHashCode(value2)); } } // The example displays output similar to the following: // Var 1 Hash Code Var 2 Hash Code // // Obj.GetHashCode sc1 94EABD27 sc2 94EABD24 // RTH.GetHashCode sc1 02BF8098 sc2 00BB8560 // // Obj.GetHashCode s1 29C5A397 s2 29C5A397 // RTH.GetHashCode s1 0297B065 s2 03553390 // // Obj.GetHashCode si1 941BCEA5 si2 941BCEA5 // RTH.GetHashCode si1 01FED012 si2 01FED012
Available since 8
.NET Framework
Available since 1.1
Portable Class Library
Supported in: portable .NET platforms
Silverlight
Available since 2.0
Windows Phone Silverlight
Available since 7.0
Windows Phone
Available since 8.1 | https://technet.microsoft.com/en-us/library/11tbk3h9.aspx | CC-MAIN-2016-36 | refinedweb | 800 | 60.31 |
User Name:
Published: 23 Feb 2009
By: Scott Mitchell
Download Sample Code
Scott Mitchell explains how to take advantage of syndication feeds.
Virtually every website that produces new content on a regular or semi-regular basis provides a syndication feed. Syndication feeds were first used with blogs as a
means for sharing the most recent entries, but soon spread to news sites. For instance, CNN.com has a feed that syndicates their breaking news headlines and one that
syndicates the latest sports news, along with many others. Today, syndication feeds are used to share all sort of content: Flickr syndicates each user's most recently
uploaded images; each project in CodePlex (Microsoft's open-source project hosting website) has a feed that syndicates the latest releases; and Twitter syndicates each
user's most recent tweets.
A syndication feed contains information about the most recently published items using a standardized XML format. Because the
syndication feed conforms to a known XML standard, its data can be parsed by any computer program that knows how to work with the syndication standard. For example,
you can subscribe to various syndication feeds using Microsoft Outlook's RSS Feeds folder. Adding a feed to this folder displays the syndication feed's content items;
furthermore, Outlook automatically checks the syndication feed every so often to determine if there is new content. Syndication feeds are also routinely consumed by other
websites. The ASP.NET Forums includes a feed that syndicates the 25 most recent posts. This feed is consumed and
displayed in the ASP.NET Developer Center on Microsoft's MSDN website. Similarly, you
could consume the Twitter feed that syndicates your latest tweets and display them on your blog.
Creating and consuming syndication feeds from an ASP.NET
application is a walk in the park thanks to the classes in the System.ServiceModel.Syndication namespace. This article shows how to use
these classes to create a syndication feed for your website as well as how to use them to consume and display syndication feeds from other websites. These techniques are
demonstrated in a demo web application, which is a website for a fictional book company, Nile.com. This demo site includes a feed that syndicates the most recently
published book titles; it also consumes and displays the headlines syndicated by RareBookNews.com.
Before we delve into the meat of this article it would be worthwhile to spend a moment reviewing the history of syndication standards and the support for working with
syndication feeds in .NET. The first syndication feed standard was proposed by Dave Winer in 1997.
Over the years this initial syndication standard morphed into what is today known as RSS. The
RSS specification was frozen in 2003 with the release of RSS 2.0.
Shortly after RSS 2.0 was released, Sam Ruby proposed a new syndication format to address the shortcomings of the now frozen RSS standard. Over the
next several months Ruby and others created a new syndication format named Atom. The Atom 1.0
standard was ratified by the Internet Engineering Task Force (IETF) in 2007.
The .NET Framework did not provide
any built-in functionality for creating or consuming syndication feeds until version 3.5 with the introduction of the System.ServiceModel.Syndication
namespace. property, which is a collection of SyndicationItem objects. The SyndicationFeed
class also has a static Load method that parses and loads the information from a specified RSS 2.0 or Atom 1.0 syndication feed.
System.ServiceModel.Syndication
Title
Description
Links
Items
SyndicationFeed
Load
In addition to the
SyndicationFeed and SyndicationItem classes, the System.ServiceModel.Syndication namespace also includes two
formatter classes, Atom10FeedFormatter
and Rss20FeedFormatter. These classes
take a SyndicationFeed object and generate the corresponding XML content that conforms to either the Atom 1.0 or RSS 2.0 specificiations.
SyndicationItem
In 2000 a group at O'Reilly created their own syndication format standard that differed from the then current RSS 0.91 standard. Unfortunately,
this group named their new syndication format RSS 1.0, even though it was not related to RSS 0.91 (which later became RSS 2.0). Therefore, RSS 2.0 and RSS 1.0 are
significantly different from one another. Similarly, a number of sites started using Atom before it was standardized by the IETF, most notably Google. As a result, many of
Google's Atom syndication feeds still use Atom 0.3 instead of Atom 1.0.
The SyndicationFeed class can only parse feeds that conform to the RSS
2.0 or Atom 1.0 standards. The good news is that the vast majority of the syndication feeds on the web use one of these two standards. And if you need to parse an RSS 1.0
or Atom 0.3 feed you can translate the XML content into Atom 1.0-compliant XML on the fly before loading it into a SyndicationFeed object. For more
information on this technique check out Daniel Cazzulino's blog entry on upgrading Atom 0.3 feeds on the fly.
Every website that routinely publishes any kind of content should offer a syndication feed. A syndication feed can be a static XML file that gets created automatically
whenever new content is published or it can be a dynamic web page that gets the latest published items and emits the appropriate XML markup. The demo website uses the
latter method.
Creating a syndication feed from a dynamic web page involves three steps:
Let's walk through each of these steps.
The syndication feed web page's output is generated programmatically using the classes in the System.ServiceModel.Syndication namespace.
Therefore, when creating this page make sure you do not associate it with a master page; once the page has been created remove all of the content from the.aspx
page except for the @Page directive.
.aspx
@Page
The demo website's syndication feed is implemented by the Feed.aspx page. The
contents of the .aspx page follow:
Feed.aspx
.aspx
Note that in addition to the @Page directive the .aspx portion also includes an @OutputCache directive. The
@OutputCache directive caches the content returned by the syndication feed for the specified Duration - in this case, 60 seconds. In other
words, when the first user requests the syndication feed the XML output is cached. For the next 60 seconds, requests for the syndication feed return the XML output stored in
the cache rather than having it regenerated by the page. The VaryByParam attribute indicates that the output caching engine should maintain a separate
cached output for each unique Type value. As we'll see shortly, the Feed.aspx page accepts an optional Type querystring
parameter that indicates the syndication format used. Visiting Feed.aspx or Feed.aspx?Type=Atom returns the syndicated content in
accordance with the Atom 1.0 standard, whereas visiting Feed.aspx?Type=RSS returns XML that is in accordance with the RSS 2.0 standard.
@OutputCache
Duration
VaryByParam
Type
Feed.aspx?Type=Atom
Feed.aspx?Type=RSS
For
more information on output caching check out my article, Output Caching in ASP.NET.
The demo application's underlying author and title information comes from the pubs database, which you'll find
in the App_Data folder, and uses LINQ to SQL as the Data Access
Layer. The LINQ to SQL classes are defined in the Pubs.dbml file, which you'll find in the App_Code folder.
App_Data
Pubs.dbml
App_Code
The Page_Load
event handler in Feed.aspx starts with the following statement, which retrieves the books from the Titles table ordered in reverse
chronological order of their publish date.
Page_Load
Titles
That was easy!
The last step is to turn the most recently published items into the appropriate syndication feed XML markup. After the dataItems variable is created (see
the above code snippet) the Feed.aspx page defines a constant that specifies the maximum number of recently published items to include in the feed and
then determines whether to generate an RSS 2.0 or Atom 1.0 syndication feed. The value of the Type querystring parameter dictates which syndication
standard is used.
dataItems
Regardless of what syndication standard you use, the code for constructing the SyndicationFeed object is (mostly) identical. There are a few
edge cases that we'll address momentarily. However, the final lines of code that render the SyndicationFeed object into the appropriate XML markup differ
based on the syndication standard. Furthermore, the syndication feed should send back a different Content-Type header depending on the syndication
format: application/atom+xml for Atom 1.0 feeds and application/rss+xml for RSS 2.0 feeds.
Content-Type
application/atom+xml
application/rss+xml
We are now ready to create the SyndicationFeed object and set its properties. As you can see, there are a number of properties associated with the
syndication feed, such as: Title, Description, Links, Copyright, and Language, among
others.
Language
There is one method used in the above code snippet that is not part of the .NET Framework, and that's GetFullyQualifiedUrl (it is defined
further down in the code-behind class). This method accepts a relative URL like ~/Default.aspx and returns the fully qualified URL, like.
GetFullyQualifiedUrl
~/Default.aspx
The next step is to define the syndication feed's items. This is done by creating a List of
SyndicationItem objects that contains a SyndicationItem instance for each item to appear in the feed. Once this List has
been constructed it can be assigned to the SyndicationFeed object's Items property.
List
The foreach loop uses the Take method to iterate through only the first maxItemsInFeed number of books in
the dataItems collection. In each iteration a SyndicationItem object is created, its properties are set, and the object is added to
feedItems, the List of SyndicationItems created immediately before the start of the loop. Note that the Links
collection for each SyndicationItems object contains a link to the Titles.aspx page. Typically each content item has a unique URL, such as, or, and those unique URLs would be
what you would link to. However, I did not add such a unique page for each book for this demo and therefore decided to have all content items link to the same
URL.
foreach
Take
maxItemsInFeed
feedItems
SyndicationItems
Links
Titles.aspx
There are two checks I added here to ensure that the information added to the feed conforms the the syndication specification being used. At the top of the loop
you can see that I bypass adding the current book to the feed's collection of SyndicationItems if there are no authors for the book. This is because the Atom 1.0 specification
requires author information with each item. I could have alternatively added a "dummy" author value for such books, with a name and e-mail address of "Unknown" and
"unknown@example.com", for example. (No such check is needed when generating an RSS 2.0 feed as the author information is optional in the RSS 2.0 spec.) The second
check is at the end of the inner foreach loop, which adds the author information to the current SyndicationItem object. Because the RSS 2.0
specification allows no more than one author we must break out of this loop after the first iteration to ensure that multiple authors are not added.
The final step is to
generate the XML markup for the syndication feed and output it to the Response stream. This is handled via an XmlWriter object and either
the Atom10FeedFormatter or Rss20FeedFormatter class, depending on whether you want to generate an Atom 1.0 or RSS 2.0
feed.
Response
XmlWriter
Atom10FeedFormatter
Rss20FeedFormatter
The XmlWriter object feedWriter sends its output to the Response object's OutputStream.
Next, either an Atom10FeedFormatter or Rss20FeedFormatter class is instantiated and its WriteTo method is called, which
emits the rendered markup to feedWriter, which outputs it to the client.
feedWriter
OutputStream
WriteTo
feedWriter
That's it! We now have a fully functional syndication feed that can emit either
RSS 2.0 or Atom 1.0 XML. For example, visting Feed.aspx?Type=RSS returns the following content:
In addition to creating and rendering syndication feeds, the SyndicationFeed class can also be populated from an existing syndication feed. With a few
lines of code you can build a SyndicationFeed object from an RSS 2.0 or Atom 1.0 feed from another site and then display it in a web page. The
Default.aspx page in the demo application provides such functionality, pulling the recent headlines from the RareBookNews.com syndication feed,. To load the contents of a syndication feed use the SyndicationFeed
class's Load method, passing in
an XmlReader instance. Once the SyndicationFeed has been loaded you can bind its Items collection to a data Web
control.
Default.aspx
SyndicationFeed
XmlReader
The Default.aspx page includes a ListView control named BookNewsList and the following code in the Page_Load
event handler:
BookNewsList
As you can see, the SyndicationFeed object's Items collection is cached for one hour. It's good netiquette to cache the
syndication feed rather than re-requesting it every single time someone visits this page. It's also important to load the SyndicationFeed object in a
try...catch block so that you can gracefully recover if the RareBookNews.com site is down.
try...catch
The BookNewsList ListView's
ItemTemplate emits the HTML returned by the DisplayFeedItem method, which is defined in the Default.aspx page's
code-behind class. This method takes in a SyndicationItem object and generates the HTML for display. Specifically, it creates a hyperlink that points to the content items URL
and displays the title as text. It also shows the item's published date and description.
ItemTemplate
DisplayFeedItem
Figure 1 shows the Default.aspx page when viewed through a browser. The text circled in red were the latest news headlines from RareBookNews.com at the
time of writing.
Syndication has quickly grown from a technology that was strictly the domain of blogs to one that is embraced by all sorts of websites that routinely publish content.
Creating RSS 2.0 or Atom 1.0 syndication feeds, or consuming them, can be accomplished using the classes in the System.ServiceModel.Syndication
namespace. Use the SyndicationFeed and SyndicationItem classes for modeling syndication feeds and their items and the
Atom10FeedFormatter and Rss20FeedFormatter classes for rendering a SyndicationFeed object into the appropriate XML
markup. Be sure to download the demo application. It shows how to create an ASP.NET page that emits either an RSS 2.0 or Atom 1.0 syndication feed, as well as how to
consume and display another site's syndication feed.
DisplayFeedItem((Container
BookNewsList.DataSource = items;
BookNewsList.DataSource = items.Take(5);
If you subscribe from Office 2007, one gets duplicate entries in Outlook. There must be a way to prevent this.
I have been battling with this for a couple of days.
I added LastUpdatedTime to the syndication item. All RSS items in outlook now have the same received datetime.This would be so valuable if I could get it to work.Consuming the feed from a web page works just fine, of course, but what about Outlook.Greg Farquhar
Loved the article, and also read some others, but I'm having a recurring problem in vb. (Event went and tried to figure out the namespace at Microsoft - they're showing it this way also.) The first item gives the error in Visual Studio 2008 of (Microsoft.VisualBasic.Collection has no type parameters and so cannot have type arguments). The second (Type 'List' is not defined). Any help appreciated.
Dim feedItems As Collection(Of SyndicationItem) = New Collection(Of SyndicationItem)()Dim feedItems As New List(Of SyndicationItem)()
TTFN - Kent
Please login to rate or to leave a comment.
Link to us
All material is copyrighted by its respective authors. Site design and layout
is copyrighted by DotNetSlackers.
Advertising Software by Ban Man Pro | http://dotnetslackers.com/articles/aspnet/How-to-create-a-syndication-feed-for-your-website.aspx | CC-MAIN-2016-26 | refinedweb | 2,648 | 57.87 |
With the ever-increasing volume of data, it is impossible to tell stories without visualizations. Data visualization is an art of how to turn numbers into useful knowledge. Using Python we can learn how to create data visualizations and present data in Python using the Seaborn package.
In this post we are going to learn how to create the following 9 plots:
- Scatter Plot
- Histogram
- Bar Plot
- Time Series Plot
- Box Plot
- Heat Map
- Correlogram
- Violin Plot
- Raincloud Plot
Python Data Visualization Tutorial: Seaborn
As previously mentioned in this Python Data Visualization tutorial we are mainly going to use Seaborn but also Pandas, and Numpy. However, to create the Raincloud Plot we are going to have to use the Python package ptitprince.
Installing Seaborn
Before we continue with this Python plotting tutorial we are going to deal with how to install the needed libraries. One of the most convenient methods to install Seaborn, and it’s dependencies, is to install the Python distribution Anaconda. This will give you many useful Python libraries for doing data science (e.g., Numpy, SciPy, Matplotlib, Seaborn).
How to Install Seaborn using Pip
<pre><code class="lang-none">pip install seaborn</code></pre>
How to Install ptitprince
In the last Python data visualization example, we are going to use a Python package called ptitprince. This package can be installed using Pip (as this post is written, it’s not available to install using Anacondas package manager conda):
pip install ptitprince
Learn more about installing, using, and upgrading Python packages in the more recent posts. For instance, the post about using pipx to install packages directly to virtual environment may prove useful. Moreover, the post about how to install Python packages using conda and pip is also very handy. Finally, sometimes when we use pip to install Python packages we may become aware that we need to update pip to the latest version. This can be done using pip itself.
Scatter Plot in Python using Seaborn
Scatter plots are similar to line graphs. That is we use the horizontal and vertical axes to visualize data points. However, the aim is different; Scatter plots can reveal how much one variable is affected by another (e.g., correlation).
Scatter plots usually consist of a large body of data. The closer the data points come when plotted to make a straight line, the higher the correlation between the two variables, or the stronger the relationship.
In the first Python data visualization example we are going to create a simple scatter plot. As previously mentioned we are going to use Seaborn to create the scatter plot.
Note, it should be possible to run each code chunk by its own. Note, however, that some code lines are optional. For instance, %matplotlib inline is used to display the plots within the Jupyter Notebook and plt (imported from matplotlib.pyplot) is used to change the size of the figures.
Python Scatter Plot Example:
%matplotlib inline import matplotlib.pyplot as plt import pandas as pd import seaborn as sns # Suppress warnings import warnings warnings.filterwarnings('ignore') # Optional but changes the figure size fig = plt.figure(figsize=(12, 8)) df = pd.read_csv('') ax = sns.regplot(x="wt", y="mpg", data=df)
In all examples in this Python data visualization tutorial, we use Pandas to read data from CSV files. More on working with Pandas and CSV files can be found in the blog post “Pandas Read CSV Tutorial“.
- How to Make a Scatter Plot in Python using Seaborn
- Pair plots, containing scatter plots, can be created with Pandas scatter_matrix method.
Changing the Labels on a Seaborn Plot
In the next example, we are going to learn how to configure the Seaborn plot a bit. First, we are going to remove the confidence interval but we are also going to change the labels on the x-axis and y-axis.
import pandas as pd import seaborn as sns fig = plt.figure(figsize=(12, 8)) ax = sns.regplot(x="wt", y="mpg", ci=False, data=df) ax.set(xlabel='MPG', ylabel='WT')
For more about scatter plots:
- How to make Scatter Plots in Python (YouTube Video)
- Exploratory Data Analysis with Pandas Scipy and Seaborn
Histogram in Python using Seaborn
A histogram is a data visualization technique that lets us discover, and show, the distribution (shape) of continuous data. Furthermore, histograms enable the inspection of the data for its underlying distribution (e.g., normal distribution), outliers, skewness, and so on.
Python Histogram Example
In the next Python data visualization example, we will create histograms. Histograms are fairly easy to create using Seaborn. In the first Seaborn histogram example, we have turned set the parameter kde to false. This so that we only get the histogram.
import pandas as pd import seaborn as sns df = pd.read_csv('') fig = plt.figure(figsize=(12, 8)) sns.distplot(df.Temp, kde=False)
Now it is, of course, also possible to learn how to plot a histogram with Pandas. Hint: just type df.hist().
Grouped Histogram in Seaborn
If we want to plot the distribution of two conditions on the same Seaborn plot (i.e., create a grouped histogram using Seaborn) we first have to subset the data. In the histogram example below we loop through each condition (i.e., the categories in the data we want to visualize).
In the loop, we will subset the data and then we use Sebaorn distplot and create the histograms. Finally, we change the x- and y-axis labels using Seaborn set.
import pandas as pd import seaborn as sns df = pd.read_csv('', index_col=0) fig = plt.figure(figsize=(12, 8)) for condition in df.TrialType.unique(): cond_data = df[(df.TrialType == condition)] ax = sns.distplot(cond_data.RT, kde=False) ax.set(xlabel='Response Time', ylabel='Frequency')
Bar Plots in Python using Seaborn
Bar plots (or “bar graphs”) are a type of data visualization that is used to display and compare the number, frequency or other measures (e.g. mean) for different discrete categories of data. This is probably one of the most common ways to visualize data. Of course, like many of the common plots, there are many ways to create bar plots in Python (e.g., with Pandas barplot method).
Bar plots also offer some flexibility. That is, there are several variations of the standard bar plot including horizontal bar plots, grouped or component plots, and stacked bar plots.
Seaborn Bar Plot Example
In this example, we are starting by using Pandas groupby to group the data by “cyl” column. After we have done that we create a bar plot using Seaborn.
import pandas as pd import seaborn as sns df = pd.read_csv('', index_col=0) df_grpd = df.groupby("cyl").count().reset_index() fig = plt.figure(figsize=(12, 8)) sns.barplot(x="cyl", y="mpg", data=df_grpd)
More on how to work with Pandas groupby method:
Setting the Labels of a Seaborn Bar Plot
When displaying data in Python it, of course, makes sense to be as clear as possible. As you can see in the figure
In the next example, we are going to change labels because the y-axis actually represents the count of cars in each cylinder category:
import pandas as pd import seaborn as sns df = pd.read_csv('', index_col=0) df_grpd = df.groupby("cyl").count().reset_index() fig = plt.figure(figsize=(12, 8)) ax = sns.barplot(x="cyl", y="mpg", data=df_grpd) ax.set(xlabel='Cylinders', ylabel='Number of Cars for Each Cylinder')
Note, there might be better ways to display your data than using bar plots. Some researchers have named bar plots “dynamite plots” or “barbar plots”. This because when visualizing the mean, you might miss the distribution of the data (e.g., see Weissgerber et al., 2015).
Time Series Plots using Seaborn
A time series plot (also known as a time series graph or timeplot) is used to visualize values against time. In the Python Time Series Plot example, below, we are going to plot number of train trips each month.
import pandas as pd import seaborn as sns train_data = "" df = pd.read_csv(train_data) fig = plt.figure(figsize=(12, 8)) sns.lineplot(x="month", y="total_num_trips", ci=None, data=df)
Grouped Time Series Plots using Seaborn
It is further possible to visualize the value in different groups. In the next timplot example we are going to display the number of trips from the train stations in Paris. Here we use str.contains to select the rows in the dataframe containing a certain string (i.e., “Paris”). We use the parameter hue to get a separate line for each category in the data (i.e., departure station).
import pandas as pd import seaborn as sns df = pd.read_csv("") fig = plt.figure(figsize=(12, 8)) sns.lineplot(x="month", y="total_num_trips", hue="departure_station", ci=None, data=df[df.departure_station.str.contains('PARIS')])
See the more recent post about data visualization in Python and how to make a Seaborn line plots.
Box Plots in Python using Seaborn
In the next examples, we are going to learn how to visualize data, in python, by creating box plots using Seaborn. A Box Plot is a data visualization technique that is a little better compared to bar plots, for instance. Box Plots will visualize the median, the minimum, the maximum, as well as the first and fourth quartiles. Any potential outliers will also be apparent in the plot (see image below, for instance).
Python Box Plot Example:
import pandas as pd import seaborn as sns df = pd.read_csv('', index_col=0) fig = plt.figure(figsize=(12, 8)) sns.boxplot(x="vs", y='wt', data=df)
Heat Map in Python using Seaborn
A heat map (or heatmap) is a data visualization technique where the individual values contained in a matrix (or dataframe) are represented as color. In the Seaborn heat map example, below, we are going to select a few of the columns from the mtcars dataset to create a heat map plot.
import pandas as pd import seaborn as sns df = pd.read_csv('', index_col=0) fig = plt.figure(figsize=(12, 8)) ax = sns.heatmap(df[['mpg', 'disp', 'hp', 'drat', 'wt', 'qsec']])
Correlogram in Python
We continue with a Python data visualization example in which we are going to use the heatmap method to create a correlation plot. Note, a correlogram is a way to visualize the correlation matrix. Before we create the correlogram, using Seaborn, we use Pandas corr method to create a correlation matrix. We are then using numpy to remove to the upper half of the correlation matrix.
import numpy as np import pandas as pd import seaborn as sns # Correlation matrix corr = df.corr() mask = np.zeros_like(corr, dtype=np.bool) mask[np.triu_indices_from(mask)] = True fig = plt.figure(figsize=(12, 8)) sns.heatmap(corr, mask=mask, vmax=.3, center=0, square=True, linewidths=.5, cbar_kws={"shrink": .5})
Now, if we just want to look at the coefficients, or use the data in a report, we can also create a correlation matrix in Python using NumPy or Pandas.
Violin Plots in Python using Seaborn
In the next Python data visualization example, we are going to learn how to create a violin plot using Seaborn. A violin plot can be used to display the distribution of the data and its probability density. Furthermore, we get a visualization of the mean of the data (white dot in the center of the box plot, in the image below).
import pandas as pd import seaborn as sns df = pd.read_csv('', index_col=0) sns.violinplot(x="vs", y='wt', data=df)
Raincloud Plots in Python using ptitprince
Finally, we are going to learn how to create a “Raincloud Plot” in Python. As mentioned in the beginning of the post we need to install the package ptitprince to create this data visualization (pip install ptitprince).
Now you may wonder what a Raincloud Plot is? This is a very informative method to display your raw data (remember, bar plots may not be the best method). A Raincloud Plot combines the boxplot, violin plot, and the scatter plot.
Python Raincloud Plots Example:
import pandas as pd import ptitprince as pt df = pd.read_csv('') ax = pt.RainCloud(x = 'Species', y = 'Sepal.Length', data = df, width_viol = .8, width_box = .4, figsize = (12, 8), orient = 'h', move = .0)
Learn more about how to change the size of the Seaborn plots in Python.
Raincloud Plots in Python Video:
Here’s a YouTube video showing how to install ptitprince and how to create the two raincloud plots in this post:
If we need to save the plots, that we have created in Python, we can use matplotlibs pyplot.savefig method. In a recent post, we learn how to specifically save Seaborn plots as PDF, SVG, EPS, PNG, and TIFF files.
Summary
In this Python data visualization tutorial, we have learned how to create 9 different plots using Python Seaborn. More precisely we have used Python to create a scatter plot, histogram, bar plot, time series plot, box plot, heat map, correlogram, violin plot, and raincloud plot. All these data visualization techniques can be useful to explore and display your data before carrying on with the parametric data analysis. They are also very handy for visualizing data so that other researchers can get some information about different aspects of your data.
Leave a comment below if there are any data visualization methods that we need to cover in more detail. Here’s a link to a Jupyter notebook containing all the 9 examples covered in this post.
References
Allen M, Poggiali D, Whitaker K et al. Raincloud plots: a multi-platform tool for robust data visualization [version 1; peer review: 2 approved]. Wellcome Open Res 2019, 4:63.)
Weissgerber TL, Milic NM, Winham SJ, Garovic VD (2015) Beyond Bar and Line Graphs: Time for a New Data Presentation Paradigm. PLOS Biology 13(4): e1002128.
Excellente article
Thank you, Atil. Glad you liked it.
Thanks Eric.!
That’s usefull for better programming.
Hey Jacques! Thanks for your comment, glad you liked it.
“Python Rainclod Plot Example” – is that a spelling mistake?
Hi Derek,
Yes, of course it should say “Python Raincloud Plots Example”. Thank you for pointing this out.
Best Regards,
Erik | https://www.marsja.se/python-data-visualization-techniques-you-should-learn-seaborn/?utm_source=rss&utm_medium=rss&utm_campaign=python-data-visualization-techniques-you-should-learn-seaborn | CC-MAIN-2020-24 | refinedweb | 2,360 | 66.23 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.