Now is an exciting time to work with developer tools. With a 25% increase in monthly active users of Visual Studio, 1.3 million active monthly users of Visual Studio Code, and a two-fold increase in active users of our Mac IDEs, I think our customers are excited too.
Since we released the Visual Studio 2017 Release Candidate, we’ve had nearly 700,000 downloads! We’ve been busy taking customer feedback and enhancing the user experience to deliver the most powerful and productive version of Visual Studio yet. We’ve also been fine-tuning the Visual Studio family of tools. In November, we introduced previews of Visual Studio for Mac and Visual Studio Mobile Center and made Visual Studio Team Foundation Server 2017 generally available.
Now, the day that we have been working toward is here. I am excited to share that Visual Studio 2017 is generally available today. I encourage all of you to download Visual Studio 2017 today! We are also delivering updates across the Visual Studio product family, and adding new value for Visual Studio subscribers and Visual Studio Dev Essentials members.
Visual Studio 2017: The Most Productive Version Yet
With Visual Studio 2017, we’ve invested in several key areas – refining the fundamentals of the developer experience, streamlining cloud and mobile development, and making the IDE faster and leaner.
Cloud and mobile development were top of mind as we built Visual Studio 2017. For streamlined cloud development, built-in tools provide integration with .NET Core, Azure applications, microservices, Docker containers, and more. It is easier than ever to build and deploy applications and services to Azure, directly from the IDE. Visual Studio 2017 with Xamarin makes it faster for you to create mobile apps for Android, iOS, and Windows through updates like advanced debugging and profiling tools.
We also heard loud and clear that Visual Studio needs to be faster and leaner, even as applications and projects get larger. So we built a brand-new installation experience that is lightweight and modular. We also made multiple enhancements to improve Visual Studio performance across the board.
I hope that you’ll download Visual Studio 2017, try it out, and let us know what you think. You can also learn more in John Montgomery’s post covering all that’s new in Visual Studio 2017.
Updates to Visual Studio for Mac, Visual Studio Team Foundation Server and Visual Studio Mobile Center
With 5 million Visual Studio Team Services registered users and a two-fold increase in downloads of our Mac IDEs over the past six months, we are seeing customers realize the potential of the full Visual Studio family. Today, we’re bringing the next wave of updates with Team Foundation Server 2017 Update 1, Visual Studio for Mac Preview 4, and updates to the Visual Studio Mobile Center Preview.
- Visual Studio for Mac Preview 4. Visual Studio for Mac is our IDE, made for the Mac, to build mobile, cloud, and macOS apps. Since the introduction at Connect(); in mid-November, the team has been busy and has added updated .NET Core project support, NuGet and mobile tooling improvements, and implemented many bug fixes and performance optimizations. You can read more about Visual Studio for Mac in Miguel’s blog post, where you can give it a try! Please continue to share feedback as we shape the product.
- Visual Studio Team Foundation Server 2017 Update 1 available. Today, we are releasing Team Foundation Server 2017 Update 1, the collaboration platform for every developer. Team Foundation Server 2017 Update 1 adds value for on-premises customers, including a new process template managing experience, npm support in package management, additional repository permission management, pull request improvements, test impact analysis, branch policy improvements, and a personalized home page. For more information on what’s new in Team Foundation Server 2017, check out Brian Harry’s blog.
- Visual Studio Mobile Center preview updates. Mobile Center now has expanded support for mobile apps beyond Swift, Objective-C, and Java to include support for mobile apps built with Xamarin and React Native, as well as enhanced analytics. You can try the Visual Studio Mobile Center Preview today for free by going here.
New value for Visual Studio Enterprise subscribers and Visual Studio Dev Essentials members
With the release of Visual Studio 2017, we are bringing all-new benefits for Visual Studio Enterprise subscribers and Dev Essentials members. The Enterprise DevOps Accelerator offer brings organizations everything they need to implement DevOps at scale and modernize their toolchain, including Visual Studio Enterprise, continuous deployment services with continuous integration (CI) and cloud-based load testing, beta distribution through HockeyApp, a discount on Azure compute resources, and on-site expert DevOps coaching. Find more information here. Further, Visual Studio subscribers and Dev Essentials members can log in to their respective portals for additional training and support offers from Microsoft and our partners.
We hope that you’re as excited about Visual Studio 2017 as we are. Make sure to download it today and keep the feedback coming.
Join the conversation
When is Windows Phone 8.1 and also WebAssembly support coming to Visual Studio 2017?
Thank you!
You can add your vote here for WebAssembly support:
It is currently the #2 ask on Visual Studio’s feature request boards. FWIW, efforts are already underway to port Mono to WebAssembly.
Thank you for your question. We’ve got information on the platforms supported by Visual Studio 2017 here:
Projects for Windows Store 8.1 and 8.0, and Windows Phone 8.1 and 8.0 are not supported in this release. To maintain these apps, continue to use Visual Studio 2015. To maintain Windows Phone 7.x projects, use Visual Studio 2012.
With regards to WebAssembly, we’d encourage you to suggest support on UserVoice.
-Paul Chapman
Visual Studio Program Manager
There is also a UserVoice item for the Windows (Phone) 8.1 Store apps in VS2017:
Windows Phone 8.1 is the most used Windows mobile OS, so we still need to build WP8.1 apps.
Otherwise, you need to offer Windows 10 Mobile for Windows Phone 8.1 users (who have a compatible device) WITHOUT Update Advisor.
hi Julia,
will there be an ISO version of the full installer of Visual Studio 2017 Professional / Enterprise for subscribers?
Hi Daniel, great question! You can find out how to create an offline installer here:
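In short, you run the edition’s bootstrapper with the --layout switch to download a local layout first. A minimal sketch (the bootstrapper name and target folder below are illustrative and depend on the edition you downloaded):
vs_enterprise.exe --layout C:\vs2017offline --lang en-US
Once the layout has finished downloading, running the bootstrapper from that folder installs Visual Studio without an internet connection.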
depending on proxy configuration it may not always be possible to create the ISO myself.
is this a general change, that full ISOs for VS are no longer provided by MSFT?
yes, please release official ISOs
Why do we have to keep asking every release? Is it some sort of endurance test?
I know! Who wants to close their existing Visual Studio and stop working while the new one downloads! The horror of taking a break!
ISOs are very important for those who want to install VS on a non-networked machine or even to install it in multiple computers while saving bandwidth. I could list several reasons why someone would need an ISO instead of a web installer.
Just because you don’t see a reason why others are asking for it doesn’t mean it’s not needed.
Hi all, thanks so much for your feedback on this. We’d love to encourage you to upvote this feedback item on UserVoice, and/or add supplementary information on how an ISO image would solve problems you have:
We’ve posted a few thoughts on the topic there and look forward to hearing more feedback from you. Best wishes, Tim
I’m just replying to see if the incredible blog software here will render one word per line. I have faith that such a miraculous feat can be accomplished here.
L
O
L
!
!
!
I’m glad your experiment was *somewhat* successful. It didn’t exactly render one word per line, but it did render a few letters per line. Meanwhile, you also made it so everyone who reads anything past your comment, like me, has to irritatingly scroll past it, or guess a number and press PgDn that many times lol! >.< No offense, but for the sake of everyone else and your own reputation as an IT professional, you could’ve just asked the blog software’s developers, if you spent a few extra minutes finding out how to contact them. =)
Why thank you for your feedback, Sean. You will be happy to know that I and many others have contacted the blog software’s developers regarding this issue. Well, not this issue in particular but a great many others. In addition to the singularity column of shame, you cannot edit or upvote/downvote content for this blog. As for professional IT reputation, consider that this blog is run by a technology company and this is the primary medium that it uses to engage its community.
I’ve always found mounting an ISO into a VM to be the easiest way of giving it access to a large installation such as Visual Studio. You can also hash check the overall ISO.
I really need the ISO; the --layout option wouldn’t work without admin rights in my office. I need to download the ISO image in order to transfer it to my home computer.
VS 2017 Enterprise is massive. The files for an offline installer are a bit over 20 GB (more if you want languages other than US English). Does the UDF format allow ISO files that large? Maybe a VHD would make more sense :-(.
I don’t see a real difference between downloading a single 20 GB file, or using the web installer’s --layout option, and letting it grab all the pieces. Either way you’ve got to download 20 GB of stuff. If that sounds painful, it will be painful either way. The web installer has built in retry/restart support, so it’s probably a better choice if you’ve got poor connectivity.
No No No!
Web installer is no good as an iso.
It cannot do p2p/torrent;
It cannot use proxy;
It cannot let me check the hash value;
It cannot show the progress percentage;
…
Only an id**t who calls himself John Doe would say web installer is better.
Actually, it’s not the same way. For one thing, there are still some machines that aren’t connected to a network or the Internet. It’s rare, but not *everyone* has a connection, including people who code. Also, hard drives, including the new SSDs, have a habit of crashing or being corrupted, whether it’s the drive or the data on it. This is why people use backups. Not to mention the fact that a lot of people (and a lot of companies) like to do a clean install of their system every few years, and I don’t know of anyone (except maybe you) who actually doesn’t mind downloading 20+ GB more than once. Most people will put an image on backup storage media (e.g., DVD, BluRay, etc.). SaaS might change that slightly, but not everyone likes the idea of that (including me).
Now, I’m almost positive you and several other people could easily come up with a “reason” why my arguments don’t make sense. But this is because they don’t make sense *to you* (and probably many others). People tend to be different, despite how much the Left, (for instance, among others—I’m not trying to pick on them), wants everyone to be *exactly* identical down to the chromosomal level. So people have different views and opinions of things. This is a *good* thing, as the world would be very boring otherwise.
But it also means that, just as you might think my opinions (and those of many others, including the commentor you’re responding to) are not valid, I and others don’t think yours are. The point is, you can rant all you want, but there is no single “right” way to install a large program. In some ways, we are both right. I’m only defending this point. I wouldn’t even have mentioned it if you wouldn’t have brought it up, but when someone claims that *only* his/her method of doing something is correct, when there is no logical basis to validate it (or conversely invalidate other peoples’ methods), I can’t in good conscience not defend the methods of others, especially when I’m one of the “others”.
I’m certainly not trying to start a “Wares-War” or anything. I just wish people would not always feel compelled to try and invalidate someone else’s way simply because they believe theirs is better. Anyway, the point is, you and those like you do things your way, and I and those like me will do things ours, but neither is correct or incorrect, at least in this case. Arguing to the contrary is pointless and a waste of time and energy, as is taking *this* reply any further. I’ll therefore leave you to your devices and wish you the best! =)
When will the data science workload be available?
Hi Gaëtan,
The DS workload consisted of R, Python and F#. F# is in the main release. Python is in Preview for 2017. R is not in 2017 yet.
Python and R are undergoing some work (translation into 14 languages, Accessibility improvements, etc.). Once that work is done, they’ll be back in the main 2017 release in the DS workload. We’re targeting ~ 15.2/15.3.
Note that R Tools (RTVS) for 2015 will be shipping in about a week and will be usable side by side with 2017.
Nothing about desktop development (WPF and Windows Forms) in the launch keynote events.
Only the ugly and limited mobile UWP garbage..
AND NOT A MOMENT TOO SOON!!! 😉 😉 😉
And please be sure to take the keys from UWP group. They have had enough to drink.
Cool! And very good and interesting info. Thanks!
As a long-time WinForms developer, this is great news. I understand MS reacting to newer platforms and UI trends, and thus pursuing different UI techniques, but if WinForms is going to one day be truly/totally stopped for enhancement, I’d really like to see it be in much better shape than it’s in now. I’d mainly like to see some more modern controls, and for some of the current ones to be made to look more modern.
I couldn’t figure out where to post this feedback, but there is something wrong with the RSS feed for developer tool blogs. Go to the blogs page and click the “All Developer Tools Blogs” RSS link near the middle of the page. Paste that URL into Outlook 2017. BUG – Every RSS feed title shows up doubled, like “Visual Studio 2017 ReleasedVisual Studio 2017 Released”, and if you click the “View Article” link at the bottom of any article it fails because the URL is invalid.
Hi Justin! Thank you so much for bringing this to our attention. I’ve started an investigation and hopefully, we’ll solve this quickly.
Has a new version of SSDT been released for VS2017? I saw SQL Server Data Tools in the installer so installed it but I don’t have the option to create SSIS/SSRS projects, only database projects.
I’d like to know this as well.
The download page for SSDT still says “SSDT works best with Visual Studio 2015”:
Considering this announcement: , I doubt if there will be SSDT for 2017. It took them more than a year, iirc, to release SSDT for 2015.
Having managed to get past the install process I can say that SSDT is alive and well in VS2017; although there is an additional Redgate project template (Enterprise only, it seems).
@TK
Did you have to install SSDT from the 16.5 (October 2016) installer after you installed VS2017? Or was SSDT included in your install of VS2017?
I’ve installed VS2017 Professional and even though I made sure that “SQL Server Data Tools” was checked it doesn’t appear to be installed. I’ve no templates for creating SSIS, SSRS or SSAS projects. (I did uncheck “Azure Data Lake Tools” as I don’t do Azure development).
ISOz ISOZ ISOz ISOz ISOz….yesterday!!!!!
Hi,
I’ve a question: at work we are using the VS2015 Pro version. Can we upgrade to VS2017 freely or does my company need to buy a new license?
Thank you
Hi Bogdan. Without more information on the type and source of license you currently have, we can’t advise you on next steps. Also, I think you’ll get better traction on the discussion here –
It’s no fun downloading Visual Studio like that. Please provide an .iso file just like it has been provided since Visual Studio 2015 Update 3.
The current Visual Studio is the most productive, but the officially provided downloading method is the least stable one.
Congratulations on the release!
However, really frustrated by the decision to not make .iso available for MSDN subscribers.
Please remember we don’t all have incredible T1 internet connections (I have 5mb/s on ADSL2) – and your ‘offline installer’ shows absolutely no information about % complete downloaded or total download size. Right now I have it running and I have zero idea of when it will complete – or how long my internet connection will be maxed out.
I would have liked to use my fast 4G mobile connection (50mb/s) – which has 10gb allowance but it costs an absolute fortune if I exceed this (and can take about 15 mins – 1hr to update my ‘usage’ for me to check).
At least when you make an .iso available I can then have exact information of how large the download will be + modern browsers will actually provide my % complete, speed information and estimated time left.
Please, please make the .ISO available – or at least ensure your offline installer UX is written to modern standards (ie. proper download progress/info). Without either of these it makes the install experience just so much more difficult.
> Please remember we don’t all have incredible T1 internet connections (I have 5mb/s on ADSL2)
Actually, a T1 is only 1.5mb/s, so your DSL is over three times faster than a T1. 🙂
gotta love comments sections on the internet..
188 word post and you found an issue with 1 word (but ignored the rest / and what I was saying)..
My vote for an ISO as well. I live in the sticks and only have 4G-LTE, so my bandwidth is capped. I have to get an ISO someplace with non-measured service before I can install it on my development box. Thanks. Excited about all the new stuff.
Thanks for the feedback, @whnoel. As mentioned above, we’re listening to your feedback on ISO images. We’ve posted a few thoughts on the topic here:
Are in-place updates of the latest RC version supported or should I uninstall first?
Hi Simon. You don’t have to uninstall. You can update from the links in the blog or from within the notification in the product.
And no code completion support for reference packages and versions inside csproj as there was for project.json. This single missing feature makes me want to stay on 2015.
Hi Natan, Try using this experimental extension that offers csproj intellisense:
You can also learn more about options for editing csproj files on Rich Lander’s blog:
@Natan there is a new extension which adds code completion support for reference packages and versions inside csproj. Web Extension 2017 contains it. Read other blog posts.
After installing and doing the reboot, my computer will no longer boot. It’s been stuck booting for over 7 hours.
Sorry to hear this! Were you able to restart eventually? Feel free to email me at nicole.bruck@microsoft.com and we’ll look into this.
There were two big promises around VS2017: it’ll be lightweight and fast. For me it installed 5GB and took almost 30 minutes. OK, VS2015 did the same in 1 hour, but come on, 5GB is not so lightweight.
And it seems almost the same speed as VS2015, where is the promised 300% faster startup?
I was also surprised to find that only installing support for WinForms and web applications still ended up being 5GB. Apparently the promise of light-weight meant “we won’t push an extra 2 GB on you like we’ve been doing with each new version”.
Hey Lajos,
We have made the first startup of VS after installation nearly 3x faster. For subsequent startups, we have taken many components out of the startup path that did not need to be there, providing significant startup time improvements for users who had such components installed. We also surface the cost of expensive extensions and tool windows to you, so you can choose whether to disable them. We will provide more details on these in a subsequent blog post.
VS2017 froze the first time I started it. I think it’s caused by the Toolbox window (it freezes after the window becomes visible). I had no choice but to kill the VS2017 process. Then I started it again and showed the Toolbox window; it still freezes (shows “Toolbox – Initializing”, and that’s it).
And then it freezes everytime I show the Toolbox window.
I’m using Windows 8.1 with Visual Studio 2015 Update 3 installed. Both VS2015 and VS2017 have C++/C#, C++ for mobile development, TypeScript, VS for Unity and Windows 8.1 SDK installed, but no Windows 10 SDK, UWP SDK or any emulators (I’m not using Hyper-V because I have to use VMware).
Both VS are the community version. Also, I didn’t sign into the newly installed VS2017, and selected the “General” layout on first startup.
It freezes again when I click the “Xamarin” item inside the Options page. In VS2015 it also freezes for a while, but goes back to normal after 3 or 4 seconds (the options are shown correctly).
BTW the Xamarin I use for VS2015 are a bit old (4.1.2.18 to be specific). Maybe that’s what caused the freezing (some internal race conditions, who knows ╮( ̄▽ ̄”)╭). I think I’ll fire up my VMware and test everything again on a newly installed Windows 10 instance.
Forgot to mention I selected Xamarin when installing VS2017.
Update with BAD news, VS2017 still freezes (when showing Toolbox window) in my newly installed Win10 VM. This time I also installed Win10 SDK and the UWP stuff.
However after I uninstalled Xamarin from VS2017, it works, both in VM and my host machine.
Please look into this, since I do Xamarin.Forms development, it’s crucial to me. Good news is that none of these affected my VS2015 Xamarin so I can still earn money to pay the rent ^_^
Sorry to hear of your Toolbox freezes, @horeaper. You may want to vote up this issue:
This is pretty much the reason I do not want to install it and will be waiting a few months (maybe a year) if I can help it.
I’ve had more crashes with VS 2017 than any prior version going back to VS 6.0. Did you know that once a new version of the installer is available, you won’t be able to open the installer to see the options you currently have installed, because it will want you to upgrade to the newer installer?
Also, if you paste a CSS style rule into an existing .css file, any blank lines below where you pasted will be deleted, and the next style code will now be on the same line as where you pasted !?!
Yeah… feeling good with my decision to hold off for now. Redstone 2 is around the corner. With any luck VS can put out its first update. I plan to flatten my machine and install from scratch then.
Julia, how much advertisement does this version of Visual Studio contain? When you say it’s productive, I am assuming you removed all the unnecessary notifications about unwanted product features, like Azure or Windows Phone stuff. ‘Chatty’ software (e.g. software that continuously ‘talks’ to its user about things that are deemed important by the supplier of the software) can generally not be called productive, because interruptions are known to have a negative impact on productivity.
I ask because I noticed a trend of more and more unnecessary, non-removable extensions being added to Visual Studio. For paying customers, I feel that is unfair. And for knowledgeable developers, I don’t think it’s necessary to include what are essentially just pointers to additional services provided by Microsoft in the box.
So far all my comments asking for clarification about this problem have been ignored on blogs.msdn.microsoft.com, which I find disingenuous, because you either want feedback on and want to engage with developers about ALL features, or not at all. Now it feels like you love feedback about the product, but you are not willing to engage criticism about the ‘abuse’ of your IDE to advertise added-value services (like Azure) or failing products (like Windows Phone).
So… let’s see if the pattern of ignoring continues…
@Mike. I’m sorry your previous comments have gone unnoticed by the team. VS 2017 has several improvements that will start to address your concerns. The new install experience organizes VS’ features by “Workload” so that you only install the components associated with a particular technology stack. Workloads deliberately install less by default to keep setup quick and small. Workloads also give you more control over the individual components installed so you can reduce the footprint even further. Many of the install shortcuts that appeared in past releases have been removed from VS 2017.
To improve setup performance and reliability, we install much of Visual Studio through VSIXs, the same technology used by Visual Studio extensions. A side-effect is that some VS components still appear in the Extensions and Updates dialog. This is one area we could improve, but for now most of those VS features can be removed by turning off individual components in the Visual Studio Installer.
If you see something that we can improve or that doesn’t seem right, the UserVoice and Developer Community sites are good places to post feedback, because they allow us to aggregate related feedback together and others in the community can upvote your posts, helping them get better visibility.
Please create an ISO for VS2017 Professional to allow installation on machines with no internet access.
I cannot install the product first before creating my own ISO as per the new Microsoft recommendations due to security settings on the machine I have internet access with.
Does Microsoft understand the concept that some companies develop software on machines that cannot be connected to the internet due to the security classification of the project?
No they don’t understand that. Every time I think MS is finally going the right direction, they make a stupid decision and destroy it. I need the ISO file. I don’t have a good internet connection. I lose connection to the internet every once in a while, and last night when I tried to make my own ISO following their stupid blog and stupid instructions, after I downloaded 2 gigs I lost the connection and had to start over. So no, I don’t think MS understands that not all devs have the best ‘facilities’. These behaviors from MS make one question that “ANY DEVELOPER…” motto they use so much these days.
The web installer in --layout mode can be restarted after a connection loss, and will pick up where it left off. It may start the last file over, but it doesn’t restart the whole download.
Thanks for the feedback. As John mentions, we have the --layout option to support this scenario. But we’re listening: give us your feedback on what we need to add here:
I have a problem installing “Mobile Development for .NET” –> It says that it can’t download the Java SDK for some reason.
Not sure how to solve this
Log
The product failed to install the listed workloads and components due to one or more package failures.
Incomplete workloads
Mobile development with .NET (Microsoft.VisualStudio.Workload.NetCrossPlat,version=15.0.26228.0)
Incomplete components
Package ‘JavaJDKV1,version=1.8.14’ failed to download from ‘’.
Search URL:;PackageAction=DownloadPackage;ReturnCode=0x80131509
Impacted workloads
Mobile development with .NET (Microsoft.VisualStudio.Workload.NetCrossPlat,version=15.0.26228.0)
Impacted components
Details
WebClient download failed: The remote server returned an error: (503) Server Unavailable.
Bits download failed: File not found.
WinInet download failed: Url ‘’ returned HTTP status code: 503
–> And it is not so easy to skip the Java Development Kit component, because then a lot of the others will not install 🙁
Yep, I have this error too.
Two issues:
1. VS2017 is NOT yet available for me: I use Imagine (imagine.microsoft.com – formerly known as DreamSpark Premium, formerly known as MSDN-AA…). I can only see the RC version. No shiny Enterprise / Pro RTM yet. Do you have any ETA?
2. While the new installer enables us to finally deselect stuff we do not need, you guys still mostly ignore our selected install path: I changed the drive to install to, yet you blindly install to the system drive anyway. Out of 16GB, only 3GB got installed to my path. No thanks there 🙁
Trying to return tuple, I get error: Predefined type ‘System.ValueTuple`2’ is not defined or imported. What do I do wrong?
var r = (1, 2)
or:
(int, int) GetInfo()
{
return (1, 2);
}
Do you really think this post is the right place to ask these kind of questions?
Hi FMS, please keep your comments polite and considerate. I appreciate your participation on our blog and feel free to add your own comments and questions!
I left more than five comments here asking for the ISO file and this one is the comment you answer?
At least tell me should we wait for the ISO or it is just not gonna happen?
I apologize for the delay in answering. I just spoke with the team who will be replying to these comments soon.
I strongly urge you to voice your suggestions on our UserVoice page, which our engineering team uses to guide their product decisions. I do thank you for sharing your feedback, and I appreciate the time you’ve taken to share it!
Turn on “Suggest usings for types in NuGet packages” inside Options->Text Editor->C#->Advanced, then press Ctrl+. on the error line, it will prompt you to install a NuGet package called “System.ValueTuple”, which solved your problem.
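If you prefer doing it by hand, the same package can be added from the NuGet Package Manager Console – a quick sketch, assuming the project targets a .NET Framework version that doesn’t already ship the tuple types:
Install-Package System.ValueTuple
Once the package is restored, tuple syntax like (int, int) GetInfo() should compile.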
Thank you for your answer! It’s amazing to see our customers helping each other 🙂
I found this too… I was under the impression this would all be seamless and automatic under the RTM?
Apparently MS talks about features (and shows videos) without mentioning that “O, by the way, you need an extension for this.” It’s possible they still consider this experimental, and plan to keep updating it frequently, which they claim is easier for them to do when the thing being updated is an extension.
Did you make Visual Studio for everyone, or only for people with high-speed internet?
What about a developer living on an island in Scotland with very limited internet speed? I could download an ISO and resume when the connection is bad. The online installer does not work for me at all. It is very strange that a company the size of Microsoft does not consider something like this?
Hi,
It is really hard to understand why it can’t use the same offline installer as VS2015: open a GUI, select a path to download to, and then wait for ~8GB to download.
Now I have to watch a command prompt window with no % of progress and download 24GB into a very messy folder (I thought that it was going to be organized when the download finished).
It will be a nightmare to copy 24GB over the network only to install it on a client machine.
Real bad solution Microsoft. No one likes web installers.
VS2015 has easily got to be the best IDE to date. An absolute dream to work in. Shame we can no longer use it for all the new .NET Core magic.
Please add ISO installer. Web installer is a bad thing. Even if resulting iso is big – nobody will actually burn it on DVD, but single file is much more comfortable:
1. It can be attached to VM
2. It can be transferred faster than a --layout folder structure
3. It can be CRC/SHA1 checked for integrity
I usually steer clear of these discussions, but this one is particularly frustrating.
I was looking to try out the enterprise installer from MSDN. RTM should mean something, right? But there’s no ISO. Other people have already summed up the reasons why this is an awful idea, so I won’t repeat them.
I go through the steps of downloading the worthless little non-installer, start an admin CMD window, and run
vs_setup_bootstrapper.exe --layout C:\vs2017ent
Internet tells me this will be over 20 GB. Fine, I’ll come check on it tomorrow.
I look inside the downloaded folder… there’s no setup.exe. Maybe the internet connection died overnight.
I run again,
vs_setup_bootstrapper.exe --layout C:\vs2017ent
A meaningful-looking progress bar appears again, then I get a command line window doing whatever. Command line window disappears, still no setup.exe or anything meaningful that would tell me how to install VS2017.
I repeatedly run
vs_setup_bootstrapper.exe --layout C:\vs2017ent
To notice there was some red text…two lines of it…
But the window disappears before I can read it or screenshot it.
Maybe I can get some help from this?
vs_setup_bootstrapper.exe --help
Nope. Apparently all the error messages/logs are just thrown away. If they exist somewhere, the lack of documentation how to find them makes them non-existent. Yeah, I programmed like that too. When I was 13.
I manage after a few attempts to screenshot the command window. The first error message says:
error: package ‘win10sdk_hidden_10.0.10240_2,version=10.0.10240.108’ failed to create layout cache. the return code of the layout creation is -2146889721.
The internet says … absolutely nothing about this
What the second message says, I won’t even attempt to screenshot it with that kind of precision timing.
So … VS2015 was just slow. I was hoping VS2017 may offer hope, or enough features to consider retiring VS2013.
But I can’t even get the bloody thing to install.
It really feels to me like nobody actually tested this thing.
Some of us prefer STABLE and DURABLE software to just quickly releasing worthless junk that needs an update before it even hits the streets.
try to use command redirection to log the error to a file:
vs_setup_bootstrapper.exe --layout C:\vs2017ent > c:\vs2017entLog.txt
No luck. The process starting another command line process … thing … made the redirection useless.
But I turned on the slowest anti-virus I could find, and I was able to catch the second error:
Error: Package ‘Win10SDK_10.0.10586.212,version=10.0.10586.21208’ failed to create layout cache. The return code of the layout creation is: -2146889721
Meh. Though it might not help much, here is your hresult in readable / googleable form:
CRYPT_E_HASH_VALUE 0x80091007 -2146889721
from
Wish there was an edit option…
What is your crc32 / sha1 or whatever hash of the file? Maybe it is corrupted. Might help to compare hashes..
Which file?
I only have the folders
Win10SDK_10.0.14393.795,version=10.0.14393.79501
Win10_Emulator_10.0.14393.0,version=10.0.14393.4,chip=x64
The folder
Win10SDK_Hidden_10.0.10240_2,version=10.0.10240.108
Is briefly created, populated with sdksetup.exe and WinSdkInstall.ps1 (maybe others I didn’t see), and then deleted.
I just tried the offline installer myself:
1. the installer needs to be on the same drive / directory as the output directory. Otherwise nothing happens for me. Not even an error message… I did not use my system drive like you did since space is notoriously low.
2. You get log files in your %temp% directory.
3. I had a look into the log files and found the download packages listed in a json file. E.g. Win10SDK_Hidden_10.0.10240_2 downloads and
4. See if manually downloading them and placing them in the folder helps.
5. Otherwise have another look into the log files or pray that someone from MS can help you….
These indentations are killing me…
I’ve found the logs, managed to isolate the “error-ing” line of code to:
.\WinSdkInstall.ps1 -SetupExe sdksetup.exe -SetupLogFolder standalonesdk -PackageId Win10SDK_Hidden_10.0.10240_2 -SetupParameters “/layout “”C:\vs2017ent\Win10SDK_Hidden_10.0.10240_2,version=10.0.10240.108″””
Which gives me a .LOG file which says:
Hash mismatch for path: C:\Users\(Me)\AppData\Local\Temp\{e7a0c8b6-b0e9-41e2-8a0a-a6784f88d1d4}\package_WPTx86_x86_en_us, expected: 1415F6DB7EAC6D2561D977DD4D638E140014C256, actual: DA39A3EE5E6B4B0D3255BFEF95601890AFD80709
So, I think I’ll just wait for “Update 1”. And hopefully an ISO which, being a single, permanent collection of files, rather than files scattered across Microsoft web sites, can be more easily tested to be self-consistent.
I ran the “vs_enterprise.exe --layout” once and downloaded the data. When I wanted to install it, I first installed the certificates, then just ran vs_enterprise.exe directly, and everything was fine. I did that twice on two different computers; both installed successfully. I monitored the Task Manager and I’m sure no downloading occurred.
I think the main difference between you and me is that I put “vs_enterprise.exe” and the downloaded data inside the same folder.
I installed the certificates as well. Twice now, to be sure.
I also tried to move the (extracted) contents of mu_visual_studio_enterprise_2017_x86_x64_10049783.exe into C:\vs2017ent but I got the same result as before.
This thing downloaded 22.5 GB of … stuff. Just for some reason, it looks like it’s not enough.
rename mu_visual_studio_enterprise_2017_x86_x64_10049783.exe to vs_enterprise.exe and run it with the --layout option using the same directory
I started the download, but after 8 hours it has not completed yet. It is really unbelievable that you give people instructions to create an offline installer, etc. instead of creating and publishing an ISO file. Is it too hard for you? If not, why do you keep it a secret?
If you really listened to the developers, you’d release an ISO.
Can’t agree more; many systems are not on the commercial internet, and ISOs are the best way to move software over to them! Another rant is the fact that I cannot write new applications for UWP since most of my customer base is not attached to the commercial internet and can’t allow side loading. That is a bigger issue, but by not providing an ISO for Studio, MS is making it hard to keep developing with their technology.
Thanks for the feedback. We are listening and gathering further data on this scenario. Would love your feedback on our thoughts here:
F1 is broken….
For example, given a new .NET 4.6 console application and the code Console.ReadKey();, put the cursor onto “ReadKey” and press F1. In VS2015 it properly navigated to the msdn.microsoft.com “Console.ReadKey Method” page, but in VS2017 it opens the docs.microsoft.com “Console Class” page and just stops there; I have to find the ReadKey method again using my browser’s find text.
Oh, and the Help Viewer (2.3) is also broken; the left pane (where the “Contents” and “Index” tool windows reside) is missing, making it impossible to use.
Thanks for the information pal. Looks like VS2017 has a lot of bugs. I’ll use VS2015 for now.
Thanks @horeaper. We’re tracking both these issues now – thank you for bringing them to our attention. For the Help Viewer issue, the workaround in the meantime is to go to the Viewer Options (Ctrl+O) [Gear button] and select the Reset Window Layout button. The left hand pane should come back. Again, many thanks for letting us know! Best wishes, Tim Sneath | Visual Studio Team
@horeaper the F1 behavior you are describing will be addressed soon when the .NET Framework docs migrate fully to docs.microsoft.com. We are currently investigating the possibility of an interim fix as well, since that migration is still a month away.
Is there any way to make the quick action tooltip text bigger?
Screenshot:
Thanks for reaching out. If you could report this issue using our Report-a-Problem tool, our engineering teams will be able to investigate. Any future suggestions you have can be added to our UserVoice, and I thank you for taking the time to share your thoughts with us!
When I tried to install, I got:
Package ‘Microsoft.VisualStudio.Xamarin.Inspector,version=1.1.2.0’ failed to verify.
Search URL:;PackageAction=DownloadPackage;ReturnCode=0x80096004
Impacted workloads
Mobile development with .NET (Microsoft.VisualStudio.Workload.NetCrossPlat,version=15.0.26228.0)
Impacted components
Xamarin Workbooks (Component.Xamarin.Inspector,version=15.0.26228.0)
Details
SHA256 verification for ‘C:\Users\xyz\AppData\Local\Temp\oduxhm4z\Microsoft.VisualStudio.Xamarin.Inspector.0F1415EB88EB06EC80BB\XamarinInteractive-1.1.2.0.msi’ failed. Expected hash: 32F5A974CBB37569A9588512AF39AF1E0B6A68D325B903DBABB56CF0FF6E86A6, Actual hash: EFB514A531CF54FD02FD17E0FA5B3433912162633C7FEF24A82C0D6835EE2C2F
Where are the Visual C++ “OpenGLES 2 Application (Android, iOS, Windows Universal)” templates? I have Android, iOS and Universal for C++ installed, and I can only see the “OpenGLES Application (Android, iOS)” one; and “OpenGLES 2 Application (iOS)”, but not the cross platform one..
Does VS for Mac support C/C++?
Ok so I created the offline installer for VS2017Pro yesterday (with language = en_us option).. Thought I’d better warn folks what to expect here.
The downloaded size was 20.4GB total (not kidding) – this is the largest installer I’ve ever seen in 25+ yrs as a dev. This took me over 12 hrs on my slow internet connection and maxed out my internet download speed for the duration.
During this time, the console window doing the downloads showed NOTHING whatsoever that gave me any idea at all how long it was going to take or how much it was going to download. Just a ‘succeeded’ message after each vsix/etc was finished downloading (of which there were a lot). This was particularly frustrating as I needed to do other work stuff with my connection + never knew at any point if it was about to finish or not. I was monitoring the total size of the downloads folder + after it reached about 8gb I kept thinking it was nearly done (based on previous VS installers/isos).
I’m thankful I didn’t opt to download this on my fast 4G connection (which has 10gb allowance a month) – would have been hit with massive amounts of excess data fees.
Anyhow – I’m not really pleased with this whole experience. I really think some minimal effort could have been spared by Microsoft to better inform users on expected download sizes, speed, progress etc. (given no .ISO is provided) – even if not part of the downloader, just some notes on the offline installer web page (or this post).
If anyone is reading from Microsoft – please do a *much* better job here. I’m thrilled to have a new VS with all its glory, but I really don’t think the install process had to be made quite so painful.
For everyone else – note that if you have challenging internet speeds like me (or limits), you can investigate downloading just the workloads you need to keep the size down a bit (I wanted most of them).
Hi, “Enable the Visual Studio hosting process” seems to be missing from the Project Debug tab, any way to resolve this? We keep getting the security warning when we start up our solution. We have a WCF service and a Winform client we start up at the same time. The client uses ClickOnce.
Our Silverlight 5 projects are unable to load in VS 2017. Is this expected? The link the migration wizard sends us to is for VS 2015.
It would have been nice to have some kind of notice that you were dropping support for SL, if that is the case.
Another vote for an ISO. My installation crashes every time (DISM blah blah blah). I don’t even know if my downloads are not corrupt, so it’s hard to troubleshoot. Why would you do this to the developers? Everything released from Microsoft starting with VS 2015 has been unstable. Every developer on my team gets constant dotnet.exe crashes while building and we are praying that 2017/tools will fix this finally.
VS2017 is the same mess as VS2015.
Open a file with xUnit-tests, right click on Collection attribute, click Run Test – VS builds project and that’s all. No tests run.
VS still has no xUnit support? If so, then why does the “Run test” command exist in the context menu?
It seems that Test Explorer has been broken forever since VS2013. I just can’t understand how you suppose it should be used. I open Test Explorer and it shows me all my 5000 tests in ONE PLAIN LIST. Terrible, Microsoft.
Reinstalling NuGet packages is still an adventure. It’s still simpler to delete the packages folder and run nuget.exe in cmd than to use VS.
The Error List is a mess (same as VS2015) – it shows all errors but not for the last build. If you build a bunch of projects (a folder with projects) you can’t see the latest errors.
Alt-Shift-L – “Locate currently opened file in Solution Explorer” doesn’t work any more.
This is just my impression after 30 minutes of usage.
So VS is still unusable without Resharper.
Like many others have said, I’d much prefer an ISO. But, I’d be embarrassed too if I told people, “here’s our product, it’s over 20 GB.” I guess the irony is VS 2017 is supposed to be lightweight. 20 GB is ridiculous.
VS 2017 is “lightweight” compared to prior versions in the sense that you can more easily install just the features YOU want. It is definitely NOT lightweight overall. If you install everything, it’s a massive beast.
Microsoft could probably make a DVD sized ISO of just the “core” pieces of VS, and then let everything else be downloaded on demand. That might make some people happy, but then others would complain about not having everything in one place. And you’d have fights over what should be “core” and what should be downloaded. It’s impossible to please everybody.
The VS 2015 Pro With Update 3 ISO (from MSDN subscriber downloads) is 7.3 GB. I wonder why 2017 would have to be any different.
Why don’t you create an offline setup as an ISO file and let us download it directly, instead of making people create it themselves after applying lots of unnecessary steps? I am wondering why you always keep the offline setup files a secret.
Have you released the Color Theme Editor for Visual Studio 2017? If not yet, when do you think you’ll release one of the most commonly used add-ons like that?
Hi Murat, we’re looking into updating this. It’s not a ton of work, but we prioritized getting the Productivity Power Tools and a couple of other extensions out first. Stay tuned…
Thanks Tim for updating the color theme editor extension. A few small changes made with this extension can help when we’re looking at VS for hours at a time.
Thanks for the information. For those who want to use the Color Theme Editor for Visual Studio 2017, there is a hacked version of it. Please be aware that using it is at your own risk, as indicated by @SerbanVar.
I would say I don’t care about productivity tools; I use R# anyway (as do most people that use VS, if Microsoft has not noticed) and I need my Monokai theme back. I will not start using VS 2017 before this theme thing is back.
The Theme Editor is a must-have, since Visual Studio has fewer color options available than Windows 3.1 did!
Pretty please? The available work-around barely works at all. 560 votes and counting at
We’ve updated the status to confirm that we’re working on this… thanks to the ‘pretty please’ 🙂
I lost JavaScript IntelliSense after upgrading to VS2017 for an MVC Core application. It colors the code properly, but indentation on return and anything else is broken. Anyone else experiencing this (Enterprise edition)?
Yes, I’ve seen the same. There’s a lot of JavaScript and razor editor bugs, where pressing the enter-key will do crazy things. Here are a few similar bugs I’ve filed:
In addition to what you’ve filed, I just found out that Ctrl+E, C is also broken. It should comment out the selected code. Works for C#, not for JS.
I filed this bug about the missing intellisense in javascript if anyone else wants to add to it.
There are already 7,000 problems … and counting? Yikes… :/
It’s definitely the buggiest VS version I’ve used since the 1st release in the late 90s. I’m running the RC3 version, and it’s hard to believe but I’m scared to use the new final version, since there are even more bugs in the final version. If they could get the random crashing under control, and fix the editor bugs, then it could be on par with VS 2015 and generally a good tool to use.
After using 2017 for several months now, I’d say unless you need the latest .Net Core tools, or the latest mobile tools, and if you already have 2015 or 2013, I’d stay with those.
Some early first impressions:
* No faster than VS2015
* There is a bug where every time I open a project which is on TFS Online, it tells me I am offline (server unavailable) – I then have to right-click the solution and “Go online” EVERY SINGLE TIME I OPEN A PROJECT.
* I hate the new “Publish” experience (to publish web app to Azure). You’ve left in some of the old stuff (like preview), and made some new stuff, and sort of cobbled it together. Now I can’t seem to rename publish profiles…
Why did I bother
@LMKz – the issue you’re having with Team Explorer going offline has been reported by several other people and is being tracked here. Please follow that issue for updates.
lol I guess I’ll just download another 20GB for the “update”. How do these kind of bugs make it into RTM???
Found another bug: tried to “Undo pending changes” against a file in my project (TFS); the dialog sits on screen forever doing nothing. Click Cancel, and VS hangs and never returns. Done this twice now.
Either: 1) Give me an ISO. 2) Send me a set of DVDs by snail mail. MS needs developers w/real-world experience.
Is the VS Installer Plugin already available for VS2017?
It’s here:
Thxs!
Put the project.json back!
SQL items (client, express LocalDb, Data Tools, etc.) should not be required/forced for the .Net Core and web project workloads. “Recommended” of course, but not required.
There’s a user-voice item for this, so if you want you can vote for it:
I have a VS 2015 Professional license, will it work with VS 2017?
Is there any VS 2015 upgrade to VS 2017 option?
Updating the android-sdk in Android Studio breaks the VS2017 Android build tools. It can still build all NDK-related things, but throws errors when it comes to building Java-related projects. You can check it with the NativeActivity template in VS2017.
From the notes at the VS2017 compatibility page:
“Windows 10 Enterprise LTSB edition is not supported for development. You may use Visual Studio 2017 to build apps that run on Windows 10 LTSB”
You just ruled out VS2017 from several hundred dev seats. We are one of the few companies already migrated to Windows 10. Of course we use LTSB – what else for a centrally administered and supported corporate environment? MSFT is punishing the few enterprise clients already migrated to Windows 10? Because Windows 7 SP1 would work.
Is there some brilliance in that strategy I’m too blind to see?
Thanks for pointing this out. I just read the MS page that says this. That’s terrible. When I do move to Win 10 down the road, I was planning to use LTSB. It makes no sense, but then that kind of thing is normal for MS.
Thank you for your question. Visual Studio 2017 supports the Windows 10 Current Branch for Business (CBB)* and Current Branch (CB) under support as of March 2017 when we released, which are Windows 10 version 1507* and later. We recommend CB for general developer use; for enterprises that need to stage adoption of Windows releases, we recommend CBB.
Focusing our support on Windows 10 CBB and CB allows us, over time, to take dependencies on newer operating system features without requiring users to unexpectedly upgrade their LTSB version to adopt a new release of Visual Studio. In addition, Windows 10 LTSB* is designed for use in special-purpose computing environments—such as medical equipment, point-of-sale systems, and ATMs—rather than for daily use by information workers or developers. To support these special-purpose scenarios, LTSB is supported as a development target from Visual Studio 2017, and thus remote debugging* scenarios are fully supported. For general purpose computing in an Enterprise environment, we strongly recommend Windows 10 CBB, which is designed for enterprise deployment and management* by using, for example, System Center Configuration Manager. For enterprises that choose to run Windows 10 LTSB on desktops: while Visual Studio 2017 is not supported on LTSB, it also is not blocked from installing. Issues found on an LTSB install that also reproduce on a supported CBB install would be covered under the Visual Studio support policy.
*For more information, see the following:
CBB:
LTSB:
Remote Debugging:
Enterprise deployment and management:
Released Windows Versions:
Windows as a Service Overview:
-Paul Chapman
Visual Studio Program Manager
Thanks a lot Paul, for the detailed explanations.
Although I still think this whole Windows as a Service approach does not fit Enterprise IT installations. Somehow, MSFT has lost its grasp on Enterprise customers. Just take the fact that the VS2017 installer needs local admin rights to download files for later installation. This issue is a good example of how disconnected MSFT’s current approaches are from any Enterprise IT reality.
Not to start on the idea of distributing Windows updates in an Enterprise that completely change the start menu behaviour, as the last update did. Have you ever faced the support effort such a “simple” change causes for an IT support team in charge of more than 5000 PCs?
Upgrade to Windows 7. It’s the best option.
Paul, your reply makes sense at first glance, but when you realize that Windows 8.1 will be supported with security updates until 6 years from now, that would mean you would potentially “take a dependency” for VS in the next 3-5 years that would prevent using VS on anything lower than Win 10. It’s hard to believe you would do that, but then again…..it’s MS.
So, in light of Win 8.1, your dependency reason doesn’t seem valid.
It’s like Microsoft now blocking Windows 7 updates and showing messages (SPAM) on computers with recent processors.
Another reason to NEVER in my life use or recommend Windows 10.
Coded UI is a disabled component by default. Will it be deprecated soon? If so, what is the replacement from Microsoft?
Hi,
I have partially installed the Visual Studio 2017 Community edition successfully on my Windows 8.1 machine.
I later tried to install the following workloads separately:
1. Mobile development with .NET
2. Mobile Development with Javascript
The second round install failed and I got the below messages in the log.
Is this error because of some internet download problem or some other problem?
How much data (in GB) is downloaded for installing these components?
————– Error Message ————-
The product failed to install the listed workloads and components due to one or more package failures.
Incomplete workloads
Mobile development with .NET (Microsoft.VisualStudio.Workload.NetCrossPlat,version=15.0.26228.0)
Incomplete components
Xamarin (Component.Xamarin,version=15.0.26228.0)
Xamarin Workbooks (Component.Xamarin.Inspector,version=15.0.26228.0)
Package ‘Xamarin.Apple.Sdk,version=10.4.0.123’
Package ‘Xamarin.Android.Sdk,version=7.1.0.41’
————– Error Message ————-
Apparently MS is doing its best to make its loyal devs’ life harder with each new release of VS.
Yes, and now it’s doing its best with an update to Windows 7 that blocks Windows Update and shows messages (SPAM) on computers with recent processors.
Another reason to NEVER in my life use or recommend Windows 10.
Microsoft: Making life harder day by day.
Microsoft, how much harder are you willing to make it tomorrow?
Not a bad job when people’s biggest gripe is the lack of an ISO.
Tutorial, Part 1: The Basics
Welcome to South. This tutorial is designed to give you a runthrough of all the major features; smaller projects may only use the first lot of features you learn, but everything is built in for a reason!
If you've never heard of the idea of a migrations library, then please read WhatAreMigrations first; that will help you get a better understanding of what both South and others such as django-evolution are trying to achieve.
This tutorial assumes you have South installed correctly; if not, see Download for instructions.
Apps and Migrations
The first principle to learn is that, in South, individual apps are either 'migrated' or not - for example, the django.contrib.admin app isn't migrated - it has no migrations, whereas the app you'll create along with this tutorial will have migrations, and so is 'migrated'.
The reason this is important is that, with South enabled, you will have two ways of changing the database schema:
- ./manage.py syncdb - As before, this only creates models' tables directly, but with South enabled it will only do this for non-migrated apps.
- ./manage.py migrate - This command will change the database schema for migrated apps only.
South differentiates between migrated and non-migrated apps by seeing if they have a appname/migrations/ directory. To create this directory, and create migrations, there is one more important command:
- ./manage.py startmigration - Creates migrations for apps, either blank ones, ones with user-specified actions, or ones with automatically-detected changes - we will cover all three of these uses.
Kicking Off
First, create a project the usual way:
django-admin.py startproject southtut
cd southtut
<< add south to INSTALLED_APPS >>
./manage.py syncdb
Second, you will need an app, with a few models. It doesn't matter what; if you want to follow the examples, make a new app called 'southdemo':
django-admin.py startapp southdemo
Give it the following models.py file:
from django.db import models class Lizard(models.Model): age = models.IntegerField() name = models.CharField(max_length=30) class Adopter(models.Model): lizard = models.ForeignKey(Lizard) name = models.CharField(max_length=50)
Don't forget to update settings.py to:
- pick a DATABASE_ENGINE, and set the relevant settings;
- add both 'south' and 'southdemo' to the list of INSTALLED_APPS.
(Note that you shouldn't pick sqlite3 for this tutorial, since the sqlite3 python bindings do not currently support changing existing columns. See #52)
Now, we need to make our first migration. The way South works is that, on a new installation, it will run through the entire history of migrations for each app, rather than just using syncdb. This helps keep things consistent, and lets you write migrations that put in complex initial data, but it does mean that doing all migrations for an app, one after the other, should take a database from blank to the most recent schema.
Specifically, this means that ./manage.py migrate replaces ./manage.py syncdb for applications with migrations; the effect of syncdb is recreated by the migrations. You should not run syncdb on an application before you migrate it, if it is a new app (if you are converting an existing app, see ConvertingAnApp).
For this reason, the first migration has to be one that creates all the models you currently have. startmigration accepts a --model parameter, which tells it to make a migration that creates the named model, so we could do this:
./manage.py startmigration southdemo initial --model Lizard --model Adopter
(The arguments to startmigration are, in order, app name, migration name, and then parameters)
However, there is a shortcut for adding all models currently in the models.py file, which is --initial:
./manage.py startmigration southdemo --initial
(You can also pass in a migration name here, but it will default to 'initial')
Running this, we get:
$ ./manage.py startmigration southdemo --initial Creating migrations directory at '/home/andrew/Programs/mornsq/southdemo/migrations'... Creating __init__.py in '/home/andrew/Programs/mornsq/southdemo/migrations'... + Added model 'southdemo.Lizard' + Added model 'southdemo.Adopter' Created 0001_initial.py.
As you can see, it has made our southdemo/migrations directory for us, as well as putting an __init__.py file in it (to mark it as a Python package - this is also required).
If you open up the migration file it made - southdemo/migrations/0001_initial.py - you'll see this:
from south.db import db from django.db import models from southdemo.models import * class Migration: def forwards(self, orm): # Adding model 'Lizard' db.create_table('southdemo_lizard', ( ('age', models.IntegerField()), ('id', models.AutoField(primary_key=True)), ('name', models.CharField(max_length=30)), )) db.send_create_signal('southdemo', ['Lizard']) # Adding model 'Adopter' db.create_table('southdemo_adopter', ( ('lizard', models.ForeignKey(orm.Lizard)), ('id', models.AutoField(primary_key=True)), ('name', models.CharField(max_length=50)), )) db.send_create_signal('southdemo', ['Adopter']) def backwards(self, orm): # Deleting model 'Lizard' db.delete_table('southdemo_lizard') # Deleting model 'Adopter' db.delete_table('southdemo_adopter') models = { 'southdemo.lizard': { '_stub': True, 'id': ('models.AutoField', [], {'primary_key': 'True'}) } }
Migrations in South are, as you can see, just Migration classes with forwards() and backwards() methods, which get run as you go forwards or backwards over the migration respectively.
Each method gets an 'orm' parameter, which contains a 'fake ORM' - it will let you access any frozen models for this migration (details on frozen models are covered in part three of the tutorial).
Most of the time, you can get startmigration to write either all or most of a migration for you; continue to part two of the tutorial for more about changing models. | http://south.aeracode.org/wiki/Tutorial1?version=14 | CC-MAIN-2014-52 | refinedweb | 913 | 58.79 |
qdeepcopy.3qt - Man Page
Template class which ensures that
Synopsis
All the functions in this class are reentrant when Qt is built with thread support.</p>
#include <qdeepcopy.h>
Public Members
QDeepCopy ()
QDeepCopy ( const T & t )
QDeepCopy<T> & operator= ( const T & t )
operator T ()
Description
The QDeepCopy class is a template class which ensures that implicitly shared and explicitly shared classes reference unique data., Implicitly and Explicitly Shared Classes, and Non-GUI Classes.
Member Function Documentation
QDeepCopy::QDeepCopy ()
Constructs an empty instance of type T.
QDeepCopy::QDeepCopy ( const T & t )
Constructs a deep copy of t.
QDeepCopy::operator T ()
Returns a deep copy of the encapsulated data.
QDeepCopy<T> & QDeepCopy::operator= ( const T & t )
Assigns a deep copy ofdeepcopy.3qt) and the Qt version (3.3.8).
Referenced By
The man page QDeepCopy.3qt(3) is an alias of qdeepcopy.3qt(3). | https://www.mankier.com/3/qdeepcopy.3qt | CC-MAIN-2022-40 | refinedweb | 142 | 51.44 |
CS::Animation::iSkeletonLookAtNode Struct Reference
An animation node that controls a bone of an animesh in order to make it look at a target. More...
#include <imesh/animnode/lookat.h>
Detailed Description
An animation node that controls a bone of an animesh in order to make it look at a target.
Definition at line 146 of file lookat.h.
Member Function Documentation
Add a listener to be notified when the target has been reached or lost.
Return whether or not there is currently a target defined.
Remove the specified listener.
Remove the current target, ie the animation node will stop acting once the bone has reached the position given by the child node.
The listeners will be called with the 'target lost' event iff a target was specified and was reached.
Set the target to look at as a fixed position (in world coordinates).
Don't be afraid to update often this position if you want it moving. Listeners will be called with the 'target lost' event if a target was specified and was reached.
The documentation for this struct was generated from the following file:
Generated for Crystal Space 2.0 by doxygen 1.6.1 | http://www.crystalspace3d.org/docs/online/new0/structCS_1_1Animation_1_1iSkeletonLookAtNode.html | CC-MAIN-2015-32 | refinedweb | 197 | 65.32 |
Adding 404 Pagesby Sai gowtham1min read
What is a 404 page?
A 404 page is also called not found page it means when a user navigates to the wrong path that doesn’t present in the website we need to show the not found page.
How to add a 404 page in react?
we need to import another component called Switch which is provided by the react router.
What is Switch?
Switch component helps us to render the components only when path matches otherwise it fallbacks to the not found component.
let’s create a Not found component.
notfound.js
import React from 'react' const Notfound = () => <h1>Not found</h1> export default Notfound
index.js
import React from 'react' import ReactDOM from 'react-dom' import './index.css' import { Route, Link, BrowserRouter as Router, Switch } from 'react-router-dom' import App from './App' import Users from './users' import Contact from './contact' import Notfound from './notfound' const routing = ( <Router> <div> <ul> <li> <Link to="/">Home</Link> </li> <li> <Link to="/users">Users</Link> </li> <li> <Link to="/contact">Contact</Link> </li> </ul> <Switch> <Route exact path="/" component={App} /> <Route path="/users" component={Users} /> <Route path="/contact" component={Contact} /> <Route component={Notfound} /> </Switch> </div> </Router> ) ReactDOM.render(routing, document.getElementById('root'))
Let’s check it now by manually entering wrong path localhost:3000/posts. | https://reactgo.com/reactrouter/adding404pages/ | CC-MAIN-2020-40 | refinedweb | 221 | 58.58 |
Shuffle Game that i created is not working as i expect.
Nitesh Panchal
Ranch Hand
Posts: 48
posted 11 years ago
Hello,
I created a code which you may be very well aware of the game in which randomly numbers appear on all squares and then you have to rearrange all the nos using the arrow keys. Here is the code i created it so far. I don't know where the problem is but the keylisteners in general are not behaving as i expected them to be. Any help would be greatly appreciated! i tried all possible things i knew but still just unable to make where the problem lies! logically and syntactically it's all right but still its not working
import java.applet.Applet; import java.awt.*; import java.awt.event.*; public class ShuffleKey extends Applet implements KeyListener{ Button[] btn; int space; public void init(){ setSize(300,300); int[] arr = new int[16]; int i,j,ran,space; btn = new Button[16]; for(i = 0 ; i < btn.length ; i++) btn[i] = new Button(); space = 0; for( i = 0 ;i < btn.length ; i++){ ran = (int) (Math.random() * 16 ); for ( j = 0 ; j < i ; j++){ if ( arr[j] == ran ){ ran = (int) (Math.random() * 16 ); j = -1; } } arr[i] = ran; } setFont(new Font("Arial",Font.BOLD | Font.ITALIC,20)); setLayout(new GridLayout(4,4)); for (i = 0 ; i < btn.length ; i++){ add(btn[i]); btn[i].addKeyListener(this); if(arr[i] != 0) btn[i].setLabel( new Integer(arr[i]).toString()); else{ space = i; btn[i].setLabel( " "); btn[i].requestFocus(); } } System.out.println(space); } public void keyPressed(KeyEvent e){ int key = e.getKeyCode(); switch(key){ case KeyEvent.VK_DOWN: if( space - 4 >= 0 ){ String t = btn[space - 4].getLabel(); btn[space - 4].setLabel(" "); btn[space].setLabel(t); space-=4; System.out.println(space); } break; case KeyEvent.VK_UP: if( space + 4 < 16){ String t = btn[space + 4].getLabel(); btn[space + 4].setLabel(" "); btn[space].setLabel(t); space+=4; System.out.println(space); } break; case KeyEvent.VK_RIGHT: if( space - 1 >= 0 && space%4 != 0){ String t = btn[space - 1].getLabel(); btn[space - 1].setLabel(" "); btn[space].setLabel(t); space--; System.out.println(space); } break; case KeyEvent.VK_LEFT: if( space + 1 < 16 && (space+1)%4 != 0){ String t = btn[space + 1].getLabel(); btn[space + 1].setLabel(" "); btn[space].setLabel(t); space++; System.out.println(space); } break; default: showStatus("Invalid Key Press"); } } public void keyReleased(KeyEvent e){ } public void keyTyped(KeyEvent e){ } }
amitabh mehra
Ranch Hand
Posts: 98
posted 11 years ago
Try this:
introduce a new int variable focussed:
Button[] btn; private int focussed; int space;
set this in init:
else{ space = i; focussed = i; btn[i].setLabel( " "); btn[i].requestFocus(); }
in keyPressed, reassign focussed to space:
int key = e.getKeyCode(); btn[focussed].requestFocus(); space = focussed;
and later in each case statment, after all calculations on space, assign it back to focussed:
focussed = space;
edited>> bold tag didnt work within code tag
Nitesh Panchal
Ranch Hand
Posts: 48
posted 11 years ago
Thanks amitabh mehra
apparently i found out that i declared 2 times space variable one as instance variable and second as local variable in init() and obviously in init() the space is not required. If you remove it the program works fine!
amitabh mehra
Ranch Hand
Posts: 98
posted 11 years ago
oops... totally missed out on that
This is my favorite show. And this is my favorite tiny ad:
Devious Experiments for a Truly Passive Greenhouse!
reply
Bookmark Topic
Watch Topic
New Topic
Boost this thread!
Similar Threads
Battleships - guys please help me out!
here is a question and i have a very similar code but i need to convert it
Suggestions to make code (Pong) better (from the code I have, not adding additional code)
Swap
My head is about to explode!
More... | https://coderanch.com/t/431291/java/Shuffle-Game-created-working-expect | CC-MAIN-2020-29 | refinedweb | 632 | 68.47 |
This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.
------- Additional Comments From sebor at roguewave dot com 2005-04-19 15:39 ------- I discussed this with Mike Miller of EDG. His response to my query on the issue (copied with his permission) is below. Mike Miller wrote: ... > There were a couple of different examples in that thread, > so just to avoid confusion, here's the one I'll refer to: > > struct A { > int foo_; > }; > template <typename T> struct B: public A { }; > template <typename T> struct C: B<T> { > int foo() { > return A::foo_; // #1 > } > }; > > The question is how the reference on line #1 is treated. Wolfgang's > analysis isn't quite right. While it's true that "A" is non-dependent > and thus is bound to ::A at template definition time, that is > irrelevant. When C<int>::foo() (for instance) is instantiated, it turns > out that the reference to ::A::foo_ is, in fact, a non-static member of > a base class (9.3.1p3), so the reference is transformed into > (*this).::A::foo_ and there is no error. This is not a violation of > 14.6.2p3 -- there's no lookup in a dependent base class involved, as > Wolfgang's comments assume, and the description "the access is assumed > to be from the outside, not within the class hierarchy through this->" > doesn't accurately describe how 9.3.1p3 works. > > In fact, though, this just sort of happens to work because A is both > visible in the definition context and a base class of the instantiated > template. If you add an explicit specialization > > template<> struct B<int> { }; > > as suggested in Andrew's comment, so A is not a base class, or if you > change the program so that A is not visible in the definition context > (by making it a member of a namespace, for instance), we do report an > error in the instantiated C<int>::foo(). (There's no requirement to > report errors in uninstantiated templates, of course, contrary to > Andrew's observation.) > > This is sort of contrary to the "spirit" of two-stage lookup, though -- > Wolfgang's expectation is not unreasonable, I think, even though the > details of his reasoning are incorrect. I'm probably going to open a > core issue on this, especially in light of the differences between > implementations. -- | https://gcc.gnu.org/legacy-ml/gcc-bugs/2005-04/msg02563.html | CC-MAIN-2022-33 | refinedweb | 393 | 59.94 |
Create a Weather App with React Hooks: Part 1
Published on Nov 12, 2020.
Prerequisites
- Comfortable with Html
- Javascript, ES6 to see what is React and what is Javascript
- Basic React knowledge like props, components, one way-data-flow
What we will cover
- Using state and useState
- fetching an API with useEffect
- use of custom hooks in our application
By the end of the tutorial, you will have the following skill sets:
The hands-on practical and real-life scenario of creating Weather Application using React Hooks
What Are React Hooks?
Hooks are a new addition in React 16.8. With the help of hooks, we can use state and other React features without writing a class.
Before Hooks, we would need to understand how this keyword works in Javascript, and to remember to bind event handlers in class components. There wasn't a particular way to reuse stateful component logic and this made the code harder to follow.
We needed to share stateful logic in a better way. React is designed to render components, and it doesn't know anything about routing, fetching data, or architecture of our project. So, React Hooks came to the rescue.
Hooks are just functions that are exported from the official React page. They allow us to manipulate components in a different manner.
Hooks allow for attaching reusable logic to an existing component and use state and lifecycle methods inside a React functional component. We can organize the logic inside a component into reusable isolated units. Hooks give developers the opportunity to separate presentation logic, the logic that is associated with how components appear on a page, from business logic, the logic that is associated with handling, manipulating, and storing business objects.
There are some rules about how to use hooks. The following rules are:
- only call hooks at the top level of the component
- don't call hooks inside loops, conditionals, or nested functions
- only call hooks from React functions
- call them from within React functional components and not just any regular Javascript function
Okay, now let's start working with our application.
Application Tools
- [x] Install NodeJS and make sure it is the LTS(long term support) version. LTS version is a less stable version of NodeJS. We will use NPM (node package manager) and we will use it to install create-react-app.
- [x] Install your preferred code editor or IDE. I will be using Visual Studio Code. You can download it from this website. It is free to use.
- [x] create-react-app is an npm package that we can bootstrap our React application without any configuration.
Let's install our project. Open up your terminal and
cd into the directory you want to create the project.
cd desktop # type this command to install create-react-app, you can give any name for the app. npx create-react-app weather-app
Now, let's wait for the project to be created, now all the packages are installed for us to use it.
Let's go inside our project folder, type the name of our project, and
cd into it.
cd weather-app # open the project files with Visual Studio or any code editor #start the app npm start
Now we can see our app is up and running. Before starting our app, let's make some cleanup and remove some of the files that we will not use.
Let's remove
App.test.js, index.css, logo.svg, setupTests.js from the
src folder. You can copy and paste the basic structure for App.js and index.js from the code snippets below.
// App.js import React from 'react'; import './App.css'; function App() { return <div className="App"></div>; } export default App;
// index.js import React from 'react'; import ReactDOM from 'react-dom';();
Also, we can remove
logo files from the
public folder, now my files are looking like this:
Explore Open Weather App and Styling
Getting Our API Key
Let's go to open weather map to get our API key to fetch real weather data.
Choose 5 Day / 3 Hour Forecast from the page. With this API, we can get access to the next 5-day weather data for a specified city. But before we use the open weather map, we need to have an API key. For that create an account and go to the API keys tab to see your API key.
Let's check the example from the page and open a new tab and paste this URL.
# replace API key with your API key api.openweathermap.org/data/2.5/forecast?q=London,us&appid={API key}
Now, we can see the JSON data.
Default data comes with the imperial system, we can change it to the metric system by specifying another query parameter. If you are comfortable using the imperial system, you don't need to change the query.
api.openweathermap.org/data/2.5/forecast?q=London,us&appid={API key}&units=metric
Now, let's see what we get from our data. We will be using the icon for the weather, let's see what the code means. From the documentation, we can find this page and see what the icon codes mean. We will use this URL for our image source.
We will fetch the minimum and maximum temperature of the next five days, along with icons.
Now, let's create a new folder named
apis directory under the
src directory and create a new file named
config.js for our API key, and add this file to your
.gitignore file to not to expose our API key. Also, let's put our
baseUrl here. We will come back here later to add our fetching logic.
// apis/config.js export const API_KEY = [YOUR_API_KEY]; export const API_BASE_URL = '';
Styling The App
We will be using React Bootstrap for styling the app. You can check out the documentation.
Let's install the react-bootstrap to our project
npm install react-bootstrap bootstrap
Now, we need to include CSS to our project inside
src > index.js.
// index.js import 'bootstrap/dist/css/bootstrap.min.css';
Creating our First Component 🥳
Let's start by creating our first component and show our API data to the user.
Inside the
src folder, let's create another folder named
components. Now, create our first component and name it
WeatherCard.js
This component will be a functional component and it will receive some props and we will display them. We will use the
Bootstrap Card component to add some styling.
Now, we can copy Card component from bootstrap to our component. We don't need
Card.Text and
Button, we will remove those.
// components/WeatherCard.js import React from 'react'; import {Card} from 'react-bootstrap'; const WeatherCard = (props) => { return ( <Card style={{width: '18rem'}}> <Card.Img <Card.Body> <Card.Title>Card Title</Card.Title> </Card.Body> </Card> ); }; export default WeatherCard;
We want to show the
minimum and
maximum temperatures for a date, but
dt datetime is in Unix timestamp. Also, we will display the
main weather.
Now, let's extract our props and display them inside the jsx. Props have the same name as the JSON data that we get from API.
For the icon, we can get a list of weather conditions. Every icon has a different code number.
- example URL :
We will replace
10d with the
icon prop to make it dynamic.
// components/WeatherCard.js import React from 'react'; import {Card} from 'react-bootstrap'; const WeatherCard = ({dt, temp_min, temp_max, main, icon}) => { // create a date object with Date class constructor const date = new Date(dt); return ( <Card style={{width: '18rem'}}> <Card.Img variant="top" // get the src from example url and pass the icon prop for icon code src={`{icon}@2x.png`} /> <Card.Body> <Card.Title>{main}</Card.Title> {/* datetime is received in milliseconds, let's turn into local date time */} <p> {date.toLocaleDateString()} - {date.toLocaleTimeString()} </p> {/* minimum temperature */} <p>Min: {temp_min}</p> {/* maximum temperature */} <p>Max: {temp_max}</p> </Card.Body> </Card> ); }; export default WeatherCard;
Now, let's import the
WeatherCard component into
App.js. And pass our props, we will pass hardcoded values for now.
// App.js import React from 'react'; import WeatherCard from './components/WeatherCard'; import './App.css'; const App = () => { return ( <div className="App"> {/* dt is in unix-seconds but javascript uses milliseconds, multiply with 1000 */} <WeatherCard dt={1602104400 * 1000} </div> ); }; export default App;
Now, let's start our app with
npm start from the terminal. We can see our weather data is displayed. We will use this component to show the next 5 days.
City Selector Component
We will make a new component that the user can select a city and we will display the weather data for that city.
In our component, we will create
input and a
button. When the user clicks the button, we will fetch the weather forecast for that city.
We will use Bootstrap Layout to create rows and columns. You can find the documentation at this link.
Now, let's go to the components folder and create another folder named
CitySelector.js and create our boilerplate code.
useState Hook
State helps build highly performant web apps. To keep track of our application logic, we need to use state. We can reflect any UI or the user interface changes via changes in state.
To be able to change our button's state, we need a special hook named
useState. With
useState, we can add state to functional components.
useState returns an array of two items the first element is the current value of the state, and the second is a state setter function. State tracks the value of our state. Whenever the state updates, it should also rerender JSX elements. The setter function is gonna be used to update our state value.
In class components, state is always an object, with the useState hook, the state does not have to be an object.
When dealing with objects or arrays, always make sure to spread your state variable and then call the setter function.
Every time, with every rerender we don't mutate our state, we get a completely new state, we can change our state, with the setter function.
We need to contain one state property and that will be the city. In order to use, useState in our component, we have to import useState first. useState is a named export; so, we will export it with curly braces.
import React, { useState } from 'react';
Our goal is to update the state when a user clicks on a button.
We need to define a new variable and set it to
useState hook. Inside the hook as an argument, we need to pass the
initial value as an empty string.
// components/CitySelector import React, {useState} from 'react'; const CitySelector = () => { const [city, setCity] = useState(''); return <div></div>; }; export default CitySelector;
We will add Row, Col, FormControl, and Button components from Bootstrap to create our JSX elements. FormControl is for our
input element and we need to take its value by passing
event.target.value
We will pass for the
Button component one function for now, we will use it soon to display our data.
// components/CitySelector.js import React, {useState} from 'react'; import {Row, Col, FormControl, Button} from 'react-bootstrap'; const CitySelector = () => { const [city, setCity] = useState(''); return ( <> <Row> <Col> <h1>Search your city</h1> </Col> </Row> <Row> {/* xs={4} takes the one third of the page*/} <Col xs={4} <FormControl placeholder="Enter city" // update city value with the user's input onChange={(event) => setCity(event.target.value)} // value will be the currently selected city value={city} /> </Col> </Row> <Row> <Col> {/* event handler for button click */} <Button onClick={onSearch} }>Check Weather</Button> </Col> </Row> </> ); }; export default CitySelector;
Now, let's import our CitySelector component into App.js. Also, we can remove our hardcoded WeatherCard component, we can now get the city data from user input.
Our App component is now looking like this. Also, I added a Container from bootstrap.
// App.js import React from 'react'; import CitySelector from './components/CitySelector'; import './App.css'; import {Container} from 'react-bootstrap'; const App = () => { return ( <Container className="App"> <CitySelector /> </Container> ); }; export default App;
Also, copy and paste this CSS code into your
App.css file.
/* App.css */ .App { text-align: center; } .row { justify-content: center; margin: 15px 0; }
Displaying API Results
Now, time to display our API data inside our application.
Let's go back to our
CitySelector component and call our API.
First, let's create an anonymous function for our
onSearch function.
To grab data from an external resource or to just retrieve data, we will use
fetch browser API. Fetch takes our
url call. We need to get our
baseUrl and our
Api key from our
config.js file. Let's import it to our file.
import {API_KEY, API_BASE_URL} from '../apis/config';
Fetch returns a promise and we need to await it, we will put
.then, after that our response will be in
json format, we need to extract the body of the response, and finally, we will get our
result.
Now
onSearch function should look like this:
// components/CitySelector.js const onSearch = () => { fetch(`${ API_BASE_URL}/data/2.5/forecast?q=${city}&appid=${API_KEY}&units=metric`) .then((response) => response.json()) .then((result) => console.log(result)); };
Also, we can show our data when the user presses the
Enter key. Let's implement that with JavaScript.
Add
onKeyDown to
FormControl (input), it will receive a callback function with the event inside.
// components/CitySelector.js const onKeyDown = (event) => { if (event.keyCode === 13) { onSearch(); } }; <Row> <Col xs={4} <FormControl placeholder="Enter city" onChange={(event) => setCity(event.target.value)} value={city} // add onKeyDown onKeyDown={onKeyDown} /> </Col> </Row>;
To display our data, we need to create another state for our
results.
// components/CitySelector.js const CitySelector = () => { const [city, setCity] = useState(''); const [results, setResults] = useState(null); const onSearch = () => { fetch( `${API_BASE_URL}/data/2.5/forecast?q=${city}&appid=${API_KEY}&units=metric` ) .then((response) => response.json()) // update the results .then((results) => setResults(results)); }; return ( <> <Row> <Col> <h1>Search your city</h1> </Col> </Row> <Row> <Col xs={4} <FormControl placeholder="Enter city" onChange={(event) => setCity(event.target.value)} value={city} /> </Col> </Row> <Row> <Col> <Button onClick={onSearch}>Check Weather</Button> </Col> </Row> </> ); };
Okay, that's it for this post. For the
useEffect Hook and custom hooks, we will continue on with the second part of the tutorial.
Thanks for your time. I hope you liked it 🤞 | https://hulyakarakaya.hashnode.dev/create-a-weather-app-with-react-hooks-part-1?guid=none&deviceId=d3fc078b-531d-4e73-bb0d-20f4a7d92585 | CC-MAIN-2021-25 | refinedweb | 2,400 | 56.66 |
Created on 2017-06-13 05:17 by terry.reedy, last changed 2017-08-07 18:07 by terry.reedy. This issue is now closed.
When Louie Lu posted a link to
on core-mentorship list, I tested idlelib.
python -m test -ugui test_idle # SUCCESS, no extraneous output
python -m test -R: test_idle # SUCCESS, no extraneous output
python -m test -R: -ugui test_idle # error output, FAILURE
[So people who leaktest without a screen see nothing in idlelib.]
Error output is about 20 copies of the following:
can't invoke "event" command: application has been destroyed
while executing
"event generate $w <<ThemeChanged>>"
(procedure "ttk::ThemeChanged" line 6)
invoked from within
"ttk::ThemeChanged"
At the end:
test_idle leaked [471, 471, 471, 471] references, sum=1884
test_idle leaked [209, 211, 211, 211] memory blocks, sum=842
[similar for python 3.6]
In a response email, I noted that test_idle gathers tests from idlelib.idle_test.test_* and that something extra is needed to pin leaks to specific test modules.
I don't know whether the absence of 'invoke event' error messages when not running -R means that there are also no refleaks, or not.
---
import os
import subprocess
os.chdir('f:/dev/3x/Lib/idlelib/idle_test')
testfiles = [name for name in os.listdir() if name.startswith('test_')]
for name in testfiles:
os.rename(name, 'x'+name)
for name in testfiles:
os.rename('x'+name, name)
try:
res = subprocess.run(
['f:/dev/3x/python.bat', '-m', 'test', '-R:', '-ugui', 'test_idle'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
if res.returncode:
print(name)
print(res.stderr)
except Exception as err:
print(name, err)
os.rename(name, 'x'+name)
for name in testfiles:
os.rename('x'+name, name)
---
reports
test_macosx.py
b'beginning 9 repetitions\r\n123456789\r\n\r\ntest_idle leaked [31, 31, 31, 31] references, sum=124\r\ntest_idle leaked [19, 21, 21, 21] memory blocks, sum=82\r\n'
test_query.py
b'beginning 9 repetitions\r\n123456789\r\n\r\ntest_idle leaked [429, 429, 429, 429] references, sum=1716\r\ntest_idle leaked [190, 192, 192, 192] memory blocks, sum=766\r\n'
There are no 'invoke event' messages.
For further testing within each file, by commenting out code, as suggested in the link above, I replaced 'testfiles' in the middle loop with ['testmacosx.py'] or ['test_query.py']. For test_macosx, the culprit is class SetupTest. For test_query, the culprit is class QueryGuiTest. Adding cls.root.update_idletasks did not solve the problem by itself (as it has in other cases). I plan to continue another time.
test_query were fixed in PR 2147, which is leak by not removing mock.Mock() in dialog.
New changeset b070fd275b68df5c5ba9f6f43197b8d7066f0b18 by terryjreedy (mlouielu) in branch 'master':
bpo-30642: IDLE: Fix test_query refleak (#2147)
New changeset 2bfb45d447c445b3c3afc19d16b4cd4773975993 by terryjreedy in branch '3.6':
bpo-30642: IDLE: Fix test_query refleak (#2147) (#2161)
The.
New changeset b0efd493b6af24a6ae744e7e02f4b69c70e88f3d by terryjreedy in branch '3.6':
[3.6]bpo-30642: Fix ref leak in idle_test.test_macosx (#2163) (#2165)
f:\dev\36>python -m test -R: -ugui test_idle
gives about 40 invoke event messages, but the test passes. So the messages and leaks are not connected.
Unlinked old findleak.py after uploading much improved version to #31130. | https://bugs.python.org/issue30642 | CC-MAIN-2020-16 | refinedweb | 528 | 60.31 |
A sample logger tool for flutter. support logLevel config and log format. Hope making log colorful and logLevel based callback in the future
add dependence in pubspec.yaml
dependencies: flutter: sdk: flutter ... colour_log: ^0.2.0
import "package:colour_log/colour_log.dart" ... var log = Logger(); // default log level debug log.d("debug"); log.i("info"); log.e("warn"); log.e("error"); log.logLevel = LogLevel.INFO log.d("debug"); // will not show log.i("info"); log.e("warn"); log.e("error");
Add this to your package's pubspec.yaml file:
dependencies: colour_log: ^0.2.1
You can install packages from the command line:
with Flutter:
$ flutter pub get
Alternatively, your editor might support
flutter pub get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:colour_log/colour_log.dart';
We analyzed this package on Aug 16, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
Detected platforms: Flutter
References Flutter, and has no conflicting libraries.
Format
lib/colour_log.dart.
Run
flutter format to format
lib/colour_log
colour_log.dart. Packages with multiple examples should provide
example/README.md.
For more information see the pub package layout conventions. | https://pub.dev/packages/colour_log | CC-MAIN-2019-35 | refinedweb | 200 | 53.27 |
Long time listener, first time caller here. I have been reading the posts on this discussion forum for quite some time. I work for a small shop doing usability testing, program testing, installation development, etc, etc. Judging by the programming skill and knowledge in this forum I will most likely never have the programming ability as the majority of the people reading this post. Nevertheless, I’ve got a question. I am currently testing a password retrieval web application designed in C#. Here’s how it works,
Users are given a username and password to log on to our website. Within a “Portfolio” page the user provides all sorts of required information, most importantly their email. Let’s say once the user provides us with their information they forget their password? Here is how they get into the system:
First, they click on a “Forget Password” link on our website. Second, they are brought to a page where they must submit their username and email address. If the username and email address do not match within our database an error is returned to the user. Third, they are sent an email with a link that will return them to our website where they must provide the last four digits of their Social Security Number. Fourth, once they correctly submit the last four digits of their SS# they are brought to a new page where they must change their password. We never give them their old password we make them set a new one.
This is proving to be very difficult for our users to understand. It would be a lot easier if we could have the user click on a Forget Password link, validate their username, email address, and SS# and send them their existing password via email. What does everyone think? Any help on this would be greatly appreciated.
In the Midnight Hour She Cried More..More..More..
Thursday, April 8, 2004
> It would be a lot easier if we could have the user click on a Forget Password link, validate their username, email address, and SS# and send them their existing password via email.
It would be even easier if they only have to submit their email address, and then you send the password by email: this assumes that no-one but me can receive my email.
Christopher Wells
Thursday, April 8, 2004
If you have their SSN I'm assuming you have a certain amount of trust with this client. It's your responsibility to not give something like their password in something as sniffable as their SSN.
Have them enter the last 4 digits of their SSN and let them set a new password after that.
Thursday, April 8, 2004
In the app we're building now, I've decided to go with the lost password form really being a reset password and send it to the email address on file form. For two reasons:
(1) We encrypt our passwords one way in the DB, so we cannot retrieve them and show them to the user.
(2) It will greatly simplify the process. Just click on the link, and enter the email address, along with some kind of additional verification. If it matches one that is on file (when a new account is created, we force the email address to be unique), the password is reset to a randomly generated string and emailed to that address.
We have to use some kind of verification to prevent a random person from annoyingly resetting people's passwords if they can guess their email address. But it is only one form, and two fields. The verification could be zip code, last name, etc. or possibly some kind of cutesy challenge question like "What is your dog's name?"
Beyond that, I don't see the need for more.
Clay Whipkey
Thursday, April 8, 2004
Ask only for the email address; the username is redundant.
Include the username in the email -- people forget those too!
If you don't want to include the password in the email -- put a link in the message that includes an encoded component (using something like MD5). This encoded component is linked to the user's username and password.
When the click is clicked, the user is brought to a page where they can change their password. As a side effect, when the user changes their password the link will no longer work.
Almost Anonymous
Thursday, April 8, 2004
> password in something as sniffable as their SSN.
should be "as sniffable as email."
> Ask only for the email address; the username is redundant.
Good idea. Say, what's your email address? No reason... now what was that site you signed up with?
Sorry, you can't base your security on all publicly available information.
"Good idea. Say, what's your email address? No reason... now what was that site you signed up with?"
Well I think the idea is that the rest of the process is still followed (i.e. you get an email at that address that you have to follow, and then d othe following steps).
Personally I entirely agree : Most "usernames" on websites are entirely redundant, not to mention suffering from classic namespace collisions ("damn...dennis_Forbes is taken...dennis_w_forbes...dwforbes, d_w_forbes, etc): I already hae a globally unique email address. The only sites that should require usernames are sites that display 'handles" on messages, although even that can be just an account attribute rather than a logon element.
Dennis Forbes
Thursday, April 8, 2004
"Good idea. Say, what's your email address? No reason... now what was that site you signed up with?"
Just how would you get my username and password if I have you my email address and a site I've signed up with?!?
Thanks everybody for the insight! I was just looking for a easier way to do this, but of course security has got to make things a bit harder. Our biggest problem is probably that our users are not very Internet savvy. Or even Windows 95 savvy.
"Almost Anonymous" if our users forget their username they can call our helpdesk. Surprisingly, this is rarely the case because most of these usernames (ID's) were issued roughly 1-30 years ago. We incorporated these usernames into the web application. I guess you could say they are a lot like SS#'s. As for two users "not" having the same email address , we recently got burned on that. We have several users (husband and wife) that have the same email address. That's why we now have to ask for the username. On top of all that we are using an encoded component. Thanks for the replying so quick!
In the Midnight Hour She Cried More..More..More..
Thursday, April 8, 2004
also, after a sucessful password change, you may want to send the email address a notice that the password has been changed, just in case it wasn't them that changed it.
apw
Thursday, April 8, 2004
Just ask for _either_ their username (I'm assuming it's unique) or their e-mail address (which you know from the "portfolio page"); use that to lookup their record and send the username and a new, randomly generated password to the e-mail address on file. If they'd like, they can change it from a page on the site (a link to which should also be included in the e-mail).
bpd
Thursday, April 8, 2004
"use that to lookup their record and send the username and a new, randomly generated password to the e-mail address on file."
This isn't good -- if someone mistakenly types in someone elses username or email address they'll clobber that persons password! Sure, they'll get the randomly generated password in their email but I still think it's a bad idea.
The problem is that IMHO a well-designed application shouldn't be *able* to retrieve the user's password - it only stores a hash, hashes the input password, and compares the hashes. In that situation, you don't have the option - you generate a new password (use the old AOL method of dictionary word & random symbol & dictionary word) and email that out.
Philo
Philo
Thursday, April 8, 2004
Almost,
That is why in our approach (which I explained in an earlier post) we ask for the email address AND some piece of verification. That could be the last 4 digits of the SSN#, a special challenge question, whatever will make you feel like you've got your bases covered.
Then when you send your script in to do the resetting of the password, you only do it if you find a match for the email address AND the verification item.
IMHO, this is the method that addresses as much user-friendliness as you can get while still maintaining good security. (Being able to *see* what someone's password is at all is a problem waiting to happen. Forcing them to reset is the only option if you encrypt one-way.)
Philo++
Also, you may want to also use a "salt" value when computing the hash.
joev
Thursday, April 8, 2004
The method I used in a similar situation was to ask for either the email or username, from that, I look up the user record, and if exists, send a token (good for that day) to the email address.
Then I ask the user to enter that token (along with their username or email, but if they leave the browser open it's pre-filled anyway), and then send a new password to the email address on file.
The one thing that is... poor in my system is that a password reset doesn't invalidate the code used to authenticate the reset, but I think I'll look into that next week. :)
Ryan Anderson
Thursday, April 8, 2004
Always start from a threat model. What is it you are trying to protect, and what is it you are trying to protect against. Once you figure this out, you can then evaluate different options. There is no "general" always best solution to this problem.
Just me (Sir to you)
Friday, April 9, 2004
As a user, here is what I like (an dislike) about various user/pass systems:
I like systems that let me use arbitrarily long usernames -- I have a standard username for web-based accounts that I like to use. Systems that only allow a certain number of characters annoy me, as I often forget my username -- and then don't revisit the site much.
I like systems that use my email address as my username, *** but only if they can absolutely guarantee that they won't spam me or sell my details ***.
I like systems that don't place restrictions on my password. I know that using dictionary words, names and birthdates is bad, and so I don't do it. But having a password policy that makes it really hard to remember the password means I'll forget it.
I like what my bank does: they ask for a username, my account number, my date of birth and then 3 digits from a security code that I can chose (i.e. they specify "third", "second", "last"). This is probably overkill for a web-app, but I like the challenge-response idea of e.g. "third", "second", "last" digits, as it means that at a given sign-on, not all of my authentication details are transmitted across the network (and encryption is also used).
An alternative to passwords would be to ask users to choose a pass-phrase (e.g. "the cat sat on the mat"), and then, when they sign on, ask then for the, say, "first" and "fifth" words. It's easy to remember, as secure as passwords (still vulnerable to stupidity and social engineering), but is not easily sniffable over the network.
I like sites that, when I sign-up, ask me for a challenge question (I might choose the question "Where was your first visit overseas?") and a response ("Germany"), so that if I forget my user/pass, they ask me my question.
tinfoil hat
Friday, April 9, 2004
Most sites send you the user name and password when you put in the previously entered email address.
This does mean you should make sure you have a strong password for your email address. There was an article in the "Arab News" last week from a Saudi lady journalist who found that somebody had broken into her Hotmail account and changed her password and then texted her on her mobile to demand she withdrew the unveiled photo of herself that appeared with her articles, and aplogized for allegedly "Unislamic" behaviour. If she didn't do this the Hotmail hijacker threatened to post her emails all over the web!
There are some amusing side-lines to this. Firstly the poor woman appeared to be genuinley miffed that Hotmail and Bill Gates wouldn't do anything to help her get her account back but expected all that to change after she had published her article. Secondly that the hijacker was idiot enough to contact the woman by SMS, thus identifying himself, at least indirectly to any criminal investigation, and thirdly that the woman, despite having denounced the matter to the Interior Ministry hadn't realized that this was the way to catch the guy.
It also raises questions about the common sense of people who have weak passwords, and worse still, keep the only copy of their address book within the Hotmail account.
Incidentally, having your email account hijacked is quite common when you fall out with your girlfriend/boyfriend, so be forewarned.
Stephen Jones
Friday, April 9, 2004
Recent Topics
Fog Creek Home | https://discuss.fogcreek.com/joelonsoftware4/default.asp?cmd=show&ixPost=131131&ixReplies=19 | CC-MAIN-2018-17 | refinedweb | 2,300 | 69.72 |
How to Use Android Things GPIO Pins to Build a Remote-Controlled Car
How to Use Android Things GPIO Pins to Build a Remote-Controlled Car
In this article, we will show you how we can use Android Things GPIO pins to control DC motors, allowing us to build a remote-controlled car.
Join the DZone community and get the full member experience.Join For Free, allowing us to build a remote-controlled car. At the end of this article, you will build an Android Things car that moves in all directions and you can control it using your smartphone or your browser.
Android Things provides a set of APIs we can use to interact with two-state devices like buttons or LEDs using Android Things GPIO. Using Android Things GPIO API, we can simply read the state of a pin or set its value. In more details, in this article, we will explore how to use Android Things GPIO to control motors. Maybe you already know, there are other ways to control motors. Using Android Things GPIO pins we can turn it on or off but we cannot control the motor velocity. GPIO pins have only two states: on or high and off or low. If we want to have more control over the motors applying a proportional control we can use Android Things PWM pins.
Anyway, in this context, we want to only control the motors and turn them on or off so. The first step to control an Android Things GPIO pin is getting the reference to the PeripheralManagerService:
PeripheralManagerService service = new PeripheralManagerService();
The next step is opening the connection to the pin:
pin = service.openGpio(pin_name);
To know which pins are GPIO, according to your Android Things board, you can refer to the Android Things pinout. Once the connection is open we can set the value of the pin using these commands:
pin.setDirection(Gpio.DIRECTION_OUT_INITIALLY_LOW); pin.setValue(true); // High
For more on how to use Android Things GPIO pins you can refer to my book "Android Things Projects."
Before digging into the project details you should read my other articles about Android Things:
How to Control a Motor Using Android Things
Usually, we connect the device directly to the Android Things board. Anyway, when we use motors this is not possible because a motor may require much more current than a GPIO pin can provide. In this case, we have to provide an external power source and use the Android Things GPIO pins to control the motor. Moreover, we want to control the motor rotation direction. For these reasons, it is advisable to use a simple motor driver that simplifies our work.
In this project, we will use L298N, a simple driver that can control two motors and their directions:
This driver can control motors using PWM too, but, in this project, we will not use these features. Using two Android Things GPIO pins for each motor, we can control its rotation direction or stop it.
Let us see how to connect this driver to our Android Things board. The schema below shows how to connect the GPIO pins to the L298N and the motors:
Even if the schema seems a little bit complex, it is very simple: this project uses four different Android Things GPIO pins:
Right motor:
- BCM17
- BCM27
Left motor:
- BCM23
- BCM24
If you are using a different board than Raspberry Pi3, you have to change the pin names.
At this time, we can create a simple Java class that controls the motors:
public class MotorController { PeripheralManagerService service = new PeripheralManagerService();(); } } }
This class accomplishes these tasks:
- It gets a reference to the PeripheralManagerService.
- It opens the GPIO pins.
- It sets the directions and the initial value.
Moreover, it defines four different methods that control how the car will move:
- Forward
- Backward
- Turn left
- Turn right
All these movements can be controlled turning on or off each pin defined above.
That's all. Now, it is time to implement how we will control the car. There are several options we can implement for this purpose. We could use a simple Web server that has an HTML interface or we can use Android Nearby API, for example, or even a Bluetooth connection.
In this tutorial, we will use a simple Web Interface.
How to Implement do this, it is necessary to modify the
build.gradle file by adding:
compile 'org.nanohttpd:nanohttpd:2.2.0'
Now let us create a new class called
RobotHTTPServer that handles the incoming HTTP requests:
public class RobotHttpServer extends NanoHTTPD { public RobotHttpServer(int port, Context context, CommandListener listener) { super(port); this.context = context; this.listener = listener; Log.d(TAG, "Starting Server"); try { start(); } catch (IOException e) { e.printStackTrace(); } } @Override public Response serve(IHTTPSession session) { Map<String, String> params = session.getParms(); String control = params.get("control"); String action = params.get("btn"); Log.d(TAG, "Serve - Control ["+control+"] - Action ["+action+"]"); if (action != null && !"".equals(action)) listener.onCommand(action); return newFixedLengthResponse(readHTMLFile().toString()); } .. }
The HTML page is very simple and it is made by 5 buttons that represent the four directions and the stop button.
We will add the HTML page to the
assets/ directory. The last part is defining a
CommandListener that is the callback function that is invoked everytime the HTTP server receives a command:
public static interface CommandListener { public void onCommand(String command); }
Assembling the App to Control the Android Things Remote Car
The last step is assembling everything and gluing these classes so that we can finally build the Android Things remote controlled car. To this purpose, it is necessary to create a
MainActivity:
public class MainActivity extends Activity {(); } }); } }
As you can see, the code is very simple, everytime the CommandListener receives a new command it calls a method of the class that handles the motor to control the motors.
This simple project can be further expanded. We could add a set of new feature like Vision, Machine learning, and so on. For this reason, we have used Android Things instead of Arduino or an ESP8266.
You now know how to interact with Android Things GPIO pins and how to turn them on or off. Moreover, you learned how to use motors. All these information you have acquired can be used to build your first Android Things remote controlled car.
Now you can play with your toy! }} | https://dzone.com/articles/how-to-use-android-things-gpio-pins-to-build-a-rem | CC-MAIN-2019-18 | refinedweb | 1,058 | 61.16 |
25 Dec 17:06 2006
Mauve vs 1.5
Hi, Here is the Christmas riddle for you all. Going through the mauve diffs between GNU Classpath 0.93 and current CVS which switched to full 1.5 language support I noticed some compilation errors. There are 2 main failures: When compiling mauve without the 1.5 flag there is the following issue with anything extending java.io.Writer. e.g.: 1. ERROR in gnu/testlet/java/io/CharArrayWriter/ProtectedVars.java line 30: public class ProtectedVars extends CharArrayWriter implements Testlet ^^^^^^^^^^^^^ The return type is incompatible with Writer.append(CharSequence, int, int), CharArrayWriter.append(CharSequence, int, int) jcf-dump shows the issue. CharArrayWriter implements Writer which extends Appendable, but makes the return type of some methods more specific: Method name:"append" public Signature: (char)java.io.Writer Method name:"append" public bridge synthetic Signature: (char)java.lang.Appendable Without -1.5 the bridge method for the covariant return type is ignored. Meaning that the compiler thinks that the class isn't implementing public Appendable append(char c) as defined by the super interface Appendable. Now this is of course easily fixed by using -1.5 so the compiler knows(Continue reading) | http://blog.gmane.org/gmane.comp.java.mauve.general/month=20061201 | CC-MAIN-2016-22 | refinedweb | 199 | 53.58 |
# $NetBSD: Makefile.inc,v 1.8 2012/01/20 16:31:29 joerg Exp $ # @(#)Makefile 8.2 (Berkeley) 2/3/94 # # All library objects contain sccsid strings by default; they may be # excluded as a space-saving measure. To produce a library that does # not contain these strings, delete -DLIBC_SCCS and -DSYSLIBC_SCCS # from CPPFLAGS below. To remove these strings from just the system call # stubs, remove just -DSYSLIBC_SCCS from CPPFLAGS. # # The NLS (message catalog) functions are always in libc. To choose that # strerror(), perror(), strsignal(), psignal(), etc. actually call the NLS # functions, put -DNLS on the CPPFLAGS line below. # # The YP functions are always in libc. To choose that getpwent() and friends # actually call the YP functions, put -DYP on the CPPFLAGS line below. # # The Hesiod functions are always in libc. To choose that getpwent() and friends # actually call the Hesiod functions, put -DHESIOD on the CPPFLAGS line below. USE_FORT?= yes USE_SHLIBDIR= yes .include <bsd.own.mk> WARNS=4 CPPFLAGS+= -D_LIBC -DLIBC_SCCS -DSYSLIBC_SCCS -D_REENTRANT .if (${USE_HESIOD} != "no") CPPFLAGS+= -DHESIOD .endif .if (${USE_INET6} != "no") CPPFLAGS+= -DINET6 .endif CPPFLAGS+= -DNLS .if (${USE_YP} != "no") CPPFLAGS+= -DYP .endif .if ${MACHINE_ARCH} == "i386" # Set lint to exit on warnings LINTFLAGS+= -w .endif # ignore 'empty translation unit' warnings. LINTFLAGS+= -X 272 .include "libcincludes.mk" ARCHDIR= ${.CURDIR}/arch/${ARCHSUBDIR} AFLAGS+= -I${ARCHDIR} CLEANFILES+= tags # Don't try to lint the C library against itself when creating llib-lc.ln LLIBS= INCSDIR= /usr/include | http://cvsweb.netbsd.org/bsdweb.cgi/src/lib/libc/Makefile.inc?rev=1.8&content-type=text/x-cvsweb-markup&sortby=rev&only_with_tag=netbsd-6-1 | CC-MAIN-2020-16 | refinedweb | 235 | 70.09 |
anyone is willing to give me a hint for my small program. My program is basically promote the user to enter 12 characters exactly.Then, the program will test this code in four conditions. These conditions are the following:
1. must be 12 characters //if this condition is completed, we test the second condition.
2. digits 4 and 5 must be used at least once //if this condition is completed, we test the second condition.
.
.
Once I can solve the second condition I will probably will be able to test all conditions. I am thinking to use the tokenizer to do the job but I am little confused in the way I use it. Anyone has an idea or a hint how to detect that the user has only used 4 or 5 at least once. let's say the user has entered this code: EP2V973D341L
Code:#include <iostream> #include <cctype> #include<string> #include <stdio.h> #include <stdlib.h> using namespace std; void newCode(); void printCode(); int main() { int tester; while((tester >= 1) || (tester < 3)) { cout << "select one of these options\n"; cout << "(1) Insert a new code\n"; cout << "(2) Print the data\n"; cout << "(3) Exit the program\n"; cin >> tester; if(tester==1) newCode(); else if (tester==3) cout << "GoodBye"; break; /*else if (tester==3) void exit ( int status );:" << serial_number<< endl; } | http://cboard.cprogramming.com/cplusplus-programming/98266-numbers-code.html | CC-MAIN-2014-52 | refinedweb | 224 | 70.73 |
Abstract
This short report describes the core features of the Ray Distributed Object framework. To illustrate Ray’s actor model, we construct a simple microservice-like application with a web server frontend. We compare Ray to similar features in Parsl and provide a small performance analysis. We also provide an illustration of how Ray Tune can optimize the hyperparameter of a neural network.
Introduction
Ray is a distributed object framework first developed by the amazingly productive Berkeley RISE Lab. It consists of a number of significant components beyond the core distributed object system including a cluster builder and autoscaler, a webserver framework, a library for hyperparameter tuning, a scalable reinforcement learning library and a set of wrappers for distributed ML training and much more. Ray has been deeply integrated with many standard tools such as Torch, Tensorflow, Scikit-learn, XGBoost, Dask, Spark and Pandas. In this short note we will only look at the basic cluster, distributed object system and webserver and the Ray Tune hyperparameter optimizer. We also contrast the basics of Ray with Parsl which we described in a previous post. Both Ray and Parsl are designed to be used in clusters of nodes or a multi-core server. They overlap in many ways. Both have Python bindings and make heavy use of futures for the parallel execution of functions and they both work well with Kubernetes, but as we shall see the also differ in several important respects. Ray supports an extensive and flexible actor model, while Parsl is primarily functional. Parsl is arguably more portable as it supports massive supercomputers in addition to cloud cluster models.
In the next section we will look at Ray basics and an actor example. We will follow that with the comparison the Parsl and discuss several additional Ray features. Finally, we turn to Ray Tune, the hyperparameter tuning system built on top of Ray.
Ray Basics
Ray is designed to exploit the parallelism in large scale distributed applications and, like Parsl and Dask, it uses the concept of futures as a fundamental mechanism. Futures are object returned from function invocations that are placeholders for “future” returned values. The calling context can go about other business while the function is executing in another thread, perhaps on another machine. When the calling function needs the actual value computed by the function it makes a special call and suspends until the value is ready. Ray is easy to install on your laptop. Here is a trivial example.
The result is
As you can see, the “remote” invocation of f returns immediately with the object reference for the future value, but the actual returned value is not available until 2 seconds later. The calling thread is free to do other work before it invokes ray.get(future) to wait for the value.
As stated above, Parsl uses the same mechanism. Here is the same program fragment in Parsl.
Ray Clusters and Actors
Ray is designed to manage run in ray clusters which can be launched on a single multicore node, a cluster of servers or as pods in Kubernetes. In the trivial example above we started a small Ray environment with ray.init() but that goes away when the program exits. To create an actual cluster on a single machine we can issue the command
$ ray start -–head
If our program now uses ray.init(address=’auto’) for the initialization the program is running in the new ray cluster. Now, when the program exits, some of the objects it created can persist in the cluster. More specifically consider the case of Actors which, in Ray, are instances of classes that have persistent internal state and methods that may be invoked remotely.
To illustrate Ray actors and the Ray web server will now describe an “application” that dynamically builds a small hierarchy of actors. This application will classify documents based on the document title. It will first classify them into top-level topics and, for each top-level topic subdivided further into subtopic as shown in Figure 1. Our goal here is not actually to classify documents, but to illustrate how Ray works and how we might find ways to exploit parallelism. Our top-level topics are “math”, “physics”, “compsci”, “bio” and “finance”. Each of these will have subtopics. For example “math” has subtopics, “analysis”, “algebra” and “topology”.
Figure 1. Tree of topics and a partial list of subtopics
We will create an actor for each top-level topic that will hold lists of article titles associated with the subtopics. We will call these actors SubClassifiers and they are each instances of the following class.
We can create an instance of the “math” actor with the call
cl = SubClassifier.options(name=”math”, lifetime=”detached”).remote(“math”)
There are several things that are happening here. The SubClassifier initializer is “remotely” called with the topic “math”. The initializer reads a configuration file (read_config) to load the subtopic names for the topic “math”. From that the subtopic dictionaries are constructed. We have also included an option that instructs that this instance to be “detached” and live in the cluster beyond the lifetime of the program that created it. And it shall have the name “math”. Another program running in the cluster can access this actor with the call
math_actor = ray.get_actor(“math”)
Another program running in the same Ray cluster can now add new titles to the “math” sub-classifier with the call
math_actor.send.remote(title)
In an ideal version of this program we will use a clever machine learning method to extract the subtopics, but here we will simply use a tag attached to the title. This is accomplished by the function split_titles()
We can return the contents of the sub-classifier dictionaries with
print(math_actor.get_classification.remote())
A Full “Classifier” Actor Network.
To illustrate how we can deploy a set of actors to make the set of microservice-like actors we will create a “document classifier” actor that allocates document titles to the SubClassifier actors. We first classify the document into the top-level categories “math”, “physics”, “compsci”, “bio” and “finance”, and then send them to the corresponding SubClassifier actor.
The Classifier actor is shown below. It works as follows. A classifier instance has a method “send(title)” which uses the utility function split_titles to extract the top-level category of the document. But to get the subcategory we need to discover and attach the subcategory topic tag to the title. A separate function compute_subclass does that. For each top-level category, we get the actor instance by name, or if it does not exist yet, we create it. Because computing the subclass may require the computational effort of a ML algorithm, we invoke compute_subclass as remote and store the future to that computation in a list and go on and process the next title. If the list reaches a reasonable limit (given by the parameter queue_size) we empty the list and start again. Emptying the list requires waiting until all futures are resolved. This is also a way to throttle concurrency.
By varying the list capacity, we can explore the potential for concurrency. If queue_size == 0, the documents will be handled sequentially. If queue_size == 10 there can be as many of 10 documents being processed concurrently. We shall see by experiments below what levels of concurrency we can obtain.
In the ideal version of this program the function compute_subclass invoked a ML model to compute the subclassification of the document, but for our experiments here, we will cheat because we are interested in measuring potential concurrency. In fact the subclassification of the document is done by the split_titles() function in the SubClassifier actor. ( The document titles we are using all come from ArXiv and have a top level tag and subclass already attached. For example, the title ‘Spectral Measures on Locally Fields [math.FA]’ which is math with subtopic analysis. )
The simplified function is shown below. The work is simulated by a sleep command. We will use different sleep values for the final analysis.
The Ray Serve webserver
To complete the picture we need a way to feed documents to top-level classifier. We do that by putting a webserver in front and using another program outside the cluster to send documents to the server as as fast as possible. The diagram in Figure 2 below illustrates our “application”.
Figure 2. The microservice-like collection of Ray actors and function invocations. The clients send titles to the webserver “server” as http put requests. The server makes remote calls to the classifier instance which spawns compute_subclass invocations (labeled “classify()” here) and these invoke the send method on the subclassifiers.
The ray serve library is defined by backend objects and web endpoints. In our case the backend object is an instance of the class server.
Note that this is not a ray.remote class. Once Ray Serve creates an instance of the server which, in turn grabs an instance of the Classifier actor). The Server object has an async/await coroutine that is now used in the Python version of ASGI, the Asynchronous Server Gateway Interface. The call function invokes the send operation on the classifier. To create and tie the backend to the endpoint we do the following.
We can invoke this with
Serve is very flexible. You can have more than one backend object tied to the same endpoint and you can specify what fraction of the incoming traffic goes to each backend.
Some performance results
One can ask the question what is the performance implication of the parallel compute_subclass operations? The amount of parallelism is defined by the parameter queue_size. It is also limited by the size of the ray cluster and the amount of work compute_subclass does before terminating. In our simple example the “work” is the length of time the function sleeps. We ran the following experiments on AWS with a single multicore server with 16 virtual cores. Setting queue_size to 0 make the operation serial and we could measure the time per web invocation for the sleep time. For a sleep of t=3 seconds the round trip time per innovation was an average of 3.4 seconds. For t=5, the time was 5.4 and for t=7 it was 7.4. The overhead of just posting the data and returning was 0.12 seconds. Hence beyond sleeping there is about .28 seconds of ray processing overhead per document. We can now try different queue sizes and compute speed-up over the sequential time.
Figure 3. Speed up over sequential for 40 documents sent to the server.
These are not the most scientific results: each measurement was repeated only 3 times. However, the full code available for tests of your own. The maximum speed-up was 9.4 when the worker delay was 7 seconds and the queue length was 20. The code and data are in Github.
Another simple test that involves real computation is to compute the value of Pi in parallel. This involves no I/O so the core is truly busy computing. The algorithm is the classic Monte Carlo method of throwing X dots into a square 2 on a size and counting the number that land inside the unit circle. The fraction inside the circle is Pi/4. In our case we set X to 106 and compute the average of 100 such trials. The function to compute Pi is
The program that does the test is shown below. It partitions the 100 Pi(10**6) tasks in blocks and each block will be executed by a single thread.
As can be seen the best time is when we compute 100 independent blocks. The sequential time is when it is computed as a sequential series of 100 Pi tasks. The speedup is 9.56. We also did the same computation using Dask and Parsl. In the chart below we show the relative performance .We also did the same computation using Dask and Parsl. The graph shows the execution time on the vertical axis and the horizontal is the block sizes 4, 5, 10, 20, 100. As you can see Dask is fastest but Ray in about the same when we have 100 blocks in parallel.
Figure 4. Execution time of 100 invocations of Pi(10**6). Blue=Parsl, Green=Ray and Orange=Dask.
More Ray
Modin
One interesting extension of Ray is Modin, a drop-in replacement for Pandas. Data scientists using Python have, to a large extent, settled on Pandas as a de facto standard for data manipulation. Unfortunately, Pandas does not scale well. Other alternatives out there are Dask DataFrames, Vaex, and the NVIDIA-backed RAPIDS tools. For a comparison see Scaling Pandas: Dask vs Ray vs Modin vs Vaex vs RAPIDS (datarevenue.com) and the Modin view of Scaling Pandas.
There are two important features of Modin. First is the fact that it is a drop-in replacement for Pandas. Modin duplicates 80% of the massive Pandas API including all of the most commonly used functions and it defaults to the original Pandas versions for the rest. This means you only need to change one line:
import pandas as pd
to
import modin.pandas as pd
and your program will run as before. The second important feature of Modin is performance. Modin’s approach is based on an underlying algebra of operators that can be combined to build the pandas library. They also make heavy use of lessons learned in the design of very large databases. In our own experiments with Modin on the same multicore server used in the experiments above Modin performed between 2 time and 8 times better for standard Pandas tasks. However, it underperformed Pandas for a few. Where Modin performance really shines is on DataFrames that are many 10s of gigabytes in size running on Ray clusters of hundreds of cores. We did not verify this claim, but the details are in the original Modin paper cited above.
Ray Tune: Scalable Hyperparameter Tuning
Ray Tune is one of the most used Ray subsystems. Tune is a framework for using parallelism to tune the hyperparameters of a ML model. The importance of hyperparameter optimization is often overlooked when explaining ML algorithm. The goal is to turn a good ML model into a great ML model. Ray Tune is a system to use asynchronous parallelism to explore a parameter configuration space to find the set of parameters that yield the most accurate model given the test and training data. We will Illustrate Tune with a simple example. The following neural network has two parameters: l1 and l2.
These parameters describe the shape of the linear transforms at each of the three layers of the network. When training the network, we can easily isolate two more parameters: the learning rate and the batch size. We can describe this parameter space in tune with the following.
This says that l1 and l2 are powers of 2 between 4 and 64 and the learning rate lr is drawn from the log uniform distribution and the batch size is one of those listed. Tune will extract instances of the configuration parameters and train the model with each instance by using Ray’s asynchronous concurrent execution.
To run Tune on the model you need to wrap it in a function that will encapsulate the model with instances of the configuration. The wrapped model executes a standard (in this case torch) training loop followed by a validation loop which computes an average loss and accuracy for that set of parameters.
One of the most interesting part of Tune is how they schedule the training-validation tasks to search the space to give the optimal results. The scheduler choices include HyperBand, Population Based Training and more. The one we use here is Asynchronous Successive Halving Algorithm (ASHA) (Li, et.al. “[1810.05934v5] A System for Massively Parallel Hyperparameter Tuning (arxiv.org)” ) which, as the name suggests, implements a halving scheme that rejects regions that do not show promise and uses a type of genetic algorithm to create promising new avenues. We tell the scheduler we want to minimize a the loss function and invoke that with the tune.run() operator as follows.
We provide the complete code in a Jupyter notebook. The model we are training to solve is the classic and trivial Iris classification, so our small Neural network converges rapidly and Tune gives the results
Note that the test set accuracy was measured after ray completed by loading the model from checkpoint that Tune saved (see the Notebook for details)
Final Thoughts
There are many aspects of Ray that we have not discussed here. One major omission is the way Ray deploys and manages clusters. You can build a small cluster on AWS with a one command line. Launching applications from the head node allows Ray to autoscale the cluster by adding new nodes if the computation demands more resources and then releases those node if no longer needed.
In the paragraphs above we focused on two Ray capabilities that were somewhat unique. Ray’s actor model in which actors can persist in the cluster beyond the lifetime of the program that created them. The second contribution from Ray that we found exciting was Tune. A use case that is more impressive that our little Iris demo is the use of Tune with hugging face’s Bert model. See their blog. | https://esciencegroup.com/2021/04/08/ | CC-MAIN-2021-39 | refinedweb | 2,909 | 55.44 |
#include <paradox.h>
int PX_delete_record(pxdoc_t *pxdoc, int recpos)
Removes the record with number recpos from the Paradox file. The first record has number 0. The data of the record will be wiped out, making it impossible to reconstruct it later. The data block where the record was stored is reconstruct to make sure all records are at the beginning of the data block followed by the free space.
Calls of PX_insert_record(3) will use the first datablock with free space and add the new record after the records in the data block.
Returns 0 on success or -1 on failure.
PX_retrieve_record(3), PX_insert_record(3), PX_update_record(3)
This manual page was written by Uwe Steinmann uwe@steinmann.cx. | http://www.makelinux.net/man/3/P/PX_delete_record | CC-MAIN-2016-07 | refinedweb | 118 | 72.46 |
We will orient our dash of Python around the first and simplest problem from ProjectEuler.net.
Installing Python
To get Python on your computer, go to python’s website and follow the instructions for downloading and installing the interpreter. Most Window’s users can simply click here to download an installer, Mac OS 10.6 – 10.7 users can click here to get their installer, and linux users can (and should) fend for themselves.
For non-terminal buffs, you can use Python’s official text editor, called IDLE, for editing and running Python programs. It comes packaged with the download links above. For terminal junkies, or people wishing to learn the terminal, you can use your favorite terminal editor (nano for beginners, vim or emacs for serious coders) to write the code, then type
$> python source.py
at a command prompt to run the code, which we assume here is called “source.py” and is in the present working directory. To learn more about the Python interpreter, see the official Python tutorial. On to the puzzle.
Python
Problem: Find the sum of all positive multiples of 3 or 5 below 1000.
To spoil the fun and allow the reader to verify correctness, the answer is 233168. Of course, this problem can be solved quite easily by using nice formulas for arithmetic progressions, but for the sake of learning Python we will do it the programmer’s way.
The first step in solving this problem is to figure out how we want to represent our data. Like most languages, Python has a built-in type for integers. A built-in type is simply a type of data that is natively supported by a language, be it a number, a character of text, a list of things, or a more abstract type like, say, a shopping cart (not likely, but possible). So one program we might start with is
3+5+6+9
which is also known as the sum of all multiples of 3 or 5 less than 10. This evaluates to 23, and we may pat ourselves on the back for a successful first program! Python understands most simple arithmetic expressions we’d hope to use, and a complete list is very googleable.
Unfortunately to type out all such numbers up to 1000 is idiotic. The whole point of a computer is that it does the hard work for us! So we identify a few key goals:
- We want a test for “divisible by n”
- We want to be able to keep track of those numbers which have the desired divisibility property
- We want to apply our test to all numbers less than 1000 in a non-repetitive way
The first is quite easy. Looking at our list of operators, we find the “remainder” operator %. In particular, “a % b”, when performed on two integers
, gives
, the remainder when
is divided by
. In particular, if “a % 3” is zero, then
is divisible by 3, and similarly for 5. Again looking at our list of operators, we have an == equality operator (we will see plain old = later for variable assignment) and an “or” boolean or operator. So our test for divisibility by 3 or 5 is simply
x % 3 == 0 or x % 5 == 0
Typing this into the python interpreter (with “x” replaced with an actual number) will evaluate to either “True” or “False”, the built-in types for truth and falsity. We will use the result of this test shortly.
Once we find a number that we want to use, we should be able to save it for later. For this we use variables. A variable is exactly what it is in mathematics: a placeholder for some value. It differs from mathematics in that the value of the variable is always known at any instant in time. A variable can be named anything you want, as long as it fits with a few rules. To assign a value to a variable, we use the = operator, and then we may use them later. For example,
x = 33 x % 3 == 0 or x % 5 == 0
This program evaluates to True, but it brings up one interesting issue. Instead of a single expression, here we have a sequence of expressions on different lines, and as the program executes it keeps track of the contents of all the variables. For now this is a simple idea to swallow, but later we will see that it has a lot of important implications.
Once we can test whether a number is divisible by stuff, we want to do something special when that test results in True. We need a new language form called an “if statement.” It is again intuitive, but it has one caveat that is unique to Python. An if statement has the form:
if test1: body1 elif test2: body2 ... elif testK: bodyK else: bodyElse
The “test#” expressions evaluate to either True or False. The entire block evaluates in logical order: if “test1” evaluates to true, evaluate the sequence of statements in “body1,” and ignore the rest; if “test1” evaluates to false, do the same for “test2” and “body2”. Continuing in this fashion, if all tests fail, execute the sequence of statements in the “bodyElse” block.
The quirk is that whitespace matters here, and Python will growl at the programmer if he doesn’t have consistent spacing. Specifically, the body of any if statement (a sequence of statements in itself) must be indented consistently for each line. Any nested if statements within that must be further indented, and so on. An indentation can be a tab, or a fixed number of spaces. As long as each line in an indented block follows the same indentation rule, it doesn’t matter how far the indentation is. We use three spaces from here on. This unambiguously denotes which lines are to be evaluated as the body of an if statement. While it may seem confusing at first, indenting blocks of code will soon become second nature, and it is necessary for rigor.
For instance, the following tricky program was written by a sloppy coder, and it will confuse the Python interpreter (and other coders).
x = 1 if False: y = 4 x = x + 1 x
If one tries to run this code, Python will spit out an error like:
IndentationError: unindent does not match any outer indentation level
In other words, Python doesn’t know whether the programmer meant to put “x = x + 1” into the body of the if statement or not. Rather than silently resolve the problem by picking one, Python refuses to continue. For beginners, this will save many headaches, and good programmers almost always have consistent indenting rules as a matter of tidiness and style.
Combining variables, our divisibility test, and if statements, we can construct a near-solution to our problem:
theSum = 0 numberToCheck = 1 if numberToCheck % 3 == 0 or numberToCheck % 5 == 0: theSum = theSum + numberToCheck numberToCheck = 2 if numberToCheck % 3 == 0 or numberToCheck % 5 == 0: theSum = theSum + numberToCheck numberToCheck = 3 ...
While this works, and we could type this block of code once for each of the 1000 numbers, this again is a colossal waste of our time! Certainly there must be a way to not repeat all this code while searching for divisible numbers. For this we look to loops. Loops do exactly what we want: alter a small piece of a block of code that we want to run many times.
The simplest kind of loop is a while loop. It has the form:
while testIsTrue: body
The while loop is almost self-explanatory: check if the test is true, evaluate the body if it is, repeat. We just need to make sure that the test eventually becomes false when we’re done, so that the loop terminates.
Incorporating this into our problem, we have the following program
theSum = 0 numberToCheck = 1 while numberToCheck < 1000: if numberToCheck % 3 == 0 or numberToCheck % 5 == 0: theSum = theSum + numberToCheck numberToCheck = numberToCheck + 1 print(theSum)
Notice that the indentation makes it clear when the nested if body ends, and when the while body itself ends.
We add the extra “print” statement to allow the user to see the result of the computation. The “print” function has quite a lot of detail associated with how to use it, but in its simplest it prints out the value of its argument on a line by itself.
Running this code, we see that we get the correct value, and we applaud ourselves for a great second program.
Diving Deeper for Lists
While we could end here, we want to give the reader a taste for what else Python can do. For instance, what if we wanted to do something with the numbers we found instead of just adding them up? What if we, say, wanted to find the median value or the average value of the numbers? Of course, this has no apparent use, but we still want to know how to do something besides adding.
What we really want to do is save all the numbers for later computation. To do this, we investigate Python’s native list type. Here are some examples of explicitly constructed lists:
list1 = [] list2 = [2,3,4,5] list3 = range(1,1000) list4 = ['a', 7, "hello!", [1,2,3,4]]
Obviously, Python’s lists are comma-separated lists of things enclosed in square brackets. The things inside the list need not be homogeneous, and can even include other lists. Finally, the “range(a,b)” function gives a list with the integers contained in the interval
, where
are integers. One must be slightly careful in naming lists, because the “list” token is a reserved keyword in Python. It may not be used to name any variable.
Lists have quite amazing functionality; they are a more powerful sort of built-in type than integers. Specifically, lists have a whole bunch of named operations, which we call methods. Instead of invoking these operations with a familiar infix symbol like +, we do so with the dot operator, and then the name of the method. For instance, the following program appends the number 5 to the list [1,2,3]:
list1 = [1,2,3] list1.append(5)
The “append” method is a native part of all Python lists, and we apply it to its argument with the function notation. In other words, this code might in some other language look like “append(list1, 5)”, but since we recognize that the “list1” object is the “owner” of the append operation (you can only append something to a list), it deserves a special place to the left of the operation name.
Applying this new data type to our original program, we get the following code:
goodNumbers = [] numberToCheck = 1 while numberToCheck < 1000: if numberToCheck % 3 == 0 or numberToCheck % 5 == 0: goodNumbers.append(numberToCheck) numberToCheck = numberToCheck + 1 print(sum(goodNumbers))
The “sum” function is a special function in Python (a method of no object, so we call it global) which sums the elements in a list, provided that addition is defined for those objects (if one continues her Python education past this blog entry, she will see such odd uses of the symbol +).
Now, to get back at the values in the list, we use the index operation, which has the syntax:
list1[index]
and returns the element of “list1” at the specified index. In Python, as with most programming languages, lists are indexed from zero, so the first element of a list is “list1[0]”. To find the median value of our list of goodNumbers, we could use the following code:
length = len(goodNumbers) if length % 2 == 0: twoMids = (goodNumbers[length/2] + goodNumbers[length/2 - 1]) median = twoMids / 2.0 else: median = goodNumbers[length/2] print(median)
The global “len” function gives the length of a number of different objects (lists, strings, etc). A good exercise for the reader is to walk through the above code line by line to understand why it works. Run it on lists of a number of different sizes, not just the results of the Project Euler problem. Better yet, try to break it! Even this simple program has a small bug in it, if we use the appropriate list instead of “goodNumbers”. If one finds the bug, he will immediately be prompted with what to do in case such input shows up. Should one warn the user or return some placeholder value? This question is a common one in computer science and software engineering, and companies like Microsoft and Google give it quite a lot of thought.
Finally, one may want to “save” this bit of code for later. As we already implied, we may use this code to find the median of any list of numbers, given the appropriate input. Mathematically, we want to define the “median” function. The idea of a function is a powerful one both in mathematics and in programming, so naturally there is a language form for it:
def functionName(args...): body return(returnValue)
A few notes: the entire body of a function definition needs to be indented appropriately; the definition of a function must come before the function is ever used; and not every function requires a return statement.
The easiest way to learn functions is to convert our median-finding code into a median function. Here it is:
def median(myList): length = len(myList) if length % 2 == 0: twoMids = (myList[length/2] + myList[length/2 - 1]) medianValue = twoMids / 2.0 else: medianValue = myList[length/2] return(medianValue)
And now, to use this function, we do what we expect:
print(median([1,2,3,4,5])) print(median(range(1,1000))) print(median(goodNumbers))
In the same way that we used loops to reuse small bits of code that had slight changes, we here reuse longer bits of code that have more complicated changes! Functions are extremely useful both for readability and extensibility, and we will revisit them again and again in future.
So here we’ve covered a few basic constructs in Python: numbers, conditional executions with if statements, looping with whiles, the basics of list manipulation, and simple function definition. After this small bit of feet-wetting, the reader should be capable to pick up (and not be bewildered by) a beginner’s book on programming in Python. Here are a couple such free online sources:
- Non-Programmer’s Guide to Python: WikiBooks
- A Byte of Python
- Learning to Program – Alan Gauld
- More Sources
Feel free to ask me any Python questions here, and I’ll do my best to answer them. Until next time!
Eventually, I hope you’ll teach students that the “correct” solution to your initial problem is this:
sum ([ n for n in range(1,1000) if n % 3 == 0 or n % 5 == 0 ])
Since the runtime of both implementations is at worst off by a constant factor, you must mean “correct” as “using the established Python paradigms and language constructs.” And I agree, any person learning Python should know such list comprehensions. But I certainly wouldn’t introduce them before I introduce lists, or even in the same lesson, since lists (more specifically, the syntax and operations associated with lists) are not easy for someone with no exposure to programming.
The point of this exercise was to show the thought process of developing an algorithm, and I chose to use regular old for-loops so that readers would be able to generalize to any kind of loop (as I’m sure you know, list comprehensions aren’t suitable for every loop), and in particular they would be more prepared to stumble across a language without native list comprehensions (C/C++ and Java, among others).
But of course, your point is that a student of Python should be fluent in Python paradigms and built-in functions. I wholeheartedly agree! Just not in this post.
For beginners of python are the list comprehension short-liners not easy. But if you are happy with python this will be a great benefit for coding. I built some one-line-solutions with python for the project euler | https://jeremykun.com/2011/08/10/a-dash-of-python/ | CC-MAIN-2019-09 | refinedweb | 2,681 | 67.08 |
From: David Abrahams (abrahams_at_[hidden])
Date: 2000-10-20 21:05:15
----- Original Message -----
From: <rwgk_at_[hidden]>
To: <boost_at_[hidden]>
Sent: Friday, October 20, 2000 8:44 PM
Subject: [boost] py_cpp & passing tuples from Python to C++
> I am trying to expose this member function to Python:
>
> void UnitCell::set_len(const double Len[3]) {
> for (int i = 0; i < 3; i++) this->Len[i] = Len[i];
> }
>
> This is my idea for the hook in the module init function:
>
> UnitCell_class.def(&UCTbx::UnitCell::set_len, "set_len");
>
> The compiler spits out a very verbose error message starting
> with:
>
> cxx: Error: ../py_cpp/caller.h, line 275: #304 no instance of
> overloaded function "py::from_python" matches the argument
> list
> argument types are: (PyObject *, py::Type<const double *>)
>
> Of course, I really want to get the three floating point values
> from a Python tuple (or list). Is this possible?
Yes, but it hasn't made it into the documentation yet.
>How?
You could expose a function like this one to get the desired effect:
#include <py_cpp/objects.h>
void set_len(UnitCell& x, py::Tuple tuple)
{
double len[3];
for (std::size_t i =0; i < 3; ++i)
len[i] = py::from_python(tuple[i].get(), py::Type<double>());
x.set_len(len);
}
> I also noticed that there are several places where a maximum of five
> arguments is hardwired. What would be involved in increasing the
> number of arguments (I need at least six), or removing the
> limitation altogether?
Sure (also slated for documentation). In the py_cpp folder, run gen_all.py
with an argument of 6.
/boost/development/py_cpp>python gen_all.py 6
-Dave
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2000/10/5933.php | CC-MAIN-2021-31 | refinedweb | 286 | 57.37 |
:Fear Mongering (Score 2) 307
Comment: Re:I don't think so (Score 4, Insightful) 185
Comment: Re:Awesome! (Score 1) 171
Carbon nanotubes aren’t required to build the structure.[8] This would make it possible to build the elevator much sooner, since available carbon nanotube materials in sufficient quantities are still years away.[9]
The problem being that building a lunar base to support the space elevator is already years away.
Comment: Re:1st world countries (Score 1) 439
yeah let's worry about how this will affect the 1st world countries, those are the real victims here
If focusing on the economic effects to 1st world countries spurs them to take action I'm all for it. Nobody in the US is going to act if they think only Africa is at risk.
Comment: Re:Speaking of TLDs and (Score 1) 87
However, you're wrong on this preventing namespace collisions - companies are allowed to have the same name so long as they are in completely different lines of business (so there is no confusion).
Actually you are wrong, you are confusing this with trademark law.
From Companies house:
You are however correct that not all companies use their registered names as their trading name
Comment: Re:Power or USB? (Score 1) 247
Comment: Re:Texas eh? (Score 1) 652
You do know that it was the Congress... controlled by mostly non-Southern Democrats... that killed the Supercollider, right
You said Congress, not House. Stop trying to reframe your argument after you've been shown to be wrong.
Comment: Re:Voting with wallet (Score 2) 307
Comment: Re:'Replying to undo moderation mistake. Sorry, pa (Score 4, Insightful) 383
Currently we have a considerable number of "resetting moderation" posts that just serve to spam threads.
Comment: Re:I have the kiss of death for awesome technology (Score 1) 144
Comment: Re:Why (Score 1) 1359
Comment: Re:Really? (Score 1) 1359...".
Comment: Re:order of magnitude (Score 1) 217
Ironically, your example was off by an order of magnitude. Alpha Centauri is 4 light years or 4e15m away.
And Alpha Centauri isn't even the closest star, Proxima Centauri is.
Comment: Re:Clarify (Score 5, Informative) 289. | http://slashdot.org/~BeardedChimp/tags/interesting | CC-MAIN-2015-11 | refinedweb | 370 | 63.9 |
Java Tutorial
New to Java Programming Centre
Learning Path: Getting Started with MIDP
Learning Path: Getting Started with MIDP 2.0
Retrieving MIDlet Attributes
Managing the MIDlet Life-Cycle with a Finite State Machine
Wireless Application Programming: MIDP Programming and Packaging Basics
Wireless Development Tutorial Part 1
JAR File Specification
JSR 30: J2ME Connected Limited Device Configuration (CLDC) 1.0
JSR 118: Mobile Device Information Profile (MIDP) 2.0
Introduction to OTA Application Provisioning
An Overview of JSR 124: J2EE Client Provisioning
Audience: Beginner and Intermediate
Estimated time: 10 hours
by Richard Marejka
February 2005
Welcome to the MIDlet Life-Cycle Learning Path! Here you'll find a
brief introduction to the life-cycle of an application based on the Java
2 Platform, Micro Edition (J2ME), and links you can follow to articles,
sample code, and specifications that will give you a solid grounding in
this crucial area of J2ME-based development.
The MIDlet Life-Cycle learning path is a bit different from others in format. Many simply state goals and prerequisites briefly, then provide links to the content. This one provides the basic content directly, and along the way furnishes links to other resources.
This learning path will focus on source code. Why? Ultimately
in a product life-cycle, someone must actually create an instantiation
of a design that was based an architecture that was begotten by an idea.
Business cases, designs, and architectures cannot be compiled to run on
devices. Someone actually has to produce code that embodies the design,
meets high standards of quality, and meets users' needs. This coding
task includes crafting a build environment, managing a source base, and
serving it all up in a form that allows the the developers to crank out
the product in its deliverable form, on demand. Control and
repeatability are key elements of software product management.
So where does all this lead? Virtually all source code is layered on
top of existing application progamming interfaces (APIs). How
well you understand the APIs you consume, and how correctly you use them,
directly affect the quality of your product. Merely reading the API
specifications isn't enough. Code that reflects a deep understanding of
the API, and that demonstrates correct patterns of use, both reinforce
the specification and provide idioms that can be directly employed by
other developers. The purpose of this learning path is to help you
produce that kind of code.
Understanding the MIDlet life-cycle is fundamental to creating any
MIDlet. The life-cycle defines the execution states of a
MIDlet – creation, start, pause, and exit – and
the valid state transitions. The application management software
(AMS) is the software on a device that manages the downloading and
life-cycle of MIDlets. The AMS provides the runtime environment for a
MIDlet. It enforces security, permissions, and execution states, and
provides system classes and scheduling.
The basic components of any MIDlet suite you will deliver are the
Java Application Descriptor (JAD) file and the Java Archive
(JAR) file. Together, these two items are the MIDlet suite.
The JAD file, as the name implies, describes a MIDlet suite.
The description includes the name of the MIDlet suite, the location and
size of the JAR file, and the configuration and profile requirements.
The file may also contain other attributes, defined by the Mobile
Information Device Profile (MIDP), by the developer, or both. Attributes
beginning with MIDlet- or MicroEdition- are
reserved for use by the AMS. The JAD file syntax is similar to that of
the java.util.Properties class found in the J2SE
environment.
MIDlet-
MicroEdition-
java.util.Properties
A MIDlet suite on a device is identified by the attribute tuple
(MIDlet-Name, MIDlet-Version, MIDlet-Vendor). The JAR file
will be installed from the location MIDlet-Jar-URL. The
size of the download must agree with the MIDlet-Jar-Size
value.
(MIDlet-Name, MIDlet-Version, MIDlet-Vendor)
MIDlet-Jar-URL
MIDlet-Jar-Size
The JAR contains one or more MIDlets, specified in the JAD using the
MIDlet-<n> attribute. The syntax is:
MIDlet-<n>
MIDlet-n : MIDletName , [IconPathname] , ClassName
n
MIDletName
IconPathname
.png
ClassName
javax.microedition.midlet.MIDlet
A manifest file is located at /META-INF/MANIFEST.MF
within the JAR. The manifest has the same syntax as the JAD file and it
may share the same attributes. The rules for attribute location and
lookup are described in the technical tip
"Retrieving
MIDlet Attributes." The manifest is a key component of the MIDP 2.0
signed-MIDlet model. In a signed MIDlet suite, attributes in the JAD
must agree with those in the manifest. While you can modify the JAD file
attributes easily, you can't modify those in the signed MIDlet without
re-signing the MIDlet suite.
/META-INF/MANIFEST.MF
In addition to the Java class files and the manifest, the JAR file
may contain other resources. These may be images that the MIDlet
can load using the
javax.microedition.lcdui.Image.createImage(String) method.
The application can also use
java.lang.Class.getResourceAsStream(String) to access any
resource in the JAR file as a java.io.InputStream
– any resource execpt a class file, that is. The
String argument that either method expects is a pathname
identifying a resource in the JAR file. The definition and rules for
pathname use are found in the "Application Resource Files" section of
the JSR 118: Mobile
Information Device Profile (MIDP) 2.0 specification, on page 36.
javax.microedition.lcdui.Image.createImage(String)
java.lang.Class.getResourceAsStream(String)
java.io.InputStream
String
The J2ME security model is a scaled-down version of the J2SE model.
It has been adapted to work within the contrained resources common among
J2ME devices.
The rules are:
The short of it is:
There are two ways to install a MIDlet suite. The first, called
direct, involves some direct connection between the device and
the development platform – commonly cable, infrared, or
Bluetooth link. In the case of the Nokia 6100, for example, it's a Nokia DKU-5 USB
cable and the Nokia PC Suite software, which includes Nokia Application
Installer. You develop the MIDlet suite, perhaps test it in an emulator,
then install it using the USB cable and Nokia Application Installer.
While this method is efficient for testing on your own device, it is
hardly suitable for deploying an application to millions of end users.
Over-the-air provisioning (OTA) makes large-scale deployment
possible – even easy. A device can install a MIDlet suite
from a remote server using the device's built-in browser. Simply
entering the URL of the suite's JAD file into the browser address field
starts the installation process. In general terms:
MIDlet-Jar-File
This description omits steps relating to signed MIDlet suites,
permissions, and push-registry entries, collapsing them into the
"verifies message and JAR file" step. Error handling has also been
omitted to simplify presentation.
The success of the installation process depends on correct
functioning of the web server and the device's browser. Characteristics
of the network between the device and server can affect installation
too. One frequent cause of OTA failure is a size limit that network
elements impose on the size of the JAR file. Another is specifying
incorrect MIME types for JAD and JAR files. The correct MIME type for a
JAD file is text/vnd.sun.j2me.app-descriptor and for a JAR
file is application/java-archive.
text/vnd.sun.j2me.app-descriptor
application/java-archive
Here's the easiest part: Because a MIDlet suite is a self-contained
entity, deleting it is simple. Most devices allow the user to select a
MIDlet suite and choose a Delete option from a menu. At this
point the device likely asks for confirmation; a positive response
removes the MIDlet suite, including any push-registry entries and record
stores created by any MIDlet in the suite.
On a Sony Ericsson T616, for instance, the deletion process looks
like this:
Since the days of Kernighan and Ritchie's The C Programming
Language (1978), the first program most developers attempt when they
begin using a new language or environment is Hello World. The C version
contains only a few lines, including the specification of an I/O
library, and the output is only a simple greeting, but this venerable
program gives you the chance to prove much: that you can write, build,
debug (if necessary), and run a program. So developers using Java
technology won't be left out, the article "Wireless
Development Tutorial Part I" provides a J2ME version of Hello World
and instructions on building it.
When a MIDlet begins execution, the AMS first calls the
zero-argument constructor to create a new instance of the MIDlet. The
constructor typically does little or no initialization. The AMS
framework provides transitions that you can use as control points for
resource management, as you'll soon see. When the constructor returns,
the AMS places the MIDlet in the Paused state. To shift the
MIDlet to the Active state the AMS calls the
midlet.startApp() method.
midlet.startApp()
A transition from the Active state back to the Paused
state occurs whenever the AMS calls midlet.pauseApp(). You
can think of this method as the inverse of startApp(). The
MIDlet is not being terminated, but it should release any resources it
obtained in startApp(). The MIDlet may shift from
Paused to Active or back any number of times during its
execution, each time on a call to startApp() or
pauseApp().
midlet.pauseApp()
startApp()
pauseApp()
These transition methods give you the opportunities you need to
manage resources effectively. Typically, you'll use
startApp() to allocate record stores, network connections,
UI components, and such, and use pauseApp() to release
these resources.
public class Mandy extends MIDlet {
private boolean once = false;
Mandy() {
}
public void startApp() {
if ( once == false ) {
once = true;
// acquire one-time resources
}
// acquire "other" resources
}
public void pauseApp() {
// release "other" resources
}
}
public class Mandy extends MIDlet {
private boolean once = false;
Mandy() {
}
public void startApp() {
if ( once == false ) {
once = true;
// acquire one-time resources
}
// acquire "other" resources
}
public void pauseApp() {
// release "other" resources
}
}
The MIDlet may enter the Destroyed state from either
Paused or Active, on a call to
midlet.destroyApp(). This method releases any resources
acquired in the constructor, and saves any state information.
midlet.destroyApp()
There are a few variations on this Paused/Active/Destroyed
theme. First, a MIDlet may voluntarily enter the Paused state by
calling midlet.notifyPaused().. Note that this is only a
request to the AMS. Finally, a call to
midlet.resumeRequest() will tell the AMS that the MIDlet is
interested in entering the Active state. How can a Paused
MIDlet make the call to resumeRequest(), you ask? While
it's idle in most ways, a MIDlet in the Paused state may handle
asynchronous events such as timers and callbacks.
midlet.notifyPaused()
midlet.notifyDestroyed()
destroyApp()
boolean
true
false
MIDletStateChangeException
midlet.resumeRequest()
resumeRequest()
There are still some open questions here, about the interactions
between the AMS and the MIDlet. These can be characterized as the
why and the when. For example, when is
startApp() called, and why is pauseApp()
called? The specification is intentionally vague in these areas,
allowing implementations to follow the rules while also allowing host
runtime environments to vary. By way of an example: Suppose a mobile
phone detects an incoming Multimedia Messaging Service (MMS) message.
The AMS may pause an executing MIDlet to free up memory needed by the
MMS message. If the pause operation doesn't free enough memory, the AMS
may then invoke destroyApp() to release more. The
indeterminate answers to such questions may not be the ones developers
are looking for, but they are as definitive as the specification writers
can make them.
Source code available from the Java mobility site comes in two
forms: one for online viewing and another for download. The download
version is in J2ME Wireless Toolkit application format. The following
table also supplies a reference to the article relevant to each sample.
If you're using the J2ME Wireless Toolkit, you can begin working with
the source files simply by unzipping the archive and moving the
applications into the apps subdirectory of your toolkit
installation.
apps
Which brings us to the toolkit itself – the J2ME Wireless
Toolkit, of course, a state-of-the-art toolbox for developing
wireless applications based on MIDP. The current release at this
writing, 2.2, supports CLDC 1.0 and 1.1, MIDP 1.0 and 2.0, and seven
other J2ME JSRs. The toolkit allows the developer to choose the JSRs
that best match the target environment, build JAD and JAR files, and run
MIDlets in an emulator before deploying them on target devices. The
toolkit is available for
download.
This learning path has described the basics of the MIDlet runtime
environment, the AMS, including its security model, available APIs, and
execution-state machine. Several sample MIDlets, with related articles,
are available in both online and download formats for your further
education. Future learning paths will cover such topics as network and
resource I/O, user interface, persistent storage (RMS), and a collection
of miscellaneous subjects.
1As used in this document, the terms "Java virtual machine"
or "JVM" mean a virtual machine for the Java platform. | http://developers.sun.com/mobility/learn/midp/lifecycle/ | crawl-002 | refinedweb | 2,206 | 54.83 |
ccache compiled code has *.i filenames instead of *.c file names so the breakpoint can't be found.
repro steps:
vharron-macbookpro:bp vharron$ cat bp.c
#include <stdio.h>
int
main() {
printf("Hello, World!\n");
return 0;
}
vharron-macbookpro:bp vharron$ ./build.sh
+ CCACHE=ccache
+ ccache gcc -g -o bp.o -c bp.c
+ gcc -o bp bp.o
vharron-macbookpro:bp vharron$ lldb bp
Current executable set to 'bp' (x86_64).
(lldb) b bp.c:5
expected output:
Breakpoint 1: where = bp`main + 22 at bp.c:5, address = 0x0000000100000f56
(lldb) r
actual output:
Breakpoint 1: no locations (pending).
WARNING: Unable to resolve breakpoint to any actual locations.
[reply] [-] Comment 6
Removing ccache fixes this problem
filename in ccache generated symbols is incorrect?
With ccache:
cu_sp->GetPath().c_str() /Users/vharron/.ccache/tmp/bp.tmp.vharron-macbookpro.roam.corp.google.com.6262.i
Without ccache:
cu_sp->GetPath().c_str() /Users/vharron/dev/bp/bp.c
[reply] [-] Comment 3
LLDB says a fix for this would cause significant performance regressions for the general case:
OSX 10.9.4
vharron-macbookpro:bp vharron$ lldb --version
lldb-310.2.37 (also reproduced with head of SVN)
vharron-macbookpro:bp vharron$ gcc -.3.0
Thread model: posix
vharron-macbookpro:bp vharron$ ccache --version
ccache version 3.1.9
This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation; either version 3 of the License, or (at your option) any later
version.
A workaround should be to build with CCACHE_CPP2=1.
> filename in ccache generated symbols is incorrect?
I would say: no. What the compiler compiled was the preprocessed .i file, not the .c file (unless CCACHE_CPP2=1 in which case the compiler compiles the original .c file instead of the intermediary .i file).
> LLDB says a fix for this would cause significant performance regressions
> for the general case:
Not sure I understand lldb's logic here at a quick glance. Would it work if cu_sp->GetPath().c_str() was "/Users/vharron/.ccache/tmp/bp.tmp.vharron-macbookpro.roam.corp.google.com.6262.c" instead of "/Users/vharron/.ccache/tmp/bp.tmp.vharron-macbookpro.roam.corp.google.com.6262.i" or did have to be "/Users/vharron/dev/bp/bp.c" to work?
Closing due to lack of progress. Please reopen if there are any new feedback on the problem. | https://bugzilla.samba.org/show_bug.cgi?id=10727 | CC-MAIN-2021-17 | refinedweb | 402 | 63.25 |
im_thresh, im_slice - threshold an image
#include <vips/vips.h> int im_thresh(in, out, threshold) IMAGE *in, *out; double threshold; int im_slice(in, out, threshold1, threshold2) IMAGE *in, *out; double threshold1, threshold2;
These functions have been replaced with the relational and boolean packages - see im_lessconst() and im_and() for much better ways of doing this. These functions operate on any non-complex input. The output image is a unsigned char image with the same sizes and the same number of channels as input. im_slice() thresholds the image held by image descriptor in and writes the result on the image descriptor out. Output is a byte image with values less than threshold1) set to 0, values in [threshold1, threshold2) set to 128 and values greater than threshold2 set to 255 (x in range [a,b) means a<=x<b). im_threshold() thresholds the image held by image descriptor in and writes the result on the image descriptor out. Output is a byte image with values less than threshold set to 0, and values greater or equal to threshold set to 255.
The function returns 0 on success and -1 on error.
im_dilate(3), im_erode(3), im_lessconst(3), im_and(3).
N. Dessipris,
N. Dessipris - 26/04/1991 26 April 1991 IM_THRESH(3) | http://huge-man-linux.net/man3/im_slice.html | CC-MAIN-2018-26 | refinedweb | 207 | 70.63 |
Testing APIs using Functional Programming in Scala
How you can apply functional programming concepts to gain confidence about your API code with minimum effort
In this post I’m going to describe a testing approach you can take to reduce the amount of time spent writing tests, whilst gaining enough confidence in your code to ship it. I’ll be focusing specifically on testing APIs, as they often are challenging to test — sometimes business logic and IO are interweaved.
And I’ll be showing how functional programming is particularly adept at cleanly separating IO from logic, to help with good testing.
API
The examples I have in this post are all based on an example API which uses JWT(JSON Web Token) for authentication and a database to retrieve data. Firstly I’m going to describe the API functioning as a way of discussing how to test that.
Let’s look at an example request coming to our API and what it needs to do to serve a response.
Key IO activities for this endpoint are: validating the user’s identity (Validate JWT against an IdP), their access rights (Validate Permissions using a database query), fetching data from a database (another database query) and logging the journey along the way. What catches the eye is that preparing a response requires a lot of IO interaction.
The question you want to ask yourself is: how should you write your code in a way that would best support testing of this IO-heavy API?
The answer is to make your code follow the “Functional Core, Imperative Shell” pattern, and then make the “Imperative Shell” testable by separating description from interpretation.
Functional Core, Imperative Shell
One way to make your application more testable is to use the “Functional Core, Imperative Shell” approach. This means writing your code in terms of pure functions, and pushing impure IO code to the outer edges of the program.
One of the biggest advantages of functional programming is that it is very explicit about whether your code performs IO (i.e. impure or side-effectful code). Let’s say you wanted to validate a JWT, your initial signature might look something like this:
When implementing, you soon realize that you need to perform a call to a JWKS (JSON Web Key Set) cache which has the following signature:
Using types force you to make a decision: you either modify the return type of
validateJwt to include
IO, explicitly telling us that this function needs to talk to the outside world:
Or you can keep the non-IO return type by passing the keys when validating a token:
The latter approach is more testable. You can test the validation logic without having to depend on an external service to give you a set of JWK. You pushed the impure code — making an http call — further outside, keeping your application core side-effect free. You can also apply local reasoning when looking at your
validateJwt code (you don't have to think about the state of
Jwk cache or http calls it makes) all you have is a token and a list of keys, so it's easy to understand how the code will behave.
You can apply this process to the rest of your code and it will naturally push side-effectful code into the “Impreative Shell”. This is how it might look like for the example API endpoint from above:
Separating description from interpretation
The second step I’m recommending is to make that imperative shell testable (“Imperative Shell”).
By keeping all of our side-effectful code at the edges of the application and keeping our core “pure”, we are already making it more testable and easier to reason about.
In many cases this is not sufficient. This approach will take us as far as testing individual functions, but it will not allow us to test how components will interact with each other. For example, in our API you will need something to call the
def getKeys: IO[Set[Jwk]] function and feed its output to
validateJwt.
How do we test this kind of interaction then? We can do so by separating the description of the program from its interpretation. The main idea is that we encode outside-world interactions in a generic way and then provide two interpretations: one for the real interactions and another one for test interactions.
It is still a developing field but the most popular way to do this is “Final Tagless” or “Free Monad”. I have chosen to demonstrate the “Final Tagless” approach because it’s more flexible and requires less boilerplate code. Both approaches start with creating a Domain Specific Language (DSL) for your effectful code. This is how it looks with Final Tagless:
After DSL is defined we can start using it to write our application code:
DSL is defined in terms of an abstract type constructor
F[_] meaning that the return type has to be "wrapped" in some kind of an effect
F without specifying the exact type. Now that the
F type is not "fixed", you are free to replace it with different concrete types for testing and for running.
Test Implementation
For testing you might not only be interested in the result of the computation but we often would like to see what side-effects it will have to perform. For example, it’s often necessary to check that the proper messages were logged. This is what the testing implementation looks like:
Here a
TestProgram type is used which allows recording would-be side-effects using a
WriterT monad:
wrapped is just a convenience method to create a
TestProgram using either a value or a side-effect. This approach allows us to test the behavior of the application as a whole without having to write integration tests. This is what a test typically looks like:
Tests treat the API as a black-box function from
Request to
Response with side-effects. The above example tests successful authentication of a user.
When
Request is sent, implementation details for creating a valid
Response are not important as long as it satisfies the contract of our API.
What needs to be tested here is whether a valid token will be accepted and the audit entry will be created in logs. It is irrelevant whether it called
validateJwt to validate a token, or some other function, or even delegated the whole thing to an external library. You are testing the behavior, not the implementation.
Notice how tests don't check if
LoggerTest was called with a certain method, all they test is that
LogInfo side-effect was produced. Whether
Logger does it, or the
endpoint itself or even
Jwks is irrelevant to make sure that when the method is called it will do what it was designed to do.
Production Implementation
For the code to be executed in a production setting a real implementation needs to be provided. Here are examples for the DSLs created above:
Now that the application code is fully tested including the HTTP layer and wiring between components, there is still a part of the code untested, the production DSL implementations. This is the code that does HTTP requests, reads database or memory. These implementations need to actually interact with the outside world to be tested. Let’s take another look at the production implementation of
JwksDsl:
To test it you’d have to supply it with an
HttpClient and with
Uri, call
getKeys and inspect the result. This can be a very simple integration test which calls the real endpoint and checks that it gets some JWKs back. Since the interface is so simple there is not much to test, so the integration tests are very quick.
Writing good DSLs
The simpler the DSL the easier and the faster it is to test, but it is important to find the right level of abstraction. Taking the previous approach to the extreme you could have a low-level
HttpDsl with http mehods like this:
and then use it instead of
JwksDsl in your code:
Looks like an improvement over less generic
JwksDsl at a first glance, since you only have to test the
HttpDsl implementation once and then reuse for all of your http requests. The problem here, is that this approach will not give you enough confidence that the code works correctly. While testing higher-level
JwksDsl will hit the real endpoint with real headers, and will confirm that it can deserialise the response into the case classes. You would not have the same confidence if you were to test
HttpDsl on its own.
On the other hand it is important to limit a DSL to the external interaction only and limit its functionality as much as possible. For example you could have combined JWT verification logic and got JWKs like this:
Since it needs to perform an HTTP request it can only be tested at an integration test setting. This means it will have to call the
jwks endpoint even if you just want to verify that it correctly extracts claims from a token.
Pros & Cons
Pros
- Testing what matters.
This approach allows you to test what matters to the user of the API.
By having the tests on the
Request/
Responselevel it's easier to translate requirements into tests.
These kind of high-level tests are a good source of documentation — it is clear what the endpoint does just by looking at the input and output.
- Fearless refactoring.
It might look like this kind of approach can lead to spaghetti code, since there are no tests for individual classes guiding the design and interactions between components. In reality, the combination of functional programming which promotes small pure functions and freedom to move things around leads to constant small improvements to the codebase.
How many times have you thought twice about moving a method between classes or just renaming it in fears that mock-based tests would break? Even after tests are fixed, how confident are you that you haven’t introduced bugs into the tests themselves?
- Clearly defined application boundaries.
It’s easy to look at the DSL definitions and see how the application interacts with the outside world.
- Bigger coverage with fewer tests.
The example test covers extracting the token from a request, verifying it with JWK, extracting the claims, and testing how these parts are wired together. This is like having an integration test which runs at the speed of a unit test.
Cons
- Since
Request/
Responsetypes are coming from an http library it ties your tests to that library.
- Tests are not isolated, so a single bug can cause multiple tests to fail.
It’s important to note, that one testing approach doesn’t preclude the other. You can still have classic unit tests which test individual functions/methods where the additional overhead of writing application-level tests is not worth it.
Conclusion
To recap:
1. Identify and push IO interactions to the edges of your application
2. Use either Final Tagless, Free Monad or any other approach to allow separation of program description from its interpretation
3. Provide test interpretation and use unit tests to test it
4. Provide production interpretation and use integration tests talking to real systems to test it.
I would very much welcome comments about this approach and whether or not it would work for your application. | https://medium.com/seek-blog/testing-with-functional-programming-in-scala-bb26bd4d4b42?source=collection_home---4------0----------------------- | CC-MAIN-2020-10 | refinedweb | 1,902 | 57 |
Introduction: Electromagnetic Mjolnir (From Thor's Hammer Prank)
EDIT: I've embedded the original video above. It went crazy viral almost a month ago, but if you haven't seen it then the rest of this Instructable may not make much sense, so It's now here for reference.
SAFETY: An electromagnet like this one is serious business! If you're demonstrating around children, make sure their hands and feet aren't anywhere near the magnet when placing it on the ground. If someone has a pacemaker, keep the electromagnet away from their chest area!
Parts List:
- Costume Prop Mjolnir - Decent price, pretty accurate size, and personally I like the grainy patina look. If you use a different hammer make sure the head is hollow.
- Arduino Pro Mini 5V - Don't forget, you'll need an appropriate FTDI cable to program this board! It also really helps if you have a regular Arduino Uno to prototype on first before you move everything to the Pro Mini.
- Sparkfun Fingerprint Scanner - You probably don't need a lot of prints to be stored, so this is the cheaper of the two scanners carried by Sparkfun. Don't forget to get the matching JST connector! There are great Instructables using this Fingerprint Scanner, but they use a voltage divider to keep Rx and Tx at 3.3V. As far as I've seen this is unnecessary, no level shifting requried.
- TTP223 Capacitive Touch Sensor - From eBay, it's really cheap! There's an equivalent from Adafruit as well if you're wary of eBay parts.
- 4N35 Optocoupler - A ubiquitous optocoupler, got it from All Electronics. Used with a 1K resistor.
- 3.7 V 150 mAH Lithium Battery - I used something like this to power the capacitive sensor, but you can avoid it if you don't accidentally connect the handle to the Arduino Ground like I did.
- 4 AA Battery Holders - I used 4 separate single AA battery holders because I was really tight on space and had to find nooks and crannies to keep the batteries in. I put Dollar Tree AA batteries in them.
- 4 12V 1.2AH SLA Batteries - I got these from All Electronics because they have a physical storefront near me. You can probably get cheaper from eBay, and really you should just spend a little more on 22.2V Lithium batteries. The 1.2 AH Lead-acid batteries don't really like putting out 1 whole amp continuously, and I'll probably be making that upgrade soon.
- Crydom CMX60D10 60V 10A Solid State Relay - Again from All Electronics, you may be able to find it cheaper elsewhere.
- 1n400X diodes - I had 1n4007's on hand from another project, but 1n4002's would be fine. This series of diodes are rated different voltages (50 - 1000) but what's more important for a flyback diode is current rating, and they're all rated at 1 amp. Use them in parallel for at least twice the amount of current your magnet will draw when on (in my case, 2 to handle 2 amps, but these parts are cheap so don't be afraid to overestimate).
- 10 inch 3/4" Galvanized Steel Nipple - From Home Depot, along with corresponding flange and coupler. Best to go to a hardware store and see them in person, a 12 inch long pipe would be more screen accurate but will give more leverage making the hammer easier to "pry" off a metal surface.
- 2 Drawer Pulls - I needed a way to mount the awkwardly shaped transformer to a board of wood that could take some pulling punishment. I decided on modifying metal drawer handles to act as braces through the gaps of the "E" shape. This is another thing you'd do best to find in person, preferably with a prepared transformer to test fit. This particular part worked for me, but you'll likely need to use something different.
- Microwave Oven Transformer - You'll want the largest one possible that's assembled with an "E" piece and an "I" piece cheaply welded together. That way you'll be able to disassemble easily with a dremel cutting wheel or angle grinder without damaging the windings. This tutorial illustrates how to do that in detail. And this great article by K&J Magnetics explains why the "E" shape is so important for holding strength. You might even be able to just buy the "E" laminations online...
- Scrap Wood - I was lucky and had access to scrap 13 ply baltic birch plywood, which is notoriously sturdy stuff. You'd probably be fine with a decent hardwood/plywood/mdf of comparable thickness (3/4").
- Leather Tennis Grip - I got lucky with this too, there was a tennis store by me that happened to carry old timey leather wraps for tennis rackets. They weren't really on sale, they just had some on hand that they let me have for something like $4. There are leather like tennis grip wraps on the internet, but none of the colors or prices worked for me.
- Glues, Epoxies, Washers, Screws, Bolts, Heat Shrink, Tape, Magnets, Wires, Etc. - These will all depend on how you build your own hammer. I mostly used super glue and hot glue for the batteries, since I wanted them to be semi-removable later if I decided to spring for Lithium batteries instead. I used a set of 4 neodymium magnets to help keep the lid of the hammer closed, they sort of work.
I used Conductak to connect the Capacitive Sensor to the handle. It's not commercially available yet, so you can use alligator clips or try soldering, though it can be difficult soldering a wire to a large piece of metal like a flange.
Step 1: Diagram
Fritzing doesn't have all of the components, so there are some substitutions with equivalent wiring:
- The 9V batteries represent the 12V SLA Batteries.
- The Antennae represents the handle of the hammer.
- The Flash Memory represents the Fingerprint Scanner.
- The Solenoid represents the electromagnet.
- The Solid State Relay pictured is for AC power rather than DC.
- The Op-Amp breakout represents the Capacitive Sensor.
- In reality there is also a wire connecting the ground of the Capacitive Sensor to the core of the electromagnet. This provides a path to literal earth when the hammer is positioned on the physical ground.
There are plenty of improvements that can be made! As long as you don't accidentally ground the handle like I did, you can nix the capacitive sensor's power supply and the optocoupler. You may even be able to just use the Arduino capsense library and do without the capacitive sensor altogether, but it may be finicky. On that note, if all you care about is controlling the hammer, you can get rid of the fingerprint scanner and the Arduino completely and just get a remote control unit, such as this one. All you'd have to do is connect the output of the receiver to the input of the Solid State Relay, and boom, remote controlled Mjolnir. No programming required!
EDIT: I forgot to include switches! You'll want some simple slide switches to turn the Arduino and the capacitive sensor on and off.
Attachments
Step 2: Code
Don't forget to get the FPS library!
The code is just copy pasted below, the .ino file is also attached:
/*
FPS library created by Josh Hawley, July 23rd 2013 Licensed for non-commercial use, must include this license message basically, Feel free to hack away at it, but just give me credit for my work =) TLDR; Wil Wheaton's Law */
#include "FPS_GT511C3.h" #include "SoftwareSerial.h"
FPS_GT511C3 fps(4, 5);
int touch = 0; int capPin = 9; int flag = 0;
void setup() { Serial.begin(9600); // fps.UseSerialDebug = true; // so you can see the messages in the serial debug screen fps.Open(); pinMode(10, OUTPUT); digitalWrite(10, LOW); pinMode(capPin, INPUT_PULLUP); }
void loop() { touch = digitalRead(capPin); //Serial.println(touch); if ((touch == 0) && flag == 0) { digitalWrite(10, HIGH); fps.SetLED(true); if (fps.IsPressFinger()) { fps.CaptureFinger(false); int id = fps.Identify1_N(); if (id<200) { //Don't care which fingerprint matches, just as long as there is a match digitalWrite(10, LOW); fps.SetLED(false); flag = 1; } } } else { fps.SetLED(false); digitalWrite(10, LOW); } if ((touch == 1) && flag == 1) { //Reset the flag after the hammer has been lifted to return to normal behavior flag = 0; } }
Attachments
Third Prize in the
Halloween Props Contest 2015
3 People Made This Project!
Recommendations
86 Comments
Question 6 months ago
can anyone please, suggest me (solid state relay) available in "Amazon and Flipkart"
1 year ago
Hi everyone!
I'm making this project as a final project for school, but am really confused on how to fit the fingerprint scanner inside the pipe: the fingerprint scanner is just too big to fit inside the pipe?
My pipe is just over 2cm long (a bit bigger than 3/4'') and I still have this problem. Can anyone please help?
Thanks in advance!
Jeroen
Question 3 years ago
I am currently recreating this as a fun summer project, but I am having trouble connecting the transformer coil. One end came with a connector and is a solid connection, however the other end I am soldering straight to the end of the wire coil. When running voltage through the coil and testing it, it seems that the electricity will only flow when the wire is touched at the very tip, and I can't get it to hold when I solder it or electrical tape it I also tried putting the coil end into a crimp on quick disconnect, but no luck there either. Did you run into this problem/ am I doing something wrong? Please let me know!
Answer 1 year ago
I know this question is a year old, but I am adding an answer in case in case anyone else get the same problem.
"When running voltage through the coil and testing it, it seems that the
electricity will only flow when the wire is touched at the very tip,.."
This is correct. The wire used for coils and transformers is coated with an insulating layer of enamel to prevent the coil windings from short circuiting. The windings may look like plain copper, but they are actually insulated wire.If you want to solder the ends properly, you have to remove this enamel near the end. It is best to sand it off or scrape it off, it can also be burned off. (Some low temp insulation will melt/burn away when you use a hot solder iron, the tougher stuff will not.)
Reply 1 year ago
I discovered that after a few more hours of frustration! I carefully scraped of the end enamel with a box cutter and used a flux to help hold the connection. Got it working great! Good luck to future creators.
1 year ago on Step 2
Great Job! I didn't read all of it, and maybe it is in the works - A wireless switch to allow the kiddos to pick it up, and not the parents... hahahaha
Question 2 years ago on Step 2
Can one be purchased already made?
My son battles an auto immune disease and we are taking him to a kids Comic Con to celebrate his birthday. I'd love to get him one for his costume. If possible please email me VExclusive201@aol.com His birthday isn't for a few months.
5 years ago
Great Instructable Allen. I really like the concept you came up with. My teachers and classmates really got a kick out of this. We set up in the student center at my college. So much fun watching giant weightlifters try to lift it but to no avail. Looking forward to seeing more great projects from you.
Reply 2 years ago
Hello this is really cool I’am in need of help I am doing this project for school and got only 3 days left !!! I’m having a problem knowing what wire goes to what if you can tell me or even better make me a video of what goes where step by step that would be awesome thank you
Reply 3 years ago
what did you use in class to have it sit on for people to lift from that was heavy enough?
3 years ago
How much to have one built??
3 years ago on Step 2
It not letting,me open the last file on the coding
4 years ago
Thanks again Allen Pan for this amazing project. It was a real journey for me figuring it all out (read comments below for dramatic effect). I was able to pick up a pipe wrench with no ease of pulling it off. Totally awesome!
Reply 3 years ago
Can you please share the link to where you bought those batteries? I think I see that you only connect the red and black terminals and leave the other set of wires unconnected right?
Question 3 years ago
Can someone please help, I am having trouble selecting the right batteries. I see economical 11.1 V Lipo batteries online, but they come with two sets of wires and I am afraid that they won't work or that I will have difficulties trying to figure out the wiring since they come with two sets of wires, 1 set looks like the regular negative and positive terminals ( red and black), but the other set is what worries me, idk if they are necessary to connect or where to connect them to. If someone can please share the link to some economical 12V or 11.1 lipo batteries that will work for this project and their charger I would really appreciate that.
5 years ago
Could you just wear a magnetic ring and make a corasponding switch of some sort instead of all the arduinos and finger print scanners or remote. I need it as cheap and simple with as few parts as i possibly can. Im not near any stores and i dont like online shopping. Some one pls check out if this would work and get back to me pls
Reply 3 years ago
yes you can if you use a reed switch or something like that which is what I am working on
Reply 3 years ago
did you ever get this to work?
Reply 3 years ago
I haven't but the hacksmith did
Question 3 years ago
Where can I get the hammer amazon is currently sold out. I want to do this for a project in college | https://www.instructables.com/Electromagnetic-Mjolnir-From-Thors-Hammer-Prank/ | CC-MAIN-2021-43 | refinedweb | 2,443 | 70.63 |
When applications were developed twenty years ago and ran on computers that had no backend, most of the operations carried out by the program were synchronous, causing the rest of the application to wait while certain commands or functions complete.
As time went on and apps became more reliant on accessing data via APIs or other sources that aren’t locally available on a device itself, processing data in a synchronous way became unappealing, and rightfully so.
We can’t lock up a UI for seconds at a time while our user requests data from an API.
For this and many other reasons, modern programming languages and frameworks (like Dart and Flutter) contain constructs that help us deal with streams.
In this post, we’ll look at the following concepts:
- What streams are
- How we can use a
StreamControllerand how we can emit events into it
- How we can use a
StreamBuilderin Flutter to update our UI
Something that should be said almost immediately when talking about streams is that when I first started learning about them, they dazzled and confused me. It’s possible that I’m just an average developer, but there are countless articles online about people, not really understanding streams.
While they’re actually quite simple, they’re very powerful, and with this increase in power comes the possibility that you could implement them incorrectly and cause problems. So, let’s talk about what streams are in the first place.
What are streams?
To understand what streams are, let’s first start with the things that we do understand, which are normal synchronous methods. These aren’t fancy at all, in fact, here’s an example of one:
final time = DateTime.now();
In this example, we’re retrieving the date and time from a method that is synchronous. We don’t need to wait on its output because it completes this function in less than a millisecond. It’s okay for our program to wait on the output in this instance because the wait is incredibly short.
Now, let’s look at an asynchronous method by using the
async and
await keywords. We’ll do this by getting the current time and then, by using
Future.delayed, get the time 2 seconds in the future, like this:
void main() async { print('Started at ${DateTime.now()}'); final time = await Future.delayed(Duration(seconds: 2)).then((value) => DateTime.now()); print('Awaited time was at $time'); }
The result of running this is the following:
Started at 2021-10-28 17:24:28.005 Awaited time was at 2021-10-28 17:24:30.018
So, we can see that in our app, we receive our initial time and a time that is 2 seconds in the future. In reality, we can
await a variety of data sources that return in the future, like APIs, or file downloads.
By using the
async/
await pattern, we can retrieve this data and operate on it when it completes.
But, what would we do if we wanted to retrieve the time every 2 seconds? It’s true that we could wrap it in a
for loop and, for our trivial example, that would be okay.
However, this is essentially the same as polling for updates where we make the request every 2 seconds to see whether something changes. Polling is not good for battery life or user experience because it puts the onus on the client device or app to check whether something changes.
It’s instead better to put this responsibility on the server, having the server tell us when something changes and the app subscribes to those updates.
That’s where streams come in. We can easily subscribe to a
stream, and when it yields a new result, we can work with that data as we choose.
In the below example, we set up a
StreamController and use
Timer.periodic to send an event into the stream every 2 seconds. Immediately afterward, we subscribe to the
stream within the stream controller and print when it updates:
import 'dart:async'; void main() async { final streamController = StreamController<DateTime>(); Timer.periodic(Duration(seconds: 2), (timer) { streamController.add(DateTime.now()); }); streamController.stream.listen((event) { print(event); }); }
The output of this is as follows:
2021-10-28 17:56:00.966 2021-10-28 17:56:02.965 2021-10-28 17:56:04.968 2021-10-28 17:56:06.965 2021-10-28 17:56:08.977 2021-10-28 17:56:10.965
Cleaning up the stream subscription
So, now we have our subscription to the stream and it’s emitting over time. That’s great. But we’ve got a small bug with this implementation: we never disposed or cleaned up our subscription to the stream.
This means that even if the user goes to another part of our app or does something else, our app will still listen to this stream and process the results.
It’s okay for us to expend resources running processes that are relevant to the user, but once the user navigates away or uses a different part of our app, we should cancel our subscriptions and clean up the components we used along the way.
Fortunately, we can keep a reference to our subscription and cancel it when we’re not using it anymore. In this case, we’re canceling our subscription after a certain amount of time:
import 'dart:async'; void main() async { final streamController = StreamController<DateTime>(); final unsubscribeAt = DateTime.now().add(Duration(seconds: 10)); StreamSubscription<DateTime>? subscription; Timer.periodic(Duration(seconds: 2), (timer) { streamController.add(DateTime.now()); }); subscription = streamController.stream.listen((event) async { print(event); if (event.isAfter(unsubscribeAt)) { print("It's after ${unsubscribeAt}, cleaning up the stream"); await subscription?.cancel(); } }); }
Again, we have our subscription, but now we’re canceling it. When we cancel it, the app can release the resources involved in making the subscription, thus preventing memory leaks within our app.
Cleaning up subscriptions is integral to using streams in Flutter and Dart, and, if we want to use them, we must use them responsibly.
Handling stream errors
The last thing we must consider is how we handle errors because sometimes our stream can produce an error.
The reasons for these can be vast, but if your stream is connected for real-time updates from a server and the mobile device disconnects from the internet, then the stream disconnects as well and yields an error.
When this happens and we don’t handle the error, Flutter will throw an exception and the app can potentially be left in an unusable state.
Fortunately, it’s fairly easy to handle errors. Let’s make our stream yield an error if our seconds are divisible by three, and, for the sake of completeness, let’s also handle the event when the stream completes:
import 'dart:async'; void main() async { final streamController = StreamController<DateTime>(); final unsubscribeAt = DateTime.now().add(Duration(seconds: 10)); late StreamSubscription<DateTime> subscription; final timer = Timer.periodic(Duration(seconds: 2), (timer) { streamController.add(DateTime.now()); if (DateTime.now().second % 3 == 0) { streamController.addError(() => Exception('Seconds are divisible by three.')); } }); subscription = streamController.stream.listen((event) async { print(event); if (event.isAfter(unsubscribeAt)) { print("It's after ${unsubscribeAt}, cleaning up the stream"); timer.cancel(); await streamController.close(); await subscription.cancel(); } }, onError: (err, stack) { print('the stream had an error :('); }, onDone: () { print('the stream is done :)'); }); }
The output from this is as follows:
2021-10-28 17:58:08.531 2021-10-28 17:58:10.528 2021-10-28 17:58:12.527 the stream had an error :( 2021-10-28 17:58:14.526 2021-10-28 17:58:16.522 It's after 2021-10-28 17:58:16.518, cleaning up the stream the stream is done :)
We can see that we’re handling the error internally (in this case by printing a message, but here’s where we’d use a logging framework to capture what went wrong).
So, let’s recap. We learned:
- How a basic stream works and what purpose they serve
- How to clean up a subscription after we’ve used it
- How to handle basic errors that come from the stream and capture them when a stream completes
Now let’s bring Flutter into the mix! 😊
Working with streams in a Flutter app
To work out how streams work within a Flutter app, we’ll create a simple app called
flutter_streams that has a service with a
StreamController in it. We’ll subscribe to updates from this
StreamController to update the UI for our users. It’ll look like this:
Our app will show us what cat is coming, going, and also what state the cat is in when it does these things (meowing, content, or purring). So, we’ll need a list of cats to choose from.
Laying the Flutter app’s groundwork
We’ll create our service at
services\petservice.dart and the first few lines will be a list of cats that our service can randomly choose from:
const availablePets = <Pet>[ Pet('Thomas', Colors.grey, PetState.CONTENT), Pet('Charles', Colors.red, PetState.MEOWING), Pet('Teddy', Colors.black, PetState.PURRING), Pet('Mimi', Colors.orange, PetState.PURRING), ];
Next, we’ll use an
enum to define the various states our cat can be in. It can enter or leave while being content, meowing, or purring. Let’s set up these
enums now:
enum PetState { CONTENT, MEOWING, PURRING, } enum PetAction { ENTERING, LEAVING, }
Finally, for our data, let’s declare a
Pet class that contains the name, color, and state of our pet. We must also override the
toString method of this class so when we call
toString() on a
Pet object, we receive information on the object in detail:
class Pet { @override String toString() { return 'Name: $name, Color: ${color.toString()}, state: $state'; } final String name; final Color color; final PetState state; const Pet( this.name, this.color, this.state, ); }
Because our cats can come and go at random, we set up a function to randomly choose cats from our
availablePets list, like this:
Pet randomCat() => availablePets[rand.nextInt(availablePets.length)];
Setting up the stream
While we’re still in the same file, let’s create our
PetService that exposes a
StreamController for other parts in our app to listen to:
final petStream = StreamController<PetEvent>();
Then, in the constructor, we can set up a periodic timer that emits every three seconds into the
StreamController when a pet either arrives or leaves. This is quite a long piece of code, so we’ll sum up what we’re doing first.
First, every three seconds we generate a random number between
0 and
1 (inclusive). If it’s
0, we:
- Get a random cat from the list of cats
- Emit an event to the
petStreamto say that the cat arrived
- Add the cat to an internal list to track that it is currently present
If it’s
1 and the list of current pets is not empty, we:
- Select a random pet from the list
- Emit an event to the
petStreamto say the cat left
- Remove the cat from the internal list because it is no longer present
The code for this looks like the following:
// We add or remove pets from this list to keep track of the pets currently here final pets = <Pet>[]; // Set up a periodic timer to emit every 3 seconds Timer.periodic( const Duration(seconds: 3), (timer) { // If there are less than 3 pets in the list // then we always want to add pets to the list // (otherwise a pet and come and leave over and // over again) // // Otherwise we're flipping a coin between 0 and 1 final number = pets.length < 3 ? 0 : rand.nextInt(2); print(number); switch (number) { // 0 = A cat has arrived case 0: { print('Pet Service: A new cat has arrived'); // Get a random cat final pet = randomCat(); // Emit an event that a cat has arrived petStream.add(PetEvent( pet, PetAction.ENTERING, pets, )); // Add the pet to the internal list pets.add(pet); break; } // 1 = A cat is leaving case 1: { // Only remove pets from the list if there are any pets // to remove in the first place if (pets.isNotEmpty) { print('Pet Service: A cat has left.'); // Get a random pet from the internal list final petIndex = rand.nextInt(pets.length); final pet = pets[petIndex]; // Emit an event that the cat has left petStream.add( PetEvent( pet, PetAction.LEAVING, pets, ), ); // Remove from the internal list pets.removeAt(petIndex); } break; } } }, );
Now that we have our service that emits our pets coming and going, we can wire our visual layer to respond to changes in our stream.
Creating our Flutter stream screen
The first thing that we must create is a
StatefulWidget. This means our widget can subscribe to updates from our
PetService:
@override void initState() { final petService = PetService(); _petStream = petService.petStream.stream; super.initState(); }
Next, we must respond to the updates on this stream and update our app’s screen respectively. Again, this is a bit of a longer code snippet, so let’s review what’s happening before we get to the code.
First, we’ll use a
StreamBuilder to react to changes in the stream. Then, within the
build method for the
StreamBuilder, we must check to see whether the stream yielded any data yet. If it hasn’t, we’ll show a
CircularProgressIndicator; if it has, we’ll show the latest updates from the stream:
@override Widget build(BuildContext context) { return Scaffold( appBar: AppBar( title: Text('Flutter Pet Stream'), ), body: StreamBuilder<PetEvent>( stream: _petStream, builder: (context, state) { // Check if the stream has data if (!state.hasData) { // If not, show a loading indicator return Center( child: Column( mainAxisAlignment: MainAxisAlignment.center, crossAxisAlignment: CrossAxisAlignment.center, children: [ CircularProgressIndicator(), Text('Waiting for some pets...') ], ), ); } // Otherwise, show the output of the Stream return Stack( children: [ Center( child: AnimatedSize( duration: Duration(milliseconds: 300), clipBehavior: Clip.antiAlias, child: Card( child: Wrap( alignment: WrapAlignment.center, children: [ ...?state.data?.activePets.map( (e) => Padding( padding: const EdgeInsets.all(8.0), child: Column( mainAxisSize: MainAxisSize.min, children: [ Icon( Icons.pets, size: 30, color: e.color, ), Text(e.name) ], ), ), ) ], ), ), ), ), SafeArea( child: Align( alignment: Alignment.bottomCenter, child: Card( child: Text( state.data!.pet.name + ' is ' + describeEnum(state.data!.pet.state).toLowerCase() + ' and is ' + describeEnum(state.data!.action).toLowerCase() + '.', ), ), )) ], ); }, ), ); }
And our finished product will look like this, and it will periodically update as cats come and go 🐱.
Conclusion
Streams are a necessary part of handling and processing asynchronous data. It’s possible that the first time you encounter them in any language, they can take quite a bit of getting used to, but once you know how to harness them, they can be very useful.
Flutter also makes it easy for us by way of the
StreamBuilder to rebuild our widgets whenever it detects an update. It’s easy to take this for granted, but it actually takes a lot of the complexity out of the mix for us, which is always a good thing.
As always, feel free to fork or clone the sample app from here, and enjoy working with streams! | https://blog.logrocket.com/understanding-flutter-streams/ | CC-MAIN-2022-21 | refinedweb | 2,511 | 62.98 |
Read File in Java
This tutorial shows you how to read file in Java. Example discussed here simply reads the file and then prints the content on the console. File handling is important concept and usually programmer reads the file line by line in their applications.
Classes and Interfaces of the java.io package is used to handle the files in Java. In this example we will use following classes of java.io package for reading file:
FileReader
The class java.io.FileReader is used to read the character files. This class takes file name as a parameter. This class is used to read the character input stream. So this class (java.io.FileReader) is used to read the character file. If you have to read the raw bytes then you should use the java.io.FileInputStream class.
BufferedReader
The class java.io.BufferedReader is used to read the text from a character-input stream. This class is providing the buffering features and is very efficient in reading the characters, lines and arrays. You can also provide the buffer size or use the its default size. The default buffer size is sufficient for general purpose use.
Generally following way it is used:
BufferedReader input = new BufferedReader(new FileReader("mytextfile.txt"));
Here is the complete example of Java program that reads a character file and prints the file content on console:
import java.io.*; /** * This example code shows you how to read file in Java * */ public class ReadFileExample { public static void main(String[] args) { System.out.println("Reading File from Java code"); /("Data is: " + line); } //Close the buffer reader bufferReader.close(); }catch(Exception e){ System.out.println("Error while reading file line by line:" + e.getMessage()); } } }
If you run the program it will give the following output:: Read File in Java
Post your Comment | http://www.roseindia.net/java/javafile/read-file-in-java.shtml | CC-MAIN-2017-04 | refinedweb | 303 | 68.67 |
Forum Index
-nogc switch doesn't allow compile time GC allocations:
template foo(uint N) {
import std.conv : to;
static if(N == 0) enum foo = "";
else enum foo = N.to!string ~ foo!(N - 1);
}
pragma(msg, foo!10);
Error: No implicit garbage collector calls allowed with -nogc option enabled: `_d_arrayappendcTX`
Is it a bug?
On Monday, 5 November 2018 at 18:47:42 UTC, Jack Applegame wrote:
> Is it a bug?
I guess it can be seen as a bug, or just as a limitation of that apparently LDC-specific feature. IIRC, it just checks for calls to functions in a hardcoded list of GC-allocating functions (may not be up-to-date, as it's probably not widely used) and apparently doesn't differentiate between runtime and compile-time interpretation. | https://forum.dlang.org/thread/iynzsmvvydawtfaixlzj@forum.dlang.org | CC-MAIN-2019-04 | refinedweb | 131 | 55.13 |
Nick Roberts <address@hidden> writes: > In fact if BYTE_CODE_SAFE is defined, it appears Emacs will just > generate an error rather than crash. For those following along at home: | /* Binds and unbinds are supposed to be compiled balanced. */ | if (SPECPDL_INDEX () != count) | #ifdef BYTE_CODE_SAFE | error ("binding stack not balanced (serious byte compiler bug)"); | #else | abort (); | #endif I'm curious to know: if the error is recoverable, why abort? The BYTE_CODE_SAFE branch certainly seems to suggest that it is recoverable... -- Romain Francoise <address@hidden> | The sea! the sea! the open it's a miracle -- | sea! The blue, the fresh, the | ever free! --Bryan W. Procter | https://lists.gnu.org/archive/html/emacs-devel/2006-08/msg00127.html | CC-MAIN-2021-10 | refinedweb | 102 | 65.73 |
I have simple code to write new branches to an existing TTree, but the resulting root file seems to contain two copies of the same ttree. Can anyone see why in the code below?
The problem I’m trying to solve is that I have an existing TTree that contains a few arrays of known length. I’m writing a function to loop through all entries in the TTree, and then loop over all elements in the array to find elements that pass certain cuts and place those in a new branch.
I’m still thinking how I can simplify the code below, or if it could be made faster.
Code here:
[code]from ROOT import TFile, TTree # Import any ROOT class you want
from array import array # used to make Float_t array ROOT wants
import sys
ttreeName = “NTuples/Analysis” # TTree name in all files
listOfFiles = [“testing.root”]
for fileName in listOfFiles:
file = TFile(fileName, “update”) # Open TFile
if file.IsZombie():
print “Error opening %s, exiting…” % fileName
sys.exit(0)
print “Opened %s, looking for %s…” % (fileName, ttreeName)
ttree = TTree() # Create empty TTree, and try: # try to get TTree from file. file.GetObject(ttreeName, ttree) except: print "Error: %s not found in %s, exiting..." % (ttreeName, fileName) sys.exit(0) print "found." # Add those variables into the TTree print "Adding new branches:\n ", listOfNewBranches = [] newJetPt = array( 'f', [0] ) listOfNewBranches.append( ttree.Branch("passjetPt", newJetPt, "passjetPt/F") ) newJetEta = array( 'f', [0] ) listOfNewBranches.append( ttree.Branch("passjetEta", newJetEta, "passjetEta/F") ) newJetPhi = array( 'f', [0] ) listOfNewBranches.append( ttree.Branch("passjetPhi", newJetPhi, "passjetPhi/F") ) newJetEmEnergyFraction = array( 'f', [0] ) listOfNewBranches.append( ttree.Branch("passjetEmEnergyFraction", newJetEmEnergyFraction, "passjetEmEnergyFraction/F") ) newJetFHPD = array( 'f', [0] ) listOfNewBranches.append( ttree.Branch("passjetFHPD", newJetFHPD, "passjetFHPD/F") ) # Loop over all the entries numOfEvents = ttree.GetEntries() for n in xrange(numOfEvents): newJetPt[0] = 0.0 newJetEta[0] = 0.0 newJetPhi[0] = 0.0 newJetEmEnergyFraction[0] = 0.0 newJetFHPD[0] = 0.0 ttree.GetEntry(n) for i in 0,1,2,3: # Loop over the top 3 jets until we find one passing cuts if ttree.jetPt[i] < 5.0: break if (ttree.emEnergyFraction[i]>0.01) and (ttree.fHPD[i]<0.98): # Found a jet that passes cuts newJetPt[0] = ttree.jetPt[i] newJetEta[0] = ttree.jetEta[i] newJetPhi[0] = ttree.jetPhi[i] newJetEmEnergyFraction[0] = ttree.emEnergyFraction[i] newJetFHPD[0] = ttree.fHPD[i] break # Fill new branches for newBranch in sorted(listOfNewBranches): newBranch.Fill() file.Write() file.Close()[/code] | https://root-forum.cern.ch/t/adding-new-branches-to-existing-ttree-or-tntuple/9569 | CC-MAIN-2022-27 | refinedweb | 404 | 52.87 |
Using the Toolbox
The Toolbox is a sliding tree control that behaves much like Windows Explorer, but without grid or connection lines. Multiple segments of the Toolbox (called "tabs") can be expanded simultaneously, and the entire tree scrolls inside the Toolbox window. To expand any tab of the Toolbox, click the plus (+) sign next to its name. To collapse an expanded tab, click the minus (-) sign next to its name.
The Toolbox displays icons for items that you can add to projects. Each time you return to an editor or designer, the Toolbox automatically scrolls to the tab and item most recently selected. As you shift focus to a different editor or designer or to a different project, the current selection in the Toolbox shifts with you.
The Toolbox only displays items appropriate to the type of file you are working in. In an HTML page, for example, only the HTML and General tabs are available. In a Windows Form, every category of Windows Forms controls is displayed. No Toolbox items are displayed while editing Console applications, because they are typically designed without a graphical user interface., and on the target .NET Framework version..
You can customize the Toolbox by rearranging items within a tab or adding custom tabs and items. For more information, see How to: Manage the Toolbox Window and How to: Manipulate Toolbox Tabs. To add or remove Toolbox items, use the Choose Toolbox Items Dialog Box (Visual Studio). Items that can be made available as Toolbox icons include components from the .NET Framework class library, COM components, controls for Windows Forms and Web Forms, HTML elements, and XML namespaces.
When you choose a different settings combination, the current Toolbox state is cleared. Choosing a different settings combination can change which Toolbox tabs are now available, and what items are displayed on a tab. For more information, see Working with Settings. | http://msdn.microsoft.com/en-us/library/ms165354(v=vs.100).aspx | CC-MAIN-2014-15 | refinedweb | 314 | 64.71 |
Created on 2020-07-29 18:13 by zmwangx, last changed 2020-07-29 22:20 by eryksun.
I noticed that on Windows, socket operations like recv appear to always block SIGINT until it's done, so if a recv hangs, Ctrl+C cannot interrupt the program. (I'm a *nix developer investigating a behavioral problem of my program on Windows, so please excuse my limited knowledge of Windows.)
Consider the following example where I spawn a TCP server that stalls connections by 5 seconds in a separate thread, and use a client to connect to it on the main thread. I then try to interrupt the client with Ctrl+C.
import socket
import socketserver
import time
import threading
interrupted = threading.Event()
class HoneypotServer(socketserver.TCPServer):
# Stall each connection for 5 seconds.
def get_request(self):
start = time.time()
while time.time() - start < 5 and not interrupted.is_set():
time.sleep(0.1)
return self.socket.accept()
class EchoHandler(socketserver.BaseRequestHandler):
def handle(self):
data = self.request.recv(1024)
self.request.sendall(data)
class HoneypotServerThread(threading.Thread):
def __init__(self):
super().__init__()
self.server = HoneypotServer(("127.0.0.1", 0), EchoHandler)
def run(self):
self.server.serve_forever(poll_interval=0.1)
def main():
start = time.time()
server_thread = HoneypotServerThread()
server_thread.start()
sock = socket.create_connection(server_thread.server.server_address)
try:
sock.sendall(b"hello")
sock.recv(1024)
except KeyboardInterrupt:
print(f"processed SIGINT {time.time() - start:.3f}s into the program")
interrupted.set()
finally:
sock.close()
server_thread.server.shutdown()
server_thread.join()
if __name__ == "__main__":
main()
On *nix systems the KeyboardInterrupt is processed immediately. On Windows, the KeyboardInterrupt is always processed more than 5 seconds into the program, when the recv is finished.
I suppose this is a fundamental limitation of Windows? Is there any workaround (other than going asyncio)?
Btw, I learned about SIGBREAK, which when unhandled seems to kill the process immediately, but that means no chance of cleanup. I tried to handle SIGBREAK but whenever a signal handler is installed, the behavior reverts to that of SIGINT -- the handler is called only after 5 seconds have passed.
(I'm attaching a socket_sigint_sigbreak.py which is a slightly expanded version of my sample program above, showing my attempt at handler SIGBREAK. Both
python .\socket_sigint_sigbreak.py --sigbreak-handler interrupt
and
python .\socket_sigint_sigbreak.py --sigbreak-handler exit
stall for 5 seconds.)
Winsock is inherently asynchronous. It implements synchronous functions by using an alertable wait for the completion of an asynchronous I/O request. Python doesn't implement anything for a console Ctrl+C event to alert the main thread when it's blocked in an alterable wait. NTAPI NtAlertThread will alert a thread in this case, but it won't help here because Winsock just rewaits when alerted.
You need a user-mode asynchronous procedure call (APC) to make the waiting thread cancel all of its pended I/O request packets (IRPs) for the given file (socket) handle. Specifically, open a handle to the thread, and call QueueUserAPC to queue an APC to the thread that calls WinAPI CancelIo on the file handle. (I don't suggest using the newer CancelIoEx function from an arbitrary thread context in this case. It would be simpler than queuing an APC to the target thread, but you don't have an OVERLAPPED record to cancel a specific IRP, so it would cancel IRPs for all threads.)
Here's a context manager that temporarily sets a Ctrl+C handler that implements the above suggestion:
import ctypes
import threading
import contextlib
kernel32 = ctypes.WinDLL('kernel32', use_last_error=True)
CTRL_C_EVENT = 0
THREAD_SET_CONTEXT = 0x0010
@contextlib.contextmanager
def ctrl_cancel_async_io(file_handle):
apc_sync_event = threading.Event()
hthread = kernel32.OpenThread(THREAD_SET_CONTEXT, False,
kernel32.GetCurrentThreadId())
if not hthread:
raise ctypes.WinError(ctypes.get_last_error())
@ctypes.WINFUNCTYPE(None, ctypes.c_void_p)
def apc_cancel_io(ignored):
kernel32.CancelIo(file_handle)
apc_sync_event.set()
@ctypes.WINFUNCTYPE(ctypes.c_uint, ctypes.c_uint)
def ctrl_handler(ctrl_event):
# For a Ctrl+C cancel event, queue an async procedure call
# to the target thread that cancels pending async I/O for
# the given file handle.
if ctrl_event == CTRL_C_EVENT:
kernel32.QueueUserAPC(apc_cancel_io, hthread, None)
# Synchronize here in case the APC was queued to the
# main thread, else apc_cancel_io might get interrupted
# by a KeyboardInterrupt.
apc_sync_event.wait()
return False # chain to next handler
try:
kernel32.SetConsoleCtrlHandler(ctrl_handler, True)
yield
finally:
kernel32.SetConsoleCtrlHandler(ctrl_handler, False)
kernel32.CloseHandle(hthread)
Use it as follows in your sample code:
with ctrl_cancel_async_io(sock.fileno()):
sock.sendall(b"hello")
sock.recv(1024)
Note that this requires the value of sock.fileno() to be an NT kernel handle for a file opened in asynchronous mode. This is the case for a socket.
HTH | https://bugs.python.org/issue41437 | CC-MAIN-2020-34 | refinedweb | 759 | 52.15 |
As I posted yesterday, an ICE compound with no category and no task will not show up in the Preset Manager. Here’s a Python script that checks .xsicompound files and reports any that are missing both the category and tasks attributes.
I use ElementTree to parse the .xsicompound XML, and get the category and tasks attributes from the xsi_file element, which looks something like this:
<xsi_file type="CompoundNode" name="abScatter" author="Andreas Bystrom" url="" formatversion="1.4" compoundversion="1.0" constructionmode="Modeling" backgroundcolor="7765887">
Here’s the script.
from siutils import si # Application from siutils import sidict # Dictionary from siutils import sisel # Selection from siutils import siuitk # XSIUIToolkit from siutils import siut # XSIUtils from siutils import log # LogMessage from siutils import disp # win32com.client.Dispatch from siutils import C # win32com.client.constants from xml.etree import ElementTree as ET import os, fnmatch # # Generator function for finding files # def find_files(directory, pattern): for root, dirs, files in os.walk(directory): for basename in files: if fnmatch.fnmatch(basename, pattern): filename = os.path.join(root, basename) yield filename # # Check .xsicompound file for category and tasks attributes # def check_xsicompound( f ): try: tree = ET.parse( f ) except Exception, inst: print "Unexpected error opening %s: %s" % (f, inst) # Get the xsi_file element xsi_file = tree.getroot() # name = xsi_file.attrib['name'] # Check the category and task elements cat = False tasks = False if 'category' in xsi_file.attrib and xsi_file.attrib['category'] != '': cat = True if 'tasks' in xsi_file.attrib and xsi_file.attrib['tasks'] != '': tasks = True # return False if both are blank return cat or tasks # # # # list of compounds with no category and no tasks compounds = [] # check all compounds in all workgroups for wg in si.Workgroups: d = siut.BuildPath( wg, "Data", "Compounds" ); for filename in find_files(d, '*.xsicompound'): b = check_xsicompound( filename ) if not b: compounds.append( filename ) log( "%d compounds found with no category and no tasks:" % (len(compounds)) ) for f in compounds: log( f )
I can’t execute the script in 2011 SAP without getting this error:
[error on line 1.]
You need to replace the from siutils import statements with this:
Works. Thank you. Nice Script 🙂
# ERROR : Traceback (most recent call last):
# File “”, line 65, in
# b = check_xsicompound( filename )
# File “”, line 32, in check_xsicompound
# print “Unexpected error opening %s: %s” % (xml_file, inst)
# NameError: global name ‘xml_file’ is not defined
# – [line 32]
SI2012.5 Win7/64
On line 32, change xml_file to f:
print “Unexpected error opening %s: %s” % (f, inst)
brilliant!
works perfect now. thanks | https://xsisupport.com/2011/11/02/checking-xsicompounds-for-no-category-and-no-tasks/ | CC-MAIN-2018-17 | refinedweb | 409 | 50.12 |
Custom Modules
For more advanced analysis, users can save files of standardized code, utils, and modules in their Sisense for Cloud Data Teams Git repository to import and use in their in-app chart editor. Custom modules are a necessity to users that value code quality, repeatability, and scalability across their organization.
Note: Custom modules are available on sites with both the Git Integration and the Python/R Integration. Site Administrators can contact their Account Managers for additional information.
Creating Custom Modules
Custom modules help teams maintain consistent analysis, smooth out workflows, and define business logic that can be reused easily. These modules are saved in a directory in the user’s Sisense for Cloud Data Teams Git repository in a directory called custom_modules.
Getting Started:
- Create a new directory called custom_modules in the periscope/master branch if one does not already exist.
- Custom_modules will be at the same level of the dashboard, views, and snippet directories.
- Add files into this directory and push to the remote periscope/master branch.
Notes on custom modules
- The total size of all custom modules is currently limited to 1MB.
- All files in the custom_modules directory will be available to all users that have SQL edit access in Sisense for Cloud Data Teams.
- Custom modules cannot be used across different spaces.
- API calls are not supported in these files.
Using Custom Modules
After uploading files to the custom_modules directory in the git repository, users will be able to import their files in the code environment of any chart or view editor.
Note: Sisense for Cloud Data Teams officially supports .r and .py modules, but other file types may be unofficially supported.
R
- In R call periscope.source(‘mylib’), where mylib is the name of the module.
Python
- In Python call import mylib, where mylib is the name of the module.
- For other files (e.g. images or models) use periscope.open(myfile), where myfile is the name of the file.
Importing other files like images or models is currently only supported for Python.
For example:
with periscope.open(‘state_pop.csv’) as file:
df = pd.read_csv(file)
print(df)
With custom modules, analysts can load in a pre-trained model to the Sisense for Cloud Data Teams python environment by adding the .sav file to the custom_modules directory.
The user will need to first pickle their trained model as an .sav file. This can be done by adding these two lines of code to the bottom of the original training script. The script used to train the model and generate the .sav file does not have to be in the custom_modules directory.
import pickle
filename = 'finalized_model.sav'
pickle.dump(model, open(filename, 'wb'))
Once the .sav file is added to the custom_modules directory it can be imported into the Sisense for Cloud Data Teams python environment by using the periscope.open() function and the _Unpickler(). Users will need to add import pickle at the top of their python editor in order to load their model.
Below is an example of importing the .sav file and using the model in Sisense for Cloud Data Teams.
<a href="#top">Back to top</a> | https://dtdocs.sisense.com/article/custom-modules | CC-MAIN-2022-40 | refinedweb | 526 | 58.48 |
DirectButton
DirectButton is a DirectGui object that will respond to the mouse and can execute an arbitrary function when the user clicks on the object. This is actually implemented by taking advantage of the “state” system supported by every DirectGui object.
Each DirectGui object has a predefined number of available “states”, and a current state. This concept of “state” is completely unrelated to Panda’s FSM object. For a DirectGui object, the current state is simply as an integer number, which is used to select one of a list of different NodePaths that represent the way the DirectGui object appears in each state. Each DirectGui object can therefore have a completely different appearance in each of its states.
Most types of DirectGui objects do not use this state system, and only have one state, which is state 0. The DirectButton is presently the only predefined object that has more than one state defined by default. In fact, DirectButton defines four states, numbered 0 through 3, which are called ready, press, rollover, and disabled, in that order. Furthermore, the DirectButton automatically manages its current state into one of these states, according to the user’s interaction with the mouse.
With a DirectButton, then, you have the flexibility to define four completely
different NodePaths, each of which represents the way the button appears in a
different state. Usually, you want to define these such that the ready state is
the way the button looks most of the time, the press state looks like the button
has been depressed, the rollover state is lit up, and the disabled state is
grayed out. In fact, the DirectButton interfaces will set these NodePaths up for
you, if you use the simple forms of the constructor (for instance, if you
specify just a single text string to the
text parameter).
Sometimes you want to have explicit control over the various states, for instance to display a different text string in each state. To do this, you can pass a 4-tuple to the text parameter (or to many of the other parameters, such as relief or geom), where each element of the tuple is the parameter value for the corresponding state, like this:
b = DirectButton(text=("OK", "click!", "rolling over", "disabled"))
The above example would create a DirectButton whose label reads “OK” when it is not being touched, but it will change to a completely different label as the mouse rolls over it and clicks it.
Another common example is a button you have completely customized by painting
four different texture maps to represent the button in each state. Normally, you
would convert these texture maps into an egg file using
egg-texture-cards
like this:
egg-texture-cards -o button_maps.egg -p 240,240 button_ready.png button_click.png button_rollover.png button_disabled.png
And then you would load up the that egg file in Panda and apply it to the four different states like this:
maps = loader.loadModel('button_maps') b = DirectButton(geom=(maps.find('**/button_ready'), maps.find('**/button_click'), maps.find('**/button_rollover'), maps.find('**/button_disabled')))
You can also access one of the state-specific NodePaths after the button has
been created with the interface
myButton.stateNodePath[stateNumber].
Normally, however, you should not need to access these NodePaths directly.
The following are the DirectGui keywords that are specific to a DirectButton. (These are in addition to the generic DirectGui keywords described on the previous page.)
Like any other DirectGui widget, you can change any of the properties by treating the element as a dictionary:
button["state"] = DGG.DISABLED
Example
import direct.directbase.DirectStart from direct.gui.OnscreenText import OnscreenText from direct.gui.DirectGui import * from panda3d.core import TextNode # Add some text bk_text = "This is my Demo" textObject = OnscreenText(text=bk_text, pos=(0.95,-0.95), scale=0.07, fg=(1, 0.5, 0.5, 1), align=TextNode.ACenter, mayChange=1) # Callback function to set text def setText(): bk_text = "Button Clicked" textObject.setText(bk_text) # Add button b = DirectButton(text=("OK", "click!", "rolling over", "disabled"), scale=.05, command=setText) # Run the tutorial base.run()
Note that you will not be able to set the text unless the mayChange flag is 1. This is an optimization, which is easily missed by newcomers.
When you are positioning your button, keep in mind that the button’s vertical center is located at the base of the text. For example, if you had a button with the word “Apple”, the vertical center would be aligned with the base of the letter “A”. | https://docs.panda3d.org/1.11/python/programming/gui/directgui/directbutton | CC-MAIN-2022-27 | refinedweb | 749 | 54.42 |
Harwood Jones1,377 Points
Defining the function
I'm not sure if I'm way off or not. Am I defining correctly?
import math def square(number) return number * number number = int(input())
2 Answers
Miss Lucy Andresen388 Points
def square(TheNumber): return number * number number = int(input("please insert a number" )) the_ancear = square(number) print("the square root of {} is".format(number),the_ancear)
your missing some dots and the name of the def was the same as the input they need to be different names I also put in some user text and a print statement with a bit of formatting to see what you are doing
Taurai Mashozhera2,308 Points
You forgot to put the colons after defining the function . def square(number): return number*number There is no need of importing math and using the input method because it is not asked for.
Happy coding
lemelleio10,144 Points
lemelleio10,144 Points
EDIT woops sorry, didn't see the second step.
There's no need to import math in this challenge. Also the challenge asks you to assign the output of your function to "result"
you only need the following: | https://teamtreehouse.com/community/defining-the-function | CC-MAIN-2018-47 | refinedweb | 192 | 66.57 |
Sometimes we have to remove character from String in java program. But java String class doesn’t have
remove() method. So how would you achieve this?
Table of Contents
Java Remove Character from String
If you notice String class, we have
replace() methods with different variations. Let’s see what all overloaded replace() methods String class has;
replace(char oldChar, char newChar): Returns a string resulting from replacing all occurrences of oldChar in this string with newChar.
replace(CharSequence target, CharSequence replacement): Replaces each substring of this string that matches the literal target sequence with the specified literal replacement sequence.
replaceFirst(String regex, String replacement): Replaces the first substring of this string that matches the given regular expression with the given replacement.
replaceAll(String regex, String replacement): Replaces each substring of this string that matches the given regular expression with the given replacement.
So can we use
replace('x','');? If you will try this, you will get compiler error as
Invalid character constant. So we will have to use other replace methods that take string, because we can specify “” as empty string to be replaced.
Java String Remove Character Example
Below code snippet shows how to remove all occurrences of a character from the given string.
CopyString str = "abcdDCBA123"; String strNew = str.replace("a", ""); // strNew is 'bcdDCBA123'
Java Remove substring from String
Let’s see how to remove first occurrence of “ab” from the String.
CopyString str = "abcdDCBA123"; String strNew = str.replaceFirst("ab", ""); // strNew is 'cdDCBA123'
Notice that
replaceAll and
replaceFirst methods first argument is a regular expression, we can use it to remove a pattern from string. Below code snippet will remove all small case letters from the string.
CopyString str = "abcdDCBA123"; String strNew = str.replaceAll("([a-z])", ""); // strNew is 'DCBA123'
Java Remove Spaces from String
CopyString str = "Hello World Java Users"; String strNew = str.replace(" ", ""); //strNew is 'HelloWorldJavaUsers'
Java Remove Last Character from String
There is no method to replace or remove last character from string, but we can do it using string substring method.
CopyString str = "Hello World!"; String strNew = str.substring(0, str.length()-1); //strNew is 'Hello World'
Java String Remove Character and String Example
Here is the complete java class for the examples shown above.
Copypackage com.journaldev.examples; public class JavaStringRemove { public static void main(String[] args) { String str = "abcdDCBA123"; System.out.println("String after Removing 'a' = "+str.replace("a", "")); System.out.println("String after Removing First 'a' = "+str.replaceFirst("ab", "")); System.out.println("String after replacing all small letters = "+str.replaceAll("([a-z])", "")); } }
Output produced by above program is:
CopyString after Removing 'a' = bcdDCBA123 String after Removing First 'a' = cdDCBA123 String after replacing all small letters = DCBA123
That’s all for removing character or substring from string in java program.
Indrajit Das says
I want to replace a few words from a String , Whenever a match will be founded it will remove that match . Example : “Learning java is not so easy but also” /* is not so much hard */ “. All that I need to replace the whole comment section ( /* ———-*/). In this case what I should do ?
Pankaj says
You need to use regex for that.
Nisha says
This article will provide good knowledge, who are welling to learn java. . It was great experience. Good platform to enhance our knowledge. I found a clear description in each and every topic.
abcd says
how to remove the string of characters from another string
eg: “lhe” from “hello world”. | https://www.journaldev.com/18361/java-remove-character-string | CC-MAIN-2019-13 | refinedweb | 573 | 57.27 |
fork - create a new process
#include <sys/types.h> #include <unistd.h> pid_t fork(void);
The fork() function creates a new process. The new process (child process) is an exact copy of the calling process (parent process) except as detailed below.
- The child process has a unique process ID.
- The child process ID also does not match any active process group ID.
- The child process has a different parent process ID (that is, the process ID of the parent. Each open directory stream in the child process may share directory stream positioning with the corresponding directory stream of the parent.
- The child process may have its own copy of the parent's message catalogue descriptors.
- The child process' values of tms_utime, tms_stime, tms_cutime and tms_cstime are set to 0.
- The time left until an alarm clock signal is reset to 0.
- All semadj values are cleared.
- File locks set by the parent process are not inherited by the child process.
- The set of signals pending for the child process is initialised to the empty set.
- Interval timers are reset in the child process.
- If the Semaphores option is supported, any semaphores that are open in the parent process will also be open in the child process.
- If the Process Memory Locking option is supported, the child process does not inherit any address space memory locks established by the parent process via calls to mlockall() or mlock().
- Memory mappings created in the parent are retained in the child process. MAP_PRIVATE mappings inherited from the parent will also be MAP_PRIVATE mappings in the child, and any modifications to the data in these mappings made by the parent prior to calling fork() will be visible to the child. Any modifications to the data in MAP_PRIVATE mappings made by the parent after fork() returns will be visible only to the parent. Modifications to the data in MAP_PRIVATE mappings made by the child will be visible only to the child.
- If the Process Scheduling option is supported, for the SCHED_FIFO and SCHED_RR scheduling policies, the child process inherits the policy and priority settings of the parent process during a fork() function. For other scheduling policies, the policy and priority settings on fork() are implementation-dependent.
- If the Timers option is supported, per-process timers created by the parent are not inherited by the child process.
- If the Message Passing option is supported, the child process has its own copy of the message queue descriptors of the parent. Each of the message descriptors of the child refers to the same open message queue description as the corresponding message descriptor of the parent.
- If the Asynchronous Input and Output option is supported, no asynchronous input or asynchronous output operations are inherited by the child process.
The inheritance of process characteristics not defined by this document is implementation-dependent. After fork(), both the parent and the child processes are capable of executing independently before either one terminates.
A process is created with a single thread. If a multi-threaded process calls fork(), the new process contains.
Upon successful completion, fork() returns 0 to the child process and returns the process ID of the child process to the parent process. Otherwise, -1 is returned to the parent process, no child process is created, and errno is set to indicate the error.
The fork() function will.
None.
alarm(), exec, fcntl(), semop(), signal(), times(), <sys/types.h>, <unistd.h>.
Derived from Issue 1 of the SVID. | http://pubs.opengroup.org/onlinepubs/7990989775/xsh/fork.html | crawl-003 | refinedweb | 575 | 64.1 |
#include <wx/event.h>
A help event is sent when the user has requested context-sensitive help.
This can either be caused by the application requesting context-sensitive help mode via wxContext.
The following event handler macros redirect the events to member function handlers 'func' with prototypes like:
Event macros:
wxEVT_HELPevent.
wxEVT_HELPevent for a range of ids.
Indicates how a wxHelpEvent was generated.
Constructor.
Returns the origin of the help event which is one of the wxHelpEvent::Origin values.
The application may handle events generated using the keyboard or mouse differently, e.g. by using wxGetMousePosition() for the mouse events.
Returns the left-click position of the mouse, in screen coordinates.
This allows the application to position the help appropriately.
Set the help event origin, only used internally by wxWidgets normally.
Sets the left-click position of the mouse, in screen coordinates. | https://docs.wxwidgets.org/trunk/classwx_help_event.html | CC-MAIN-2021-17 | refinedweb | 142 | 51.65 |
Hello,
I'm writing a simple calculator in C. The main problem is I need to parse string to evaluate for example 1+2*(3-1)/2.5.
However writing such parses is a very hard task.
This is a trivial thing Python using this code:
s = raw_input('Please enter expression:') try: x = eval(s) except NameError: print 'Unknown elements in the expression!' except ZeroDivisionError: print 'Division with zero attempted!' except: print 'Some other kind of error!' else: print 'Expression is evaluated correctly'
I saw that Python has C API so I assume that Python functions can be called from C.
Is it possible to use Python function eval() to evaluate user string in C program? In other words is it possible to make wrapper to this Python code and evaluate expression in C?
I tried to find answer in Python manual but I'm a complete jackass beginer.
For example I have Test.c
#include <stdio.h> char buff[BUFSIZ]; int main(void) { char * p; double res;/* to store result of expression evaluation*/ printf ("Please enter expression: "); fgets(buff, sizeof buff, stdin); if (p = strrchr(buff,'\n')) { *p = '\0'; } /* now wrapper to python function eval()-....? */ }
and from that c file I'd like to call eval and if success to store result to variable res.
I'm using MSCV++ .NET with Win XP and I assume I'll need to use some kind of Python libraray...
There is explanation in Pythion doc, but there is no complete testing source file and my english is not very good to understand all the described steps.
Is this possible at all?
Please, can you help me?
Thanks in advance! | https://www.daniweb.com/programming/software-development/threads/31682/calling-python-function-from-c-c | CC-MAIN-2018-13 | refinedweb | 277 | 75.3 |
This is your resource to discuss support topics with your peers, and learn from each other.
03-20-2010 05:21 PM - edited 03-20-2010 05:24 PM
jimlongo,
There are so many reasons why those links are not going to work on your Blackberry that it's not worth getting into it.
Your only hope is to try to find an rtsp link to your video on youtube. The rtsp link will look something like:
<a href="rtsp://v2.cache6.c.youtube.com/CkYLENy73wIaP
The challenge here is that you won't be able to just browse to your video in a normal browser, look at the HTML source code and find the link. I think youtube will only render the page with an rtsp link (instead of an http link) when it detects a Blackberry browser. So to find the rtsp link, you may have to build something that imitates a Blackberry browser (i.e. sends a User-Agent header like
"BlackBerry9530/5.0.0.328 Profile/MIDP-2.1 Configuration/CLDC-1.1 VendorID/105") and displays the html that comes back from youtube. Personally, I built a little vb.net program that does just that.
Once you have the rtsp link you can just embed that in a web page as a normal <a /> link (as shown above) and it should work.
03-20-2010 05:29 PM
Jim,
Try embedding the following link in a web page and pointing your blackberry at it:
<a href="rtsp://v3.cache5.c.youtube.com/CkYLENy73wIaP
I think that's the rtsp link to your video.
Let me know if it works.
03-20-2010 10:43 PM - edited 03-20-2010 10:44 PM
03-21-2010 07:26 AM
Ok, good.
If your web server supports PHP, it's very simple to write a short PHP script that will play the RTSP link at Blackberries and an HTTP <OBJECT><EMBED><EMBED /><OBJECT /> construct at everything else....
03-21-2010 07:46 AM - edited 03-21-2010 08:36 AM
Try adding .php to the name of your index file, updating your .htaccess to point to it and then put something like the below in the html (note: obviously replace the rtsp and http links with your real ones, and put in the correct height and width, etc):
<body> <?php if (stristr($_SERVER['HTTP_USER_AGENT'], "blackberry")) : ?> <a href="rtsp://v3.cache5.c.youtube.com/CkYLENy73wIaP
QnI7ix7vigZyRMYESARFEIJbXYtZ29vZ2xlSARSBXdhdGNoWg5QnI7ix7vigZyRMYESARFEIJbXYtZ29vZ2xlSARSBXdhdGNoWg5 DbGlja1RodW1ibmFpbGD-1Mb8w7jP0ksM/0/0/0/video.3gp"DbGlja1RodW1ibmFpbGD-1Mb8w7jP0ksM/0/0/0/video.3gp" > <img src="media/prs_poster_ns.jpg"></a> <?php else : ?> <object id="qtobject" classid="clsid:02BF25D5-8C17-4B23-BC80-D3488ABDDC6> <img src="media/prs_poster_ns.jpg"></a> <?php else : ?> <object id="qtobject" classid="clsid:02BF25D5-8C17-4B23-BC80-D3488ABDDC6 B" height="240" width="320"> <param name="controller" value="false"> <param name="autoplay" value="true"> <param name="showlogo" value="true"> <param name="cache" value="false"> <param name="target" value="myself"> <param name="data" value="" height="240" width="320"> <param name="controller" value="false"> <param name="autoplay" value="true"> <param name="showlogo" value="true"> <param name="cache" value="false"> <param name="target" value="myself"> <param name="data" value=" &rel=0"> <param name="src" value=""> <param name="src" value=" &rel=0"> <param name="type" value="video/3gpp"> <embed height="240" width="320" autoplay="true" target="myself" controller="false" src=""> <param name="type" value="video/3gpp"> <embed height="240" width="320" autoplay="true" target="myself" controller="false" src=" &rel=0" qtsrc="" qtsrc=" &rel=0" /> </object> <?php endif ?> </body> </html>&rel=0" /> </object> <?php endif ?> </body> </html>
03-21-2010 04:45 PM - edited 03-22-2010 12:26 PM
yes that works!! the rtsp stream link gets shown to the BB and the regular object tag to all others.
EDIT, re-reading the thread I see you've already given me the rtsp URL, and also described how you got it. Thanks for your help.
03-31-2010 08:00 AM
Finally it looks like there may be a fix for the blackberry-can't-play-video problem: lots of rumors on the web and elsewhere that Verizon will have the iPhone this summer.
So:
(a) pray that the rumors are true and
(b) don't upgrade your Blackberry - wait until the iPhone is available
04-08-2010 02:06 PM
Zelaza,
Thanks for pushing the heck out of this issue - very annoying indeed. So one question (and an interesting test that may point to how to get a working 3gp file).
(a) how did you generate that rtsp link for the YouTube video? I'm trying to see if we can dynamically link to a url like that when we detect blackberry users.
(b) Here is a very strange phenomenon - we DID find a way to get a 3gp file to play via http! most 3gp tests we created failed, but this one works on my verizon powered storm (v5.0.0.328 of the OS) (not a storm 2 just the original):-
We're trying to figure out right now specifically how we pulled it off but let me know if this works. I will post any info I discover if we can re-output another file that successfully plays. Interestingly enough it was more random luck then anything because other 3gp files we've output don't work via http...
more to come,
Brian
04-09-2010 10:41 AM
Brian.
First, thanks so much for contributing to this thread.
Holy heck, I think you may have stumbled on the magic alchemical formula (even though you don't know exactly what that formula is yet)!!!
The darn thing plays over HTTP. It asks me if I want to OPEN or SAVE, which is only slightly annoying, but when I choose OPEN, it does indeed play on my Storm v5.0.0.328.
I notice you are using an <EMBED /> tag rather than an </OBJECT > tag. Very interesting. Would be interesting to see how the same 3gp file behaves when embedded with an <OBJECT /> tag... Would be cool if there were some way to defeat the "OPEN or SAVE" dialog box, but actually, I can live with that if necessary.
Dude, you are onto something here. Please do us all a huge favor and let us know once you figure out the magical incantation you have to do while building the 3gp file to get this to work.
Good job!
04-09-2010 02:57 PM
I actually think it is the file itself vs. the embed / object tag - I'm just linking directly to a url but didn't do anything special with the link. The downside is that we STILL can't figure out what we did to get this to work - we keep trying different combinations with no luck (arrrgggghhhh!). I will post if we discover anything else.
In the meantime I figured out how to easily grab the youtube streams (just go to and search for your video, right click the video, copy url and boom). However, the streaming server is actually a bit testy so I keep getting halts etc. This may be a throughput issue with the storm but who knows.
I will post when break the back of this silly blackberry mystery. AND - I'm with you - my plan is to migrate to Android or iPhone as soon as Verizon decides it wants to release cutting edge phones and not get consistently trumped by AT&T / Sprint etc.
Take it easy,
Brian | https://supportforums.blackberry.com/t5/Web-and-WebWorks-Development/Blackberry-Browser-cannot-Play-Embedded-MP4-Videos/m-p/479684 | CC-MAIN-2017-04 | refinedweb | 1,246 | 70.63 |
The python program to calculate the square root of (N+1)th Prime number for a given number N using binary search with a precision of unto 7 decimal places includes the following steps.
1. First we are generating next prime number for a given number
2. Second, we are finding its square root.
3. After we are rounding the values to 7 decimal places
The following program illustrates above process
import math def prime(n): np=[] isprime=[] for i in range (n+1,n+200): np.append(i) for j in np: val_is_prime = True for x in range(2,j-1): if j % x == 0: val_is_prime = False break if val_is_prime: isprime.append(j) return min(isprime) n=int(input("Enter a number: ")) x=prime(n) print("next prime number is",x) #calculating square root by taking x as input s=sqrt(x) print("and it squre root is", round(s,7))
The following is the output for above program
Enter a number: 5 next prime number is 7 and it squre root is 2.6457513
Note: only a member of this blog may post a comment. | http://www.tutorialtpoint.net/2021/12/python-program-for-fibonacci-series.html | CC-MAIN-2022-05 | refinedweb | 187 | 65.15 |
In this article we will learn how to display localized data in ASP.NET controls based on language and culture settings of the browser.
Introduction
The objective of this article is to localize ASP.NET controls to display data according to the browser's language setting. The browser's language setting is determined by its current Culture. Localization of ASP.NET controls will enable us to display controls in multiple languages. Here, we will localize two Label controls and one Image control in two languages, English and Hindi. If the current culture is "en-US", then the USA flag is displayed with country and capital name. And if the current culture is "hi-IN", then the Indian flag is displayed with country and capital name.
Culture
The culture of a system or browser is used to determine its language and region. A culture consists of two parts. The first part represents the language code and the second part represents the region code. For example, the culture "hi-IN" represents the Hindi language of the India region. This is a standard format of culture given by IETF (Internet Engineering Task Force).
Step 1 : Create an ASP.NET web application. In the Default.aspx HTML source, add UICulture="auto" in the Page directive as below:
<%@ Page Language="C#" UICulture="auto" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="LocalizedWebsite._Default" %>
This is done to automatically detect UICulture of the page.
For localization, ASP.NET has provided two properties, Culture and UICulture. Culture is used to localize date, numbers, currency etc. And UICulture is used to specify resource file of current culture. Both these properties accept standardized Culture value specified by the IETF.
Step 2 : In the Default.aspx HTML source, add the following code inside the form tag to design the user interface. In the UI we have three Label controls. lblCultureValue displays current culture name, lblCountry displays country and lblCapital displays its capital. An Image control imgFlag displays a flag.
<table border="1px" cellspacing="0">
<tr>
<td class="style2">
Culture
</td>
<td>
<asp:Label</asp:Label>
</tr>
Country
<asp:Label
</asp:Label>
Capital
<asp:Label
</asp:Label>
<tr>
<td colspan="2">
<asp:Image
</table>
Note
The "meta:resourceKey" tag in the controls are used to access values from the resource file.
Step 3 : Import the System.Globalization namespace with a "using" keyword at the top and write the following code in Default.aspx code behind page to display current culture name in the lblCulture Label.
protected void Page_Load(object sender, EventArgs e){lblCulture.Text = CultureInfo.CurrentUICulture.ToString();}
Step 4 : For each language we have to create a resource file. Each resource file contains text translated in that language. We are localizing controls in two languages. So, we need to add two resource files, one each for Hindi and English.
Resource files are created inside a special ASP.NET folder, "App_LocalResources". Create this folder by right-clicking on the project in the solution explorer and selecting:
Add -> Add ASP.NET Folder -> App_LocalResources
Now, add a resource file named Default.aspx.en-US.resx for the English language in the App_LocalResources folder by right-clicking on the folder and selecting:
Add -> New Item -> Resource File
We have created a resource file for the English language. Now we need to add data to the resource file so that we can display data in English if the current Culture is "en-US". Add data for each control like this:
Here, we have added three values in the resource file for our controls which are self-explanatory. Add another resource file for the Hindi language and name it, Default.aspx.hi-IN.resx and add following values:
Now, we have added a resource file for each language. But if the browser has a language other than English or Hindi, it will give an error. For this we need to add a default resource file named Default.aspx.resx. In this default resource file add the same data as in Default.aspx.en-US.resx. If any other language is found then the data from this resource file will be displayed.
The naming convention of the resource file is PageName.CultureName.resx.
Step 5 : Run the application. You will get the following output, assuming that the default language of your browser is "en-US":
Step 6 : Now change language of your browser to Hindi (hi-IN). In Internet Explorer, go to:
Tools -> Internet Options -> General tab -> Languages -> Add
Now select Hindi (India) [hi-IN] from the drop down and click OK. Move Hindi to the top using Move up button. Now run the application again. As we have set the default language culture of the browser to Hindi, we will get data in the Hindi language like the following:
I have added an extra resource file named Default.aspx.hi.resx because if you select the language as Hindi, Mozilla returns "hi" instead of "hi-IN".
If the culture contains only a language code but not the region code, then it is called a neutral culture. If both the language and region code are present, then it is called a specific culture. For example, if the culture is "en", it is neutral but if the culture is "en-US" or "en-GB", it is specific for US or UK.
Summary
In this article we learned to display controls of a website in different languages based on user's browser language setting.
View All
View All | https://www.c-sharpcorner.com/UploadFile/deepak.sharma00/how-to-localize-Asp-Net-controls-based-on-browser%E2%80%99s-language/ | CC-MAIN-2022-40 | refinedweb | 907 | 58.48 |
Hi,
i have found a lot of problems with ojb, if i try this:
/**
* @ojb.class table = "PRODUCT_STOCK"
* @DOC this is a description
*/
public class ProductStock
the "@DOC" statement cant be there, because the ant script does it not
recognize.
However, this i can solve self, but another problem is, that if i have
in my code a
@ojb.reference ....
it must be before a
@ojb.collection
because it will also not recognized.
Can someone tell me why ?
My code is very big and now some Collections of an object will not be
saved.
If i sysout the object before save - everything is fine, after save and
relaod from databes all collections are empty.
(in the database too)
I get no error with ant script, the corosponding SQL Code for the
database looks good, every referencens are available.
Is there a general Problem with large ojb projects ?
Hans
---------------------------------------------------------------------
To unsubscribe, e-mail: ojb-user-unsubscribe@db.apache.org
For additional commands, e-mail: ojb-user-help@db.apache.org | http://mail-archives.apache.org/mod_mbox/db-ojb-user/200806.mbox/%3C4846595B.9050108@repcom.de%3E | CC-MAIN-2014-15 | refinedweb | 169 | 66.94 |
iSaverFile Struct Reference
This interface represents a CS file to save to. More...
#include <imap/saverfile.h>
Inheritance diagram for iSaverFile:
Detailed Description
This interface represents a CS file to save to.
Attach engine objects to this to save them to this file. This is useful to the saver to support saving to multiple files.
Definition at line 52 of file saverfile.h.
Member Function Documentation
Get the file name of the saver file.
Get the type of the saver file.
The documentation for this struct was generated from the following file:
- imap/saverfile.h
Generated for Crystal Space 1.4.1 by doxygen 1.7.1 | http://www.crystalspace3d.org/docs/online/api-1.4/structiSaverFile.html | CC-MAIN-2014-41 | refinedweb | 107 | 61.53 |
Agenda
See also: IRC log
MS: is this still draft?
SAZ: not yet approved by AC
MS: we are behind, were supposed to public Last
Call for the EARL Schema
... also need to publish first draft of the EARL Guide
... also the Requirements document
SAZ: there are change requests for the requirements document in the f2f minutes and in tracker
MS: need to get started on test suites to catch
up
... what are the plans?
CV: working now on it
SAZ: if we publish EARL Schema WD by mid-end
April
... we could aim for a the big Last Call publication for beginning of June
... that means EARL 1.0 Schema + HTTP-in-RDF + Content-in-RDF + Pointers-in-RDF
... is this realistic?
JK: doing on-going changes
SAZ: forgot, EARL 1.0 Guide should be part of the "big publication"
CV: should be possible
JK: am on vacation during second half of May
SAZ: if you can get the changes done before then
it would be useful
... most worried about the Guide
... back to test suites, what do we need to host them?
MS: generated tests serve as test suites?
SAZ: could we provide test results for the WCAG
2.0 Test Samples?
... would provide the according tests (for the context)
... but adds a dependency
CV: mixing two objectives
... can provide examples of valid and invalid EARL reports
MS: need to describe how the test suites should look like
CV: EARL does not require accessibility tests
SAZ: instead of inventing test criteria, my thought was to reuse the WCAG 2.0 Test Samples
<cvelasco> typo in related instances earl:cantTell
SAZ: removed OWL namespace and fixed namespace
for DC elements and DC terms
... added instances for OutcomeValue in Section 2.7, Appenix A, and the RDF file
... need to discuss these changes
CV: wording for Related Instances is too weak
JK: don't think it's too weak
<JohannesK> JK: change "can not" to "cannot"?
MS: maybe we get a question why we didn't do this for Test Mode
JK: last week we discussed that Test Mode doesn't need subclassing
MS: isn't that clear to me
SAZ: how about adding an editor's note in section Test Mode to ask for feedback?
[agreement]
JK: isn't "cannot" one word?
MS: yes, one word
RESOLUTION: Shadi to change "can not" to "cannot"
CV: don't like acronyms in upper case for instance names
SAZ: instance names were not discussed
... it is generally not considered good practice to differentiate between to entities using just the casing
[agreement to "passed", "failed", and "cantTell"]
MS: how about earl:na and earl:nt (lower case)?
<MikeS> other option sfor names is 'inapplicable' and 'untested'
RESOLUTION: Shadi to change "NA" and "NT" to "inapplicable" and "untested"
MS: everyone set?
... other issues?
SAZ: in this publication we are looking for feedback on: 1. use of foaf:Document, 2. instances for TestMode (like OutcomeValue), 3. replacing earl:Software with DOAP, 4. conformance section
<scribe> ...pending adding these 4 questions to the "Status of the Document", and making the two changes discussed above, any objection to publication?
JK: range for earl:info is Literal or XML:Literal?
MS: think Literal
JK: should be RDF namespace?
... actually RDFS
<JohannesK> <>: rdfs:Literal
RESOLUTION: Shadi to change range of earl:info to rdfs:Literal
<JohannesK> So it should be <>
RESOLUTION: publish EARL 1.0 Schema as an updated Working Draft pending the three changes recorded above
SAZ: regrets for the 15th
MS: next week discuss EARL 1.0 Guide and
Requirements document
... next meeting *15 April* | http://www.w3.org/2009/04/08-er-minutes.html | CC-MAIN-2018-05 | refinedweb | 598 | 65.93 |
Sunday Oct 18, 2009
Thursday Mar 12, 2009
Bug).
Thursday Oct 23, 2008
Long absolute jumps on AMD64
By nike on Oct 23, 2008
/* Emit an absolute jump to a 64-bit target: push the low 32 bits of the
   address (the CPU sign-extends the imm32 and subtracts 8 from rsp), patch
   the high dword of the pushed value when the upper half is non-zero, then
   ret pops the full 64-bit target into RIP. */
DECLINLINE(void) tcg_out_pushq(TCGContext *s, tcg_target_long val)
{
    tcg_out8(s, 0x68);              /* push imm32, subs 8 from rsp */
    tcg_out32(s, val);              /* imm32 */
    if ((val >> 32) != 0)
    {
        tcg_out8(s, 0xc7);          /* mov imm32, 4(%rsp) */
        tcg_out8(s, 0x44);
        tcg_out8(s, 0x24);
        tcg_out8(s, 0x04);
        tcg_out32(s, ((uint64_t)val) >> 32);  /* imm32 */
    }
}

DECLINLINE(void) tcg_out_long_jmp(TCGContext *s, tcg_target_long dst)
{
    tcg_out_pushq(s, dst);
    tcg_out8(s, 0xc3);              /* ret */
}
Friday Sep 05, 2008
Python API to the VirtualBox VM
By nike on Sep 05, 2008
One of the important advantages of the VirtualBox virtualization solution is its powerful public API, which allows you to control every aspect of virtual machine configuration and execution. Last month I was working on Python and Java bindings to that API. Those bindings are shipped with the VirtualBox 2.0 SDK.
There are two families of API bindings: local ones that talk to VirtualBox through XPCOM/COM, and remote ones that go through the SOAP web service. To try the Python shell shipped with the SDK:
- download VirtualBox 2.0 for your platform (Linux and Solaris Python bindings officially supported)
- download SDK
- unpack SDK
- cd sdk/bindings/xpcom/python/sample
- export VBOX_PROGRAM_PATH=/opt/VirtualBox-2.0.0/ PYTHONPATH=..:$VBOX_PROGRAM_PATH
- ./vboxshell.py to start the shell
def showvdiCmd(ctx, args):
    mach = argsToMach(ctx, args)
    if mach == None:
        return 0
    hdd = mach.getHardDisk(ctx['ifaces'].StorageBus.IDE, 0, 0)
    print 'HDD0 info: id=%s desc="%s" size=%dM location=%s' % (hdd.id, hdd.description, hdd.size, hdd.location)
    return 0

and add following line to commandsmap:

'vdiinfo':['Show VDI info', showvdiCmd],
Then you can run it like this:
vdiinfo Win32 (or however your VM of interest is named).
Easy, isn't it? Moreover this command will work not only with XPCOM bindings, but with SOAP too.
This example also shows how to access VirtualBox constants in a toolkit-neutral manner - the 'ifaces' field of the context contains reflection information usable to get the values of the constants.
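As a rough illustration (a hypothetical command in the same style as showvdiCmd above; the exact attribute layout may vary between SDK versions), an enum value can be resolved through that reflection object instead of being hard-coded, and the command is registered in commandsmap just like vdiinfo:

def businfoCmd(ctx, args):
    # sketch only: resolve the IDE storage bus constant via the reflection info
    ide = ctx['ifaces'].StorageBus.IDE
    print 'StorageBus.IDE resolves to', ide
    return 0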
Actually, there are other language bindings to the VirtualBox API shipped with the SDK, including Java and C++, but I personally find Python the easiest to start with. You can ask questions here on VirtualBox language bindings (not only Python), and I will try to help.
Thursday Aug 07, 2008
Informative paper on memory
By nike on Aug 07, 2008
Seriously speaking, this paper could be of interest if you want to understand what really goes on when you do
MOV EAX,[ECX].
Wednesday Aug 06, 2008
Back at Sun
By nike on Aug 06, 2008
Update: thanks everybody who welcomed me back!
Wednesday Aug 15, 2007
Leaving Sun
By nike on Aug 15, 2007.
Thursday Aug 02, 2007
FS neutral data recovery tool
By nike on Aug 02, 2007
Thursday Jul 19, 2007
Saturday Jul 14, 2007
Double mapping of memory regions on Unix
By nike on Jul 14, 2007
Sunday Jul 08, 2007
Hotspot internals Q&A
By nike on Jul 08, 2007
ILP64, LP64, LLP64
By nike on Jul 08, 2007
Thursday Jul 05, 2007
Raw page table access
By nike on Jul 05, 2007
Wednesday Jul 04, 2007
Debugger for Win32 (v2)
By nike on Jul 04, 2007
Tuesday Jul 03, 2007
C mini-contest
By nike on Jul 03, 2007
About
nike | https://blogs.oracle.com/nike/ | CC-MAIN-2015-22 | refinedweb | 520 | 54.97 |
Package Details: frescobaldi 2.19.0-1
Dependencies (19)
- hyphen
- poppler (poppler-git, poppler-minimal, poppler-lcdfilter)
- python-poppler-qt4
- tango-icon-theme
- python-ly>=0.9.4 (python-ly-git)
- python>=3.2
- qt4>=4.7 (qt4-revert80e3108)
- python-pyqt4>=4.8.3
rdoursenaud commented on 2015-12-24 13:42
Thanks to Wilbert (Frescobaldi's author), the package should now be fixed.
I was cleaning up the source a bit too much.
My apologies for the inconvenience.
jonarnold commented on 2015-10-13 19:36
A reinstallation fixed the previous issue.
Regarding the hyphen dicts, they are present in that directory, and I already have hunspell-en installed (from extra repo).
jonarnold commented on 2015-10-13 18:07
I'm now getting this error and Frescobaldi doesn't open:
Traceback (most recent call last):
File "/usr/bin/frescobaldi", line 4, in <module>
from frescobaldi_app import toplevel
ImportError: No module named 'frescobaldi_app'
rdoursenaud commented on 2015-09-21 22:26
@jonarnold, that's strange.
Could you please check that you have .dic files in /usr/share/hyphen?
You may also want to try installing hunspell-en.
Please report any information, success or failure, that may help identify the problem.
Thanks! | https://aur-dev.archlinux.org/packages/frescobaldi/ | CC-MAIN-2019-51 | refinedweb | 233 | 51.95 |
User talk:Nx/Archive2
Contents
- 1 whoa
- 2 Bug
- 3 Scripts
- 4 RWW
- 5 Intercom timeout
- 6 Fortnight in MW
- 7 hmmmmm now I wonder
- 8 Fantastic!
- 9 Vandal brake
- 10 Now that you are the go-to guy "when things go wrong"
- 11 Vandal bin
- 12 Barefeets (foots) and dead files
- 13 close font tag
- 14 Password reset request
- 15 Begging
- 16 Confuzzuleification
- 17 Redirectbot
- 18 Capturebot2
- 19 Your username
- 20 Ahem
- 21 My concern
whoa
The wiki is wobbling. I'm worried. Mei 00:16, 25 April 2009 (UTC)
- It was just the backup script. -- Nx / talk 00:17, 25 April 2009 (UTC)
- Have you polished all the wheels. Mei 00:21, 25 April 2009 (UTC)
- No, there's still a lot of polishing to do. -- Nx / talk 00:22, 25 April 2009 (UTC)
- That is dangerous, I am confiscating your wiki license. Can you direct me to your series of tubes. Mei 00:23, 25 April 2009 (UTC)
- Any way to add a few more "hide/show"s to watchlist? Like blocks, moves, and stuff? In your copious spare time, of course... ħuman
00:27, 25 April 2009 (UTC)
So what was up with Ipatrol's MW space creation spree? Aren't all those things "defaults" until changed anyway? (Like the editing tabs, for instance) ħuman
23:06, 25 April 2009 (UTC)
- Yeah, he's customizing them with our custom language (such as replacing delete with vaporise everywhere) and adding some features. -- Nx / talk 06:43, 26 April 2009 (UTC)
Bug
the "cp" interwiki link isn't working quite right.
send you to:
Which of course doesn't work without a little fixing.
Thanks, ħuman
20:09, 26 April 2009 (UTC)
- Shit. It's not my extension, it must be Trent's old code that somehow didn't die in the update. I'm on it. -- Nx / talk 20:14, 26 April 2009 (UTC)
Scripts
Do I need the vandal log script any longer? - π 10:32, 28 April 2009 (UTC)
- No, of course not, it's just one of the smaller things I forgot. -- Nx/talk 10:33, 28 April 2009 (UTC)
RWW
You've jumped ship too? What we really need is someone who really cares about it. I would, but I'm terrible at writing articles. Word evil Hoover! 17:59, 28 April 2009 (UTC)
- Noone cares, as evidenced by the saloon bar thread. The place is dead. -- Nx/talk 18:17, 28 April 2009 (UTC)
- Look. I care. --
21:26, 28 April 2009 (UTC)
- And of course, that means everyone should care, right? Because you're the center of the world, CUR! Fall down
- Very well. -- Nx/talk 21:32, 28 April 2009 (UTC)
- Damn. Word evil Hoover! 20:27, 28 April 2009 (UTC)
- Don't know about anyone else, but I gave up because of the constant CUR/falldown mutual wankfest. Could it be that other people are driven away by the stupidity? the only good thing about them doing it there is that it stops them doing it here. Totnesmartin 20:34, 28 April 2009 (UTC)
- The failure of the survey proposal, the utter failure of wigo rw and Fun:Drama queen. When one of the founders of RWW decides to post their article here instead of RWW, the place is dead. -- Nx/talk 20:52, 28 April 2009 (UTC)
- By the way, I think BoN was actually Jinx. --
21:00, 28 April 2009 (UTC)
- I don't care any more. -- Nx/talk 21:01, 28 April 2009 (UTC)
- That was eggsellent, Nx. What a way to administer the final coup!
and marmite 22:24, 28 April 2009 (UTC)
- CUR has lots of free time and a reason to bitch and moan about RW, as well as a desire to prove himself as a leader. I think he is the perfect candidate. -- Nx/talk 22:32, 28 April 2009 (UTC)
- Not to mention that I can keep the wiki on life support virtually forever. I'm implementing several changes that I hope will stir up activity, such as making a Saloon Bar. --
22:53, 28 April 2009 (UTC)
- But there's no one there to talk to besides you and FallDown. ħuman
23:17, 28 April 2009 (UTC)
- Actually, theemperor and Redback are also there. --
23:29, 28 April 2009 (UTC)
- Don't forget that the domain name will expire some day, and I highly doubt RA will pay for it again. -- Nx/talk 23:36, 28 April 2009 (UTC)
- I can pay for it. . . I hope. --
01:50, 29 April 2009 (UTC)
- It is about 30 bucks a year to register rationalwiki.com; we actually pay 90 to get .net and .org as well. - π 03:55, 29 April 2009 (UTC)
- That's a lot. I think godaddy.com is charging about $35 for all three registrations. I use them for my crummy websites and have never had the sticker shock that $90 a year figure gave me. But hey CUR, it's more than you'll make selling tin cans and delivering newspapers. User:Nutty Roux/signature 04:10, 29 April 2009 (UTC)
- Yeah, we're overpaying. I pay 20USD for two years for my dot com. ħuman
04:42, 29 April 2009 (UTC)
- GoDaddy is cheaper, but I think we decided to go with someone reliable. - π 04:56, 29 April 2009 (UTC)
- Of course, my $20 is via the site host, so they may be "discounting" the service for customers. Still, it ought to be less than $30/DN. We're gettin' chewed, I tell ya! ħuman
05:02, 29 April 2009 (UTC)
- Actually that $90 got us three years for all three. Oh well CUR now I don't have to chip in for this years I pay for yours. - π 05:08, 29 April 2009 (UTC)
That's rationalwiki.org, not rationalwikiwiki.org -- Nx/talk 07:07, 30 April 2009 (UTC)
Intercom timeout
3 fucking hours max? What if I am trying to leave a message at 2 AM that I want everyone to see? Bummer... :( ħuman
06:20, 30 April 2009 (UTC)
- Oops, sorry. I see "other"... never mind... ħuman
06:20, 30 April 2009 (UTC)
- See also: MediaWiki:Intercom-expires -- Nx/talk 06:23, 30 April 2009 (UTC)
- Nice intercom. Please accept my apologies. How easy is to add the "hot new article" category? (As if we needed it, everyone here is an RC whore) ħuman
06:24, 30 April 2009 (UTC)
- Mediawiki:Intercom-list -- Nx/talk 06:25, 30 April 2009 (UTC)
EC! haha
Next question: Can you make the "sig" link to the sender's user and talk pages? ħuman
06:36, 30 April 2009 (UTC)
- Yes, but you can also sign with ~~~~~, it works. -- Nx/talk 06:37, 30 April 2009 (UTC)
- OK, but it's mildly redundant to do so. Can't it just be automatic? Pleeaasse? ħuman
06:53, 30 April 2009 (UTC)
- It's giving redlinks because of the order of parsing. I'll fix it, but I have to take care not to break messages from the old intercom (that are stored unparsed in the database), and messages that are stored parsed. -- Nx/talk 06:54, 30 April 2009 (UTC)
- Urgh, backward compatibility... is hard work. ħuman
07:02, 30 April 2009 (UTC)
- Ok, there's a link to the user's page now, although it can't be customized. -- Nx/talk 08:00, 30 April 2009 (UTC)
- Talk page would be better, if we can only have one link. Sorry to whine. ħuman
08:20, 30 April 2009 (UTC)
- We can have whatever we want, but since it's uncustomizable, it has to be a sane default. I'll add the talk page link. -- Nx/talk 08:27, 30 April 2009 (UTC)
- I've added a contrib link too, the intercom uses the same function that generates these links on recent changes, except the block link is suppressed. I can remove contrib if you think it's overkill. -- Nx/talk 08:35, 30 April 2009 (UTC)
- The test looked good to me. Although, I haven't gotten all "prettifying" on this yet. Can you move the "tags" (mark as read, etc.) to the right? Contribs seems a bit redundant, all we really need is user talk, though user is also OK. ħuman
08:37, 30 April 2009 (UTC)
- It's talk only now. The buttons can be customized in Mediawiki:Intercomnotice, the html for them is inserted at $6,$7 and $8 -- Nx/talk 08:41, 30 April 2009 (UTC)
Wow, thanks. Nice work. ħuman
08:50, 30 April 2009 (UTC)
Some more info since I've got to go. The text in Mediawiki:Intercomnotice is for logged in users, Mediawiki:Intercommessage is for anons (site wide message group) (I know, the naming is stupid). The latter doesn't have the buttons. This text is first wikiparsed, then the $ placeholders are replaced by various parameters. This is why user and talk page links gave redlinks: they were parsed as links to User:$3, then $3 was replaced with the actual username in the html, which was already a redlink. Finally, the whole thing is placed into a div with the usermessage class and an id (intercommessage I think). Because of the id, the styling can be customized in Common.css. I gave it the usermessage class to be consistent with the "you have new messages" notice. The reason the outer div is not in Mediawiki space and is in the code and unchangeable is that a random sysop could screw it up and bork the whole intercom (the javascript that gets the next/prev message and that hides the message when it's marked as read relies on the id for example). -- Nx/talk 08:58, 30 April 2009 (UTC)
incorrect, please see MediaWiki talk:Intercomnotice -- Nx/talk 07:49, 1 May 2009 (UTC)
Fortnight in MW
I seriously doubt it. ħuman
07:05, 30 April 2009 (UTC)
Interesting, I tried to block you distance instead of time (yards, furlongs, parsecs, light years) and the form pooped it out. Funny how it let you use fortnights, yet didn't actually work. ħuman
07:09, 30 April 2009 (UTC)
- So it does recognize fortnight, but refuses to use it. Fascinating -- Nx/talk 07:11, 30 April 2009 (UTC)
- Yes. When CUR blocked himself past the heat death of the universe, it happily accepted it, but did not actually work. It is very interesting. All I was trying to do was keep you a few furlongs from here, is that so difficult to program? After all, time and relative distance in space ought to be simple enough... for a Doctor... ħuman
07:27, 30 April 2009 (UTC)
- PS, ain't it fun to have a wiki where we actually try to break every aspect of the software, instead of work with it? ħuman
07:28, 30 April 2009 (UTC)
hmmmmm now I wonder
Fantastic!
I found the script. Now what do I do? --
00:43, 1 May 2009 (UTC)
- RationalWiki:Bots#How to make your own -- Nx/talk 11:35, 1 May 2009 (UTC)
- Can't find a download link. --
19:08, 1 May 2009 (UTC)
- -- Nx/talk 19:10, 1 May 2009 (UTC)
- Damn it Nx, I can't find a download link! I need a direct link to a website, not some little page telling me to go to a site without a download link. --
20:29, 2 May 2009 (UTC)
- Allow me to translate: You can either download it using svn with the command "svn co pywikipedia" or you can download a nightly snapshot from this page. -- Nx/talk 21:16, 2 May 2009 (UTC)
- This depends on whether CUR's lowly Windows machine has svn or not. Word evil Hoover! 20:44, 7 May 2009 (UTC)
- It doesn't. Can I just stick the python file under User:BorkBot/web_link_checker.py? --
23:46, 7 May 2009 (UTC)
Vandal brake
Doesn't it prevent the binnee from moving pages? (Well, it should, but apparently doesn't). ħuman
01:14, 1 May 2009 (UTC)
Now that you are the go-to guy "when things go wrong"
Look at what I did at [1] and please add more info on "what surrounds it", etc.? Or tell me things here or on my user page, I'll see them either way. Thanks! ħuman
07:20, 1 May 2009 (UTC)
- I've changed some things in the code (removed border from buttons, added a class so they too can be customized), I'll write detailed information on how this works on the talk page. -- Nx/talk 07:49, 1 May 2009 (UTC)
Vandal bin
It seems rather opaque. Can you/trent add some VB links to special pages, at least? Like, "see contents of VB"? Also, the VBuser page looks just like the block page, it's a bit confusing I think. And it doesn't show the user's VB history (or maybe it does, but the history got vaped in the upgrade?) ħuman
04:40, 4 May 2009 (UTC)
- No, it doesn't show the history, I didn't write/copy that part from Special:Block, will do that eventually. -- Nx/talk 09:50, 4 May 2009 (UTC)
- On special pages it has: Vandal binned IP addresses and usernames, although the IPs that are auto blocked are just given a number like #42. - π 04:42, 4 May 2009 (UTC)
- I'll add a link to Special:VandalBin on Special:VandalBrake. The row id is used to protect Autoblocked IPs, i.e. you shouldn't be able to determine a users IP address by vandal binning them with autoblock and then checking what IP got binned too, since that would essentially give sysops checkuser abilities. -- Nx/talk 09:50, 4 May 2009 (UTC)
- Doh, thanks. I hate the "new" special pages. I guess I'll learn my way around them, I'll have to. ħuman
04:57, 4 May 2009 (UTC)
- It is a bit of a learning curve. The old style's problem was the list was getting long and hard to find things unless you knew the exact name, now you are suppose to search by categories, but they are vague too and some of the things are placed in stupid categories. - π 05:00, 4 May 2009 (UTC)
- Yes. The catting was done by monkeys with typewriters and inexperienced teenagers, I assume? ħuman
05:03, 4 May 2009 (UTC)
- I've added several new features: Special:VandalBrake has convenience links and a log fragment similar to Special:Block (the code is mostly copied from MediaWiki), the vandal log has parole links, and Special:Contributions has links to Special:VandalBin and the vandal log in the subheading. I wanted to add a vandal link to recent changes entries (in the (Talk | contribs | block) part), but the only hook to do that would require replacing the code that generates a recent changes entry with code in an extension, or doing some nasty html-hacking. -- Nx/talk 13:21, 4 May 2009 (UTC)
Barefeets (foots) and dead files
Hey, I didn't know there were other barefoot-o-philes around. ;-)
Anyhow, real question I have for you. The "wanted page" list has a whole bunch of redlinks that look like "File:Wigo1128 2.png (1 link)". I don't know what they are, nor can I find them to delink them or anything. Any suggestions? If I had my druthers (and the reason I spend time de-redding) is that I think the "wanted pages" is most useful when it's somewhat short, and the lists are things that people might say "oh, I can write that". But if you have to wade through "templates" and "files" and of course CP-linked pages, it's harder to figure out what would be a useful article. Any suggestions, help or advice appreciated. yes, i'm rambling. i do that. :D--
En attendant Godot"«Her intense and pure religiousness took the form of her having equal faith in the existence of another world and in the impossibility of comprehending it in terms of earthly life. V.Nabokov» 16:45, 5 May 2009 (UTC)
- I think the denizens of CP are funny enough on their own, so there's no need to vandalize the place with goatses and stuff and I don't have the talent/patience for parody. That file looks like it's a capturebot upload, I'll see if I can find it. -- Nx/talk 16:48, 5 May 2009 (UTC)
- Ah yes, those are images that capturebot missed because the link was added later, when the img tag was already there, causing capturebot to ignore the wigo... Not much that can be done about it unfortunately, unless the linked page is still on CP... otherwise turning the img link blue would be misleading, since we don't have a capture of that link. -- Nx/talk 16:57, 5 May 2009 (UTC)
- LOL. At least I know why I can't find them in the search. Ok. Thanks! Enjoy your "color testing". Programming seems fun. "testing" however, seems a headache. --
En attendant Godot"«Her intense and pure religiousness took the form of her having equal faith in the existence of another world and in the impossibility of comprehending it in terms of earthly life. V.Nabokov» 17:04, 5 May 2009 (UTC)
close font tag
Was it really that lame? Thanks for finding it and fixing it. ħuman
08:28, 6 May 2009 (UTC)
- Yeah, the new parser is a bit more sensitive, so things like that will pop up. I've fixed all the userboxes I could find, but there may still be stuff that's broken. -- Nx/talk 08:33, 6 May 2009 (UTC)
- Thanks. If you can tell me what to look for (and what to fix), I don't mind digging in and helping. Thanks for fixing my pile'o'shit! ħuman
08:44, 6 May 2009 (UTC)
- No problem. All the problems I've found manifested as a </div> appearing in the rendered text and the tabs moving away. Just in case only the cached version is broken, purge the page and see if it's fixed. If not, start removing stuff from it and preview to find what part is causing it. Most of the time the </div> will tell you where you should begin your search, in case of userboxen, the box before it has an unclosed tag. -- Nx/talk 08:51, 6 May 2009 (UTC)
Password reset request
I hear you can do password resets for people. Can you please reset the password for my bot account User:Weaseldroid & email details to my Weaseloid email address. Cheers. Щєазєюіδ
Methinks it is a Weasel 20:17, 7 May 2009 (UTC)
- I've set weaseldroid's email to the same as your account's, you should be able to use the email new password feature now. -- Nx/talk 20:29, 7 May 2009 (UTC)
- I'm not feeling Nx's dark & scary superpowers. User:Mei
- Hey! Read this about an hour ago and it's slowly percolated through my little brain that Nx could hijack anyone's account using this? Scary's right.
and marmite 23:53, 7 May 2009 (UTC)
- Nx, could you hack Fall down's account so that you log in as him, and post a big message saying that women are superior to him in every way and that we are his masters? --
23:59, 7 May 2009 (UTC)
- Aren't women surperior in every way? I thought that was just a given? — Unsigned, by: WaitingforGodot / talk / contribs
- Now that's one reason why people won't return your sysopship, CUR.
and marmite 00:11, 8 May 2009 (UTC)
- Actually, that's the reason you won't give me site access. Note also that you need to stop taking everything I say seriously. --
00:14, 8 May 2009 (UTC)
- CUR; it's the attitude I mean, not the action. Could you please flag your non-serious comments with "</joke>".
and marmite 00:18, 8 May 2009 (UTC)
- (EC>9000) Um, I thought it was obvious that I could hack into anyone's account? There's even a convenient Mediawiki maintenance script to change a user's password, and I did hack into Mei's account to reset her password. And no CUR, I won't give you the root password, because a) the server is not set up very securely, so for example even with a limited user, you can read the mysql password of RW from LocalSettings.php, and the RW user has full read-write access to the database, and b) because of your suggestion above. -- Nx/talk 00:28, 8 May 2009 (UTC)
- What? I thought passwords were never stored in plaintext anymore. Word evil Hoover! 06:46, 10 May 2009 (UTC)
- They are not. They are hashed, so I cannot get your password and use it to break into your email account for example. -- Nx/talk 06:55, 10 May 2009 (UTC)
- I know that; you weren't there when I blew my top at Jeeves for a joke on the old server. I refer to the mysql password. Word evil Hoover! 17:09, 10 May 2009 (UTC)
Begging[edit]
Pleeeeeaaaase, pleeeeeeaaaaase disable editing of the MediaWiki namespace by all users on RWW. Word evil Hoover! 06:44, 10 May 2009 (UTC)
- Haha, why, what have they done now? ħuman
07:03, 10 May 2009 (UTC)
- (EC) I have disabled editing of the interface for janitors, police and bureaucrats. Try it. Access group can still edit the interface, so give yourself access and clean up, then tell me and I'll disable giving of the access group, but still allow removing it so you can remove it from the people who have it. -- Nx/talk 07:05, 10 May 2009 (UTC)
Confuzzuleification[edit]
A quick question that I can't seem to find the answer to. I am writing a Java program to read a file and print out seats based on a provided file. The rest is working nicely, but I can't seem to coax my nested for loop to work out. Could you lend me a hand?
for (i = 0; i <= array1.length; i++){ for(j = 0; _______; j++){ system.out.printline(array1[i][j]) } }
The problem I have with that for loop is I cannot for the life of me figure out what should be in that ______; What would your recommendation be? ĴάΛäšςǍ₰ EEdNpjDwDzzzG ARTfrR 23:15, 10 May 2009 (UTC)
I assume array1[i] is an array, since you use it like that (array1[i][j]), so _____ should be array1[i].length -- Nx/talk 04:14, 11 May 2009 (UTC)
Ah crap *kicks self*. Well, thanks for the help... it works now. ĵ₳¥ášÇ♠ʘ things that make you go "hm" 01:14, 12 May 2009 (UTC)
Redirectbot[edit]
Hi Nx, could you reset Redirectbot's password for me? - π 00:09, 13 May 2009 (UTC)
- I've set Redirectbot's email address, try "email new password" on the login screen. -- Nx/talk 08:02, 13 May 2009 (UTC)
Capturebot2[edit]
I think I have fixed the instruction to what you meant, rather than literally what you wrote, is what is there now correct? Also if you are running hooks over the page could you move the earlier entries up to being under the first 5? - π 00:38, 31 May 2009 (UTC)
- You mean on WIGO:CP? That wasn't me (IMHO it's a bit too early to start advertising it, it might still have serious bugs like the two I just fixed), but thanks. I think the moving entries down part would be best left to a bot, since hooks don't leave difflinks, can't be reverted, and disabling them would also be problematic, as well as determining which pages to run them on. The current one runs on all pages, which is ok I guess, since it only kicks in when it detects vote nextpoll. The only disadvantage is that you can't write it into instructions without the nowiki tag directly before it (it's a hack, best I could do with a single regexp, I didn't want to write complicated code to detect whether the tag is in a pair of nowikis or not) -- Nx/talk 06:00, 31 May 2009 (UTC)
Your username[edit]
Whence "Nx"? Word evil Hoover! 21:05, 31 May 2009 (UTC)
- The first letter of my first name and an x for everything else. It's the result of a windows registry crash, my previous login name was my first name, but it became unusable and I had to create a new one (with a different name obviously). It's also easy to type on various login screens. update to RWW article in 3,2... -- Nx/talk 21:11, 31 May 2009 (UTC)
Ahem[edit]
FTW--Opcn 21:41, 1 June 2009 (UTC)
My concern[edit]
My only issue is that since it would be running on the server and using server resources a request to proccess one or several substantial images could cut severely into performance for the site. A user with malicious intent, or just stupidity, shouldn't be able to even have the option. Can we pre-check links and refuse to proccess the image if it will exceed a certain amount of resources? tmtoulouse 18:35, 2 June 2009 (UTC)
- I'm not sure, but I think memory usage depends on how big the window is, so I can check the horizontal pixel size of the webpage before resizing the window, and cancel it if it's bigger than a set size. Since the command console and config page are locked (we can also enable bureaucrat protection and protect it further), I can add an option to the check command to override this restriction so trusted users can still use it to upload bigger images.
- Also, the webkit2png.py script does active waiting while the page is loading, which means 100% processor usage, so I'll have to fix that. -- Nx/talk 18:55, 2 June 2009 (UTC)
- there used to be a warning that the file was over a certain size, but there was an option tocarry on anyway. i just uploaded a biggie and didn't get such a warning. Perhaps we could block large files completely when you're away Trent? Totnesmartin 18:45, 2 June 2009 (UTC)
- (EC) That's not the problem. The problem is the uncompressed image while capturing which can be several hundred megabytes for large pages (which is then compressed into a few megabyte PNG file), all stored in RAM. -- Nx/talk 18:55, 2 June 2009 (UTC)
- I suspect that's a problem you can solve by painting to a QSvgGenerator rather than to a pixmap. I guess the generated SVG is more or less just a journal of the paint commands as they're issued. You go directly from the in memory render tree representation of tha page which is infinitely smaller than the pixmap representation to an on disk SVG image, bypassing entirely the need to render and save a bitmap. --JeevesMkII The gentleman's gentleman at the other site 20:03, 2 June 2009 (UTC)
- It works, and it only needs 40 megs to render the largest page on cp (and it doesn't have the 32766 pixels limitation of QImage), but MediaWiki has to convert it to a png to display it anyway, so anything gained in the capturing process is lost there. And with large images like this, it locks up the server for minutes with 100% processor usage. -- Nx/talk 22:44, 2 June 2009 (UTC)
- That's a shame. Isn't there a configuration option for "never rasterise SVG, get yourself a better browser you cheap fuck?" --JeevesMkII The gentleman's gentleman at the other site 00:38, 3 June 2009 (UTC) | https://rationalwiki.org/wiki/User_talk:Nx/Archive2 | CC-MAIN-2019-22 | refinedweb | 4,693 | 80.11 |
Understand the guidelines for using materials such as images, screenshots, and text that are copyrighted by Adobe, and materials such as logos, marks, and icons that are trademarked by Adobe.
Learn the guidelines for using Adobe trademarks, including logos.
Are you a developer? See the trademark guidelines for developers of third-party plug-ins and extensions for Adobe products.
Trademark guidelines ›
List of Adobe trademarks
Learn the guidelines for using images, materials, and user guides that come with an Adobe product, are available in the Adobe Newsroom, or are displayed on Adobe.com.
Images and user guides ›
Learn the guidelines for using Adobe product icons, including the Adobe PDF file icon, and Adobe web logos, such as the Get Adobe Reader linking badge.
Icon and web logo guidelines ›
Understand how we handle copyright and trademark issues related to user-generated content on our hosted services.
Learn about Adobe's DMCA guidelines, or file a DMCA notice, counter notice or trademark complaint using our online form.
DMCA Policy ›
Some Adobe services and applications include integrated Google or Flickr image search features. Before you use those websites to search for and
import images directly into your own project or workspace, be sure to review the specific image licenses and terms of reuse. Join us in respecting the intellectual property rights of photographers and creative professionals everywhere.
Get the details ›
Send requests for use of press releases, press materials, or other materials found in the Adobe Newsroom to adobepr@adobe.com | https://www.adobe.com/at/legal/permissions.html | CC-MAIN-2020-45 | refinedweb | 248 | 51.07 |
For Buildbot Nine, see the nine branch or `BuildbotStatusShields <>`__ on PyPI
Buildbot version eight offers build status shields in PNG form at /png from the WebStatus server. However, they look pretty ugly and there isn’t any configuration available. So I made this here thing to allow expanded use of the status shield/badge/thing.
Note: This whole “bind()“ thing feels wrong, I just haven’t worked out how to do it right. Please drop me a note if you wish to enlighten me
In your master.cfg file, use BuildbotStatusShields.bind(WebStatus) to bind to the WebStatus server:
import BuildbotStatusShields as shields c['status'].append(shields.bind(html.WebStatus(http_port=8010, authz=authz_cfg)))
You can pass bind() options to change configuration settings. For example:
import BuildbotStatusShields as shields c['status'].append(shields.bind(html.WebStatus(http_port=8010, authz=authz_cfg), path="shield"))
Will make it bind to /shield.png and /shield.svg. See below for all configuration options.
When it is configured, run the buildbot master. Badges will be available at /badge.svg and /badge.png (by default), and can be passed the following parameters:
There are several options available, here’s a quick list that I’ll probably forget to update. Check shields.py for the full list:
You can also customize the badge. Simply place an SVG Jinja2 template at templates/badge.svg.j2 in the buildbot master folder. Several examples from shields.io can be found in the templates/ folder of. | https://pypi.org/project/BuildbotEightStatusShields/ | CC-MAIN-2017-09 | refinedweb | 245 | 61.43 |
I'm trying to start or restart Unicorn when I do
cap production deploy
namespace :unicorn do
desc "Start unicorn for this application"
task :start do
run "cd #{current_path} && bundle exec unicorn -c /etc/unicorn/myapp.conf.rb -D"
end
end
run
deploy.rb
# within the :deploy I created a task that I called after :finished
namespace :deploy do
...
task :unicorn do
run "cd #{current_path} && bundle exec unicorn -c /etc/unicorn/myapp.conf.rb -D"
end
after :finished, 'deploy:unicorn'
end
namespace :deploy do
desc 'Restart application'
task :restart do
on roles(:app), in: :sequence, wait: 5 do
# Your restart mechanism here, for example:
# execute :touch, release_path.join('tmp/restart.txt')
execute :run, "cd #{current_path} && bundle exec unicorn -c /etc/unicorn/deployrails.conf.rb -D"
end
end
run "cd ... " then I'll get a
unicorn -c /etc/unicorn/deployrails.conf.rb -D
unicorn -c ...
$ kill USR2 58798
bash: kill: USR2: arguments must be process or job IDs
Can't say anything specific about capistrano 3(i use 2), but i think this may help: How to run shell commands on server in Capistrano v3?. Also i can share some unicorn-related experience, hope this helps.
I assume you want 24/7 graceful restart approach.
Let's consult unicorn documentation for this matter. For graceful restart(without downtime) you can use two strategies:
kill -HUP unicorn_master_pid It requires your app to have 'preload_app' directive disabled, increasing starting time of every one of unicorn workers. If you can live with that - go on, it's your call.
kill -USR2 unicorn_master_pid
kill -QUIT unicorn_master_pid
More sophisticated approach, when you're already dealing with performance concerns. Basically it will reexecute unicorn master process, then you should kill it's predecessor. Theoretically you can deal with usr2-sleep-quit approach. Another(and the right one, i may say) way is to use unicorn before_fork hook, it will be executed, when new master process will be spawned and will try to for new children for itself. You can put something like this in config/unicorn.rb:
# Where to drop a pidfile pid project_home + '/tmp/pids/unicorn.pid' before_fork do |server, worker| server.logger.info("worker=#{worker.nr} spawning in #{Dir.pwd}") # graceful shutdown. old_pid_file = project_home + '/tmp/pids/unicorn.pid.oldbin' if File.exists?(old_pid_file) && server.pid != old_pid_file begin old_pid = File.read(old_pid_file).to_i server.logger.info("sending QUIT to #{old_pid}") # we're killing old unicorn master right there Process.kill("QUIT", old_pid) rescue Errno::ENOENT, Errno::ESRCH # someone else did our job for us end end end
It's more or less safe to kill old unicorn when the new one is ready to fork workers. You won't get any downtime that way and old unicorn will wait for it's workers to finish.
And one more thing - you may want to put it under runit or init supervision. That way your capistrano tasks will be as simple as
sv reload unicorn,
restart unicorn or
/etc/init.d/unicorn restart. This is good thing. | https://codedump.io/share/WPwAAiZdOnhQ/1/starting-or-restarting-unicorn-with-capistrano-3x | CC-MAIN-2017-13 | refinedweb | 499 | 58.38 |
Ticket #4846 (closed defect: fixed)
Virtualbox gets stuck with higher network load (Host Networking) (VERR_VMX_INVALID_VMCS_PTR guru)
Description
Machine Guest (Windows 2003 R2 x86) gets stuck with higher network load on host interface (intel server)
It happens sometimes with high load, not always.
VBox 3.0.4 with Guest Additions. Host is opensuse 11.1 x86_64.
Attachments
Change History
comment:1 Changed 7 years ago by mkromer
I would like to correct ... I can reproduce problem also without a high load on the network...
The only thing I did after it was running stable for a long time: Installed Office 2003. Since that time its not possible anymore to keep that system alive for more than 5 minutes. Microsoft Update I don't even get as far as needed to select updates.
Happens also with VBoxHeadless
comment:2 Changed 7 years ago by mkromer
I can now tell what the problem was:
SMP. I had the machine running with 2 vCPU's. The System is a 4-Core i7 (with HT 8), setting the CPU down to one keeps the system running stable again. However the System *was* running stable before installing Office 2003, so this might be an issue to look at.
comment:3 Changed 7 years ago by mkromer
CPU of host
processor : 7 vendor_id : GenuineIntel cpu family : 6 model : 26 model name : Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz stepping : 4 cpu MHz : 1600.000 cache size : 8192 KB physical id : 0 siblings : 8 core id : 3 cpu cores : 4 apicid : 7 initial apicid : tsc_reliable nonstop_tsc pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 lahf_lm ida tpr_shadow vnmi flexpriority ept vpid bogomips : 5345.96 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management:
comment:4 Changed 7 years ago by sandervl73
- Summary changed from Virtualbox gets stuck with higher network load (Host Networking) to Virtualbox gets stuck with higher network load (Host Networking) (VERR_VMX_INVALID_VMCS_PTR guru)
You aren't running other VM software (such as KVM) at the same time, are you?
comment:5 Changed 7 years ago by mkromer
No, of course not. in fact there isn't even anything installed that could collide (like kvm kernel module, xen, vmware)... host is a cleanly installed and updated opensuse 11.1 x64 with a newer kernel.
Linux borg 2.6.29.6-jen82-rt #1 SMP PREEMPT RT 2009-08-12 08:40:26 +0200 x86_64 x86_64 x86_64 GNU/Linux
Changed 7 years ago by mkromer
- attachment VBox.2.log
added
crash again with only 1 CPU and 'high' network load (30m/bit steady) with hostonly interface
comment:6 Changed 7 years ago by mkromer
So, it seems to be the same problem again, it just takes longer with only 1 CPU
comment:7 Changed 6 years ago by ARA
Found this issue and here is how it can be reproduced on FreeBSD 8.2 AMD64 host system. Following script should be launched to install virtualbox stuff on host machine:
#!/bin/tcsh echo "Executing $0..." if ($#argv != 1) then echo "Usage: $0 local_user_name" exit 1 endif set -r local_user_name = $1 set -r this_path = `dirname $0` # pkg_add -r virtualbox-ose if (! -e /etc/rc.conf.virtualbox) then cp /etc/rc.conf /etc/rc.conf.virtualbox echo vboxnet_enable=\"YES\" >> /etc/rc.conf endif if (! -e /boot/loader.conf.virtualbox) then cp /boot/loader.conf /boot/loader.conf.virtualbox echo vboxdrv_load=\"YES\" >> /boot/loader.conf endif if (! -e /etc/devfs.conf.virtualbox) then cp /etc/devfs.conf /etc/devfs.conf.virtualbox echo perm cd0 0660 >> /etc/devfs.conf echo perm xpt0 0660 >> /etc/devfs.conf echo perm pass0 0660 >> /etc/devfs.conf endif pw groupmod vboxusers -m $local_user_name exit 0
This will install virtualbox 3.2.12_OSE on host machine.
After that, create FreeBSD guest machine (64-bit) and set Network to "Bridged adapter", select from list host's machine's network interface (in my case - ale0). Install guest FreeBSD VM with minimal image, open sshd there and try to rsync some big data from host machine to guest - VB crashes with VERR_VMX_INVALID_VMCS_PTR guru when network load reaches some critical peak value (usually, in 5-10 seconds). However, limiting bandwidth using rsync cmd line option (--bwlimit) to say 10 (kb) allows to avoid this crash, 50 can work up to minute, 100 crashes in ~30 sec). Also crash never happens when data transfer happens between guest machine and "internet".
So in short - crash happen when "Bridged adapter" is set to ale0 and direct data transfer happens on high speed between host and guest machines.
There is a workaround - create and use "proper" bridge (for example, like described here -). After that, when I select "bridge0" issue gone completely.
p.s. at least two other tickets may be the same issue - 4918 and 5962, just updated oldest one.
Changed 6 years ago by ARA
- attachment ara_vb.log
added
comment:8 Changed 6 years ago by ARA
In previous post - After that, when I select "bridge0" issue gone completely. must be After that, when I select "tap0" issue gone completely.
comment:9 Changed 5 years ago by aleksey
Does the problem appear with the latest versions of VirtualBox (4.0.12 and 4.1)?
comment:10 Changed 5 years ago by mkromer
Problem could not be reproduced with openSUSE 11.4 (x86_64) and Virtualbox 4.1 Packages.
comment:11 Changed 5 years ago by aleksey
- Status changed from new to closed
- Resolution set to fixed
Good to hear. I am resolving this one then.
VirtualBox Log at crash time | https://www.virtualbox.org/ticket/4846 | CC-MAIN-2017-04 | refinedweb | 927 | 65.01 |
Localizing Your Next.js App
Instructing Next.js your app intends to have routes for different locales (or countries, or both) could not be more smooth. On the root of your project, create a
next.config.js if you have not had the need for one. You can copy from this snippet.
/** @type {import('next').NextConfig} */ module.exports = { reactStrictMode: true, i18n: { locales: ['en', 'gc'], defaultLocale: 'en', } }
Note: The first line is letting the TS Server (if you are on a TypeScript project, or if you are using VSCode) which are the properties supported in the configuration object. It is not mandatory but definitely a nice feature.
You will note two property keys inside the
i18n object:
locales
A list of all locales supported by your app. It is an
arrayof
strings.
defaultLocale
The locale of your main root. That is the default setting when either no preference is found or you forcing to the root.
Those property values will determine the routes, so do not go too fancy on them. Create valid ones using locale code and/or country codes and stick with lower-case because they will generate a
url soon.
Now your app has multiple locales supported there is one last thing you must be aware of in Next.js. Every route now exists on every locale, and the framework is aware they are the same. If you want to navigate to a specific locale, we must provide a
locale prop to our
Link component, otherwise, it will fall back based on the browser’s
Accept-Language header.
<Link href="/" locale="de"><a>Home page in German</a></Link>
Eventually, you will want to write an anchor which will just obey the selected locale for the user and send them to the appropriate route. That can easily be achieved with the
useRouter custom hook from Next.js, it will return you an
object and the selected
locale will be a
key in there.
import type { FC } from 'react' import Link from 'next/link' import { useRouter } from 'next/router' const Anchor: FC<{ href: string }> = ({ href, children }) => { const { locale } = useRouter() return ( <Link href={href} locale={locale}> <a>{children}</a> </Link> ) }
Your Next.js is now fully prepared for internationalization. It will:
- Pick up the user’s preferred locale from the
Accepted-Languagesheader in our request: courtesy of Next.js;
- Send the user always to a route obeying the user’s preference: using our
Anchorcomponent created above;
- Fall back to the default language when necessary.
The last thing we need to do is make sure we can handle translations. At the moment, routing is working perfectly, but there is no way to adjust the content of each page.
Creating A Dictionary
Regardless if you are using a Translation Management Service or getting your texts some other way, what we want in the end is a JSON object for our JavaScript to consume during runtime. Next.js offers three different runtimes:
- client-side,
- server-side,
- compile-time.
But keep that at the back of your head for now. We’ll first need to structure our data.
Data for translation can vary in shape depending on the tooling around it, but ultimately it eventually boils down to locales, keys, and values. So that is what we are going to get started with. My locales will be
en for English and
pt for Portuguese.
module.exports = { en: { hello: 'hello world' }, pt: { hello: 'oi mundo' } }
Translation Custom Hook
With that at hand, we can now create our translation custom hook.
import { useRouter } from 'next/router' import dictionary from './dictionary' export const useTranslation = () => { const { locales = [], defaultLocale, ...nextRouter} = useRouter() const locale = locales.includes(nextRouter.locale || '') ? nextRouter.locale : defaultLocale return { translate: (term) => { const translation = dictionary[locale][term] return Boolean(translation) ? translation : term } } }
Let’s breakdown what is happening upstairs:
- We use
useRouterto get all available locales, the default one, and the current;
- Once we have that, we check if we have a valid locale with us, if we do not: fallback to the default locale;
- Now we return the
translatemethod. It takes a
termand fetches from the dictionary to that specified locale. If there is no value, it returns the translation
termagain.
Now our Next.js app is ready to translate at least the more common and rudimentary cases. Please note, this is not a dunk on translation libraries. There are tons of important features our custom hook over there is missing: interpolation, pluralization, genders, and so on.
Time To Scale
The lack of features to our custom hook is acceptable if we do not need them right now; it is always possible (and arguably better) to implement things when you actually need them. But there is one fundamental issue with our current strategy that is worrisome: it is not leveraging the isomorphic aspect of Next.js.
The worst part of scaling localized apps is not managing the translation actions themselves. That bit has been done quite a few times and is somewhat predictable. The problem is dealing with the bloat of shipping endless dictionaries down the wire to the browser — and they only multiply as your app requires more and more languages. That is data that very often becomes useless to the end-user, or it affects performance if we need to fetch new keys and values when they switch language. If there is one big truth about user experience, it’s this: your users will surprise you.
We cannot predict when or if users will switch languages or need that additional key. So, ideally, our apps will have all translations for a specific route at hand when such a route is loaded. For now, we need to split chunks of our dictionary based on what the page renders, and what permutations of state it can have. This rabbit hole goes deep.
Server-Side Pre-Rendering
Time to recap our new requirements for scalability:
- Ship as little as possible to the client-side;
- Avoid extra requests based on user interaction;
- Send the first render already translated down to the user.
Thanks to the
getStaticProps method of Next.js pages, we can achieve that without needing to dive at all into compiler configuration. We will import our entire dictionary to this special Serverless Function, and we will send to our page a list of special objects carrying the translations of each key.
Setting Up SSR Translations
Back to our app, we will create a new method. Set a directory like
/utils or
/helpers and somewhere inside we will have the following:
export function ssrI18n(key, dictionary) { return Object.keys(dictionary) .reduce((keySet, locale) => { keySet[locale] = (dictionary[locale as keyof typeof dictionary][key]) return keySet , {}) }
Breaking down what we are doing:
- Take the translation
keyor
termand the
dictionary;
- Turn the
dictionaryobject into an array of its
keys;
- Each key from the dictionary is a
locale, so we create an object with the
keyname and each
localewill be the value for that specific language.
An example output of that method will have the following shape:
{ 'hello': { 'en': 'Hello World', 'pt': 'Oi Mundo', 'de': 'Hallo Welt' } }
Now we can move to our Next.js page.
import { ssrI18n } from '../utils/ssrI18n' import { DICTIONARY } from '../dictionary' import { useRouter } from 'next/router' const Home = ({ hello }) => { const router = useRouter() const i18nLocale = getLocale(router) return ( <h1 className={styles.title}> {hello[i18nLocale]} </h1> ) } export const getStaticProps = async () => ({ props: { hello: ssrI18n('hello', DICTIONARY), // add another entry to each translation key } })
And with that, we are done! Our pages are only receiving exactly the translations they will need in every language. No external requests if they switch languages midway, on the contrary: the experience will be super quick.
Skipping All Setup
All that is great, but we can still do better for ourselves. The developer could take some attention; there is a lot of bootstrapping in it, and we are still relying on not making any typos. If you ever worked on translated apps, you’ll know that there will be a mistyped key somewhere, somehow. So, we can bring the type-safety of TypeScript to our translation methods.
To skip this setup and get the TypeScript safety and autocompletion, we can use
next-g11n. This is a tiny library that does exactly what we have done above, but adds types and a few extra bells and whistles.
Wrapping Up
I hope this article has given you a larger insight into what Next.js Internationalized Routing can do for your app to achieve Globalization, and what it means to provide a top-notch user experience in localized apps in today’s web. Let hear what you think in the comments below, or send a tweet my way.
| https://www.smashingmagazine.com/2021/11/localizing-your-nextjs-app/ | CC-MAIN-2022-33 | refinedweb | 1,438 | 62.38 |
CodePlexProject Hosting for Open Source Software
Hi,
I am struck on to deploy Web Api app on windows 2008 R2.
I am getting 404 error when browsing service.
I pounced thru web and tried most solutions and didn't work.
Please advise any pointers you may know.
Thank you
V
Could you please show us your code that you are using to host the Web API?
Daniel Roth
using System;using System.Web.Routing;
using System.Collections.Generic;
using Microsoft.ApplicationServer.Http.Activation;
[assembly: WebActivator.PreApplicationStartMethod(typeof(Restaurant.App_Start.WebApi), "Start")]
namespace Restaurant.App_Start {
public static class WebApi {
public static void Start() {
// TODO: change "MyModel" to desired route segment
RouteTable.Routes.MapServiceRoute<SampleService.Resources.RestaurantService>("RestaurantServices"); } }}
I put this code in App_Start directory
Then try to browse
I tried ContactManager sample as well on same server. It shows same error on browsing
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | https://wcf.codeplex.com/discussions/278898 | CC-MAIN-2016-44 | refinedweb | 178 | 53.07 |
If you’ve never had this experience you have my envy. You’re on a development team and one of the developers does sloppy work and there is nothing you can do about it.
Time was when everyone in software from the first-day QA trainee to the executives had some experience at coding. That is long gone and now we have layers of methodology “masters” and managers who have never written a line, and who regard any and all complaints about others’ work as insubordinate and as personal conflicts, never considering the criticisms on their technical merits.
If I say that one…
Long introductions are bad for reader retention.
So this is going to be rough, because anything that has aroused so much controversy needs some introducing.
I will state my own opinion upfront: returning from a function before its end is a bad practice.
A typical function or method performs a series of parameter checks or setup operations such as allocating a buffer, opening a file or network connection, each of which must be undone before exiting. These pseudocode examples are based on a C-like language, while much of the code shown here would be handled now by destructors and garbage…
I see it every day. Go to any social network and people are announcing that they just came out as “trans.” Their profiles include “they/them” as their referential pronouns of choice.
I read an article on here (not about gender issues) in which the writer referred to one of his professors at university and described him as “cismale.” So I checked his profile. Yup, “nonbinary.” I can hardly think of any safer presumption than gender congruity.
As I write the Republican party is opening a new front in its failing culture wars, going after girls’ sports teams and demanding gender…
Stop, you’re making my ribs ache. You’re obviously one of those SJW people, you probably use “they” instead of “he” or “she,” and you believe, despite all the evidence to the contrary, that people strive to be unique instead of struggling to conform as hard as they can.
Anyway I wasn’t talking about the Japanese people. I was talking about their language. Japanese is ill-suited to expression of logical thought or scientific ideas, which is why pretty much every technician, engineer, and scientist in Japan speaks English, and conferences of all three are held in English.
The Japanese language is centered on deference, on status, e.g. “how low do I bow,” whereas this crap is of little importance in English. Whether talking to an ant or a president there is only “you.”
Seriously man get off the diversity kick and grow a brain.
So only nonwhite people use bad English? And you call me racist?
Christ you need help. Listen up.
There is a religious group in the USA called the Quakers; the live in a 19th century lifestyle without cars, electricity, or any of the conveniences we take for granted.
And they still use the deprecated pronoun thou, which is equivalent to du in German or tous in French, tu in Spanish, ты in Russian. Problem is, they don’t say “thou” in the nominative, they say “thee,” which is the direct object, what in other languages we call the accusative case.
It’s in the second person so it’s not “me” but in the first person it would be.
They’re using it wrong. And they’re white.
You have a web site that delivers its payload to users as an Excel file. This is convenient and easy for users, they can use Excel or one of the substitutes like Google Sheets to import their data.
I am talking specifically about Excel files, but from the comments I have read the same issues described herein also happen with other Microsoft Office Interop packages.
So you read the documentation, install the Microsoft.Office.Interop.Excel NuGet package, create a class to write your data as a spreadsheet, being sure to add a using directive for the namespace. …
You will not see the word “team” much in this article. Software development is the work of collaborating individuals who work alone, when it is not the work of one person. Members of a team interact constantly; in software we meet to coordinate and then disperse and work alone. Any workplace that has developers in communication as they work is dysfunctional.
In the 32 years this writer has been paid to write code the industry has gone through many changes and frankly most of them, particularly in the last twenty years, have not been improvements. Wisdom has been lost, insights…
"It might not be as easy as you think"
—Prison guard to Mick Travis (Malcolm McDowell) as he leaves at the end of his sentence, O Lucky Man (Lindsey Anderson, 1973)
Beautiful writing, Steve, moving and heartfelt. You are without doubt a significant voice.
I am however reminded of Ukraine after the collapse of the Soviet Union, where 70 years of repression did nothing to stop the immediate resurgence of antisemitism, underground over three generations yet flourishing again.
And reminded also of the terrible steps back in the last four years where the liberation of America's beating Heart of Darkness…
When we go for a job interview we tend to be deferential and obsequious. We’re there to get a job, for money, for our careers. We try to come across as constructive and bring no negativity to the discussion.
We don’t talk about previous jobs, nor mention that we quit because our managers were jerks and liars. We obediently do whatever the interviewer says; balance a beach ball on our noses, write code on a whiteboard.
And almost nobody realizes that the interview goes both ways; yes they are going to pay you money if they hire you, but on…
I’m sitting in a coffee shop having my pre-workout espresso. There is a succession of vapid happy-sad songs playing from overhead, and I forgot my earbuds.
When I get to the gym they will be playing one song on rotation, a female vocalist singing up and down a minor scale, so gratingly simple that as music it barely makes the grade of baby talk. I will keep my workout brief and even so I will leave in a foul mood.
As in the USA, nearly everyone in the gym will be wearing earbuds, listening to whatever they choose to hear…
American Software Developer living in Vietnam. Classical musician (guitar, woodwinds), weightlifter, multilingual, misanthrope • XY | https://cheopys.medium.com/?source=post_page-----69864ce4a84b-------------------------------- | CC-MAIN-2021-21 | refinedweb | 1,099 | 60.45 |
Open Shortest Path First (OSPF) is an Interior Gateway Protocol (IGP). OSPF supports IP subnetting, allows packet authentication, and uses IP multicast when sending and receiving packets.
OSPF Version 3 (OSPFv3) expands on OSPF Version 2, providing support for IPv6 routing prefixes.
This module describes the concepts and tasks you need to implement both versions of OSPF on your Cisco XR 12000 Series Router. The term "OSPF" refers to both versions of the routing protocol, unless otherwise noted.
The following are prerequisites for implementing OSPF on Cisco IOS XR software:
To implement OSPF you need to understand the following concepts:
OSPF is a routing protocol for IP. It is a link-state protocol, as opposed to a distance-vector protocol. A link-state protocol makes its routing decisions based on the states of the links that connect source and destination machines. The state of the link is a description of that interface and its relationship to its neighboring networking devices. The interface information includes the IP address of the interface, network mask, type of network to which it is connected, routers connected to that network, and so on. This information is propagated in various types of link-state advertisements (LSAs).
A router stores the collection of received LSA data in a link-state database. This database includes LSA data for the links of the router. The contents of the database, when subjected to the Dijkstra algorithm, extract data to create an OSPF routing table. The difference between the database and the routing table is that the database contains a complete collection of raw data; the routing table contains a list of shortest paths to known destinations through specific router interface ports.
OSPF is the IGP of choice because it scales to large networks. It uses areas to partition the network into more manageable sizes and to introduce hierarchy in the network. A router is attached to one or more areas in a network. All of the networking devices in an area maintain the same complete database information about the link states in their area only. They do not know about all link states in the network. The agreement of the database information among the routers in the area is called convergence.
At the intradomain level, OSPF can import routes learned using Intermediate System-to-Intermediate System (IS-IS). OSPF routes can also be exported into IS-IS. At the interdomain level, OSPF can import routes learned using Border Gateway Protocol (BGP). OSPF routes can be exported into BGP.
Unlike Routing Information Protocol (RIP), OSPF does not provide periodic routing updates. On becoming neighbors, OSPF routers establish an adjacency by exchanging and synchronizing their databases. After that, only changed routing information is propagated. Every router in an area advertises the costs and states of its links, sending this information in an LSA. This state information is sent to all OSPF neighbors one hop away. All the OSPF neighbors, in turn, send the state information unchanged. This flooding process continues until all devices in the area have the same link-state database.
To determine the best route to a destination, the software sums all of the costs of the links in a route to a destination. After each router has received routing information from the other networking devices, it runs the shortest path first (SPF) algorithm to calculate the best path to each destination network in the database.
The networking devices running OSPF detect topological changes in the network, flood link-state updates to neighbors, and quickly converge on a new view of the topology. Each OSPF router in the network soon has the same topological view again. OSPF allows multiple equal-cost paths to the same destination. Since all link-state information is flooded and used in the SPF calculation, multiple equal cost paths can be computed and used for routing.
On broadcast and nonbroadcast multiaccess (NBMA) networks, the designated router (DR) or backup DR performs the LSA flooding. On point-to-point networks, flooding simply exits an interface directly to a neighbor.
OSPF runs directly on top of IP; it does not use TCP or User Datagram Protocol (UDP). OSPF performs its own error correction by means of checksums in its packet header and LSAs.
In OSPFv3, the fundamental concepts are the same as OSPF Version 2, except that support is added for the increased address size of IPv6. New LSA types are created to carry IPv6 addresses and prefixes, and the protocol runs on an individual link basis rather than on an individual IP-subnet basis.
OSPF typically requires coordination among many internal routers: Area Border Routers (ABRs), which are routers attached to multiple areas, and Autonomous System Border Routers (ASBRs), which redistribute routes from other sources (for example, IS-IS, BGP, or static routes) into the OSPF topology. At a minimum, OSPF-based routers or access servers can be configured with all default parameter values, no authentication, and interfaces assigned to areas. If you intend to customize your environment, you must ensure coordinated configurations of all routers.
The Cisco IOS XR Software implementation of OSPF conforms to the OSPF Version 2 and OSPF Version 3 specifications detailed in the Internet RFC 2328 and RFC 2740, respectively.
The following key features are supported in the Cisco IOS XR Software implementation:
Much of the OSPFv3 protocol is the same as in OSPFv2. OSPFv3 is described in RFC 2740.
The key differences between the Cisco IOS XR Software OSPFv3 and OSPFv2 protocols are as follows:
Cisco IOS XR Software introduces new OSPF configuration fundamentals consisting of hierarchical CLI and CLI inheritance.
Hierarchical CLI is the grouping of related network component information at defined hierarchical levels such as at the router, area, and interface levels. Hierarchical CLI allows for easier configuration, maintenance, and troubleshooting of OSPF configurations. When configuration commands are displayed together in their hierarchical context, visual inspections are simplified. Hierarchical CLI is intrinsic for CLI inheritance to be supported.
With CLI inheritance support, you need not explicitly configure a parameter for an area or interface. In Cisco IOS XR Software, the parameters of all interfaces in the same area can be configured with a single command, or parameter values can be inherited from a higher hierarchical level, such as the area configuration level or the router ospf configuration level.
For example, the hello interval value for an interface is determined by this precedence “IF” statement:
If the hello interval command is configured at the interface configuration level, then use the interface configured value, else
If the hello interval command is configured at the area configuration level, then use the area configured value, else
If the hello interval command is configured at the router ospf configuration level, then use the router ospf configured value, else
Use the default value of the command.
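For example, a minimal configuration sketch of this inheritance follows; the process name, area number, interface names, and timer values are illustrative placeholders only:

router ospf 1
 hello-interval 30
 area 0
  hello-interval 20
  interface GigabitEthernet0/0/0/0
   hello-interval 10
  !
  interface GigabitEthernet0/0/0/1
  !
 !
!

In this sketch, GigabitEthernet0/0/0/0 uses its interface-level value of 10 seconds, GigabitEthernet0/0/0/1 has no interface-level setting and inherits the area value of 20 seconds, and an interface in another area with no area-level setting would inherit the router-level value of 30 seconds.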
Before implementing OSPF, you must know what the routing components are and what purpose they serve. They consist of the autonomous system, area types, interior routers, ABRs, and ASBRs.
This figure illustrates the routing components in an OSPF network topology.
The autonomous system is a collection of networks, under the same administrative control, that share routing information with each other. An autonomous system is also referred to as a routing domain. Figure 1 shows two autonomous systems: 109 and 65200. An autonomous system can consist of one or more OSPF areas.
Areas allow the subdivision of an autonomous system into smaller, more manageable networks or sets of adjacent networks. As shown in Figure 1, autonomous system 109 consists of three areas: Area 0, Area 1, and Area 2.
OSPF hides the topology of an area from the rest of the autonomous system. The network topology for an area is visible only to routers inside that area. When OSPF routing is within an area, it is called intra-area routing. This routing limits the amount of link-state information flood into the network, reducing routing traffic. It also reduces the size of the topology information in each router, conserving processing and memory requirements in each router.
Also, the routers within an area cannot see the detailed network topology outside the area. Because of this restricted view of topological information, you can control traffic flow between areas and reduce routing traffic when the entire autonomous system is a single routing domain.
A backbone area is responsible for distributing routing information between multiple areas of an autonomous system. OSPF routing occurring outside of an area is called interarea routing.
The backbone itself has all properties of an area. It consists of ABRs, routers, and networks only on the backbone. As shown in Figure 1, Area 0 is an OSPF backbone area. Any OSPF backbone area has a reserved area ID of 0.0.0.0.
A stub area is an area that does not accept route advertisements or detailed network information external to the area. A stub area typically has only one router that interfaces the area to the rest of the autonomous system. The stub ABR advertises a single default route to external destinations into the stub area. Routers within a stub area use this route for destinations outside the area and the autonomous system. This relationship conserves LSA database space that would otherwise be used to store external LSAs flooded into the area. In Figure 1, Area 2 is a stub area that is reached only through ABR 2. Area 0 cannot be a stub area.
A Not-so-Stubby Area (NSSA) is similar to the stub area. NSSA does not flood Type 5 external LSAs from the core into the area, but can import autonomous system external routes in a limited fashion within the area.
NSSA allows importing of Type 7 autonomous system external routes within an NSSA area by redistribution. These Type 7 LSAs are translated into Type 5 LSAs by NSSA ABRs, which are flooded throughout the whole routing domain. Summarization and filtering are supported during the translation.
Use NSSA to simplify administration if you are a network administrator that must connect a central site using OSPF to a remote site that is using a different routing protocol.
Before NSSA, the connection between the corporate site border router and remote router could not be run as an OSPF stub area because routes for the remote site could not be redistributed into a stub area, and two routing protocols needed to be maintained. A simple protocol like RIP was usually run and handled the redistribution. With NSSA, you can extend OSPF to cover the remote connection by defining the area between the corporate router and remote router as an NSSA. Area 0 cannot be an NSSA.
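As an illustration, a minimal sketch of defining such an area as an NSSA on the routers that border it; the process name, area number, and interface name are placeholders:

router ospf 1
 area 1
  nssa
  interface GigabitEthernet0/0/0/1
  !
 !
!

Routes redistributed into OSPF at the remote site then enter the area as Type 7 LSAs, and the NSSA ABR translates them to Type 5 LSAs for the rest of the routing domain, as described above.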
The OSPF network is composed of ABRs, ASBRs, and interior routers.
An area border router (ABR) is a router with multiple interfaces that connect directly to networks in two or more areas. An ABR runs a separate copy of the OSPF algorithm and maintains separate routing data for each area that it is attached to, including the backbone area. ABRs also send configuration summaries for their attached areas to the backbone area, which then distributes this information to other OSPF areas in the autonomous system. In Figure 1, there are two ABRs. ABR 1 interfaces Area 1 to the backbone area. ABR 2 interfaces the backbone Area 0 to Area 2, a stub area.
An autonomous system boundary router (ASBR) provides connectivity from one autonomous system to another system. ASBRs exchange their autonomous system routing information with boundary routers in other autonomous systems. Every router inside an autonomous system knows how to reach the boundary routers for its autonomous system.
ASBRs can import external routing information from other protocols like BGP and redistribute them as AS-external (ASE) Type 5 LSAs to the OSPF network. If the Cisco IOS XR router is an ASBR, you can configure it to advertise VIP addresses for content as autonomous system external routes. In this way, ASBRs flood information about external networks to routers within the OSPF network.
ASBR routes can be advertised as a Type 1 or Type 2 ASE. The difference between Type 1 and Type 2 is how the cost is calculated. For a Type 2 ASE, only the external cost (metric) is considered when multiple paths to the same destination are compared. For a Type 1 ASE, the combination of the external cost and cost to reach the ASBR is used. Type 2 external cost is the default and is always more costly than an OSPF route and used only if no OSPF route exists.
An interior router (such as R1 in Figure 1) is attached to one area (for example, all the interfaces reside in the same area).
An OSPF process is a logical routing entity running OSPF in a physical router. This logical routing entity should not be confused with the logical routing feature that allows a system administrator (known as the Cisco IOS XR Software Owner) to partition the physical box into separate routers.
A physical router can run multiple OSPF processes, although the only reason to do so would be to connect two or more OSPF domains. Each process has its own link-state database. The routes in the routing table are calculated from the link-state database. One OSPF process does not share routes with another OSPF process unless the routes are redistributed.
Each OSPF process is identified by a router ID. The router ID must be unique across the entire routing domain. OSPF obtains a router ID from the following sources, in order of decreasing preference:
We recommend that the router ID be set by the router-id command in router configuration mode. Separate OSPF processes could share the same router ID, in which case they cannot reside in the same OSPF routing domain.
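A minimal sketch of setting the router ID explicitly, where the process name and address are placeholders:

router ospf 1
 router-id 192.168.0.1
!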
OSPF classifies different media into the following types of networks:
You can configure your Cisco IOS XR network as either a broadcast or an NBMA network. Using this feature, you can configure broadcast networks as NBMA networks when, for example, you have routers in your network that do not support multicast addressing.
OSPF Version 2 supports two types of authentication: plain text authentication and MD5 authentication. By default, no authentication is enabled (referred to as null authentication in RFC 2178).
OSPF Version 3 supports all types of authentication except key rollover.
Plain text authentication (also known as Type 1 authentication) uses a password that travels on the physical medium and is easily visible to someone that does not have access permission and could use the password to infiltrate a network. Therefore, plain text authentication does not provide security. It might protect against a faulty implementation of OSPF or a misconfigured OSPF interface trying to send erroneous OSPF packets.
MD5 authentication provides a means of security. No password travels on the physical medium. Instead, the router uses MD5 to produce a message digest of the OSPF packet plus the key, which is sent on the physical medium. Using MD5 authentication prevents a router from accepting unauthorized or deliberately malicious routing updates, which could compromise your network security by diverting your traffic.
See OSPF Authentication Message Digest Management.
Authentication can be specified for an entire process or area, or on an interface or a virtual link. An interface or virtual link can be configured for only one type of authentication, not both. Authentication configured for an interface or virtual link overrides authentication configured for the area or process.
If you intend for all interfaces in an area to use the same type of authentication, you can configure fewer commands if you use the authentication command in the area configuration submode (and specify the message-digest keyword if you want the entire area to use MD5 authentication). This strategy requires fewer commands than specifying authentication for each interface.
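As an illustration, the following sketch enables MD5 authentication once at the area level so that every interface in the area inherits it; the process name, area number, key ID, key string, and interface name are placeholders:

router ospf 1
 area 0
  authentication message-digest
  message-digest-key 10 md5 my-ospf-key
  interface GigabitEthernet0/0/0/0
  !
 !
!

Interfaces under area 0 inherit the authentication type and key, so no per-interface authentication commands are needed unless an interface must override the area setting.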
To support the changing of an MD5 key in an operational network without disrupting OSPF adjacencies (and hence the topology), a key rollover mechanism is supported. As a network administrator configures the new key into the multiple networking devices that communicate, some time exists when different devices are using both a new key and an old key. If an interface is configured with a new key, the software sends two copies of the same packet, one authenticated by the old key and the other by the new key. The software tracks which devices start using the new key, and the software stops sending duplicate packets after it detects that all of its neighbors are using the new key. The software then discards the old key. The network administrator must then remove the old key from the configuration file of each router.
Routers that share a segment (Layer 2 link between two interfaces) become neighbors on that segment. OSPF uses the hello protocol as a neighbor discovery and keep alive mechanism. The hello protocol involves receiving and periodically sending hello packets out each interface. The hello packets list all known OSPF neighbors on the interface. Routers become neighbors when they see themselves listed in the hello packet of the neighbor. After two routers are neighbors, they may proceed to exchange and synchronize their databases, which creates an adjacency. On broadcast and NBMA networks all neighboring routers have an adjacency.
On point-to-point and point-to-multipoint networks, the Cisco IOS XR software floods routing updates to immediate neighbors. No DR or backup DR (BDR) exists; all routing information is flooded to each router.
On broadcast or NBMA segments only, OSPF minimizes the amount of information being exchanged on a segment by choosing one router to be a DR and one router to be a BDR. Thus, the routers on the segment have a central point of contact for information exchange. Instead of each router exchanging routing updates with every other router on the segment, each router exchanges information with the DR and BDR. The DR and BDR relay the information to the other routers. On broadcast network segments the number of OSPF packets is further reduced by the DR and BDR sending such OSPF updates to a multicast IP address that all OSPF routers on the network segment are listening on.
The software looks at the priority of the routers on the segment to determine which routers are the DR and BDR. The router with the highest priority is elected the DR. If there is a tie, then the router with the higher router ID takes precedence. After the DR is elected, the BDR is elected the same way. A router with a router priority set to zero is ineligible to become the DR or BDR.
Type 5 (ASE) LSAs are generated and flooded to all areas except stub areas. For the routers in a stub area to be able to route packets to destinations outside the stub area, a default route is injected by the ABR attached to the stub area.
The cost of the default route is 1 (default) or is determined by the value specified in the default-cost command.
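For example, a sketch of a stub area whose ABR advertises the injected default route with a cost of 15 instead of the default cost of 1; the process name, area number, and interface name are placeholders:

router ospf 1
 area 2
  stub
  default-cost 15
  interface GigabitEthernet0/0/0/2
  !
 !
!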
Each of the following OSPF Version 2 LSA types has a different purpose:
Each of the following OSPFv3 LSA types has a different purpose:
An address prefix occurs in almost all newly defined LSAs. The prefix is represented by three fields: Prefix Length, Prefix Options, and Address Prefix. In OSPFv3, addresses for these LSAs are expressed as “prefix and prefix length” instead of “address and mask.” The default route is expressed as a prefix with length 0.
Inter-area-prefix and intra-area-prefix LSAs carry all IPv6 prefix information that, in IPv4, is included in router LSAs and network LSAs. The Options field in certain LSAs (router LSAs, network LSAs, interarea-router LSAs, and link LSAs) has been expanded to 24 bits to provide support for OSPF in IPv6.
In OSPFv3, router LSAs and network LSAs no longer carry prefix information; their sole function is to describe the topology of the area.
In OSPF, routing information from all areas is first summarized to the backbone area by ABRs. The same ABRs, in turn, propagate such received information to their attached areas. Such hierarchical distribution of routing information requires that all areas be connected to the backbone area (Area 0). Occasions might exist for which an area must be defined, but it cannot be physically connected to Area 0. Examples of such an occasion might be if your company makes a new acquisition that includes an OSPF area, or if Area 0 itself is partitioned.
In the case in which an area cannot be connected to Area 0, you must configure a virtual link between that area and Area 0. The two endpoints of a virtual link are ABRs, and the virtual link must be configured in both routers. The common nonbackbone area to which the two routers belong is called a transit area. A virtual link specifies the transit area and the router ID of the other virtual endpoint (the other ABR).
A virtual link cannot be configured through a stub area or NSSA.
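As an illustration, a sketch of one endpoint of a virtual link configured through transit Area 1 to the ABR whose router ID is 10.0.0.3; all values are placeholders, and a matching virtual-link statement pointing back at this router's ID must be configured on the other ABR:

router ospf 1
 area 1
  virtual-link 10.0.0.3
  !
 !
!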
This figure illustrates a virtual link from Area 3 to Area 0.

A sham link is a logical intra-area link between two provider edge (PE) routers whose attached VPN sites belong to the same OSPF area but are interconnected across an MPLS VPN backbone.
Configured source and destination addresses serve as the endpoints of the sham link. The source and destination IP addresses must belong to the VRF and must be advertised by Border Gateway Protocol (BGP) as host routes to remote PE routers. The sham-link endpoint addresses should not be advertised by OSPF.
For example, Figure 1 shows three client sites, each with backdoor links. Because each site runs OSPF within Area 1 configuration, all routing between the sites follows the intra-area path across the backdoor links instead of over the MPLS VPN backbone.
If the backdoor links between the sites are used only for backup purposes, default route selection over the backbone link is not acceptable as it creates undesirable traffic flow. To establish the desired path selection over the MPLS backbone, an additional OSPF intra-area (sham link) link between the ingress and egress PErouters must be created.
A sham link is required between any two VPN sites that belong to the same OSPF area and share an OSPF backdoor link. If no backdoor link exists between sites, no sham link is required.
Figure 2 shows an MPLS VPN topology where a sham link configuration is necessary. A VPN client has three sites, each with a backdoor link. Two sham links are configured, one between PE-1 and PE-2 and another between PE-2 and PE-3. A sham link is not required between PE-1 and PE-3, because there is no backdoor link between these sites.
When a sham link is configured between the PE routers, the PE routers can populate the virtual routing and forwarding (VRF) table with the OSPF routes learned over the sham link. These OSPF routes have a larger administrative distance than BGP routes. If BGP routes are available, they are preferred over these OSPF routes with the high administrative distance.
The OSPFv2 SPF Prefix Prioritization feature enables an administrator to converge, in a faster mode, important prefixes during route installation.
When a large number of prefixes must be installed in the Routing Information Base (RIB) and the Forwarding Information Base (FIB), the update duration between the first and last prefix, during SPF, can be significant.
In networks where time-sensitive traffic (for example, VoIP) may transit to the same router along with other traffic flows, it is important to prioritize RIB and FIB updates during SPF for these time-sensitive prefixes.
The OSPFv2 SPF Prefix Prioritization feature provides the administrator with the ability to prioritize important prefixes to be installed, into the RIB during SPF calculations. Important prefixes converge faster among prefixes of the same route type per area. Before RIB and FIB installation, routes and prefixes are assigned to various priority batch queues in the OSPF local RIB, based on specified route policy. The RIB priority batch queues are classified as "critical," "high," "medium," and "low," in the order of decreasing priority.
When enabled, prefix alters the sequence of updating the RIB with this prefix priority:
Critical > High > Medium > Low
As soon as prefix priority is configured, /32 prefixes are no longer preferred by default; they are placed in the low-priority queue, if they are not matched with higher-priority policies. Route policies must be devised to retain /32s in the higher-priority queues (high-priority or medium-priority queues).
Priority is specified using route policy, which can be matched based on IP addresses or route tags. During SPF, a prefix is checked against the specified route policy and is assigned to the appropriate RIB batch priority queue.
These are examples of this scenario:
prefix-set ospf-medium-prefixes 0.0.0.0/0 ge 32 end-set
Redistribution allows different routing protocols to exchange routing information. This technique can be used to allow connectivity to span multiple routing protocols. It is important to remember that the redistribute command controls redistribution into an OSPF process and not from OSPF. See Configuration Examples for Implementing OSPF for an example of route redistribution for OSPF.
OSPF SPF throttling makes it possible to configure SPF scheduling in millisecond intervals and to potentially delay based on the frequency of topology changes in the network. The chosen interval is within the boundary of the user-specified value ranges. If network topology is unstable, SPF throttling calculates SPF scheduling intervals to be longer until topology becomes stable.
SPF calculations occur at the interval set by the timers throttle spf command. The wait interval indicates the amount of time to wait until the next SPF calculation occurs. Each wait interval after that calculation is twice as long as the previous interval until the interval reaches the maximum wait time specified.
The SPF timing can be better explained using an example. In this example, the start interval is set at 5 milliseconds (ms), initial wait interval at 1000 ms, and maximum wait time at 90,000 ms.
timers spf 5 1000 90000
This figure shows the intervals at which the SPF calculations occur as long as at least one topology change event is received in a given wait interval.
Notice that the wait interval between SPF calculations doubles when at least one topology change event is received during the previous wait interval. After the maximum wait time is reached, the wait interval remains the same until the topology stabil to the parameters specified in the timers throttle spf command. Notice in Figure 2that a topology change event was received after the start of the maximum wait time interval and that the SPF intervals have been reset.
Cisco IOS XR Software NSF for OSPF Version 2 allows for the forwarding of data packets to continue along known routes while the routing protocol information is being restored following a failover. With NSF, peer networking devices do not experience routing flaps. During failover, data traffic is forwarded through intelligent line cards while the standby Route Processor (RP) assumes control from the failed RP. The ability of line cards to remain up through a failover and to be kept current with the Forwarding Information Base (FIB) on the active RP is key to Cisco IOS XR Software NSF operation.
Routing protocols, such as OSPF, run only on the active RP or DRP and receive routing updates from their neighbor routers. When an OSPF NSF-capable router performs an RP failover, it must perform two tasks to resynchronize its link-state database with its OSPF neighbors. First, it must relearn the available OSPF neighbors on the network without causing a reset of the neighbor relationship. Second, it must reacquire the contents of the link-state database for the network.
As quickly as possible after an RP failover, the NSF-capable router sends an OSPF NSF signal to neighboring NSF-aware devices. This signal is in the form of a link-local LSA generated by the failed-over router. Neighbor networking devices recognize this signal as a cue. After this exchange is completed, the NSF-capable device uses the routing information to remove stale routes, update the RIB, and update the FIB with the new forwarding information. OSPF on the router and the OSPF neighbors are now fully converged.
The OSPFv3 Graceful Restart feature preserves the data plane capability in the following circumstances:
This feature supports nonstop data forwarding on established routes while the OSPFv3 routing protocol is restarting. Therefore, this feature enhances high availability of IPv6 forwarding.
The two operational modes that a router can be in for this feature are restart mode and helper mode. Restart mode occurs when the OSPFv3 process is doing a graceful restart. Helper mode refers to the neighbor routers that continue to forward traffic on established OSPFv3 routes while OSPFv3 is restarting on a neighboring router.
When the OSPFv3 process starts up, it determines whether it must attempt a graceful restart. The determination is based on whether graceful restart was previously enabled. (OSPFv3 does not attempt a graceful restart upon the first-time startup of the router.) When OSPFv3 graceful restart is enabled, it changes the purge timer in the RIB to a nonzero value. See Configuring OSPFv3 Graceful Restart,for descriptions of how to enable and configure graceful restart.
During a graceful restart, the router does not populate OSPFv3 routes in the RIB. It tries to bring up full adjacencies with the fully adjacent neighbors that OSPFv3 had before the restart. Eventually, the OSPFv3 process indicates to the RIB that it has converged, either for the purpose of terminating the graceful restart (for any reason) or because it has completed the graceful restart.
The following are general details about restart mode. More detailed information on behavior and certain restrictions and requirements appears in Graceful Restart Requirements and Restrictions section.
Helper mode is enabled by default. When a (helper) router receives a grace LSA (Type 11) from a router that is attempting a graceful restart, the following events occur:
The requirements for supporting the Graceful Restart feature include:
OSPFv2 warm standby provides high availability across RP switchovers. With warm standby extensions, each process running on the active RP has a corresponding standby process started on the standby RP. A standby OSPF process can send and receive OSPF packets with no performance impact to the active OSPF process.
Nonstop routing (NSR) allows an RP failover, process restart, or in-service upgrade to be invisible to peer routers and ensures that there is minimal performance or processing impact. Routing protocol interactions between routers are not impacted by NSR. NSR is built on the warm standby extensions. NSR alleviates the requirement for Cisco NSF and IETF graceful restart protocol extensions.
This feature helps OSPFv3 to initialize itself prior to Fail over (FO) and be ready to function before the failure occurs. It reduces the downtime during switchover. By default, the router sends hello packets every 40 seconds.
With warm standby process for each OSPF process running on the Active Route Processor, the corresponding OSPF process must start on the Standby RP. There are no changes in configuration for this feature.
Warm-Standby is always enabled. This is an advantage for the systems running OSPFv3 as their IGP when they do RP failover.
The multicast-intact feature provides the ability to run multicast routing (PIM) when IGP shortcuts are configured and active on the router. Both OSPFv2 and IS-IS support the multicast-intact feature.
You can enable multicast-intact in the IGP when multicast routing protocols (PIM) are configured and IGP shortcuts are configured on the router. IGP shortcuts are MPLS tunnels that are exposed to IGP. The IGP routes:
In OSPF, the max-paths (number of equal-cost next hops) limit is applied separately to the native and mcast-intact next hops. The number of equal cost mcast-intact next hops is the same as that configured for the native next hops.
When a router learns multiple routes to a specific network by using multiple routing processes (or routing protocols), it installs the route with the lowest administrative distance in the routing table. Sometimes the router must select a route from among many learned by using the same routing process with the same administrative distance. In this case, the router chooses the path with the lowest cost (or metric) to the destination. Each routing process calculates its cost differently; the costs may need to be manipulated to achieve load balancing.
OSPF performs load balancing automatically. If OSPF finds that it can reach a destination through more than one interface and each path has the same cost, it installs each path in the routing table. The only restriction on the number of paths to the same destination is controlled by the maximum-paths (OSPF) command.
The range for maximum paths is 1 to 16 and the default number of maximum paths is 16.
The multi-area adjacency feature for OSPFv2 allows a link to be configured on the primary interface in more than one area so that the link could be considered as an intra-area link in those areas and configured as a preference over more expensive paths.
This feature establishes a point-to-point unnumbered link in an OSPF area. A point-to-point link provides a topological path for that area, and the primary adjacency uses the link to advertise the link consistent with draft-ietf-ospf-multi-area-adj-06.
The following are multi-area interface attributes and limitations:
The multi-area interface inherits the interface characteristics from its primary interface, but some interface characteristics can be configured under the multi-area interface configuration mode as shown below:
RP/0/0/CPU0:router(config-ospf-ar)# multi-area-interface GigabitEthernet 0/1/0/3 RP/0/0/CPU0:router(config-ospf-ar-mif)# ? authentication Enable authentication authentication-key Authentication password (key) cost Interface cost cost-fallback Cost when cumulative bandwidth goes below the theshold database-filter Filter OSPF LSA during synchronization and flooding dead-interval Interval after which a neighbor is declared dead distribute-list Filter networks in routing updates hello-interval Time between HELLO packets message-digest-key Message digest authentication password (key) mtu-ignore Enable/Disable ignoring of MTU in DBD packets packet-size Customize size of OSPF packets upto MTU retransmit-interval Time between retransmitting lost link state advertisements transmit-delay Estimated time needed to send link-state update packet RP/0/0/CPU0:router(config-ospf-ar-mif)#
Label Distribution Protocol (LDP) Interior Gateway Protocol (IGP) auto-configuration simplifies the procedure to enable LDP on a set of interfaces used by an IGP instance, such as OSPF. LDP IGP auto-configuration can be used on a large number of interfaces (for example, when LDP is used for transport in the core) and on multiple OSPF instances simultaneously.
This feature supports the IPv4 unicast address family for the default VPN routing and forwarding (VRF) instance.
LDP IGP auto-configuration can also be explicitly disabled on an individual interface basis under LDP using the igp auto-config disable command. This allows LDP to receive all OSPF interfaces minus the ones explicitly disabled.
See Cisco IOS XR MPLS Configuration Guide for the Cisco XR 12000 Series Router for information on configuring LDP IGP auto-configuration.
All OSPF routing protocol exchanges are authenticated and the method used can vary depending on how authentication is configured. When using cryptographic authentication, the OSPF routing protocol uses the Message Digest 5 (MD5) authentication algorithm to authenticate packets transmitted between neighbors in the network. For each OSPF protocol packet, a key is used to generate and verify a message digest that is appended to the end of the OSPF packet. The message digest is a one-way function of the OSPF protocol packet and the secret key. Each key is identified by the combination of interface used and the key identification. An interface may have multiple keys active at any time.
To manage the rollover of keys and enhance MD5 authentication for OSPF, you can configure a container of keys called a keychain with each key comprising the following attributes: generate/accept time, key identification, and authentication algorithm.
OSPF is a link state protocol that requires networking devices to detect topological changes in the network, flood Link State Advertisement (LSA) updates to neighbors, and quickly converge on a new view of the topology. However, during the act of receiving LSAs from neighbors, network attacks can occur, because there are no checks that unicast or multicast packets are originating from a neighbor that is one hop away or multiple hops away over virtual links.
For virtual links, OSPF packets travel multiple hops across the network; hence, the TTL value can be decremented several times. For these type of links, a minimum TTL value must be allowed and accepted for multiple-hop packets.
To filter network attacks originating from invalid sources traveling over multiple hops, the Generalized TTL Security Mechanism (GTSM), RFC 3682, is used to prevent the attacks. GTSM filters link-local addresses and allows for only one-hop neighbor adjacencies through the configuration of TTL value 255. The TTL value in the IP header is set to 255 when OSPF packets are originated, and checked on the received OSPF packets against the default GTSM TTL value 255 or the user configured GTSM TTL value, blocking unauthorized OSPF packets originated from TTL hops away.
A PCE is an entity (component, application, or network node) that is capable of computing a network path or route based on a network graph and applying computational constraints.
PCE is accomplished when a PCE address and client is configured for MPLS-TE. PCE communicates its PCE address and capabilities to OSPF then OSPF packages this information in the PCE Discovery type-length-value (TLV) (Type 2) and reoriginates the RI LSA. OSPF also includes the Router Capabilities TLV (Type 1) in all its RI LSAs. The PCE Discovery TLV contains the PCE address sub-TLV (Type 1) and the Path Scope Sub-TLV (Type 2).
The PCE Address Sub-TLV specifies the IP address that must be used to reach the PCE. It should be a loop-back address that is always reachable, this TLV is mandatory, and must be present within the PCE Discovery TLV. The Path Scope Sub-TLV indicates the PCE path computation scopes, which refers to the PCE ability to compute or participate in the computation of intra-area, inter-area, inter-AS or inter-layer TE LSPs.
PCE extensions to OSPFv2 include support for the Router Information Link State Advertisement (RI LSA). OSPFv2 is extended to receive all area scopes (LSA Types 9, 10, and 11). However, OSPFv2 originates only area scope Type 10.
For detailed information for the Path Computation Element feature see the Implementing MPLS Traffic Engineering on Cisco IOS XR Softwaremodule of the Cisco IOS XR MPLS Configuration Guide for the Cisco XR 12000 Series Router and the following IETF drafts:
The OSPF queue tuning parameters configuration allows you to:
The OSPF IP Fast Reroute Loop Free Alternates computation provides:
This section contains the following procedures:
This task explains how to perform the minimum OSPF configuration on your router that is to enable an OSPF process with a router ID, configure a backbone or nonbackbone area, and then assign one or more interfaces on which OSPF runs.
Although you can configure OSPF before you configure an IP address, no OSPF routing occurs until at least one IP address is configured.
1. configure
2. Do one of the following:
3. router-id { router-id }
4. area area-id
5. interface type interface-path-id
6. Repeat Step 5 for each interface that uses OSPF.
7. log adjacency changes [ detail ] [ enable | disable ]
8. Do one of the following:
This task explains how to configure the stub area and the NSSA for OSPF.
1. configure
2. Do one of the following:
3. router-id { router-id }
4. area area-id
5. Do one of the following:
6. Do one of the following:
7. default-cost cost
8. Do one of the following:
9. Repeat this task on all other routers in the stub area or NSSA.
This task explains how to configure neighbors for a nonbroadcast network. This task is optional.
Configuring NBMA networks as either broadcast or nonbroadcast assumes that there are virtual circuits from every router to every router or fully meshed network.
1. configure
2. Do one of the following:
3. router-id { router-id }
4. area area-id
5. network { broadcast | non-broadcast | { point-to-multipoint [ non-broadcast ] | point-to-point }}
6. dead-interval seconds
7. hello-interval seconds
8. interface type interface-path-id
9. Do one of the following:
10. Repeat Step 9 for all neighbors on the interface.
11. exit
12. interface type interface-path-id
13. Do one of the following:
14. Repeat Step 13 for all neighbors on the interface.
15. Do one of the following:
This task explains how to configure MD5 (secure) authentication on the OSPF router process, configure one area with plain text authentication, and then apply one interface with clear text (null) authentication.
If you choose to configure authentication, you must first decide whether to configure plain text or MD5 authentication, and whether the authentication applies to all interfaces in a process, an entire area, or specific interfaces. See Route Authentication Methods for OSPF for information about each type of authentication and when you should use a specific method for your network.
1. configure
2. router ospf process-name
3. router-id { router-id }
4. authentication [ message-digest | null ]
5. message-digest-key key-id md5 { key | clear key | encrypted key | LINE}
6. area area-id
7. interface type interface-path-id
8. Repeat Step 7 for each interface that must communicate, using the same authentication.
9. exit
10. area area-id
11. authentication [ message-digest | null ]
12. interface type interface-path-id
13. Repeat Step 12 for each interface that must communicate, using the same authentication.
14. interface type interface-path-id
15. authentication [ message-digest | null ]
16. Do one of the following:
This task explains how to tune the convergence time of OSPF routes in the routing table when many LSAs need to be flooded in a very short time interval.
1. configure
2. Do one of the following:
3. router-id { router-id }
4. Perform Step 5 or Step 6 or both to control the frequency that the same LSA is originated or accepted.
5. timers lsa refresh seconds
6. timers lsa min-arrival seconds
7. timers lsa group-pacing seconds
8. Do one of the following:
This task explains how to create a virtual link to your backbone (area 0) and apply MD5 authentication. You must perform the steps described on both ABRs, one at each end of the virtual link. To understand virtual links, see Virtual Link and Transit Area for OSPF .
The following prerequisites must be met before creating a virtual link with MD5 authentication to area 0:
1. Do one of the following:
2. configure
3. Do one of the following:
4. router-id { router-id }
5. area area-id
6. virtual-link router-id
7. authentication message-digest
8. message-digest-key key-id md5 { key | clear key | encrypted key }
9. Repeat all of the steps in this task on the ABR that is at the other end of the virtual link. Specify the same key ID and key that you specified for the virtual link on this router.
10. Do one of the following:
11. Do one of the following:
In the following example, the show ospfv3 virtual links EXEC command verifies that the OSPF_VL0 virtual link to the OSPFv3 neighbor is up, the ID of the virtual link interface is 2, and the IPv6 address of the virtual link endpoint is 2003:3000::1.
RP/0/0/CPU0:router# show ospfv3 virtual-links Virtual Links for OSPFv3 1 Virtual Link OSPF_VL0 to router 10.0.0.3 is up Interface ID 2, IPv6 address 2003:3000::1 Run as demand circuit DoNotAge LSA allowed. Transit area 0.1.20.255, via interface GigabitEthernet 0/1/0/1, Cost of using 2 Transmit Delay is 5 sec, State POINT_TO_POINT, Timer intervals configured, Hello 10, Dead 40, Wait 40, Retransmit 5 Hello due in 00:00:02 Adjacency State FULL (Hello suppressed) Index 0/2/3, retransmission queue length 0, number of retransmission 1 First 0(0)/0(0)/0(0) Next 0(0)/0(0)/0(0) Last retransmission scan length is 1, maximum is 1 Last retransmission scan time is 0 msec, maximum is 0 msec Check for lines: Virtual Link OSPF_VL0 to router 10.0.0.3 is up Adjacency State FULL (Hello suppressed) State is up and Adjacency State is FULL
If you configured two or more subnetworks when you assigned your IP addresses to your interfaces, you might want the software to summarize (aggregate) into a single LSA all of the subnetworks that the local area advertises to another area. Such summarization would reduce the number of LSAs and thereby conserve network resources. This summarization is known as interarea route summarization. It applies to routes from within the autonomous system. It does not apply to external routes injected into OSPF by way of redistribution.
This task configures OSPF to summarize subnetworks into one LSA, by specifying that all subnetworks that fall into a range are advertised together. This task is performed on an ABR only.
1. configure
2. Do one of the following:
3. router-id { router-id }
4. area area-id
5. Do one of the following:
6. interface type interface-path-id
7. Do one of the following:
This task redistributes routes from an IGP (could be a different OSPF process) into OSPF.
For information about configuring routing policy, see Implementing Routing Policy on Cisco IOS XR Software module in the Cisco IOS XR Routing Configuration Guide for the Cisco XR 12000 Series Router.
1. configure
2. Do one of the following:
3. router-id { router-id }
4. redistribute protocol [ process-id ] { level-1 | level-1-2 | level-2 } [ metric metric-value ] [ metric-type type-value ] [ match { external [ 1 | 2 ]} [ tag tag-value ] [ route-policy policy-name ]
5. Do one of the following:
6. Do one of the following:
This task explains how to configure SPF scheduling in millisecond intervals and potentially delay SPF calculations during times of network instability. This task is optional.
1. configure
2. Do one of the following:
3. router-id { router-id }
4. timers throttle spf spf-start spf-hold spf-max-wait
5. area area-id
6. interface type interface-path-id
7. Do one of the following:
8. Do one of the following:
In the following example, the show ospf EXEC command is used to verify that the initial SPF schedule delay time, minimum hold time, and maximum wait time are configured correctly. Additional details are displayed about the OSPF process, such as the router type and redistribution of routes.
RP/0/0/CPU0:router# show ospf 1 Routing Process "ospf 1" with ID 192.168.4.3 Supports only single TOS(TOS0) routes Supports opaque LSA It is an autonomous system boundary router Redistributing External Routes from, ospf 2 Initial SPF schedule delay 5 msecs Minimum hold time between two consecutive SPFs 100 msecs Maximum wait time between two consecutive SPFs 1000 msecs Minimum LSA interval 5 secs. Minimum LSA arrival 1 secs Number of external LSA 0. Checksum Sum 00000000 Number of opaque AS LSA 0. Checksum Sum 00000000 Number of DCbitless external and opaque AS LSA 0 Number of DoNotAge external and opaque AS LSA 0 Number of areas in this router is 1. 1 normal 0 stub 0 nssa External flood list length 0 Non-Stop Forwarding enabled
This task explains how to configure OSPF NSF specific to Cisco on your NSF-capable router. This task is optional.
OSPF NSF requires that all neighbor networking devices be NSF aware, which happens automatically after you install the Cisco IOS XR software image on the router. If an NSF-capable router discovers that it has non-NSF-aware neighbors on a particular network segment, it disables NSF capabilities for that segment. Other network segments composed entirely of NSF-capable or NSF-aware routers continue to provide NSF capabilities.
1. configure
2. router ospf process-name
3. router-id { router-id }
4. Do one of the following:
5. nsf interval seconds
6. nsfflush-delay-timeseconds
7. nsflifetimeseconds
8. nsfietf
9. Do one of the following:
This task explains how to configure OSPF for MPLS TE. This task is optional.For a description of the MPLS TE tasks and commands that allow you to configure the router to support tunnels, configure an MPLS tunnel that OSPF can use, and troubleshoot MPLS TE, see Implementing MPLS Traffic Engineering on Cisco IOS XR Software module of the Cisco IOS XR MPLS Configuration Guide for the Cisco XR 12000 Series Router
Your network must support the following features before you enable MPLS TE for OSPF on your router:
1. configure
2. router ospf process-name
3. router-id { router-id }
4. mpls traffic-eng router-id interface-type interface-instance
5. area area-id
6. mpls traffic-eng
7. interface type interface-path-id
8. Do one of the following:
9. show ospf [ process-name ] [ area-id ] mpls traffic-eng { link | fragment }
This section provides the following output examples:
In the following example, the show route ospf EXEC command verifies that GigabitEthernet interface 0/3/0/0 exists and MPLS TE is not configured:
RP/0/0/CPU0:router# show route ospf 1 O 11.0.0.0/24 [110/15] via 0.0.0.0, 3d19h, tunnel-te1 O 192.168.0.12/32 [110/11] via 11.1.0.2, 3d19h, GigabitEthernet0/3/0/0 O 192.168.0.13/32 [110/6] via 0.0.0.0, 3d19h, tunnel-te1
In the following example, the show ospf mpls traffic-eng EXEC command verifies that the MPLS TE fragments are configured correctly:
RP/0/ RP0/CPU0:router# show ospf 1 mpls traffic-eng fragment OSPF Router with ID (192.168.4.3) (Process ID 1) Area 0 has 1 MPLS TE fragment. Area instance is 3. MPLS router address is 192.168.4.2 Next fragment ID is 1 Fragment 0 has 1 link. Fragment instance is 3. Fragment has 0 link the same as last update. Fragment advertise MPLS router address Link is associated with fragment 0. Link instance is 3 Link connected to Point-to-Point network Link ID :55.55.55.55 Interface Address :192.168.50.21 ospf mpls traffic-eng EXEC command verifies that the MPLS TE links on area instance 3 are configured correctly:
RP/0/0/CPU0:router# show ospf mpls traffic-eng link OSPF Router with ID (192.168.4.1) (Process ID 1) Area 0 has 1 MPLS TE links. Area instance is 3. Links in hash bucket 53. Link is associated with fragment 0. Link instance is 3 Link connected to Point-to-Point network Link ID :192.168.50.20 Interface Address :192.168.20.50 route ospf EXEC command verifies that the MPLS TE tunnels replaced GigabitEthernet interface 0/3/0/0 and that configuration was performed correctly:
RP/0/0/CPU0:router# show route ospf 1 O E2 192.168.10.0/24 [110/20] via 0.0.0.0, 00:00:15, tunnel2 O E2 192.168.11.0/24 [110/20] via 0.0.0.0, 00:00:15, tunnel2 O E2 192.168.1244.0/24 [110/20] via 0.0.0.0, 00:00:15, tunnel2 O 192.168.12.0/24 [110/2] via 0.0.0.0, 00:00:15, tunnel2
This task explains how to configure a graceful restart for an OSPFv3 process. This task is optional.
1. configure
2. router ospfv3 process-name
3. graceful-restart
4. graceful-restart lifetime
5. graceful-restart interval seconds
6. graceful-restart helper disable
7. Do one of the following:
8. show ospfv3 [ process-name [ area-id ]] database grace
This section describes the tasks you can use to display information about a graceful restart.
The following screen output shows the state of the graceful restart capability on the local router:
RP/0/0/CPU0:router# show ospfv3 1 database grace Routing Process “ospfv3 1” with ID 2.2.2.2 Initial SPF schedule delay 5000 msecs Minimum hold time between two consecutive SPFs 10000 msecs Maximum wait time between two consecutive SPFs 10000 msecs Initial LSA throttle delay 0 msecs Minimum hold time for LSA throttle 5000 msecs Maximum wait time for LSA throttle 5000 msecs Minimum LSA arrival 1000 msecs LSA group pacing timer 240 secs Interface flood pacing timer 33 msecs Retransmission pacing timer 66 msecs Maximum number of configured interfaces 255 Number of external LSA 0. Checksum Sum 00000000 Number of areas in this router is 1. 1 normal 0 stub 0 nssa Graceful Restart enabled, last GR 11:12:26 ago (took 6 secs) Area BACKBONE(0) Number of interfaces in this area is 1 SPF algorithm executed 1 times Number of LSA 6. Checksum Sum 0x0268a7 Number of DCbitless LSA 0 Number of indication LSA 0 Number of DoNotAge LSA 0 Flood list length 0
The following screen output shows the link state for an OSPFv3 instance:
RP/0/0/CPU0:router# show ospfv3 1 database grace OSPFv3 Router with ID (2.2.2.2) (Process ID 1) Router Link States (Area 0) ADV Router Age Seq# Fragment ID Link count Bits 1.1.1.1 1949 0x8000000e 0 1 None 2.2.2.2 2007 0x80000011 0 1 None Link (Type-8) Link States (Area 0) ADV Router Age Seq# Link ID Interface 1.1.1.1 180 0x80000006 1 PO0/2/0/0 2.2.2.2 2007 0x80000006 1 PO0/2/0/0 Intra Area Prefix Link States (Area 0) ADV Router Age Seq# Link ID Ref-lstype Ref-LSID 1.1.1.1 180 0x80000006 0 0x2001 0 2.2.2.2 2007 0x80000006 0 0x2001 0 Grace (Type-11) Link States (Area 0) ADV Router Age Seq# Link ID Interface 2.2.2.2 2007 0x80000005 1 PO0/2/0/0
This task explains how to configure a provider edge (PE) router to establish an OSPFv2 sham link connection across a VPN backbone. This task is optional.
Before configuring a sham link in a Multiprotocol Label Switching (MPLS) VPN between
provider edge (PE) routers, OSPF must be enabled as follows:
See Enabling OSPF for information on these OSPF configuration prerequisites.
1. configure
2. interface type interface-path-id
3. vrf vrf-name
4. ipv4 address ip-address mask
5. end
6. router ospf instance-id
7. vrf vrf-name
8. router-id { router-id }
9. redistribute bgp process-id
10. area area-id
11. sham-link source-address destination-address
12. cost cost
13. Do one of the following:
This optional task describes how to enable nonstop routing (NSR) for OSPFv instance-id
3. nsr
4. Use one of these commands:
This task describes how to enable nonstop routing (NSR) for OSPFvv3 instance-id
3. nsr
4. Use one of these commands:
Perform this task to configure OSPFv2 SPF (shortest path first) prefix prioritization.
1. configure
2. prefix-set prefix-set name
3. route-policy route-policy name if destination in prefix-set name then set spf-priority {critical | high | medium} endif
4. router ospf ospf name
5. spf prefix-priority route-policy route-policy name
6. Use one of these commands:
7. show rpl route-policy route-policy name detail
This optional task describes how to enable multicast-intact for OSPFv2 routes that use IPv4 addresses.
1. configure
2. router ospf instance-id
3. mpls traffic-eng multicast-intact
4. Do one of the following:
This task explains how to associate an interface with a VPN Routing and Forwarding (VRF) instance.
1. configure
2. router ospf process-name
3. vrf vrf-name
4. interface type interface-path-id
5. ipv4 address ip-address mask
6. ipv6 address ipv6-prefix/prefix-length [ eui-64 ]
7. ipv4 mtu mtu
8. Do one of the following:
1. configure
2. router ospf process-name
3. vrf vrf-name
4. router-id { router-id }
5. redistribute protocol [ process-id ] { level-1 | level-1-2 | level-2 } [ metric metric-value ] [ metric-type type-value ] [ match { external [ 1 | 2 ] }] [ tag tag-value ] route-policy policy-name]
6. area area-id
7. interface type interface-path-id
8. exit
9. domain-id [ secondary ] type { 0005 | 0105 | 0205 | 8005 } value value
10. domain-tag tag
11. disable-dn-bit-check
12. Do one of the following:
This task explains how to create multiple OSPF instances. In this case, the instances are a normal OSPF instance and a VRF instance.
1. configure
2. router ospf process-name
3. area area-id
4. interface type interface-path-id
5. exit
6. vrf vrf-name
7. area area-id
8. interface type interface-path-id
9. Do one of the following:
This task explains how to create multiple areas on an OSPF primary interface.
1. configure
2. router ospf process-name
3. area area-id
4. interface type interface-path-id
5. area area-id
6. multi-area-interface type interface-path-id
7. Do one of the following:
This task explains how to configure LDP auto-configuration for an OSPF instance.
Optionally, you can configure this feature for an area of an OSPF instance.
1. configure
2. router ospf process-name
3. mpls ldp auto-config
4. Do one of the following:
Perform this task to configure LDP IGP Synchronization under OSPF.
1. configure
2. router ospf process-name
3. Use one of the following commands:
4. Use one of the following commands:
This task explains how to manage authentication of a keychain on the OSPF interface.
A valid keychain must be configured before this task can be attempted.
To learn how to configure a keychain and its associated attributes, see the Implementing Key Chain Management on Cisco IOS XR Software module of the Cisco IOS XR System Security Configuration Guide for the Cisco XR 12000 Series Router.
1. configure
2. router ospf process-name
3. router-id { router-id }
4. area area-id
5. interface type interface-path-id
6. authentication message-digest keychain keychain
7. Do one of the following:
The following example shows how to configure the keychain ospf_intf_1 that contains five key IDs. Each key ID is configured with different send-lifetime values; however, all key IDs specify the same text string for the key.
key chain ospf_intf_1 key 1 send-lifetime 11:30:30 May 1 2007 duration 600 cryptographic-algorithm MD5T key-string clear ospf_intf_1 key 2 send-lifetime 11:40:30 May 1 2007 duration 600 cryptographic-algorithm MD5 key-string clear ospf_intf_1 key 3 send-lifetime 11:50:30 May 1 2007 duration 600 cryptographic-algorithm MD5 key-string clear ospf_intf_1 key 4 send-lifetime 12:00:30 May 1 2007 duration 600 cryptographic-algorithm MD5 key-string clear ospf_intf_1 key 5 send-lifetime 12:10:30 May 1 2007 duration 600 cryptographic-algorithm MD5 key-string clear ospf_intf_1
The following example shows that keychain authentication is enabled on the Gigabit Ethernet 0/4/0/1 interface:
RP/0/0/CPU0:router# show ospf 1 interface GigabitEthernet0/4/0/1 GigabitEthernet0/4/0/1 is up, line protocol is up Internet Address 100.10.10.2/24, Area 0 Process ID 1, Router ID 2.2.2.1, Network Type BROADCAST, Cost: 1 Transmit Delay is 1 sec, State DR, Priority 1 Designated Router (ID) 2.2.2.1, Interface address 100.10.10.2 Backup Designated router (ID) 1.1.1.1, Interface address 100.10.10.1 Timer intervals configured, Hello 10, Dead 40, Wait 40, Retransmit 5 Hello due in 00:00:02 Index 3/3, flood queue length 0 Next 0(0)/0(0) Last flood scan length is 2, maximum is 16 Last flood scan time is 0 msec, maximum is 0 msec Neighbor Count is 1, Adjacent neighbor count is 1 Adjacent with neighbor 1.1.1.1 (Backup Designated Router) Suppress hello for 0 neighbor(s) Keychain-based authentication enabled Key id used is 3 Multi-area interface Count is 0
The following example shows output for configured keys that are active:
RP/0/0/CPU0:router# show key chain ospf_intf_1 Key-chain: ospf_intf_1/ - Key 1 -- text "0700325C4836100B0314345D" cryptographic-algorithm -- MD5 Send lifetime: 11:30:30, 01 May 2007 - (Duration) 600 Accept lifetime: Not configured Key 2 -- text "10411A0903281B051802157A" cryptographic-algorithm -- MD5 Send lifetime: 11:40:30, 01 May 2007 - (Duration) 600 Accept lifetime: Not configured Key 3 -- text "06091C314A71001711112D5A" cryptographic-algorithm -- MD5 Send lifetime: 11:50:30, 01 May 2007 - (Duration) 600 [Valid now] Accept lifetime: Not configured Key 4 -- text "151D181C0215222A3C350A73" cryptographic-algorithm -- MD5 Send lifetime: 12:00:30, 01 May 2007 - (Duration) 600 Accept lifetime: Not configured Key 5 -- text "151D181C0215222A3C350A73" cryptographic-algorithm -- MD5 Send lifetime: 12:10:30, 01 May 2007 - (Duration) 600 Accept lifetime: Not configured
This task explains how to set the security time-to-live mechanism on an interface for GTSM.
1. configure
2. router ospf process-name
3. router-id { router-id }
4. log adjacency changes [ detail | disable ]
5. nsf { cisco [ enforce global ] | ietf [ helper disable ]}
6. timers throttle spf spf-start spf-hold spf-max-wait
7. area area-id
8. interface type interface-path-id
9. security ttl [ disable | hops hop-count ]
10. Do one of the following:
11. show ospf [ process-name ] [ vrf vrf-name ] [ area-id ] interface [ type interface-path-id ]
The following is sample output that displays the GTSM security TTL value configured on an OSPF interface:
RP/0/0/CPU0:router# show ospf 1 interface GigabitEthernet0/5/0/0 GigabitEthernet0/5/0/0 is up, line protocol is up Internet Address 120.10.10.1/24, Area 0 Process ID 1, Router ID 100.100.100.100, Network Type BROADCAST, Cost: 1 Transmit Delay is 1 sec, State BDR, Priority 1 TTL security enabled, hop count 2 Designated Router (ID) 102.102.102.102, Interface address 120.10.10.3 Backup Designated router (ID) 100.100.100.100, Interface address 120.10.10.1 Flush timer for old DR LSA due in 00:02:36 Timer intervals configured, Hello 10, Dead 40, Wait 40, Retransmit 5 Hello due in 00:00:05 Index 1/1, flood queue length 0 Next 0(0)/0(0) Last flood scan length is 1, maximum is 4 Last flood scan time is 0 msec, maximum is 0 msec Neighbor Count is 1, Adjacent neighbor count is 1 Adjacent with neighbor 102.102.102.102 (Designated Router) Suppress hello for 0 neighbor(s) Multi-area interface Count is 0
This task explains how to verify the configuration and operation of OSPF.
1. show { ospf | ospfv3 } [ process-name ]
2. show { ospf | ospfv3 } [ process-name ] border-routers [ router-id ]
3. show { ospf | ospfv3 } [ process-name ] database
4. show { ospf | ospfv3 } [ process-name ] [ area-id ] flood-list interface type interface-path-id
5. show { ospf | ospfv3 } [ process-name ] [ vrf vrf-name ] [ area-id ] interface [ type interface-path-id ]
6. show { ospf | ospfv3 }[ process-name ] [ area-id ] neighbor [ t ype interface- path-id ] [ neighbor-id ] [ detail ]
7. clear { ospf | ospfv3 }[ process-name ] process
8. clear{ospf|ospfv3[ process-name ] redistribution
9. clear{ospf|ospfv3[ process-name ] routes
10. clear{ospf|ospfv3[ process-name ] vrf [vrf-name|all] {process |redistribution|routes|statistics [interface type interface-path-id|message-queue|neighbor]}
11. clear { ospf | ospfv3 }[ process-name ] statistics [ neighbor [ type interface-path-id ] [ ip-address ]]
The following procedures explain how to limit the number of continuous incoming events processed, how to set the maximum number of rate-limited link-state advertisements (LSAs) processed per run, how to limit the number of summary or external Type 3 to Type 7 link-state advertisements (LSAs) processed per shortest path first (SPF) run, and how to set the high watermark for incoming priority events.
1. configure
2. router ospf process-name
3. queue dispatch incoming count
4. queue dispatch rate-limited-lsa count
5. queue dispatch spf-lsa-limit count
6. queue limit { high | medium | low } count
This task describes how to enable the IP fast reroute (IPFRR) per-link loop-free alternate (LFA) computation to converge traffic flows around link failures.
To enable protection on broadcast links, IPFRR and bidirectional forwarding detection (BFD) must be enabled on the interface under OSPF.
1. configure
2. router ospf process-name
3. area area-id
4. interface type interface-path-id
5. fast-reroute per-link { enable | disable }
6. Do one of the following:
1. configure
2. router ospf process-name
3. area area-id
4. interface type interface-path-id
5. fast-reroute per-link exclude interface type interface-path-id
6. Do one of the following:
This section provides the following configuration examples:
The following example shows how an OSPF interface is configured for an area in Cisco IOS XR Software.
area 0 must be explicitly configured with the area command and all interfaces that are in the range from 10.1.2.0 to 10.1.2.255 are bound to area 0. Interfaces are configured with the interface command (while the router is in area configuration mode) and the area keyword is not included in the interface statement.
interface GigabitEthernet 0/3/0/0 ip address 10.1.2.1 255.255.255.255 negotiation auto ! router ospf 1 router-id 10.2.3.4 area 0 interface GigabitEthernet 0/3/0/0 ! !
The following example shows how OSPF interface parameters are configured for an area in Cisco IOS XR software.
In Cisco IOS XR software, OSPF interface-specific parameters are configured in interface configuration mode and explicitly defined for area 0. In addition, the ip ospf keywords are no longer required.
interface GigabitEthernet 0/3/0/0 ip address 10.1.2.1 255.255.255.0 negotiation auto ! router ospf 1 router-id 10.2.3.4 area 0 interface GigabitEthernet 0/3/0/0 cost 77 mtu-ignore authentication message-digest message-digest-key 1 md5 0 test ! !
The following example shows the hierarchical CLI structure of Cisco IOS XR software:
In Cisco IOS XR software, OSPF areas must be explicitly configured, and interfaces configured under the area configuration mode are explicitly bound to that area. In this example, interface 10.1.2.0/24 is bound to area 0 and interface 10.1.3.0/24 is bound to area 1.
interface GigabitEthernet 0/3/0/0 ip address 10.1.2.1 255.255.255.0 negotiation auto ! interface GigabitEthernet 0/3/0/1 ip address 10.1.3.1 255.255.255.0 negotiation auto ! router ospf 1 router-id 10.2.3.4 area 0 interface GigabitEthernet 0/3/0/0 ! area 1 interface GigabitEthernet 0/3/0/1 ! !
The following example configures the cost parameter at different hierarchical levels of the OSPF topology, and illustrates how the parameter is inherited and how only one setting takes precedence. According to the precedence rule, the most explicit configuration is used.
The cost parameter is set to 5 in router configuration mode for the OSPF process. Area 1 sets the cost to 15 and area 6 sets the cost to 30. All interfaces in area 0 inherit a cost of 5 from the OSPF process because the cost was not set in area 0 or its interfaces.
In area 1, every interface has a cost of 15 because the cost is set in area 1 and 15 overrides the value 5 that was set in router configuration mode.
Area 4 does not set the cost, but GigabitEthernet interface 01/0/2 sets the cost to 20. The remaining interfaces in area 4 have a cost of 5 that is inherited from the OSPF process.
Area 6 sets the cost to 30, which is inherited by GigabitEthernet interfaces 0/1/0/3 and 0/2/0/3. GigabitEthernet interface 0/3/0/3 uses the cost of 1, which is set in interface configuration mode.
router ospf 1 router-id 10.5.4.3 cost 5 area 0 interface GigabitEthernet 0/1/0/0 ! interface GigabitEthernet 0/2/0/0 ! interface GigabitEthernet 0/3/0/0 ! ! area 1 cost 15 interface GigabitEthernet 0/1/0/1 ! interface GigabitEthernet 0/2/0/1 ! interface GigabitEthernet 0/3/0/1 ! ! area 4 interface GigabitEthernet 0/1/0/2 cost 20 ! interface GigabitEthernet 0/2/0/2 ! interface GigabitEthernet 0/3/0/2 ! ! area 6 cost 30 interface GigabitEthernet 0/1/0/3 ! interface GigabitEthernet 0/2/0/3 ! interface GigabitEthernet 0/3/0/3 cost 1 ! !
The following example shows how to configure the OSPF portion of MPLS TE. However, you still need to build an MPLS TE topology and create an MPLS TE tunnel. See the Cisco IOS XR MPLS Configuration Guide for the Cisco XR 12000 Series Routerfor information.
In this example, loopback interface 0 is associated with area 0 and MPLS TE is configured within area 0.
interface Loopback 0 address 10.10.10.10 255.255.255.0 ! interface GigabitEthernet 0/2/0/0 address 10.1.2.2 255.255.255.0 ! router ospf 1 router-id 10.10.10.10 nsf auto-cost reference-bandwidth 10000 mpls traffic-eng router-id Loopback 0 area 0 mpls traffic-eng interface GigabitEthernet 0/2/0/0 interface Loopback 0
The following example shows the prefix range 2300::/16 summarized from area 1 into the backbone:
router ospfv3 1 router-id 192.168.0.217 area 0 interface GigabitEthernet 0/2/0/1 area 1 range 2300::/16 interface GigabitEthernet 0/2/0/0
The following example shows that area 1 is configured as a stub area:
router ospfv3 1 router-id 10.0.0.217 area 0 interface GigabitEthernet 0/2/0/1 area 1 stub interface GigabitEthernet 0/2/0/0
The following example shows that area 1 is configured as a totally stub area:
router ospfv3 1 router-id 10.0.0.217 area 0 interface GigabitEthernet 0/2/0/1 area 1 stub no-summary interface GigabitEthernet 0/2/0/0
This example shows how to configure /32 prefixes as medium-priority, in general, in addition to placing some /32 and /24 prefixes in critical-priority and high-priority queues:
prefix-set ospf-critical-prefixes 192.41.5.41/32, 11.1.3.0/24, 192.168.0.44/32 end-set ! prefix-set ospf-high-prefixes 44.4.10.0/24, 192.41.4.41/32, 41.4.41.41/32 end-set ! prefix-set ospf-medium-prefixes 0.0.0.0/0 ge 32 end-set ! route-policy ospf-priority if destination in ospf-high-prefixes then set spf-priority high else if destination in ospf-critical-prefixes then set spf-priority critical else if destination in ospf-medium-prefixes then set spf-priority medium endif endif endif end-policy
router ospf 1 spf prefix-priority route-policy ospf-priority area 0 interface POS0/3/0/0 ! ! area 3 interface GigabitEthernet0/2/0/0 ! ! area 8 interface GigabitEthernet0/2/0/0.590
The following example uses prefix lists to limit the routes redistributed from other protocols.
Only routes with 9898:1000 in the upper 32 bits and with prefix lengths from 32 to 64 are redistributed from BGP 42. Only routes not matching this pattern are redistributed from BGP 1956.
ipv6 prefix-list list1 seq 10 permit 9898:1000::/32 ge 32 le 64 ipv6 prefix-list list2 seq 10 deny 9898:1000::/32 ge 32 le 64 seq 20 permit ::/0 le 128 router ospfv3 1 router-id 10.0.0.217 redistribute bgp 42 redistribute bgp 1956 distribute-list prefix-list list1 out bgp 42 distribute-list prefix-list list2 out bgp 1956 area 1 interface GigabitEthernet 0/2/0/0
This example shows how to set up a virtual link to connect the backbone through area 1 for the OSPFv3 topology that consists of areas 0 and 1 and virtual links 10.0.0.217 and 10.0.0.212:
router ospfv3 1 router-id 10.0.0.217 area 0 interface GigabitEthernet 0/2/0/1 area 1 virtual-link 10.0.0.212 interface GigabitEthernet 0/2/0/0
router ospfv3 1 router-id 10.0.0.212 area 0 interface GigabitEthernet 0/3/0/1 area 1 virtual-link 10.0.0.217 interface GigabitEthernet 0/2/0/0
The following examples show how to configure a virtual link to your backbone and apply MD5 authentication. You must perform the steps described on both ABRs at each end of the virtual link.
After you explicitly configure the ABRs, the configuration is inherited by all interfaces bound to that area—unless you override the values and configure them explicitly for the interface.
To understand virtual links, see Virtual Link and Transit Area for OSPF.
In this example, all interfaces on router ABR1 use MD5 authentication:
router ospf ABR1 router-id 10.10.10.10 authentication message-digest message-digest-key 100 md5 0 cisco area 0 interface GigabitEthernet 0/2/0/1 interface GigabitEthernet 0/3/0/0 area 1 interface GigabitEthernet 0/3/0/1 virtual-link 10.10.5.5 ! !
In this example, only area 1 interfaces on router ABR3 use MD5 authentication:
router ospf ABR2 router-id 10.10.5.5 area 0 area 1 authentication message-digest message-digest-key 100 md5 0 cisco interface GigabitEthernet 0/9/0/1 virtual-link 10.10.10.10 area 3 interface Loopback 0 interface GigabitEthernet 0/9/0/0 !
The following examples show how to configure a provider edge (PE) router to establish a VPN backbone and sham link connection:
logging console debugging vrf vrf_1 address-family ipv4 unicast import route-target 100:1 ! export route-target 100:1 ! ! ! interface Loopback0 ipv4 address 2.2.2.1 255.255.255.255 ! interface Loopback1 vrf vrf_1 ipv4 address 10.0.1.3 255.255.255.255 ! interface GigabitEthernet0/2/0/2 vrf vrf_1 ipv4 address 100.10.10.2 255.255.255.0 ! interface GigabitEthernet0/2/0/3 ipv4 address 100.20.10.2 255.255.255.0 ! ! route-policy pass-all pass end-policy ! router ospf 1 log adjacency changes router-id 2.2.2.2 vrf vrf_1 router-id 22.22.22.2 domain-id type 0005 value 111122223333 domain-tag 140 nsf ietf redistribute bgp 10 area 0 sham-link 10.0.1.3 10.0.0.101 ! interface GigabitEthernet0/2/0/2 ! ! ! ! router ospf 2 router-id 2.22.2.22 area 0 interface Loopback0 ! interface GigabitEthernet0/2/0/3 ! ! ! router bgp 10 bgp router-id 2.2.2.1 bgp graceful-restart restart-time 300 bgp graceful-restart address-family ipv4 unicast redistribute connected ! address-family vpnv4 unicast ! neighbor 2.2.2.2 remote-as 10 update-source Loopback0 address-family ipv4 unicast ! address-family vpnv4 unicast ! ! vrf vrf_1 rd 100:1 address-family ipv4 unicast redistribute connected route-policy pass-all redistribute ospf 1 match internal external ! ! ! mpls ldp router-id 2.2.2.1 interface GigabitEthernet0/2/0/3 ! !
The following example shows how to configure the OSPF queue tuning parameters:
router ospf 100 queue dispatch incoming 30 queue limit high 1500 queue dispatch rate-limited-lsa 1000 queue dispatch spf-lsa-limit 2000
To configure route maps through the RPL for OSPF Version 2, see Implementing Routing Policy on Cisco IOS XR Software module.
To build an MPLS TE topology, create tunnels, and configure forwarding over the tunnel for OSPF Version 2; see Cisco IOS XR MPLS Configuration Guide for the Cisco XR 12000 Series Router.
The following sections provide references related to implementing OSPF. | http://www.cisco.com/c/en/us/td/docs/routers/xr12000/software/xr12k_r3-9/routing/configuration/guide/b_xr12krc39/b_xr12krc39_chapter_0100.html | CC-MAIN-2016-18 | refinedweb | 12,593 | 55.74 |
Scaffolding a Clojure/Compojure Webapp for Heroku
In this post we’ll go through the process to create a basic Clojure/Compojure/libnoir scaffolding project and deploying it to Heroku.
First, make sure you’ve installed the prereqs: Leiningen >= v2.0 Heroku Toolbelt
and here’s the GitHub if that’s your style.
After installing leiningen, run:
lein new compojure scaffold-app
to scaffold a new project. Then cd into the project and run
lein ring server to install dependencies and run the app.
cd scaffold-applein ring server```We can kill the server with `C-c`. We will need a `Procfile` to deploy to Heroku and it will look like this:`web: java $JVM_OPTS -cp target/scaffolding-app.jar clojure.main -m scaffold-app.handler $PORT`Be sure to save that as `Procfile`. This says we will have a “web” dyno type, which is a special type on heroku that is allowed to receive web traffic.We need a `:main` namespace in our app so that `lein run` knows how to run the app.Inside of `project.clj` add `:main` and a dependency on `lib-noir`, from which we will use a jetty adapter. We also want to add `min-lein-version` so that heroku uses lein 2.0 and add a section for our `:uberjar-name`. This will help us out with some startup-timing issues we could encounter otherwise.```clojure(defproject scaffold-app "0.1.0-SNAPSHOT" :description"FIXME: write description" :url "" :dependencies[[org.clojure/clojure "1.5.1"][lib-noir "0.7.9"] [compojure "1.1.6"]] :mainscaffold-app.handler :min-lein-version "2.0.0" :uberjar-name"scaffolding-app.jar" :plugins [[lein-ring "0.8.10"]] :ring {:handlerscaffold-app.handler/app} :profiles {:dev {:dependencies[[javax.servlet/servlet-api "2.5"][ring-mock "0.1.5"]]}})```In `src/scaffold_app/handler.clj` add `ring.adapter.jetty` to `:use` and bracket```clojure(:use [compojure.core][ring.adapter.jetty :as ring])```and `-main` to the body where the port will be given to us from Heroku:```clojure(defn -main [port] (run-jetty (handler/site app-routes){:port (read-string port) :join? false}))```At this point you should be able to run `lein run 8080` to start an instance ofthe app on port 8080. If this works, you are ready to deploy to Heroku.Assuming you have git, a Heroku account and the Toolbelt (mentioned at the topof the post) installed we can deploy to heroku in this fashion: (Remember tochange “scaffolding-clojure” to something else. There is already an app withthat name that exists on heroku.)```shellgit init heroku apps:create scaffolding-clojure```heroku’s `apps:create` adds a “heroku” remote to git.```shellgit add Procfile .gitignore README.md project.clj src/ test/git commit -m 'first commit' git push -u heroku master```We can open our app with `heroku open` or watch it run with `heroku logs --tail`In the next post we’ll dive into lib-noir a bit to investigate potentialapplications (such as JSON APIs).[]() | https://www.christopherbiscardi.com/2014/1/15/scaffolding-a-clojurecompojure-webapp-for-heroku/ | CC-MAIN-2019-47 | refinedweb | 499 | 59.5 |
How to break an entire ecosystem by publishing a release
As a Doctrine maintainer, one of the most exciting parts of the job is working on software that powers thousands of businesses and helps developers write better software for themselves and their employers. It is a huge challenge to get right, and a large source of satisfaction when it goes to plan. The downside is that sometimes it can go wrong. Very wrong. Not the “whoops, I found a bug” kind of wrong. Worse. But first, let me explain what we’re trying to do.
The Doctrine Framework
You would think that the most-used part of the Doctrine libraries is the ORM, as that’s our core piece of software. What most people don’t know is that the ORM is powered by a bunch of smaller libraries that provide essential core functions. I’ve previously talked about all these libraries at SymfonyLive Berlin in 2018 (Slides, Recording in German), but they mostly stay in the background and do their thing. Some of them are more known than others, like the annotations library that you may be familiar with if you’ve followed the Symfony best practices.
All of these libraries were part of the Doctrine ORM until they were extracted to the doctrine/common package. Over the years, this was split up into individual packages, as there was no point to have everybody install the persistence interfaces when they only wanted to use an annotation library. But in essence, most of this code is around 10 years old. It was written for PHP 5.3 and upgraded over time to make use of new features in PHP. In 2017, we dropped support for PHP 5.6 and 7.0, beginning a new era for the Doctrine project: that of strictly typed code. This had a large impact for our users that we hadn’t considered before, and I knew that we had to be more careful about communicating such changes in the future.
With PHP 7.1 and the introduction of nullable type declarations, we could finally make proper use of typing in our libraries. With the help of the Doctrine Coding Standard, we started migrating our code to be strictly typed under PHP 7.1 and newer. Private members were upgraded immediately, while public members were upgraded wherever they could. Due to SemVer, this isn’t always possible for existing interfaces, as we may not break backward compatibility for our users.
This is even more important for “base libraries”, like the doctrine/persistence, doctrine/annotations, doctrine/collections, or doctrine/inflector packages that are installed hundreds of thousands of times a day. The easiest way to do this would be to change the branch-alias for the master branch to 2.0.x-dev, change all interfaces to have strict type declarations (both for arguments and return types), test it and release it for the world to use. While it would be easy for us to release the packages this way, the impact for the ecosystem would be catastrophic and would most likely slow adoption to a crawl unless we tried to force it.
The annotations and collections libraries are probably the most impactful packages that will be changed in the next few months. The collections library has a large impact on ORM and the MongoDB ODM, while the annotations library is used by many other projects, most notably the Symfony Framework where it can be used to define controller routings, entity mappings, validation constraints, and many other features. Releasing a 2.0 release with BC breaks would be very disrupting to the Symfony community at large. We’ve tried to release annotations 2.0 in 2016, and another attempt has been made in 2018. Both times, these efforts started to collect dust and eventually fizzled out.
How NOT to build a new major version
For most of our packages, including the annotations library, we always started working on a new major version by removing stuff that we wanted to remove and build the new functionality at the same time or even later. This is true for both annotations efforts, for MongoDB ODM 2.0, as well as DBAL and ORM 3.0. When working on MongoDB ODM 2.0, I realised that releasing the new major version without a clean upgrade path would not work, considerably slowing or even preventing adoption. The effort to build a somewhat acceptable upgrade path took the better part of three months, and was considerably more difficult because we didn’t do it from the start.
With annotations 2.0, one of the first decisions that were made was to change the namespace. The legacy of doctrine/common still lives on in the Doctrine\Common\Annotations namespace, and people wanted to get rid of it and change this to Doctrine\Annotations. This causes a hard BC break that requires people to change their code before they update. With many different packages depending on this library, upgrading a single one of them can cause a ripple effect in the ecosystem that sends people into the deeper realms of dependency hell. If you try to install any library that requires annotations 2.0 alongside a library that only supports annotations 1.x, composer will tell you that this just doesn't work. Due to long-term support constraints, not all package authors will be able to migrate to the new version, so doing a hard upgrade like that can cause severe issues.
Even worse, the only way for users to upgrade is to update the constraint in their composer.json to ^2.0, run tests and fix stuff until nothing is broken anymore. When introducing strict typing across all interfaces, this is even more complicated due to the subtle BC breaks this can cause. After all, string $foo is completely different from @param string $foo, with the former being a strict requirement but the latter being a loose suggestion. Releasing popular packages this way is not feasible and will not be a source of happiness for our users. Another example of this is the interfaces published by the PHP-FIG, which is also discussing how to upgrade these.
Modernising popular packages
To start the transitional process in the Doctrine libraries, I decided to apply my learnings from maintaining the MongoDB ODM library to the persistence library and try my luck with that. With the help of an excellent blog post by Grégoire Paris, we decided to deprecate the Doctrine\Common\Persistence namespace in favour of Doctrine\Persistence. This is done by a combination of extending classes and interfaces, class and interface aliases, and clever autoloading tricks to provide deprecation notices at the right time. This was released as doctrine/persistence 1.3 to prepare people for the upcoming 2.0 release. The idea was that people upgrade to 1.3, fix deprecation notices by changing the namespace, but are able to run their code as they are used to in the meantime.
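The mechanics behind such an aliasing layer look roughly like the following simplified PHP sketch (the real implementation in doctrine/persistence is more involved; ObjectManager is just one of the affected interfaces):

// The new name is the real interface; the legacy name becomes an alias of it.
class_alias(
    \Doctrine\Persistence\ObjectManager::class,
    \Doctrine\Common\Persistence\ObjectManager::class
);

// A deprecation notice can be raised when the legacy name is loaded,
// so users see it exactly once per request:
@trigger_error(
    'Doctrine\Common\Persistence\ObjectManager is deprecated, use Doctrine\Persistence\ObjectManager instead.',
    E_USER_DEPRECATED
);

Because the alias points at the same interface, type hints against the old name keep working while the notice nudges users towards the new one.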
Unfortunately, this didn’t exactly work the way we hoped it would. First, we were informed of type incompatibilities which were caused by some missing autoload calls, which was fixed in 1.3.1. Then, we realised that our autoloading caused deprecation notices of its own, which was fixed in 1.3.2 a few hours later. Again, we were informed of missing autoload calls breaking code and fixed this in yet another patch release. This of course is not the user experience we want to provide, and we will have to do better for our next packages.
Many of these errors were spotted by the Symfony team installing development versions of packages in their CI pipeline. While fixing these deprecations, Nicolas discovered some of the missing autoload calls and additional deprecations and quickly created pull requests fixing them. Since many of our own projects run the CI suite with a fixed set of dependencies by committing the composer.lock file, our own packages did not alert us of the upcoming BC breaks. We will have to revisit our testing process to ensure that we’re also testing our packages ourselves instead of relying on others to do it for us.
To 2.0 and beyond
doctrine/persistence 2.0 will drop the deprecation layer and remove the Doctrine\Common\Persistence namespace. It will also add type declarations for parameters in all interfaces and abstract classes, and bump the PHP requirement to PHP 7.2 or newer. This allows people that have dropped usage of the deprecated classes to change the version constraint for the persistence library to ^1.3 || ^2.0. When running PHP 7.2 and persistence 2.0, the parameter type widening feature prevents BC breaks when omitting a new typehint in an extending class. Thus, no changes to method signatures are necessary to allow installing 2.0, but you can control when you want to receive 2.0 to avoid subtle BC breaks due to parameter type declarations.
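Parameter type widening is what makes this painless. A tiny illustration, unrelated to the actual Doctrine interfaces (UpstreamRepository and AppRepository are made-up names):

// Shipped by the library; 2.0 adds a parameter type here.
class UpstreamRepository
{
    public function find(object $entity) { /* ... */ }
}

// Application code written against 1.x keeps working on PHP 7.2+,
// because a subclass may omit (widen) a parameter type added by the parent.
class AppRepository extends UpstreamRepository
{
    public function find($entity) { /* ... */ }
}

Return types do not get the same treatment, which is why they are postponed to 3.0 as described below.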
With PHP allowing you to add return type declarations in extending classes, people can also start adding return type declarations by looking at the upcoming return types documented in PHPDoc. These return type declarations will be added in persistence 3.0, which will present a hard BC break (e.g. return type declarations have to be added before installing 3.0). At the same time, developers can start phasing out the usage of deprecated APIs like legacy namespaces and obsolete classes. These deprecated APIs will be removed in persistence 3.0 as well.
While this process of releasing the new strictly typed API is more complicated for us maintainers and takes longer to finish, it allows the ecosystem to upgrade packages at their own pace, without needing to coordinate the entire effort across multiple Open Source projects. The Doctrine persistence library is the first package to follow this development process, which will subsequently be applied to most other Doctrine projects as well. Feedback from the community is extremely important during this process, especially when it breaks and causes disruption, as happened with the 1.3.0 release. However, rest assured that our intention is not to break your application when you upgrade. Instead, this disruption was the result of our efforts to reduce the impact of these coming releases, where we missed subtle differences in behaviour between different PHP versions.
Summarising the upgrade process
On the example of doctrine/persistence, this is the roadmap for the transition to a strictly typed API:
- doctrine/persistence 1.3.0 introduces a deprecation layer, informing users of upcoming BC breaks, and provides the new API. Users should use ^1.3.3 as constraint in composer.json and start fixing deprecation messages.
- doctrine/persistence 2.0 will drop the deprecated API from the Doctrine\Common\Persistence namespace and add argument type declarations to the new API. Users should use ^1.3.3 || ^2.0 as constraint in composer.json. Some additional autoload calls may be required to ensure class aliases are properly loaded. Please check the blog post on type deprecation for details, as this cannot be easily summarised.
- doctrine/persistence 3.0 will be released later, dropping all deprecated API and adding return type declarations. At this point, users should add both argument and return type declarations, then use ^2.0 || ^3.0 as constraint in composer.json. | https://alcaeus.medium.com/how-to-break-an-entire-ecosystem-by-publishing-a-release-b6aaab2b8aaa | CC-MAIN-2022-27 | refinedweb | 1,890 | 63.49
Dear community,
What is a good way to evaluate a parameter value at a given frame from a threaded context?
I need to get the value of a parameter at a given time within the ModifyObject() function of a class derived from ObjectData. I need to do this when using bucket rendering. The required time is stored in a BaseTime member. The solution I've found so far is to use ExecutePasses(bt, true, true, true, BUILDFLAGS::EXTERNALRENDERER). However, this causes issues because the ModifyObject() function is called again. Additionally, I would like to avoid calculating the entire scene again. Is there a better way to do this?
On a related note, I get a debugbreak when calling ExecutePasses, is this due to the way I'm calling it?
Thank you for your attention,
danniccs
Hello @danniccs,
thank you for reaching out to us. Your question is a bit ambiguous. You do not mention if the parameter you want to retrieve the value for is animated or not.
To get a better answer, I would recommend posting executable code and providing more information on the scope of parameter values you want to evaluate.
Cheers,
Ferdinand
Hi @ferdinand, thanks a lot for the answer.
The parameter I am trying to evaluate is actually a BaseLink to a BaseObject representing a mesh. As such, I cannot use the CCurve::GetValue approach directly on the parameter. However, if possible I want to avoid running ExecutePass, since this would probably be quite slow.
I realized I can solve my issue if I can get access to the world matrix of the mesh at the given time. Is there a way to get the interpolated world matrix at a specific time from a BaseObject? If not, can I get the CTrack associated with that world matrix? I could then interpolate the values between keys and use the resulting matrix. If neither of those is possible I will probably end up using ExecutePasses.
Thanks again for the reply,
Daniel
@danniccs said in Get parameter value at given time.:
The parameter I am trying to evaluate is actually a BaseLink to a BaseObject representing a mesh. As such, I cannot use the CCurve::GetValue approach directly on the parameter.
BaseLink is a discrete parameter, as you cannot interpolate between two links. Nevertheless, you should be able to find out what link is represented by a CCurve "curve", as you still have keys.
Here is some Python script that reads the currently linked camera from a (selected) stage object:
import c4d
from c4d import gui

def PrintLinkKey(currentTime, obj, parameterDesc):
    track = obj.FindCTrack(parameterDesc)
    if track == None: return
    curve = track.GetCurve()
    cat = track.GetTrackCategory()
    if cat != c4d.CTRACK_CATEGORY_DATA: return
    key = None
    currentKey = curve.FindKey(currentTime, c4d.FINDANIM_LEFT)
    if currentKey:
        key = currentKey['key']
    else:
        currentKey = curve.FindKey(currentTime, c4d.FINDANIM_RIGHT)
        if currentKey:
            key = currentKey['key']
    if key != None:
        data = key.GetGeData()
        print (data)

def main():
    if op == None: return
    descCamLink = c4d.DescID(c4d.DescLevel(c4d.STAGEOBJECT_CLINK, c4d.DTYPE_BASELISTLINK, 0))
    PrintLinkKey(doc.GetTime(), op, descCamLink)

if __name__=='__main__':
    main()
(Yes, I know you tagged C++ but you can use the same calls and logic - not going to create a full plugin for that...).
I have tried using CCurve.FindKey() and CKey.GetGeData()/CKey.GetValue() to get the mesh position at t, but the parameters in obase.h don't have tracks themselves. What I was wondering is if there is a way to get the world matrix of an object at time t without using ExecutePasses().
Okay, you are right, that wasn't totally clear
But anyway, in my understanding now: you have some undefined things that drive a value that does not have a track and therefore neither a curve nor keys, and you want the value itself.
In my experience C4D will evaluate such a situation dynamically (unless cached) since there may be any kind of stuff used as driver: Python tags, XPresso tags, dynamics, particles, cloners, etc etc. So, unless you replicate the evaluation in your own code, you cannot access the result without actually allowing C4D to execute the evaluation.
Maybe Ferdinand will have a better idea once you provide the details he asked for, I am not yet seeing quite what you are going for.
Hi @danniccs,
So, effectively you do want to evaluate the global transform T of an object O at some arbitrary time t? I am afraid ExecutePasses is indeed the only viable solution here, as the transform of an object can be influenced indirectly by many things, as for example ancestor nodes, constraints, or simulations. There is no meaningful way to resolve this other than executing the passes.
This is also a case where you should not execute the passes for frame n, but all frames up to n, when you want to support simulations. In order to avoid extreme overhead by doing this over and over in ModifyObject, you should cache such information. In the simplest form this could be a button 'Build Time Cache' in the GUI of the object.
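A rough sketch of that caching idea (not taken from the SDK documentation; BuildTimeCache and timeCache are made-up names, error handling is omitted, and the cache must be built from the main thread, not from ModifyObject itself):

#include <vector>

std::vector<Matrix> timeCache;

Bool BuildTimeCache(BaseDocument* doc, BaseObject* target, Int32 lastFrame, Int32 fps)
{
    timeCache.clear();
    for (Int32 frame = 0; frame <= lastFrame; ++frame)
    {
        doc->SetTime(BaseTime(frame, fps));
        // Evaluate animation, expressions and caches up to this frame.
        doc->ExecutePasses(nullptr, true, true, true, BUILDFLAGS::EXTERNALRENDERER);
        timeCache.push_back(target->GetMg()); // world matrix at this frame
    }
    return true;
}

ModifyObject would then only read timeCache for the frame it needs instead of triggering another scene evaluation per bucket.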
All in all, this also sounds very much like a plugin that is in violation of design principles for ObjectData plugins. An object that can look omniscient into the past and future for any object in the scene, is in principle a quite expensive idea; it does not really matter that it is only the transform you want to know. This also could open a whole can of worms of feedback loops, depending on what you intend to do with that transform. When the vertices of object P rely on the transform T of object Q, and T relies in turn in some form on the vertices of P, you are going to have a problem, especially when you make this also time dependent.
I would also first check if there are simpler ways to achieve a similar effect. We cannot provide support on designing plugins, and we also do not known the greater context here, but I am sure that there is a simpler or at least more performant solution.
Hi @ferdinand and @Cairyn, thanks for the answers.
There may be a simpler way to achieve an effect similar to what we want (for example, by caching the world matrix at the time t) but I'm still thinking of alternate ways to be able to access the full mesh information at time t. Caching the full mesh is probably not a good option since we would have to update that cache every time the mesh is changed at any time before t (since this might change the mesh at t). A possible option is to restrict the usability a bit and always evaluate the mesh at time 0, I'm checking if this is something we can do.
Feedback loops could be an issue, but it should never happen that (using the example @ferdinand wrote) P relies on T and T also relies on P in this specific plugin. Also, t should always be a time before the current time, although that might not alleviate this particular problem.
In any case, I'm going to go ahead and mark this as solved, I think I got all the information I need to find a solution.
Thank you both very much for helping me out,
Daniel | https://plugincafe.maxon.net/topic/14058/get-parameter-value-at-given-time | CC-MAIN-2022-27 | refinedweb | 1,318 | 62.07 |
Create a load balancer via the API
Overview
In this walkthrough, you will use the Cloudflare API to set up an active-passive failover configuration for your load balancer. An active-passive failover configuration sends traffic to the servers in your active pool until a failure threshold (configurable) occurs. At the point of failure, Cloudflare then redirects traffic to the passive pool.
We will walk through the following steps to create and configure your load balancer via the Cloudflare API:
- Creating a monitor — You’ll start by creating a monitor so that you can configure health checks for your load balancer. When you attach a monitor to a pool of origin servers, Cloudflare will use the monitor configuration to start running health checks on those servers.
- Creating pools — Next you’ll create a couple of pools for your load balancer to manage.
- Validating pool health — Before going further, you’ll attach your monitor to your new pools and run a health check to confirm their availability.
- Creating a load balancer — Next you will create and configure a load balancer to manage traffic for your pools.
- Configuring Geo Steering (optional) — Finally, you will see how to programmatically enable Geo Steering for your load balancer.
Using the Cloudflare API
The Cloudflare API provides a standardized programmatic interface for accessing Cloudflare applications and resources, including Load Balancing. The Cloudflare API is implemented as a RESTful service using HTTPS requests and JSON responses.
Users access the Cloudflare API by making HTTPS requests. The Cloudflare API is only available via SSL-enabled HTTPS connections over port 443.
A valid request includes the following, which define what command to execute:
- A URL that identifies a path to a Cloudflare resource
- An HTTP method that defines the action to take on the resource
- An HTTP header that specifies authorization details and the content type for request and response bodies
- An optional payload for specifying parameters and values
Request URL
The request URL is composed of a base path and an endpoint. The stable base URL for Version 4 of the Cloudflare API is:
https://api.cloudflare.com/client/v4
The endpoint is a URI that represents the path to a given resource, such as a Cloudflare load balancer. For example, the Cloudflare API endpoint for load balancers is:
/zones/:identifier/load_balancers/:identifier
When an endpoint specifies an :identifier term, the identifier must be replaced with the Cloudflare ID for an instance of the resource type that precedes it. The load balancer example above requires two Cloudflare IDs: one to identify the zone and one to identify a specific load balancer, respectively. Cloudflare IDs are represented by 32-byte strings.
To form a request URL, append the endpoint to the base path, as in the example below:
https://api.cloudflare.com/client/v4/zones/:identifier/load_balancers/:identifier
HTTP methods
Cloudflare API endpoints accept one or more of the following HTTP methods, which typically behave as outlined:
- GET retrieves a representation/information about a resource or collection of resources without modifying them in any way.
- POST creates a new resource.
- PUT updates an existing resource.
- PATCH partially updates an existing resource.
- DELETE removes an existing resource.
HTTP header
The Cloudflare API requires that headers include the following:
- Authorization
  - When authorizing via API Token, use "Authorization: Bearer <API token>".
  - When authorizing via API Key, both "X-Auth-Key: <API key>" and "X-Auth-Email: <account_email>" are required.
- Content Type: The Cloudflare API supports JSON-formatted request and response bodies only. Set "Content-Type: application/json" in request headers.
API documentation conventions
Throughout this guide, API references will omit the base path for simplicity.
See Cloudflare API Documentation v4 for a complete reference to the latest version of the API.
Before you begin
Be sure that you have the following:
- Access to Load Balancing via one of the following:
- An Enterprise account with Load Balancing enabled.
- An existing Free, Pro, or Business account with a Load Balancing subscription. (Enable Load Balancing in the Traffic app.)
- Load balancer hostname: The hostname for which the Cloudflare Load Balancer will manage traffic. The default hostname is the root hostname.
- Origin servers (2): You will need access to at least two origin servers (origin-server-1 and origin-server-2, for example).
- Location: Initially, we will configure only a single geographic region.
Step 1: Create a monitor
Monitors are configurations that describe how to run health checks on your origin servers. When a monitor is attached to a pool, Cloudflare will run health checks on that pool’s origin servers from our data centers around the world. Because monitors exist independently, you can attach them to multiple pools. This way you can make a change to a single monitor and automatically update the health check policy for every pool that uses it.
Use the Create Monitor command to create a new monitor, as in the example below. If you are using virtual hosting, it is important to define a host value for the header property so that your web server knows which virtual host to serve. In most cases, this will be the same hostname as the one you intend the load balancer to manage—the same one you will use to name the load balancer. (See Monitors for a full list of monitor properties and available commands.)
Request example
# POST
{
  "type": "https",
  "description": " Health Check",
  "method": "GET",
  "path": "/health",
  "header": {
    "Host": ["example.com"],
    "X-App-ID": ["abc123"]
  },
  "port": 8080,
  "timeout": 3,
  "retries": 0,
  "interval": 90,
  "expected_body": "alive",
  "expected_codes": "2xx",
  "follow_redirects": true,
  "allow_insecure": true
}
Response
{ "success": true, "errors": [], "messages": [], "result": { "id": "f1aba936b94213e5b8dca0c0dbf1f9cc", "created_on": "2014-01-01T05:20:00.12345Z", "modified_on": "2014-01-01T05:20:00.12345Z", "type": "https", "description": " Health Check", "method": "GET", "path": "/health", "header": { "Host": [ "example.com" ], "X-App-ID": [ "abc123" ] }, "port": 8080, "timeout": 3, "retries": 0, "interval": 90, "expected_body": "alive", "expected_codes": "2xx", "follow_redirects": true, "allow_insecure": true }}
Locate the Monitor ID in the response and record it. Your new monitor won’t trigger any health checks until we attach the monitor to a pool, and to do that we will need the Monitor ID.
Step 2: Create pools
A Cloudflare pool represents a group of origin servers, each identified by their IP address or hostname. If you're familiar with DNS terminology, think of a pool as a record set, except we only return addresses that are considered healthy.
Before you continue, gather the following:
- The IP addresses or hostnames of your origin servers
- The ID of the monitor you just created. (Use the List Monitors command, GET /load_balancers/monitors, to fetch the Monitor ID.)
- An email address for receiving health check notifications
Create Pool 1
Use the Create Pool command on the Cloudflare API to create a new pool, as in the example below. Set the origins array to supply a list of origin server objects. In this example, use the two origin servers you reserved for this exercise. (See Pools for a list of pool properties and available commands.)
Setting the pool's monitor property will attach your monitor to the pool and enable health checks. For this example, use the Monitor ID you generated in the previous step.
Request
// POST
// Body: the pool's name, origins, monitor ID, and "notification_email": "someone@example.com", as described above
// Response: the new pool object plus "healthy": true, "success": true, "errors": [], "messages": []
If the response is an error, check the error message for a suggestion. If you’re using curl to make requests, check that any shell escaping isn’t breaking your JSON request body.
Create Pool 2
To create a second pool, use the same command you did to create Pool 1, but give Pool 2 a different name.
Request
// POST
// Body: the same fields as for Pool 1, with a different "name" and "notification_email": "someone@example.com"
Response
The response will be similar as for Pool 1, but the ID and timestamps will be different.
Now that you’ve created your pools and attached a monitor, Cloudflare will initiate health checks from each of our data centers. (See Monitors for more on how health checks work.)
Step 3: Validate pool health
Use the List Pools command to verify you have configured your monitor and pools correctly. The value for the healthy property should be true, indicating that health checks are configured and the pool is available.
Request
// GET /load_balancers/pools *OR* GET /load_balancers/pools/:pool_id
Response
{
  // ...
  "healthy": true
  // ...
}
The response breaks down results for each Cloudflare PoP, as well as the origin servers associated with them.
For most use cases, using the List Pools command (GET /load_balancers/pools) is sufficient. Responses from the Pool Health Details command (GET pools/:pool_id/health) can be verbose, so it's a better tool for drilling into isolated failures.
Step 4: Create a load balancer
To start delivering traffic to your pools, you must attach them to a load balancer. Load balancers are identified by the DNS hostname whose traffic you want to balance (, for example). The load balancer defines which origin server pools to use, the order in which they should be used, and how to geographically distribute traffic among them.
Important load balancer properties
The following load balancer properties are important for this step. (See Load Balancers for a complete list of properties.)
Cloudflare Zone IDs
Notice that the Create Load Balancer command requires a zone_id:
POST /zones/:zone_id/load_balancers
This represents the Cloudflare ID of the DNS zone associated with your load balancer.
A DNS zone is a portion of the DNS namespace that is managed by a specific organization or administrator. The domain name space is a hierarchical tree, with the DNS root domain at the top. A DNS zone starts at a domain within the tree and can also extend down into subdomains so that multiple subdomains can be managed by one entity.
You can get the Cloudflare Zone ID for your hostname by using the List Zones command:
GET /zones
This command returns a list of zones, each with an associated hostname and Cloudflare Zone ID. You can filter the list by setting values for properties, as in the following curl example, which queries for the zone associated with the hostname example.com.
Request example (curl)
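A request of that shape might look like the following sketch (the API token is a placeholder; the X-Auth-Key/X-Auth-Email header pair described above works as well):

curl -X GET "https://api.cloudflare.com/client/v4/zones?name=example.com" \
  -H "Authorization: Bearer <API token>" \
  -H "Content-Type: application/json"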
Response
Notice that the response includes data not only for example.com but also for each of its subdomains:
{ "success": true, "errors": [], "messages": [], "result": [ { "id": "023e105f4ecef8ad9ca31a8372d0c353", "name": "example.com", "development_mode": 7200, "original_name_servers": [ "ns1.originaldnshost.com", "ns2.originaldnshost": {}, "email": {}, "type": "user" }, "account": { "id": "01a7362d577a6c3019a474fd6f485823", "name": "Demo Account" }, "permissions": [ "#zone:read", "#zone:edit" ], "plan": { "id": "e592fd9519420ba7405e1307bff33214", "name": "Pro Plan", "price": 20, "currency": "USD", "frequency": "monthly", "legacy_id": "pro", "is_subscribed": true, "can_subscribe": true }, "plan_pending": { "id": "e592fd9519420ba7405e1307bff33214", "name": "Pro Plan", "price": 20, "currency": "USD", "frequency": "monthly", "legacy_id": "pro", "is_subscribed": true, "can_subscribe": true }, "status": "active", "paused": false, "type": "full", "name_servers": [ "tony.ns.cloudflare.com", "woz.ns.cloudflare.com" ] } ]}
Use the List Zones command to retrieve the zone for the DNS hostname you want to use to create a load balancer. Review the response and record the zone's ID.
Create your load balancer
Use the Create Load Balancer command to create your new load balancer, as in the example below. Remember to set zone_id to the value you found in the previous section.
Request example
//": {}}
Response
{ "success": true, "errors": [], "messages": [], "result": { "id": "699d98642c564d2e855e9661899b7252", "created_on": "2014-01-01T05:20:00.12345Z", "modified_on": "2014-01-01T05:20:00.12345Z", "description": "Load Balancer for", "name": "", "enabled": true, "ttl": 30, "fallback_pool": "17b5962d775c646f3f9725cbc7a53df4", "default_pools": [ "17b5962d775c646f3f9725cbc7a53df4", "9290f38c5d07c2e2f4df57b1f61d4196", "00920f38ce07c2e2f4df50b1f61d4194" ], "region_pools": {} }}
Step 5: Configuring Geo Steering (optional)
If you have servers in different geographic regions, you may want to steer traffic to pools based on the region from which visitors are connecting. For example, your European visitors should land on your European pool first, and then on your US pool if the European pool is down. Your North American users would have the reverse configuration.
Cloudflare Geo Steering directs traffic to pools based on the client’s region or PoP (Enterprise accounts only). You can configure Geo Steering to implement this use case as follows:
- Direct EU users to the pool in Europe first, followed by the US pool.
- Direct North American clients to the US pool first and the EU pool second.
- All other regions will use the default pools.
Use the region_pools property of the Update Load Balancer command (PUT /zones/:zone_id/load_balancers) to specify pools per region. Specify each region using the appropriate region code followed by a list of pools to use for that region. In the example below, WNAM and ENAM represent the West and East Coasts of North America, respectively.
Request example
//"]}}
If you only define WNAM, then traffic from the East Coast will be routed to the default_pools. You can test this using a client in each of those locations. | https://developers.cloudflare.com/load-balancing/create-load-balancer-api | CC-MAIN-2020-50 | refinedweb | 2,069 | 52.8
putw - put a word on a stream (LEGACY)
#include <stdio.h>
int putw(int w, FILE *stream);
This interface need not be reentrant.
Upon successful completion, putw() returns 0. Otherwise, a non-zero value is returned, the error indicators for the stream are set, and errno is set to indicate the error.
Refer to fputc().
None.
Because of possible differences in word length and byte ordering, files written using putw() are implementation-dependent, and possibly cannot be read using getw() by a different application or by the same application on a different processor.
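For illustration only, a minimal round trip with putw() and getw() might look like the sketch below; as noted above, the resulting file is only meaningful to the same implementation that wrote it:

#include <stdio.h>

int main(void)
{
        FILE *fp = fopen("words.dat", "w");
        if (fp == NULL)
                return 1;
        if (putw(12345, fp) != 0)       /* non-zero return indicates an error */
                return 1;
        fclose(fp);

        fp = fopen("words.dat", "r");
        if (fp == NULL)
                return 1;
        int w = getw(fp);               /* readable only on the same implementation */
        fclose(fp);
        return (w == 12345) ? 0 : 1;
}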
The putw() function is inherently byte stream oriented and is not tenable in the context of either multibyte character streams or wide-character streams. Application programmers are recommended to use one of the character based output functions instead.
None.
fopen(), fwrite(), getw(), <stdio.h>.
Derived from Issue 1 of the SVID. | http://pubs.opengroup.org/onlinepubs/007908775/xsh/putw.html | crawl-003 | refinedweb | 157 | 57.57 |
Question:
I recently got stuck in a situation like this:
class A {
public:
    typedef struct/class {...} B;
    ...
    C::D *someField;
};

class C {
public:
    typedef struct/class {...} D;
    ...
    A::B *someField;
};
Usually you can declare a class name:
class A;
But you can't forward declare a nested type, the following causes compilation error.
class C::D;
Any ideas?
Solution:1
You can't do it, it's a hole in the C++ language. You'll have to un-nest at least one of the nested classes.
Solution:2
class IDontControl { class Nested { Nested(int i); }; };
I needed a forward reference like:
class IDontControl::Nested; // But this doesn't work.
My workaround was:
class IDontControl_Nested; // Forward reference to distinct name.
Later when I could use the full definition:
#include <idontcontrol.h>

// I defined the forward ref like this:
class IDontControl_Nested : public IDontControl::Nested
{
    // Needed to make a forwarding constructor here
    IDontControl_Nested(int i) : Nested(i) { }
};
This technique would probably be more trouble than it's worth if there were complicated constructors or other special member functions that weren't inherited smoothly. I could imagine certain template magic reacting badly.
But in my very simple case, it seems to work.
Solution:3
If you really want to avoid #including the nasty header file in your header file, you could do this:
hpp file:
class MyClass {
public:
    template<typename ThrowAway>
    void doesStuff();
};
cpp file
#include "MyClass.hpp"
#include "Annoying-3rd-party.hpp"

template<>
void MyClass::doesStuff<This::Is::An::Embedded::Type>() {
    // ...
}
But then:
- you will have to specify the embedded type at call time (especially if your function does not take any parameters of the embedded type)
- your function can not be virtual (because it is a template)
So, yeah, tradeoffs...
Solution:4
I would not call this an answer, but nonetheless an interesting find: If you repeat the declaration of your struct in a namespace called C, everything is fine (in gcc at least). When the class definition of C is found, it seems to silently overwrite the namespace C.
namespace C {
    typedef struct {} D;
}

class A {
public:
    typedef struct/class {...} B;
    ...
    C::D *someField;
};

class C {
public:
    typedef struct/class {...} D;
    ...
    A::B *someField;
};
Solution:5
This would be a workaround (at least for the problem described in the question -- not for the actual problem, i.e., when not having control over the definition of C):
class C_base {
public:
    class D { }; // definition of C::D
                 // can also just be forward declared, if it needs members of A or A::B
};

class A {
public:
    class B { };
    C_base::D *someField; // need to call it C_base::D here
};

class C : public C_base { // inherits C_base::D
public:
    // Danger: Do not redeclare class D here!!
    // Depending on your compiler flags, you may not even get a warning
    // class D { };
    A::B *someField;
};

int main() {
    A a;
    C::D * test = a.someField; // here it can be called C::D
}
Solution:6
This can be done by forward declaring the outer class as a namespace.
Sample: We have to use a nested class others::A::Nested in others_a.h, which is out of our control.
others_a.h
#include <iostream>

namespace others {
    struct A {
        struct Nested {
            Nested(int i) : i(i) {}
            int i{};
            void print() const { std::cout << i << std::endl; }
        };
    };
}
my_class.h
#ifndef MY_CLASS_CPP
// A is actually a class
namespace others { namespace A { class Nested; } }
| http://www.toontricks.com/2018/06/tutorial-forward-declaration-of-nested.html | CC-MAIN-2018-34 | refinedweb | 560 | 61.77
Keeping track of an object's location in Firebase
When using Firebase to store and retrieve objects (POJOs) created by the user (for example: posts or comments), it becomes necessary to pass these objects around the application. But what is the suggested way to keep track of the associated DatabaseReference, location or unique key in the database for this object?
Example scenario
A simple to do list app allows the user to freely add, edit and remove items in their list. So when the user creates an item, something similar to the below would happen:
private Item storeItem(String title) {
    String key = mDatabase.child("items").push().getKey(); // Where do we keep this key?
    Item item = new Item(title);
    mDatabase.child("items").child(key).setValue(item);
    return item;
}
Where Item is this Java object:
public class Item {
    private String title;
    private String description;

    public Item() {}

    public Item(String title) {
        this.title = title;
    }

    // ...
}
Behind the scenes, this item is added to a RecyclerView, either by inserting the returned Item to the adapter or when a ChildEventListener attached to the "items" reference is fired.
The user then wishes to rename this new item or add text to the description field, so tapping on it in the RecyclerView starts a separate Activity which receives the passed Item and uses getters/setters to make changes.
Now, we'll need to save these changes to the database, which we can do by calling setValue() again, as above. However, we didn't store the key variable from storeItem() so we don't actually know where this item is currently stored in the database.
So, where can we keep track of the created item's key for later use to save changes back to the database?
Possible solutions
There are a number of different paths we could take here, but I'm looking for some guidance on the suggested method, as the Firebase documentation doesn't mention this hurdle. I've outlined some examples that I can think of:
- Store the key inside the object. We could add another field to the Item object to store the database key. So within the previous storeItem() method, the key variable is added to the Item constructor and stored in the database as a field.
- Create a wrapper object. We could wrap the Item object in a container that has methods such as getItem() and getKey() or getDatabaseReference() and then pass this around the app instead of the Item itself.
- Use the DataSnapshot instead. Once the item is created, wait until an attached listener receives it, then use and pass around the retrieved DataSnapshot, which has methods for getKey() and getRef().
- Retrieve the object every time it is needed. Instead of passing Item around the app, we could retrieve it from the database every time it is needed, by using the key or DatabaseReference.
Wrapping up
Looking back on this huge question, it seems I might have overcomplicated it a little, but I wanted to be thorough in my explanation. I'm also hoping that it's not purely opinion-based and there currently is some standard way to achieve this.
So I guess my question is: is there a standard method to handle and make changes to Java objects stored in Firebase?
1 answer
- answered 2017-10-11 10:23 Frank van Puffelen
Most developers I see struggling with this end up storing the key inside the Java objects too. To prevent it being duplicated in the JSON, you can annotate it in the Java class:
public class Item {
    private String title;
    private String description;

    @Exclude
    public String key;

    public Item() {}

    public Item(String title) {
        this.title = title;
    }

    // ...
}
See: Is there a way to store Key in class which I cast from Firebase object?
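With that annotation in place, a small helper can populate the key whenever an item is read from a snapshot (a sketch; the listener wiring around it is up to you):

static Item readItem(DataSnapshot snapshot) {
    // Reconstruct the POJO and keep its database location alongside it.
    Item item = snapshot.getValue(Item.class);
    item.key = snapshot.getKey();
    return item;
}

// Saving changes later only needs the stored key:
mDatabase.child("items").child(item.key).setValue(item);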
My personal preference in such cases is to keep the DataSnapshot around. The main disadvantage I see in that is that the information on the object-type of the snapshot is spreading out over my code since this exists in multiple places:
snapshot.getValue(Item.class);
I've been lobbying to generify the DataSnapshot class so that it'd become DataSnapshot<Item>, which would solve that problem. I think that is currently being considered in the Firestore SDK for JavaScript/TypeScript.
But lacking such a solution for the Android SDK for the Realtime Database, you're probably better off with the first approach: storing the key inside the Java objects. | http://codegur.com/46685794/keeping-track-of-an-objects-location-in-firebase | CC-MAIN-2018-09 | refinedweb | 725 | 59.03 |
). There is no obvious way to speed up this dp, because the transition of states is already done in O(1), and that's where dp optimization techniques usually cut the complexity. It's also useless to use some other definition of dp, since they will all take O(n^3) time to compute. But what we can do is to use the same trick used to solve the task Alien, from IOI 2016, or 674C - Levels and Regions in O(n log k) as Radewoosh had described on his blog, and completely kick out a dimension from our dp!
Kicking out the 3rd dimension:
By kicking out the 3rd dimension, we're left with dp[i][x]. This is now defined as the highest expected number of caught pokemon in the prefix of i pokemon if we throw at most x A-pokeballs and any number of B-pokeballs. Obviously this will always use the maximum amount of B-pokeballs. But what's really cool is that we can actually try to simulate this last dimension: we define some C as a "cost" we have to pay every time we want to take a B-pokeball. This is essentially adding the functions f(x) = dp[n][a][x] and g(x) = -Cx. The cool thing is, f(x) is concave, i.e. f(x+1) - f(x) <= f(x) - f(x-1). This is intuitive because whenever we get a new B-pokeball, we will always throw it at the best possible place. So if we get more and more of them, our expected number of caught pokemon will increase more and more slowly. And why is it useful that f(x) is concave? Well, h(x) = f(x) + g(x) has a non-trivial maximum, that we can find. And if h(x) is maximal, it means that for this C, it's optimal to throw x B-pokeballs. Now it's pretty obvious that we can do a binary search on this C to find one such that it's optimal to throw exactly b B-pokeballs, as given in the input. Inside our binary search we just do the O(n^2) algorithm, and when we finish, do a reconstruction of our solution to see how many B-pokeballs we've used, and use that information to continue binary searching. This gives us complexity O(n^2 log n), which is good enough to get AC. This trick was shown to us at our winter camp, which ended yesterday.
#include <bits/stdc++.h> using namespace std; const int maxn = 2020; const double eps = 1e-8; int n, a, b, opt[maxn][maxn]; double dp[maxn][maxn], pa[maxn], pb[maxn], pab[maxn]; int solve(double mid){ for(int i = 1; i <= n; i++){ for(int j = 0; j <= a; j++){ double &d = dp[i][j]; int &o = opt[i][j]; d = dp[i - 1][j]; o = 0; if(j && d < dp[i - 1][j - 1] + pa[i]){ d = dp[i - 1][j - 1] + pa[i]; o = 1; } if(d < dp[i - 1][j] + pb[i] - mid){ d = dp[i - 1][j] + pb[i] - mid; o = 2; } if(j && d < dp[i - 1][j - 1] + pab[i] - mid){ d = dp[i - 1][j - 1] + pab[i] - mid; o = 3; } } } int ret = 0, la = a; for(int i = n; i >= 1; i--){ if(opt[i][la] > 1) ret++; if(opt[i][la] & 1) la--; }; for(int it = 0; it < 50; it++){ mid = (lo + hi) / 2; if(solve(mid) > b) lo = mid; else hi = mid; } int ans = solve(hi); cout << fixed << setprecision(10) << dp[n][a] + hi * b << endl; return 0; }
Kicking out another dimension?
But is this all? Can we do better? Why can't we kick out the 2nd dimension in the same way we kicked out the first one? It turns out that in this task, we actually can! We just define D as the cost that we deduct each time we use an A-pokeball, and then using binary search find the C for which we use exactly enough B-pokeballs, and reconstruct the solution to see if we've used too many or too little A-pokeballs. The function is again concave, so the same trick works! Using this I was able to get AC in O(n log^2 n), which is pretty amazing for a Div1 E task with N <= 2000. My friends vilim_l, jklepec, lukatiger and me are still amazed that this can be done!
#include <bits/stdc++.h> using namespace std; typedef pair<int, int> pii; const int maxn = 2020; const double eps = 1e-8; int n, a, b, opt[maxn]; double dp[maxn], pa[maxn], pb[maxn], pab[maxn]; pii solve(double &D, double &C){ for(int i = 1; i <= n; i++){ double &d = dp[i]; int &o = opt[i]; d = dp[i - 1]; o = 0; if(d < dp[i - 1] + pa[i] - D){ d = dp[i - 1] + pa[i] - D; o = 1; } if(d < dp[i - 1] + pb[i] - C){ d = dp[i - 1] + pb[i] - C; o = 2; } if(d < dp[i - 1] + pab[i] - C - D){ d = dp[i - 1] + pab[i] - C - D; o = 3; } } pii ret = pii(0, 0); for(int i = 1; i <= n; i++){ if(opt[i] > 1) ret.second++; if(opt[i] & 1) ret.first++; }, lo2, hi2, mid2; for(int it2 = 0; it2 < 50; it2++){ mid = (lo + hi) / 2; lo2 = 0, hi2 = 1, mid2; for(int it = 0; it < 50; it++){ mid2 = (lo2 + hi2) / 2; if(solve(mid, mid2).second > b) lo2 = mid2; else hi2 = mid2; } if(solve(mid, hi2).first > a) lo = mid; else hi = mid; } solve(hi, hi2); cout << fixed << setprecision(10) << dp[n] + hi2 * b + hi * a << endl; return 0; } | http://codeforces.com/topic/49921/en5 | CC-MAIN-2017-22 | refinedweb | 962 | 71.99 |
In this series of articles I'm going to answer the following questions:
- What are React hooks?
- Why are there React hooks?
- How to use React hooks?
From now on I assume that:
- You have no knowledge of React hooks.
- You have at least basic knowledge of React (any tutorial longer than 5 mins will be enough).
My story
I've been working with React for over two years now. I must admit it's been very nice two years. So I was very sceptical when I heard about React hooks for the first time. "Why change something that is good and works?" When I saw first hooks examples my feeling "this is not a good direction" was even stronger. But hooks kept attacking me from every direction and more and more people seemed to be delighted with the new React addition. I decided to give them a try... and I joined a circle of delighted. But first things first.
What are React Hooks?
Hooks were introduced to React to replace class creation of components. Replace with what? Replace with function creation.
'Whoa!' one can shout. We could have created components with functions this whole time. What is the whole fuss with hooks about? Before I answer this question let us take two steps back.
How do we create components in React?
As a general rule there are two ways to create components in React.
- Using classes (class components).
- Using functions (function components).
Function components seem to be much easier:
- One doesn't have to "wrestle" with
thiskeyword and remember to bind methods.
- They are more readable and faster to write.
- They are easier to test and reasoning about.
So let us ask a simple question...
Why are there two ways of creating components in React?
If function components are so "cool" why not using only them? Why would one use classes in the first place?
Class components have two important features not available for function components:
- They can have state.
- They give access to component's lifecycle methods.
What is state? It's the component's ability to "remember" any information about itself.
E.g. a button component can remember whether the user clicked it or not, and depending on that render itself in green or red.
What are component's lifecycle methods? Component's lifecycle is a period starting with the first painting of a component in a browser (and even one moment before) up until removing it from there. Lifecycle methods let us execute any code in key moments of component's existence.
E.g. let's say we'd like to know the height of the button. This information is available after the button is actually rendered in the browser. Thanks to componentDidMount we can have access to the button and get its height when it's rendered.
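For instance, a class component could measure the button like this (a sketch; the ref name and the console logging are illustrative):

class MeasuredButton extends React.Component {
  buttonRef = React.createRef();

  componentDidMount() {
    // The node exists in the DOM now, so we can read its height
    console.log(this.buttonRef.current.offsetHeight);
  }

  render() {
    return <button ref={this.buttonRef}>Click me</button>;
  }
}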
We couldn’t have used these features while using function components in the past. But since React 16.8 - thanks to introduction of React hooks - both state and lifecycle methods are available to function components!
Show me some code!
Let's begin our adventure with React hooks by writing a class component.
We have a simple component that renders input field. The user can enter their name and it'll be saved in component state and displayed above the input field.
import React from 'react';

class MyComponent extends React.Component {
  state = {
    userName: "Bob",
  }

  handleUserNameChanged = (e) => {
    this.setState({ userName: e.target.value });
  }

  render() {
    return(
      <>
        <h2>User name: {this.state.userName}</h2>
        <input
          type="text"
          value={this.state.userName}
          onChange={this.handleUserNameChanged}
        />
      </>
    );
  }
}
Let's write a function component now. The goal is to write a component that has exactly the same functionality as the class component. Let's start with an empty arrow function:
import React from 'react';

const MyComponent = () => {
  // code goes here
};
And then do the following:
- Copy the code returned by the render method. It'll be returned directly by our function component.
- Copy the handleUserNameChanged method and add the const keyword in front of it.
- We don't have the this keyword in a function component. Delete all its occurrences.
- We are interested in userName not state.userName. Remove all state. prefixes from the code.
- We don't define state as an object. We define a userName variable instead and give it a string "Bob" as initial value.
- Replace setState with a more descriptive function: setUserName. We pass it a value we get from the input field. This function will be responsible for changing the value we keep in the userName variable.
Our function component should look as follows:
import React from 'react';

const MyComponent = () => {
  const userName = "Bob";

  const handleUserNameChanged = (e) => {
    setUserName(e.target.value);
  }

  return(
    <>
      <h2>User name: {userName}</h2>
      <input
        type="text"
        value={userName}
        onChange={handleUserNameChanged}
      />
    </>
  );
}
At this stage our component is not working. We get information about an error: setUserName is not defined. Let's remind ourselves what setUserName should be. It should be a function that changes the value of the userName.
We're going to write a naive implementation of that function. This function will accept a new userName value and (for now) it'll return the current userName value.
const setUserName = (newUserName) => userName;
Now add it to our function component (in line 4):
import React from 'react';
const MyComponent = () => {
  const userName = "Bob",
    setUserName = (value) => userName;

  const handleUserNameChanged = (e) => {
    setUserName(e.target.value);
  }

  return(
    <>
      <h2>User name: {userName}</h2>
      <input
        type="text"
        value={userName}
        onChange={handleUserNameChanged}
      />
    </>
  );
}
Our code almost works. Almost, because it shows the input field and the user name as "Bob". But we can't change that user name. Why? We are lacking the component's state in which we could keep our new user name. We'd like to use state here. Luckily for us React gives us a useState hook.
useState hook
useState is a hook that lets us use state in a function component. The useState hook is a function that returns an array with two elements:
- First element is a variable to store a value of our state.
- Second element is a function we can use to change the state with a new value.
We can pass useState an argument with the initial state value. It can be any string, number, boolean, array or object. In our example we pass the string "Bob".
We can write:
const state = useState("Bob"); // state is an array
const userName = state[0]; // first element is a state's value
const setUserName = state[1]; // second element is a function
Thanks to array destructuring we can write it more elegantly:
const [userName, setUserName] = useState("Bob");
We can read this as follows:
- We want to use state and keep its value in a variable called userName.
- We can change the state by calling the setUserName function with a new value.
- We set the initial userName value to "Bob".
With this knowledge at hand let's get back to our example. Import useState from React and use it in the component.
import React, { useState } from 'react'; // import useState hook

const MyComponent = () => {
  const [userName, setUserName] = useState("Bob");

  const handleUserNameChanged = (e) => {
    setUserName(e.target.value);
  }

  return(
    <>
      <h2>User name: {userName}</h2>
      <input
        type="text"
        value={userName}
        onChange={handleUserNameChanged}
      />
    </>
  );
}
Now our function component should work exactly the same as our class component. Thanks to React's useState hook we've created a function component that can have state!
Great, it's working but where are those miracles?
You may be thinking that adding Hooks to React doesn't bring any spectacular benefits to the table. And actually you're right. If you compare the initial class component with its function counterpart there are not too many differences. It's really hard to understand why so many people are so excited about hooks.
I promise you one thing. If you stay with me to the end of this series you'll have a Wow! This is so super! moment. At least I had one.
See you in the next part of the Gentle Introduction to React Hooks!
Thanks for reading! If you liked this let me know! Leave a comment, give a ❤️ or share it!
Feel free to check my Twitter account with more content like this.
Discussion (9)
Great work Przemek 😄 transforming a class component into a functional one is always a great way to teach hooks ! I did something similar a while back: dev.to/christopherkade/introductio..., glad to see it's a commonly used method.
Thank you! I felt it was a natural way for someone who worked with React before hooks. BTW I'm gonna check your post :)
This was a smooth intro to React hooks, i was very reluctant to read on hooks as i'm a newbie and there's lots of stuff we have to absorb.
I'm now looking for a similar intro to React ContextAPI, i'd appreciate a link from anyone who've come a cross an intro to Context which is done in this similar style 😎
Amazing content. Congrats dude :)
Thanks! I hope you'll enjoy the whole series ;)
Nice read, thanks
Thank you. I'm glad you like it!
Great content, waiting for the next one :)
Thank you! Next part almost ready. Stay tuned! | https://practicaldev-herokuapp-com.global.ssl.fastly.net/przemwo/gently-introduction-to-react-hooks-part-1-1a47 | CC-MAIN-2021-25 | refinedweb | 1,497 | 68.57 |
Hi This fixes the bio allocation problems, but doesn't fix a similar deadlock in device mapper when allocating from md->io_pool or other mempools in the target driver.). Mikulas On Tue, 28 Aug 2012,. > > This can be worked around in the driver code - we could check if we're > running under generic_make_request(), then mask out __GFP_WAIT when we > go to allocate a bio, and if the allocation fails punt to workqueue and > retry the allocation. > > But this is tricky and not a generic solution. This patch solves it for > all users by inverting the previously described technique. We allocate a > rescuer workqueue for each bio_set, and then in the allocation code if > there are bios on current->bio_list we would be blocking, we punt them > to the rescuer workqueue to be submitted. > > > @@ -307,6 +324; > @@ -324,13 +342,37 @@ struct bio *bio_alloc_bioset(gfp_t gfp_mask, int nr_iovecs, struct bio_set *bs) > front_pad = 0; > inline_vecs = nr_iovecs; > } else { > + /* > + * generic_make_request() converts recursion to iteration; this > + * means if we're running beneath it, any bios we allocate and > + * submit will not be submitted (and thus freed) until after we > + * return. > + * > + * This exposes us to a potential deadlock if we allocate > + * multiple bios from the same bio_set() while running > + * underneath generic_make_request(). If we were to allocate > + * multiple bios (say a stacking block driver that was splitting > + * bios), we would deadlock if we exhausted the mempool's > + * reserve. > + * > + * We solve this, and guarantee forward progress, with a rescuer > + * workqueue per bio_set. If we go to allocate and there are > + * bios on current->bio_list, we first try the allocation > + * without __GFP_WAIT; if that fails, we punt those bios we > + * would be blocking to the rescuer workqueue before we retry > + * with the original gfp_flags. > + */ > + > + if (current->bio_list && !bio_list_empty(current->bio_list)) > + gfp_mask &= ~__GFP_WAIT; > +retry: > p = mempool_alloc(bs->bio_pool, gfp_mask); > front_pad = bs->front_pad; > inline_vecs = BIO_INLINE_VECS; > } > > if (unlikely(!p)) > - return NULL; > + goto err; > > bio = p + front_pad; > bio_init(bio); > @@ -351,6 +393); > @@ -1607,9 +1669 3a8345e..84fdaac 100644 > --- a/include/linux/bio.h > +++ b/include/linux/bio.h > @@ -492,6 +492,15 @@ struct bio_set { > mempool_t *bio_integrity_pool; > #endif > mempool_t *bvec_pool; > + > + /* > + * Deadlock avoidance for stacking block drivers: see comments in > + * bio_alloc_bioset() for details > + */ > + spinlock_t rescue_lock; > + struct bio_list rescue_list; > + struct work_struct rescue_work; > + struct workqueue_struct *rescue_workqueue; > }; > > struct biovec_slab { > -- > 1.7.12 > | http://www.redhat.com/archives/dm-devel/2012-August/msg00354.html | CC-MAIN-2013-20 | refinedweb | 374 | 56.89 |
Opened 3 years ago
Last modified 2 years ago
#5028 closed change
Use the browser extension API via the "browser" (instead of "chrome") namespace — at Version 4
Description (last modified by sebastian)
Background
In the upcoming browserext standard, which is already implemented by Firefox and Microsoft Edge, the browser extension API is provided by the browser namespace, as opposed to the chrome namespace.
As a quick fix, we therefore aliased chrome = browser in the edge branch (#3695); on Firefox we can use either chrome or browser. However, given that browser.* is going to be standardized, this is backwards, and using chrome.* in code that also runs on other platforms is misleading.
What to change
- If the browser object is undefined (i.e. on Chrome), globally define browser = chrome.
- Adapt all usage of the extension API to use browser.* instead of chrome.*.
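A minimal sketch of the first item, for illustration only (the real change lives in the Adblock Plus code base, not here):

if (typeof browser == "undefined")
  window.browser = chrome;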
Change History (4)
comment:1 Changed 3 years ago by sebastian
comment:2 Changed 3 years ago by greiner
- Cc greiner added
comment:3 Changed 2 years ago by mjethani
comment:4 Changed 2 years ago by sebastian
Mozilla has a polyfill for this. | https://issues.adblockplus.org/ticket/5028?version=4 | CC-MAIN-2019-47 | refinedweb | 195 | 62.38 |
#include <sys/scsi/scsi.h>

int scsi_validate_sense(uint8_t *sense_buffer, int sense_buf_len, int *flags);
Solaris DDI specific (Solaris DDI).
Pointer to a buffer containing SCSI sense data. The sense data is expected in wire format starting at the response code.
Length of sense buffer in bytes.
Returns additional properties of the sense data.
The scsi_validate_sense() function returns the format of the sense data contained in the provided sense buffer. If the response code field in the sense data is not recognized, or if there is not enough sense data to include the sense key, asc, and ascq, then scsi_validate_sense() returns SENSE_UNUSABLE. If the buffer contains usable sense data in fixed format, the function returns SENSE_FIXED_FORMAT. If the buffer contains usable sense data in descriptor format, the function returns SENSE_DESCR_FORMAT.
The following flags may be set as appropriate depending on the sense data:
The sense data buffer provided for the request is too small to hold all the sense data.
The sense data contained in the buffer relates to an error that has occurred during the processing of a successfully completed command, such as a cached write that could not be committed to the media.
The response code from the sense data is unrecognized or not enough sense data present to provide the sense key, asc, and ascq.
The sense data in the buffer is in “fixed format”.
The sense data in the buffer is in “descriptor format”.
The scsi_validate_sense() function can be called from user or interrupt context.
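As an illustration only (this example is not part of the original manual page; the buffer and length are assumed to come from the driver's own sense-handling path), a caller might dispatch on the return value like this:

#include <sys/scsi/scsi.h>

static void
handle_sense(uint8_t *sense_buffer, int sense_buf_len)
{
        int flags = 0;

        switch (scsi_validate_sense(sense_buffer, sense_buf_len, &flags)) {
        case SENSE_UNUSABLE:
                /* unrecognized response code or too little data */
                break;
        case SENSE_FIXED_FORMAT:
                /* fixed format: e.g. continue with scsi_sense_key(9F) */
                break;
        case SENSE_DESCR_FORMAT:
                /* descriptor format: e.g. continue with scsi_find_sense_descr(9F) */
                break;
        }
}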
scsi_ext_sense_fields(9F), scsi_find_sense_descr(9F), scsi_sense_asc(9F), scsi_sense_ascq(9F), scsi_sense_cmdspecific_uint64(9F), scsi_sense_info_uint64(9F), scsi_sense_key(9F) | http://docs.oracle.com/cd/E36784_01/html/E36886/scsi-validate-sense-9f.html | CC-MAIN-2015-48 | refinedweb | 260 | 55.64 |
When a method defines a local variable named identically to a class and attempts to invoke a static method on that class, mcs incorrectly reports error CS0165 "Use of unassigned local variable". Here's a simple repro case:
===== Test.cs =====
public class A
{
public static A Get() { return null; }
}
public class Test
{
void M()
{
A A = A.Get();
}
}
===== End of Test.cs =====
===== Sample command lines =====
# From Visual Studio 2013 command prompt:
$ csc /target:library Test.cs
Microsoft (R) Visual C# Compiler version 12.0.21005.1
[No errors]
# From Mono 2.10.9 command prompt:
$ mcs /target:library Test.cs
[No errors]
# From Mono 3.2.3 command prompt:
$ mcs /target:library Test.cs
Class1.cs(10,15): error CS0165: Use of unassigned local variable `A'
Compilation failed: 1 error(s), 0 warnings
===== End of sample command lines =====
Thus, this source file compiles with csc and with mcs 2.10.9, but mcs 3.x fails to compile it. This looks like a regression between the 2.x and 3.x series of the mcs compiler.
Thanks, Richard.
Richard Cook | Principal engineer
Coverity | Columbia Center Tower | 701 Fifth Avenue, Suite 1220 | Seattle, WA
98104
The Leader in Development Testing
Read our profile in Forbes, Coverity Gets Code Right 25% Faster
Already fixed in master and Mono 3.2.7
Could you provide the commit hash for the fix?
I don't think there was a single commit. It was probably fixed as part of the flow-analysis rewrite, which is ~10 commits
documentation updates fixed some portability bugs in vmgen-ex and vmgen-ex2 updated copyright years
\ Forth output paging add-on (like more(1))

\ Copyright (C) [...]

\ This add-on is for those poor souls whose terminals cannot scroll
\ back but who want to read the output of 'words' at their leisure.

\ currently this is very primitive: it just counts newlines, and only
\ allows continuing for another page (and of course, terminating
\ processing by sending a signal (^C))

\ Some things to do:
\ allow continuing for one line (Enter)
\ count lines produced by wraparound (note tabs and backspaces)
\ allow continuing silently
\ fancy features like searching, scrollback etc.

\ one more-or-less simple way to achieve all this is to
\ popen("less","w") and output there. Before getting the next `key`,
\ we would perform a pclose. This idea due to Marcel Hendrix.

require termsize.fs

variable last-#lines 0 last-#lines !

:noname ( -- c )
    1 last-#lines !
    defers key ;
is key

:noname ( c -- )
    dup defers emit
    #lf =
    if
        1 last-#lines +!
        last-#lines @ rows >=
        if
            ." ... more ?" key drop 10 backspaces 10 spaces 10 backspaces
        endif
    endif ;
is emit

:noname ( c-addr u -- )
    bounds
    ?DO
        I c@ emit
    LOOP ;
is type

| https://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/more.fs?rev=1.4;sortby=rev;f=h;only_with_tag=v0-6-2;ln=1 | CC-MAIN-2021-49 | refinedweb | 240 | 69.41
ANSWERS TO KEYSTONE PROBLEMS—CHAPTER 2 (¶2161.) The leading case on the issue of the deductibility of home office expenses by teachers is David J.
Weissman, 85-1 USTC ¶9106, 751 F.2d 512 (CA-3 1984), rev’g 47 TCM 520, Dec. 40,645(M), T.C. Memo. 1983-724.
A college professor was required to do scholarly research and writing in addition to teaching. He spent the majority
of his employment-related time doing research and writing at home because a quiet and safe place to perform this
work was not available at the college. The court held that he was entitled to deduct his home office expenses because
his home, not his college office, was his principal place of business. The home office was necessary to carry out an
essential aspect of his job (i.e., his research) and was maintained for the convenience of the employer. The 7th Circuit
in Thomas C. Cadwallader v. Commissioner, 90-2 USTC ¶50,597, aff’g 57 TCM 1030, denied a deduction for a home
office maintained for the taxpayer’s own convenience where a university provided him with adequate office space.
The U.S. Supreme Court, in another office-in-home case, Nader E. Soliman, 93-1 USTC ¶50,014, held that the office-
in-home must be the principal place where the activities are performed to be deductible. Since Weissman’s principal
income-earning activity could be held to be teaching, the deductibility of his office-in-home after the Soliman decision
would be questionable. (¶2333.) In an office examination the taxpayer or a representative should take only information or support
for items which are requested of the taxpayer by the IRS; otherwise the tax auditor might open up other areas for
investigation. Situations may vary, but some practitioners believe that it is better for the taxpayer, assuming he or she
has a representative such as a CPA or a lawyer, not to be present because the representative can keep better control
over the interview and also approach the matter in a less emotional atmosphere. IRS personnel should be treated courteously and should be promptly furnished information and substantiation
relating to applicable tax return items. Although the cooperation of the taxpayer (or representative) is important, the taxpayer should respond only to questions asked by the agent. Disclosing unnecessary information could cause
problems to the taxpayer. When there is a disagreement after an office examination, if practicable, the taxpayer is given an opportunity
for an interview with the tax auditor’s immediate supervisor or for a conference with an Appeals officer. If these
actions are not feasible, the taxpayer will be sent a 30-day letter from the District Office indicating the proposed
adjustments and the courses of action. If the taxpayer agrees with the adjustment, the agreement form can be signed.
If the taxpayer disagrees, he or she may request an Appeals Office conference within 30 days, or ignore the 30-day
letter and wait for the 90-day letter which allows the taxpayer to file a petition in the Tax Court. There are a number of factors a taxpayer should consider in trying to decide whether to pursue the matter.
Going to the Appeals Office is less expensive than litigation, and yet the taxpayer leaves open the opportunity to file a
petition in the Tax Court or to sue for refund in a District Court or the Court of Federal Claims. In addition, a taxpayer is often able to gather more information about the IRS position in the event the taxpayer needs to carry the case further
and there may be a chance that the taxpayer may convince the Appeals Officer that the IRS was incorrect at the agent
level. The Appeals Officer may be at some disadvantage in that the officer has not personally prepared the case and is
relying on the information presented by the revenue agent, which could be an advantage to the taxpayer. On the other
hand, there may be some disadvantages to having an Appeals Conference. New issues might be raised in an Appeals
conference, although the IRS’s policy is not to raise an issue unless the grounds for such action are “substantial” and
the potential effect upon tax liability is “material.” The 10 factors mentioned in the “Choice of Tax Forum” should also be considered. It is important for taxpayers
to be aware of the characteristics of the courts so that an appropriate choice can be made if the taxpayer decides to go
to court. A taxpayer, having made a decision to go to the District Court, for example, cannot later decide to go to the
Tax Court. The taxpayer must think very seriously before taking the case to court. Not only may the economic costs
be high, but the psychological and emotional costs may also be high. The taxpayer has to consider whether the tax
savings will be worth the legal fees, time, and psychological costs. In deciding to which court to take the case, the
taxpayer should not look simply at the statistics on taxpayer winnings in the various courts. Statistics like that only
have some value if winnings by taxpayers on similar issues are being examined. Internal Revenue Code Structure 1. Yes, the Internal Revenue Code of 1986 includes all existing tax laws, regardless of the date when such
provisions were enacted. Internal Revenue Code Organization 2. The majority of the income tax law is found in the Internal Revenue Code, Subtitle A, Chapter 1. Treasury Regulations: Judicial Precedent 3. Yes, Regulations are issued by the US. Treasury Department, and are authorized by Congress. In contrast,
Revenue Rulings are issued by the Internal Revenue Service, which is a branch of the Treasury Department. Regulations v. Revenue Rulings 4. Yes, in dealing with the IRS, Regulations have the authority of law. Revenue Rulings are similar to Regulations
in that they represent administrative interpretations of the law; however, they do not have the force and effect
of Regulations, but they may be used as precedents. Administrative Sources of Tax Law 5. Treasury Regulations, Revenue Rulings, Revenue Procedures, Technical Information Releases and
Announcements, Private Letter Rulings, Determination Letters, and Technical Advice Memoranda. Revenue Ruling and Revenue Procedure Citation 6. When a Revenue Ruling or a Revenue Procedure is first issued, the available citation is the reference to the
Internal Revenue Bulletin. However, once the Cumulative Bulletinfor the period has been issued, all rulings
and procedures reprinted in that Cumulative Bulletin should be cited according to their permanent CB page
references—not according to the temporary IRB reference. Judicial Circuits 7. There are presently ll numbered judicial circuits plus the Federal Circuit. The District of Columbia is a
separate Circuit. Trial Court System 8. The three trial courts that have jurisdiction over tax cases are the US. Tax Court, the US. District Court, and
the US. Court of Federal Claims. Tax Court: Regular and Memorandum Decisions 9. Regular decisions require an interpretation of the law; memorandum decisions generally concern only well-
established principles of law and require only a determination of facts. Tax Court: IRS Acquiescences 10. No, an acquiescence to a court decision can be retroactively withdrawn at any time by the IRS. Common Tax Law Abbreviations 11. CCH ............................................................................................................. .. CCH, a Wolters Kluwer business
RIA .................................................................................................................... .. Research Institute of America
BTA ................................................................................................................................ .. Board of Tax Appeals
USTC ........................................................................................ ..United States Tax Cases (published by CCH)
AFTR ......................................................... .. American Federal Tax Reports (RIA original series of tax cases)
AFTRZd .............. .. American Federal Tax Reports, 2nd series (current years of tax cases, published by RIA)
S.Ct. ....................... .. Supreme Court Reporter (Supreme Court decisions published by West Publishing Co.)
CA-3 ................................................................................................................... .. Court of Appeals, 3rd Circuit
TCM .......................................................................................................................... .. Tax Court Memorandum Tax Law Publication Services 12. Research Institute of America (RIA) Tax Law Publication Services 13. Merten’s, Law of Federal Income Taxation Tax Research Methodology: Case #1 14. 2011 = 0; 2012 = $1,900. See Reg. §l.165-1(d)(2)(ii). Tax Research: Computer-Based Research Systems 15. Computer-based research systems in which the text of tax treaties may be found include the publishers CCH,
West, Lexis/Nexis, and RIA (Research Institute of America). Tax Research: Court Case Historical Record 16. The historical record of a court case can be found in a Citator. IRS Organization 17. See Exhibit 8 in the text for the organization of the Internal Revenue Service. The Internal Revenue Service
consists of a National Office and an extensive field organization composed of over 100,000 revenue agents,
revenue officers, and support personnel. The IRS is divided into four operating divisions, each responsible for
serving a group of similar taxpayers. Practice Before the IRS 18. Attorneys or certified public accountants who are not under suspension or disbarment may practice before the
IRS, as may any person enrolled as an agent. Enrolled agents, however, must demonstrate special competence
in tax matters by written examination administered by the IRS. In certain situations, other persons may represent taxpayers: (1) An individual may represent another individual who is his or her full-time employer, may represent a
partnership of which he or she is a member or a full-time employee, or may represent a member of his
or her immediate family. (2) Corporations, associations, or organized groups may be represented by bona fide officers or full-time
employees. (3) Trusts, receiverships, guardianships, or estates may be represented by their trustees, receivers, guardians,
administrators, or executors or by full-time employees. (4) An individual who prepares the taxpayer’s return may represent the taxpayer before officers and
employees of the Examination Division of the IRS. Rulings, Determination Letters, and Technical Advice Memoranda 19. Private Letter Rulings. A private ruling is a “written statement issued to a taxpayer by the National Office of
the IRS that interprets and applies the tax laws to that taxpayer’s specific set of facts.” It is issued in response
to a specific request by a taxpayer. The private ruling is applicable only for the taxpayer requesting the ruling,
although it may provide to taxpayers in similar situations some indication of the IRS’s viewpoint. Determination Letters. A determination letter is a “written statement issued by a District Director in response
to a written inquiry by a taxpayer which applies the principles and precedents previously announced by the
National Office to a specific set of facts.” Determination letters are issued by District Directors whereas rulings
are issued by the National Office. Most determination letters are issued as to matters involving pension plans
and exempt organizations. Technical Advice Memoranda. Technical advice is “advice or guidance furnished by the National Office upon
request of a District or an Appeals Office in response to any technical or procedural question” that develops
during the examination or appeals process. Both the taxpayer and the District or Appeals Office may request
technical advice. The taxpayer may request advice where there appears to be inconsistency in the application
of law or where the issue is unusual or complex. Requests for technical advice memoranda sometimes become
the basis for a Revenue Ruling. IRS Examination of Returns: Selection Programs
20. DIF. The Discriminant Function system used by the IRS involves computer scoring using mathematical
formulae to select tax returns with the highest probability of errors. T CMP. Taxpayer Compliance Measurement Program is a program for measuring taxpayer compliance through
specialized audits of individual tax returns. IRS Examination of Returns: Selection Criteria 21. The following events might cause an IRS examination:
(1) Total positive income is above specified amounts.
(2) Another IRS office or a non-IRS party might provide information (e.g., a tip from a bitter former
spouse).
(3) A claim for a refund may result in a closer examination of the return. (4) A return of a related party (family member, partner) might be examined to determine the correctness of
the taxpayer’s return. Correspondence Examinations: Taxpayer Errors Resolved by Mail 22. Mathematical errors can be broadly defined to mean (1) an error in addition, subtraction, multiplication, or
division shown on any return; (2) an incorrect use of any IRS table if such incorrect use is apparent from other
information on the return; (3) inconsistent entries on the return; (4) an omission of information required to be
supplied on the return to substantiate a return item; or (5) a deduction or credit disallowed by law that is either
a specified monetary amount or a percentage, ratio, or fraction—if the items entering into the application of
such limit appear on the return. District Office Examinations 23. Examples of types of issues which lend themselves to interview examinations are income items that are
not subject to withholding, deductions for travel and entertainment, items such as casualty and theft losses
that involve the use of fair market value, education expenses, deductions for business related expenses, and
determination of basis of property. Also, if the taxpayer’s income is low in relation to financial responsibilities
as indicated on the return through the number of dependents, or interest expense, or if the taxpayer’s occupation
is of the type that required only a limited formal education, an office interview might be deemed appropriate.
Certain business activities or occupations may also lend themselves to office interview examinations. Field Examinations 24. In addition to being less costly than settling at higher levels, negotiations with the revenue agent are generally
more informal than higher levels and less demanding as to technical aspects. Also, if questionable issues exist
but were not raised at the agent level, it may be wise to settle at that level in order to avoid the possibility of
persons at higher levels raising those questionable issues. Notices of Deficiency 25. 30-day letter. If the taxpayer and the agent do not agree, the taxpayer will be sent a 30-day letter which
explains the appellate procedures and urges the taxpayer to reply within 30 days either by signing the waiver
or by requesting a conference. 90-day letter. If the taxpayer does not respond to the 30-day letter, a statutory notice of deficiency (90-day
letter) will be sent which gives the taxpayer 90 days to file a petition with the Tax Court. Appeals Procedure: Administrative Process 26. If an appeal is made within the IRS, an appropriate request must be made if required. A taxpayer may go to
the Appeals Office at two different times: (1) if the protest is filed within the 30-day period as stated in the 30-
day letter, or (2) if the 30-day period passes and the taxpayer files a petition in the Tax Court within 90 days
after receipt of a statutory notice of deficiency (the “90-day letter”). The taxpayer may represent himself or
herself at an Appeals conference or the taxpayer may be represented by an attorney, CPA, or person enrolled
to practice before the IRS. The Appeals Officer, who actually handles the appeals, reports to the Regional Director of Appeals, who
reports to the Regional Commissioner. Proceedings before the Appeals Office are informal and are held in
the District Office. The Appeals Officer may request that the taxpayer submit additional information, which
could involve additional conferences. Appeals Procedure: Federal Court System 27. There is a small tax cases procedure in the Tax Court if the amount of the deficiency or claimed overpayment is
not greater than $50,000. In addition, there are three other trial courts or courts of original jurisdiction: the US.
Tax Court, a federal District Court, and the US. Court of Federal Claims. Appeals from the Tax Court and the
District Court go to the Circuit Court of Appeals and appeals from the Court of Federal Claims go to the US.
Court of Appeals for the Federal Circuit. Appeals from all Courts of Appeals go to the US. Supreme Court. Tax Forum Selection 28. The factors to be considered include the following:
(1) Jurisdiction.
(2) Payment of tax.
(3) Jury trial.
(4) Rules of evidence.
(5) Expertise of judges. (6) Publicity.
(7) Legal precedent.
(8) Factual precedent.
(9) Statute of limitations.
(10) Discovery.
See also the choice of tax forum section at ¶2311 in the textbook. Delinquency Penalties: Types
29. The two delinquency penalties are the penalty for failure to file a return and the penalty for failure to pay the tax. Delinquency Penalties: Reasonable Causes for Avoidance 30. The following are some “reasonable causes” for purposes of avoiding the delinquency penalties:
(1) A return was mailed in time but returned for insufficient postage.
(2) A return was filed within the legal period but in the wrong district.
(3) Death or serious illness of the taxpayer or of someone in the immediate family.
(4) Unavoidable absence of the taxpayer.
(5) Destruction of the taxpayer’s business or business records by fire or other casualty.
(6) Erroneous information was given the taxpayer by an IRS official. (7) The Taxpayer made an effort to obtain assistance or information necessary to complete the return by a
personal appearance at an IRS office but was unsuccessful because the taxpayer, through no fault of his
own, was unable to see an IRS representative. (8) The taxpayer is unable to obtain the records necessary to determine the amount of tax due for reasons
beyond the taxpayer’s control. (9) The taxpayer contacts a competent tax adviser, furnishes the necessary information, and then is
incorrectly advised that the filing of a return is not required. Negligence Penalty 31. A 20 percent penalty, part of the accuracy-related penalty, is imposed for underpayment of tax due to negligence
or intentional disregard of rules or regulations. Understatement of Tax Liability Penalty 32. (1) A taxpayer and (2) any person who aids in the preparation or presentation of any tax document in
connection with matters arising under the internal revenue laws who knows that the document will result in
the understatement of tax liability of another person. Valuation Overstatement Penalty 33. Any taxpayer having an underpayment of tax attributable to a valuation overstatement is subject to a penalty.
The amount of the penalty is 20 percent and is part of the accuracy-related penalty. Underpayment of Tax Penalty 34. An individual taxpayer can avoid the penalty for underpayment if the payments of estimated tax are at least
as large as any one of the following: (1) 90 percent of the tax shown on the return or 100 percent of the tax shown on the return of the individual for
the preceding taxable year (assuming it showed a tax liability and covered a taxable year of 12 months);
or (2) An amount equal to 90 percent of the tax for the taxable year computed by annualizing the taxable
income received for the months in the taxable year ending before the month in which the installment is required to be paid. Delinquency Penalties: Computation of Penalty 35. Jim’s total penalties (disregarding interest) are $400, consisting of a failure to pay penalty of $40 (1/2 X 1% X
$8,000) and a failure to file penalty of $360 or $400 (5% X $8,000) less the failure to pay penalty of $40. Negligence Penalty: Computation of Penalty
36. Rose’s total penalty is $4,000 (20% X $20,000). Appeals Procedure: Administrative Process
37. There are three options available to Olivia:
(1) She may request a conference in the IRS Appeals Office. (2) She may ignore the 30-day letter. She would then receive a statutory notice of deficiency at which time
she may file a petition in the Tax Court within the 90-day period. (3) She could wait for the 90-day period to expire, pay the assessment, and start a refund suit in the District
Court or the Claims Court. Multiple Choice—Internal Revenue Code Organization
38. b. Partners and partnerships is the topic covered in Code Sec. 731 of the Internal Revenue Code. Multiple Choice—Treasury Regulations
39. (1. Treasury Regulations are published in the Federal Register. Multiple Choice—Revenue Rulings Publication 40. (c) and (d). Revenue Rulings are published when they are issued in the Federal Register. They are also
published in the Internal Revenue Bulletin (issued weekly). Multiple Choice—Tax Court Memorandum Decisions Publication 41. d. Tax Court Memorandum Decisions (cited TCM), published by CCH, would be a publication in which to
find memorandum decisions of the US. Tax Court. Practice Before the IRS 42. Attorneys or certified public accountants who are not under suspension or disbarment may practice before
the IRS as may any person enrolled as an agent. Thus, if Matthew is an attorney or CPA, he may represent
Timothy as well as if he has become an enrolled agent by taking a written examination administered by the
IRS. If Matthew was related to Timothy, he could represent him without enrollment. Appeals Procedure: Refund Claims 43. Yes, Marvin should file a claim for refund by filing Form 1040X (Amended U.S. Individual Income Tax
Return) and mail it to the IRS Center where he filed the original return. A claim for refund must be filed within
three years from the date the return was filed or within two years from the date the tax was paid, whichever is
later. Therefore, since he filed the return on August 15, 2010, and paid the tax on February 15, 2011, he must
file the claim by August 15, 2013. IRS Letter Rulings: Areas Not Subject to Rulings 44. No, because the IRS will not issue rulings in a number of general areas, one of which applies to this situation.
The IRS will not issue a ruling on the results of a transaction that lacks bona fide business purposes or has as
its principal purpose the reduction of federal taxes. As Steve’s principal purpose for wanting to incorporate
is the reduction of taxes and the transaction also lacks business purpose, he would not receive a ruling from
the IRS. Delinquency Penalties: Reasonable Causes for Avoidance 45. No, because the penalty can be avoided if the taxpayer can show that failure to file and/or pay was due to
reasonable cause and not to willful neglect. The Internal Revenue Manual states that if a return is mailed on time
but returned for insufficient postage, the “reasonable cause” requirement for avoiding the penalty is met. Overstatement of Deposit of Tax Penalty 46. Any person who makes an overstated deposit claim is subject to a penalty of 10 percent of such claim. The
term “overstated deposit claim” means the excess of the amount of tax claimed in a filed return to have been
deposited in a government depository over the amount actually deposited in a depository on or before the
date such return is filed. Thus, Douglas Corporation may be penalized $500 (($15,000 - $10,000) X 10%) as
a result of this error. Tax Preparer Penalties 47. Joe is subject to a preparer penalty. Any preparer who endorses or otherwise negotiates a refund check issued
to a taxpayer for a return or claim for refund prepared by the preparer is liable for a penalty of $500 with
respect to such check. Thus, Joe is potentially liable for a penalty of $500 as a result of his depositing Karen’s
refund check into his account in payment for his services. Statute of Limitations: Omissions of Income 48. Jim must omit more than $50,000 ($200,000 X 25%) for the six-year statute of limitations to apply. If the
taxpayer omits income in excess of 25 percent of the gross income reported on his return, the IRS has six
years in which to make any additional assessment of tax. In computing gross income, revenues from the sales
of goods or services are not to be reduced by costs of goods sold. Tax Practice Ethics 49. Per “Statements on Standards for Tax Services,” Andrea should ask Rodney to disclose the error to the IRS. If
Rodney does not comply with her request, Andrea may have a duty to withdraw from the engagement. Since
the Statements indicate standards followed by members of the accounting profession, a violation of them
might mean that “due care” has not been exercised. Thus, if Rodney does not comply with Andrea’s request
and she does not withdraw from the engagement, she may be subject to charges of negligence. Tax Practice Ethics 50. No, per the AICPA “Statements on Standards for Tax Services.” In preparing a tax return, a CPA may take
a position contrary to Treasury Department or IRS interpretations of the Code without disclosure if there is
reasonable support for the position. Delinquency Penalties: Computation of Penalty
51. (1) Failure to pay penalty: 3% (.5% per month for the 6 months from April 16 through September 20,
with the fractional month counted as a full month) of the $1,200 balance due: $36
(2) Failure to file penalty: penalty at 5% for a maximum of five months, 25% of $1,200: $300
Less: failure to pay penalty for 5 months ($6 X 5): $30
Failure to file penalty: $270
Total delinquency penalties (l) and (2) $306 Valuation Misstatement Penalty: Computation of Penalty 52. The valuation claimed ($50,000) is 250% of the correct valuation ($20,000). The penalty, however, is 40%
of the underpayment of tax since a charitable contribution is involved. Tommy’s underpayment of tax is
$12,000, which means the penalty is $4,800. Statute of Limitations: Omissions of Income 53. 25% of gross income of $420,000 ($400,000 + $20,000) is $105,000. If Sandy omitted $100,000 income,
which is less than $105,000, the statute of limitation would be three years. If she omitted $120,000 income, which is greater than $105,000, the statute of limitation would be increased
to six years. Refunds: Timeliness of Claims 54. If Brent files the claim on March 14, 2012, he can recover $6,000 because the claim is filed within athree-year
period from the due date of the return. (He filed the original return before the due date.) If he files the claim on May 15, 2012, his recovery is limited to the amount he actually paid during the last two years, that is, the
$3,000 paid on June 10, 2010. Multiple Choice—Notices of Deficiencies 55. b. If the taxpayer omits from gross income an amount which is in excess of 25 percent of the amount of gross
income stated on the return, the tax may be assessed at any time within six years after the return is filed, or the
due date for filing, if later. However, there is no such rule for overstated deductions, and therefore the date is
three years after the due date of the 2010 return (i.e., April 15, 2014). ‘ Multiple Choice—Statute of Limitations: Omissions of Income 56. d. For the six-year statute of limitations to apply, Maude would have had to omit in excess of 25 percent of
gross income. In computing gross income, revenues from the sale of goods or services are not to be reduced
by cost of goods sold. Also, gross income includes capital gains. Thus, 25 percent of $440,000 is $110,000. Multiple Choice—Statute of Limitations: Refund Claims 57. d. There is a special seven-year period of limitation on a claim for refund based on a debt that became wholly
worthless or on a worthless security. Code Sec. 6511(d)(1). Multiple Choice—Tax Practice Ethics 58. a. Advise client. Multiple Choice—Tax Forum Selection 59. 0. Pay the additional tax, then file a claim for refund. Multiple Choice—Appeals Procedure 60. c. Submit a written protest within a specified time limit. Multiple Choice—Tax Preparer Penalties 61. c. The tax return preparer has the burden of proof. Research Problem—Revenue Rulings
62. Rev. Rul. 57-82 has been superseded by Rev. Rul. 76-74. Research Problem—Code References 63. Code Secs. 2053 and 2054. This is an exercise in locating a detailed Code section reference. However, please
note that the reference in the regulation is unintelligible, unless you find out what Code Secs. 2053 and 2054
stand for. Research Problem—Revenue Rulings
64. Rev. Rul. 76—74 supersedes Rev. Ruls. 57-82; 56-445; 55-477. Research Problem—IRS Letter Rulings
65. The date of IRS Letter Ruling 8302032 is October 7, 1982. Research Problem—Regulations
66. Reg. §l.274-8 was adopted on June 24, 1963. Research Problem—Code Organization
67. Section 280A. Research Problem—Citator Case Citations 68. a. CA-2 reversed the district court.
b. S.Ct. reversed CA-2. Research Problem—Code Organization
69. Standard Deduction: Code Sec. 63; Trade or Business Expenses: Code Sec. 162; Losses: Code Sec. 165;
Medical Deductions: Code Sec. 213; Moving Expenses: Code Sec. 217. Research Problem—Code References
70. Code Secs. 902 and 936 are referred to in Code Sec. 56(f)(2)(F)(ii)(Il), which has been repealed. Research Problem—Citator Case Citations
71. Cert. denied, 296 US. 588; 56 S.Ct. 99. Research Problem—Regulations
72. Code Secs. 212 and 266 are referred to in Reg. § 212-1(n). Research Problem—Legal Terms: Definitions 73. ANNOTATED—To make or furnish critical or explanatory notes or comments. CERTIORARI—An appellate proceeding for reexamination of action of inferior tribunal or auxiliary process
to enable appellate court to obtain further information in pending cause. REMANDED—To send back to the same (lower) court out of which a case came for purpose of having some
action on it there. DICTUM—Statements and comments in an opinion concerning some rule of law or legal proposition not
necessarily involved in or essential to determination of the case at hand are “obiter dicta” and lack the force
of an adjudication. ACQUIESCED—When the IRS gives its express consent to a decision of the US. Tax Court. Research Problem—Code References 74. No, see Code Sec. 280F(d)(4). MACRS is not allowed for cellular telephones unless business use exceeds 50 percent.
If such use is 50 percent or less, depreciation must be computed under the alternative depreciation system. Research Problem—Code References
75. July 10, 1989, is the effective date of Code Sec. 1031(f). Research Problem—Publishers' Loose—Leaf Services 76. a. In most foreclosures of real estate, the borrower will have a basis below the amount of the outstanding
debt because of tax deductions for depreciation on the property. When the property is repossessed, gain is
recognized to the extent that the amount realized exceeds the borrower’s basis. Code Sec. 1001(a); Reg.
§ 1.1001-2(a); and J.W. Yarbro, 84-2 USTC ¶9691 (CA-5 1984), 737 F.2d 479, cert. denied 105 S.Ct. 959.
Any excess of the outstanding debt over the fair market value of the property is ordinary income under
the forgiveness of indebtedness rules of Code Sec. 61(a)(12). When a borrower is insolvent, however, this income can be excluded from income to the extent of the amount by which the taxpayer is insolvent. Code
Sec. 108(a)(3); Rev. Rul. 90-16, 1990-1 CB 12. Result. Borrower realizes and recognizes a capital gain of $200,000, which is the excess of the FMV over
basis ($1.2 million - $1 million). Borrower also has forgiveness of income of $300,000, which is the
excess of the indebtedness over the FMV ($1.5 million - $1.2 million), but this gain is not taxed because
Borrower is insolvent. If Borrower is not personally liable on the debt, then the forgiveness of indebtedness exception of Code
Sec. 108(a) is not applicable and Borrower must treat the entire amount as being realized on a sale or
exchange under Code Sec. 1001. Therefore, Borrower has a capital gain of $5 million. Research Problem—Electronic Data Base 77. No, Anthony may not deduct the cost of the bar review course. The courts have held that preparing for the bar exam
of a second state is meeting the minimum requirements for practicing in that state and as such is not deductible
as education expense. See: LR. Adamson, 32 TCM 486, Dec. 31,963(M), T.C. Memo. 1973-107; MD. Siewert,
80-2 USTC ¶9613 (DC Tex 1980); S.F. Avery, 76-2 USTC ¶9694 (DC Iowa 1976); M.E. Walker, 54 TCM 169, Dec.
44,128(M), T.C. Memo. 1987-409; RM Kohen, 44 TCM 1518, Dec. 39,451(M), T.C. Memo. 1982-625; J.A.
Sharon, 78-2 USTC ¶9834, 591 F.2d 1273 (CA-9 1978), aff’g per curiam 66 TC 515, Dec. 33,890.
books, accounts, records, memoranda, or other papers, as required under Code Secs. 6420(e)(2), 6421(g)
(2), 6427(j)(2), 7602, 7603, and 7604(b), neglects to appear or to produce such books, accounts, records,
memoranda, or other papers, will, upon conviction thereof, be fined not more than $1,000, or imprisoned not
more than one year, or both, together with costs of prosecution. Research Problem—Business Expense 79. (a) and (b). According to Private Letter Ruling 9144042 (July 1, 1991), the issue is not whether a takeover is
hostile or friendly. Rather, the proper inquiry to be made is whether the target corporation obtained a long-
terrn benefit as a result of the expenditure. In order to obtain a deduction, the taxpayer must show it did not
obtain a long-term benefit. Each case will turn on its own specific facts and circumstances. Research Problem—Exclusion from Gross Income 80. Damages for impairment of business income are taxable as gross income under Code Sec. 61 (Hort v. Comm.,
313 US. 28 (1941); Freeman v. Comm, 33 TC 323 (1959), Letter Ruling 9348002). Punitive damages are
also taxable under Code Sec. 61 (Comm. v. Glenshaw Glass, 348 US. 426 (1955)). ...
Combine the Neural with the Normal.
In this document I will demonstrate density mixture models. The goal for me was to familiarize myself with tensorflow a bit more, but it grew into a document that compares models too. The model was inspired by this book by Christopher Bishop, who also wrote a paper about it in 1994. I'll mention some code in the post, but if you feel like playing around with the (rather messy) notebook you can find it here.
A density mixture network is a neural network where an input \(\mathbf{x}\) is mapped to a posterior distribution \(p(\mathbf{y} | \mathbf{x})\). By enforcing this we gain the benefit that we can have some uncertainty in our prediction (assign a lot of doubt for one prediction and a lot of certainty for another one). It will even give us the opportunity to suggest that more than one prediction is likely (say, this person is either very tall or very small but not medium). The main trick that facilitates this is the final hidden layer, which can be split up into three different parts: a group of nodes for the mixture weights \(\pi_i\), a group for the means \(\mu_i\) and a group for the standard deviations \(\sigma_i\).
The idea is that the final prediction will be a probability distribution that is given by the trained output nodes via: \[ p(\mathbf{y} | \mathbf{x}) = \sum_{i=1}^k \pi_i \times N(\mu_i, \sigma_i) \]
Graphically, and more intuitively, the network will look something like:
All the non-coloured nodes will have tanh activation functions, but to ensure that the result is actually a probability distribution we will enforce this neural network to produce mixture weights \(\pi_i\) that are positive and sum to one (via a softmax activation on the \(\pi\) nodes) and standard deviations \(\sigma_i\) that are strictly positive (via an exponential activation on the \(\sigma\) nodes).
Note that the architecture we have is rather special. Because we assign probabilistic meaning to nodes, we enforce that the neural network gets some bayesian properties.
The implementation is relatively straightforward in tensorflow. To keep things simple, note that I am using the slim portion of contrib.
import tensorflow as tf

# number of nodes in hidden layer
N_HIDDEN = [25, 10]
# number of mixtures
K_MIX = 10

x_ph = tf.placeholder(shape=[None, 1], dtype=tf.float32)
y_ph = tf.placeholder(shape=[None, 1], dtype=tf.float32)

# stack of tanh hidden layers
nn = tf.contrib.slim.fully_connected(x_ph, N_HIDDEN[0], activation_fn=tf.nn.tanh)
for nodes in N_HIDDEN[1:]:
    nn = tf.contrib.slim.fully_connected(nn, nodes, activation_fn=tf.nn.tanh)

# three output heads: means, (positive) standard deviations, (normalised) mixture weights
mu_nodes = tf.contrib.slim.fully_connected(nn, K_MIX, activation_fn=None)
sigma_nodes = tf.contrib.slim.fully_connected(nn, K_MIX, activation_fn=tf.exp)
pi_nodes = tf.contrib.slim.fully_connected(nn, K_MIX, activation_fn=tf.nn.softmax)

# per-component likelihood of the observed y; note this drops the 1/2 inside the
# exponent and the sqrt(2*pi) normaliser of a true normal pdf, so the learned
# sigmas absorb the difference in scale
norm = (y_ph - mu_nodes)/sigma_nodes
pdf = tf.exp(-tf.square(norm))/2/sigma_nodes
likelihood = tf.reduce_sum(pdf*pi_nodes, axis=1)
log_lik = tf.reduce_sum(tf.log(likelihood))

optimizer = tf.train.RMSPropOptimizer(0.01).minimize(-log_lik)
init = tf.global_variables_initializer()
With the implementation ready, I figured it would be nice to generate a few odd datasets to see how the architecture would hold. Below you will see five charts for four datasets.
When looking at these charts we seem to be doing a few things right:
If you look carefully though you could also spot two weaknesses.
The values for sigma can suddenly spike to unrealistic heights; this is mainly visible in the fourth plot. I've introduced some regularisation to see if it helps. The model seems to improve: not just in the \(\sigma\)-space, but also in the \(\mu\)-space we are able to see smoother curves.
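The post doesn't say exactly which regulariser was used; one way to add weight regularisation with slim would be something along these lines (the scale value is an arbitrary choice of mine):

reg = tf.contrib.layers.l2_regularizer(scale=1e-3)
# pass weights_regularizer=reg to each fully_connected layer, then add the
# collected penalty to the training objective
nn = tf.contrib.slim.fully_connected(x_ph, N_HIDDEN[0], activation_fn=tf.nn.tanh,
                                     weights_regularizer=reg)
reg_loss = tf.losses.get_regularization_loss()
optimizer = tf.train.RMSPropOptimizer(0.01).minimize(-log_lik + reg_loss)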
The regularisation can help, but since \(\pi_i\) can be zero (which cancels out the effect of \(\sigma_i\)) you may need to regularise drastically if you want to remove it altogether. I wasn't able to come up with a network regulariser that removes all large sigma values, but I didn't care too much because I didn't see it back in the posterior output (because in those cases, one can imagine that \(\pi_i \approx 0\)).
Our model is capable of sampling wrong numbers around the edges of the \(x\)-space. This is because \(\sum_i \pi_i = 1\), which means that for every \(x\) the likelihood can never be zero. To demonstrate the extremes, consider these samples from the previous models:
Note how the model seems to be drawing weird samples around the edges of the known samples. There’s a blob of predicted orange where there are no blue datapoints to start with.
Because \(\sum \pi(x) = 1\) we are always able to generate data in regions where there really should not be any. You can confirm this by looking at the \(\mu\) plots. We could address this shortcoming by forcing \(\sum \pi(x) = 0\) when there is no data near \(x\). We could "fix" this by augmenting the softmax part of the \(\pi_i\) nodes with a sigmoid part.
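One way to read that suggestion (my sketch, not code from the post): multiply the softmax weights by a sigmoid gate, so the total mixture weight can fall towards zero away from the data:

gate = tf.contrib.slim.fully_connected(nn, 1, activation_fn=tf.nn.sigmoid)
pi_nodes = tf.contrib.slim.fully_connected(nn, K_MIX, activation_fn=tf.nn.softmax) * gate
# sum_i pi_i(x) is now <= 1 and can shrink where the gate closes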
This can be done, but it feels like a lot of hacking. I decided not to invest time in that and instead considered comparing mixture density networks to their simpler counterpart; gaussian mixture models.
It is a fashionable thing to try out a neural approach these days, but if you would've asked me the same question three years ago with the same dataset I would've proposed to just train a gaussian mixture model. That certainly seemed like the right thing to do back then, so why would it be wrong now?
The idea is that we throw away the neural network and instead train \(K\) multivariate gaussian distributions to fit the data. The great thing about this approach is that we do not need to implement it in tensorflow either, since scikit learn immediately has a great implementation for it.
from sklearn import mixture

clf = mixture.GaussianMixture(n_components=40, covariance_type='full')
clf.fit(data)
With just that bit of code, we can very quickly train models on our original data too.
The model trains very fast and you get the eyeball impression that it fits the data reasonably.
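Drawing points from the fitted scikit-learn model (which is what the eyeball comparison is based on) is a one-liner, and score_samples gives log-densities if you prefer numbers over pictures; a quick sketch:

samples, component_labels = clf.sample(n_samples=1000)
log_density = clf.score_samples(samples)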
Let’s zoom in on predictions from both models to see how different they are.
It should be said that we’ll be making comparisons between two approaches without any hypertuning (or even proper convergence checking). This is not very appropriate academically, but it may demonstrate the subtle differences in output better (as well as simulate how industry users will end up applying these models). Below we’ll list predictions from both models.
In each case, we’ll be given an \(x\) value and we want to predict the \(y\) value. The orange line is the neural approach and the blue line is from the scikit model. I’ve normalized both likelihoods on the same interval in order to compare them better.
It’s not exactly a full review but just from looking at this we can see that the models show some modest differences. The neural approach seems to be more symmetric (which is correct; I sampled the data that way) but it seems to suffer from unfortunate jitter/spikyness at certain places. More data/regularisation might fix this issue.
Other than that, I would argue the normal scikit learn model is competitive simply because it trains much faster which could make it feasible to do a grid search on the number of components relatively quickly.
Both methods have their pros and cons, most notably:
Again, if you feel like playing around with the code, you can find it here. | https://koaning.io/posts/feed-forward-posteriors/ | CC-MAIN-2020-16 | refinedweb | 1,194 | 55.03 |
I am always looking for ways to store data and create shapefiles, or other filetypes I need, from it. Just picked up a book on MongoDB last night and have thrown together a quick example of how to enter some point data using Long and Lat and then retrieving the data and sending it to Shapefile.py. This is the first example, if I get around to it, the next example will query for 10 points near another then spit that out to a shapefile. But I need to walk before I run, so here are my first steps.
- Install Mongo
- Install pymongo
- Install Shapefile.py Check out this blog for more info – this guy is awesome, it is where I got the code to write the projection: GeoSpatial Python.
- Run mongod
- You can use mongo, a full JavaScript Shell, to enter data, but I used Python.
- Enter some data. I entered two points (35,-106) and (35.8,-106.8)
- write a python program to convert this data to a shapefile.
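The data-entry step above, done from Python, might look roughly like this (my sketch; the post doesn't show the exact insert code):

from pymongo import Connection, GEO2D

db = Connection().geo
db.places.create_index([("loc", GEO2D)])   # 2d index for geospatial queries
db.places.insert({"loc": [35, -106]})
db.places.insert({"loc": [35.8, -106.8]})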
Here is the python program to retrieve these two points, write a shapefile and a PRJ:
from pymongo import Connection, GEO2D
import shapefile

db = Connection().geo

# point shapefile with a single text attribute
w = shapefile.Writer(shapefile.POINT)
w.field("comment")

# copy every stored point into the shapefile
for x in db.places.find():
    w.point(float(x["loc"][0]), float(x["loc"][1]))
    w.record("Booya!")

# write a WGS84 projection (.prj) file
prj = open("Web.prj", "w")
epsg = 'GEOGCS["WGS 84",'
epsg += 'DATUM["WGS_1984",'
epsg += 'SPHEROID["WGS 84",6378137,298.257223563]]'
epsg += ',PRIMEM["Greenwich",0],'
epsg += 'UNIT["degree",0.0174532925199433]]'
prj.write(epsg)
prj.close()

w.save("mongoSHP")
Now you should have a shapefile with 2 points in WGS84
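And as a teaser for the follow-up mentioned at the top, the "ten points near another" query could feed the same writer loop roughly like this (a sketch, assuming the 2d index on "loc" exists):

for x in db.places.find({"loc": {"$near": [35, -106]}}).limit(10):
    w.point(float(x["loc"][0]), float(x["loc"][1]))
    w.record("Near!")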
KLayout Documentation (Qt 4): About Packages
"Salt" is KLayout's package manager which allows selecting and installing packages from a global repository.
Packages make KLayout more tasty. Packages (the "grains") may cover a variety of features:
Packages can depend on other packages - these are installed automatically if a package requires them and they are not installed yet.
Packages are identified by name. A package name needs to be unique in the package universe.
You can use a prefixed name to create a non-ambiguous name.
Use a slash to separate the prefix from the actual package name.
The choice of the prefix is entirely up to you as long as it contains letters, digits, underscores, hypthens or dots.
You can use a domain name that is owned by
yourself for example. You can use multiple prefixes to further differentiate the packages
inside your namespace.
Packages also come with version information, so KLayout can check for updates and install
them if required. KLayout will assume strict upward compatibility. This specifically
applies to packages that other packages are depending on (such as code libraries).
If you need to change them in a non-backward compatible way, you'd need to provide
a new package with a different name.
Packages come with some meta data such as authoring information, an optional icon and
screen shot image, license information and more. The more information you provide, the
more useful a package will become.
The key component for public package deployment is the "Salt.Mine" package repository
service. This is a web service that maintains a package index. It
does not host the packages, but stores links to the actual hosting site. In order
to author a package, you need to upload the package to one of the supported host
sites and register your package on the Salt.Mine page. Registration is a simple
process and the only information required is the link to your host site and a mail
account for confirmation.
To install external packages, open the package manager with "Tools/Manage Packages".
On the "Install New Packages" page, a list of available packages is shown. Select
the desired packages and mark them using the check mark button. Marked packages will
be downloaded and installed with the "Apply" button.
A filter above the package list allows selecting packages by name.
The right panel shows details about the package currently selected.
To check for updates, use the "Update Packages" tab of the package manager.
In the list, those packages for which updates are available are shown.
Mark packages for update using the check mark button. Click "Apply" to
apply the selected updates.
To uninstall packages, open the package manager using "Tools/Manage Packages".
Go to the "Current Packages" tab. Select a package and use the "Remove Package"
button to uninstall the package.
For package development you can utilize KLayout to initialize and edit the files inside
the package folder or populate the folder manually.
KLayout offers initialization of new packages from templates. You can modify that package
according to your requirements afterwards.
To create a package from a template, open the package manager using "Tools/Manage Packages",
go to the "Current Packages" tab and push the "Create (Edit) Package" button.
Chose a template from the list that opens and enter a package name (with prefix, if
desired). Select "Ok" to let KLayout create a new package based on the template you
selected.
The package details can be edited with the "pen" button at the top right of the
right details panel. Please specify at least some author information, a license
model and a version. If the package needs other packages, the dependencies can be
listed in the "Depends on" table. Those packages will be automatically installed
together with the new package. The showcase image can be a screen shot that gives
some idea what the package will do.
The package details are kept in a file called "grain.xml" inside the package
folder. You can also edit this file manually. The "grain.xml" is the basic description
file for the package.
If the package is a macro or static library package, the macro editor can be used
to edit the package files. If the package is a tech package, the technology manager
can be used to edit the technology inside the package. To populate the package
folder with other files use your favorite editor of KLayout itself for layout files.
Once a package is finished, it needs to be deployed to make it available to other
users. Deployment basically means to put it on some public place where others
can download the package. For local deployment inside an organisation,
this can be a web server or a folder on
a file server. KLayout talks WebDAV, so the web server needs to offer WebDAV
access. A subversion (SVN) server provides WebDAV by default, so this is a good
choice. Git can be used too, but you'll need to mirror the Git repository to
a file system or WebDAV share.
After a package has been made available for download, it needs to be entered
in the package index. For local deployment, the index can be a file hosted
on a web server or on the file system. The package index location needs to be
specified by the KLAYOUT_SALT_MINE environment variable which contains the
download URL of the package index file.
For public deployment, the Salt.Mine service is used to register
new packages in the package index. By default, KLayout loads the package index from that service, so
once your package is registered there, everyone using KLayout will see it.
Public Packages are published on the Salt.Mine server. This is a web service that delivers a
packages index with some meta data such as current version, the icon
and a brief description. KLayout uses this list to inform users of packages available
for installation and available updates. For local deployment, the package index can be served by other
ways too. The only requirement is to be accessible by a http, https or file URL.
The basic format of the index is XML with this structure:
<salt-mine>
<salt-grain>
<name>name</name>
<version>Version</version>
<title>Title of the package</title>
<doc>A brief description</doc>
<doc_url>Documentation URL</doc_url>
<url>Download URL</url>
<license>License model</license>
<icon>Icon image: base64-encoded, 64x64 max, PNG preferred</icon>
</salt-grain>
...
<include>URL to include other index files into this one</include>
...
</salt-mine>
You can include other repositories - specifically the default one - into a
custom XML file. This allows extending the public index with local packages.
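For instance, a local index that lists one in-house grain and pulls in a public index might look like this (all names and URLs below are placeholders, not real locations):

<salt-mine>
  <salt-grain>
    <name>mycompany/layout_utils</name>
    <version>1.0</version>
    <title>In-house layout utilities</title>
    <doc>Internal helper macros</doc>
    <url>http://fileserver.example.com/klayout/layout_utils</url>
  </salt-grain>
  <include>http://webserver.example.com/klayout/public_index.xml</include>
</salt-mine>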
When the package manager is opened, KLayout will download the index from the public Salt.Mine service by default. You can set the KLAYOUT_SALT_MINE
environment variable to a different URL which makes KLayout use another dictionary
service, i.e. one inside your own organisation. This service can be any HTTP server
that delivers a package list in the same format than the Salt.Mine package service.
The URL can also be a "file:" scheme URL. In this case, KLayout will download the
list from the given file location.
When installing a package, KLayout will simply download the files from the URL given
in the package list. KLayout employs the WebDAV protocol to download the files.
This protocol is spoken by Subversion and GitHub with the subversion bridge. The
latter requires a simple translation of the original Git URL's to obtain the
subversion equivalent. | https://www.klayout.de/doc-qt4/about/packages.html | CC-MAIN-2018-26 | refinedweb | 1,244 | 57.06 |
Episode 175 · February 27, 2017
Implement the decorator pattern with the Draper gem in your Rails application
What's up guys, this episode we're talking about the draper gem, how you can use it to add decorators to your rails app. Now in the last episode I explained what decorators are, how you use them and how you build them from scratch and why you would want to use them. So we're going to talk about implementing them with Draper in this episode. So the reason why you might want to use Draper over your own implementation is that it's actually integrated super deeply into rails, but that could also potentially be a problem, becuase if you want to upgrade your rails app, then you are probably waiting along for draper to get upgraded to be compatible with your rails version. If we scroll down here to the installation instructions, we just need to add Draper to our Gemfile. Now if you're using rails 5, you want to grab the prerelease version of draper, becuase the ActiveModel serializers XML dependency is not included in rails 5, so if you were to install the latest stable version, that won't work with rails 5 yet. So this now depends on active_model_serializers and everything should be good if you use that. So go down here, we'll paste that in, and we want to make sure we get that pre-release, so let's grab that one and paste that in. Then we can go into our terminal and run
bundle install.
As you can see right thee, Draper is designed to be heavily integrated to rails, so you will get things like that where if rails changes, then your draper gem is going to have to change, or you're going to have to use a version from GitHub, and it's not going to be near as stable as building your own decorators where they're going to have a less deep integration with rails and you have more control over that way. But there are some great benefits of using this gem, such as whenever you generate a resource, it will automatically create a model and a decorator for you, but you can also generate a decorator with the command line which is great.
rails generate decorator
And we'll try and recreate what we wrote in the last episode. So here we get a test for the user decorator, and we also get the decorator in the exact same location as we saw before, so let's take a look at that and see what's different.
This time, we have
Draper::Decorator that we inherit from, and that's going to provide us all of the methods specific to Draper, such as the initialize so that we can take the object like the user automatically, and it creates the initialize method, and stuff like that, so we don't have to do any of that. It will also take care of the helpers for us, so we get that view contex that we had to pass in ourselves. We get all of that access to that via the helpers method. So helpers, or the alias for it, called H, can allow us to create those content tags, or link_to or anything we want inside of the decorator here, and we already have that all set up for us, which is nice.
Let's talk about how we instantiate the decorator, and then we'll talk about how we can use it, and how it's different from our own version. So if we go into our controller, let's imagine that we were logged in and we had a current_user object. This could be decorated by just saying
current_user.decorate and this will work for any sort of ActiveRecord model. You might say
user.first.decorate and that would give you a user decorator back, so it just looks at the class name of this object, whatever you give it and call decorate on, look at that class, it will say: Oh, this is a user, therefore we need a user decorator, and we initialize a new one of those objects and we give that back to you and there you go, you have that decorated object.
You can see this in the console if we run
User.first.decorate, this should give us a user decorator object back, and we can see that the variable inside of our decorator is pointing to that user that we pulled out. That means that this is pretty much doing exactly the same thing as if you were to say:
User.decorator.new(User.first), that's just a helper method that they add to ActiveRecord::Base so that you can go in and call the decorate method on any of your individual objects. So that works well, and you can also use this option if you would like to specifically pass in that class. So this works just as well. Now for collections, it's a little bit different, but it also goes to ActiveRecord::Base, so if you have ActiveRecord relation, you can say the same thing. So where as before, we can actually say:
def index @users = Users.all @users.decorate end
And as long as @users or whatever you call decorate on is an ActiveRecord relation, this will work just fine. So if we go into the console, and we say
User.all.decorate, we're going to get a Draper collection decorator back. So this has wrapped all of that. Now if you were to do an array, that's not going to work, so if you were to grab all the users and convert it to an array, this is an array object, it is not an ActiveRecord relation so the Draper helper method is not available here, but what you can say:
UserDecorator.decorate_collection(User.all.to_a)
and do the same thing if you have an array rather than an ActiveRecord relation, so that will work as well in case you were doing something like that where you need to go and filter and sort your results using pure ruby and you end up with an array, you can absolutely use this for those as well. What this means is that for our show views and our index views, we can just assign the decorators to the instance variable as a replacement for our normal version of this, and that can be more fluid that way. And this can be done the same way if you do decorators from scratch, you just have to remember that you're operating with decorators. Now be careful with this, because when you're doing an update, you don't want to update @user and do that on a decorator, you want to make sure you do that on a user model, and then at the very last moment, before you go to render a view, you want to then make the @user model or a user instance variable, a decorator and not a model. So what you would do here is something like this
app/controllers/users_controller.rb
def update user = User.find(params[:id]) if user.update(params) @user = user.decorate else end end
That way, you know that the only time you assign instance variables, you can make sure that they are decorators, so you know that you're separately working with the database when you were calling update and decorators when you're in your views, because you don't want to call update on a decorator because that's only designed to work with views and display that information in an interactible way, so you could pass through a decorator to do that, but that is not a good idea and that totally defeats the purpose of using decorators, so don't do that.
One of the ways that draper is designed to help combat that is to introduce a decorates_assigned method that you can put in your controller, and you could say here: user. And what would happen is you would operate on your user model just like normal, you would set your instance variables, and then in the views, if you ever wanted to call @user, you would not do that, you would actually use user. And of course in the views, there would be erb tags around that, and what that means is that this, when it gets accessed in the views, we wouldn't call this instance variable directly in the typical fashion of saying @user, we would call user which would be a helper method provided by the controller that this defines. So if we wanted to test this out, we can try to say users here, and our index action which we have defined with the decorators, we can actually skip the decorators, save this, and if we go and look at our users index, we want to change the instance variable to call that helper method that gets defined there, and if this works, then we will see in our browser the exact same output, but all of these will have been decorators that were automatically decorated by Draper, so that's really nifty to have that built in. that way you don't have to worry about doing the decoration yourself, and if you do your own decorators from scratch, you can actually build that little helper method as well, because in your controller there, this is effectively just defining a user's method here, and saying
def users @users.decorate end
and that's about it. So this behind the scenes is doing a pretty simple thing, but it's already done for you, and then ends up cleaning up your controllers so that there's no real knowledge of decorator happening, it's kind of automatically taken care of for you by accessing the right methods and never using instance variables anymore, so this is pretty good. I like that, I like that a lot, so that gives us that ability, and if we wanted to, we can go into the view and we can say index.html, and let's do a
console here, and this should give us that console at the bottom, and we can access the user's function, and if we go up to the top you will see that we get a draper collection decorator which hast the object which has the ActiveRecord relation, and at the very end, we can see that there is a whole big list of decorators that we get. So we get user decorator and the object for each one of those, and it's basically doing that map that we did whenever you call that on an association. So the decorator stuff works really nicely and isn't obtrusive in any way which is what I like a lot, so we can learn a lot about if you wanted to build your own decorator stuff in your own rails app, you could go and replicate all of this stuff, you just have to define those helper methods to be compatible. So all of that handles our controller's concerns about the decorators, which is a very important thing, we have to make sure that we assign the decorators the right time, and we interact with them at the right places, and this is a really good job of that, I'm really impressed with this solution because it makes it very seamless and doesn't end up changing our code hardly at all. In the views, it does a litle bit, but that's it, so we don't use instance variables anymore but that is it. Now when you want to implement one of your decorators, for example we want to implement that full_name method that we had before, and we have access to that object or alternatively you can call it model, but it is actually named object internally, so you can use that, and you'll see that if you inspect the user decorator, and so here you can call it
app/decorators/user_decorator.rb
class UserDecorator < Draper::Decorator
delegate_all
def full_name "#{object.first_name} #{object.last_name}" end
end
and update our views to print out those names. And go to our views and test this out, and we see that we get the full names and that is combined for us wonderfully. Now, one of the benefits that you get from the inheritance from Draper decorator is that if you wanted to, you can use the method *missing* that it implements to delegate all those calls to your object. So when you call *first_name* or *last_name*, that is basically going to throw an error except that error gets caught by the method *missing* function that draper decorator implements, and then it's going to attempt to call tha on your user in this decorator. So it will call whatever object that you passed in, and then try to call it on there, and that way, you can end up writing this code similarly as if it was inside of the model, and you don't have to write any of your specific delegate methods like we did before. So you can get rid of this `delegate_all`, and you would lose that functionality, and you ca do the `delegate :first_name, :last_name, to: :object` so you could do an explicit version of this, or you can also use the *delegate_all* which will use the method *missing* implementation. So here we refresh again and we are now using that *method missing* implementation that will automatically pass those over to the object. Now I'm going to paste in the two methods that we had before for the staff badge and the moderator badge, and you'll remember that we passed in the *view_context* variable here, and that gives us access to all of the views, helpers and everything like that, and we can do the exact same thing by saying `helpers.whatever` and that's going to give us the exact same thing, so it's pretty much the exact same implementation there, so you can say `h.content_tag` so you have a nice short method there, and here we can say `user.mod_badge` and `user.staff_badge`, so with that implemented, we can refresh and we will get those badges back just like we did before, and our implementation is almos exactly the same. One interesting thing I want to point out here, I wouldn't really recommend this, but it is kind of nifty that you can say `include Draper::LazyHelpers` and this is actually going to do a similar delegate thing in a sense where all of these methods for your helpers will get mixed into this decorator and you can say *content_tag* directly without specifying that helpers or h. ahead of that, and so you can kind of write these as if you're both at the same time inside of your model and inside the helpers, which is interesting, but I really wouldn't recommend that approach because it's important for you inside of the decorator to know the differences between which methods are helper methods and which ones are user methods, or whatever model you were decorating those methods, so it's important to separate those, and you're also not including that a giant amount of helpers directly inside of your decorator. So draper gives us a lot of nicities here, but what about the case of pagination? 
This is one of those things where pagination is actually kind of a view thing, but it's also just as equally database thing because you have to worry about your limits and your offsets, so if you said you want 10 results per page, then you need to limit to 10, but you may be on page 5 and you need to skip the first 40 or 50 records and then take the next 10, and so your views need to know a lot about the database on how to calculate those, and if you were using a gem like *will_paginate* or *kaminari* you need to know how to integrate that with your decorators, so do you pass this through your decorators or do you not? and there's a couple options that you can do with this. So let's install *will_paginate* here, so I've added that to my Gemfile, and I've already run bundler to install it, so we need to go into our controller first **app/controllers/users_controller.rb** ```ruby class UsersController < ApplicationController decorates_assigned :user, :users def index @users = Users.all.paginate(page: params[:page]) end end
Just so that it knows what the default page is, and then we can go into our views and add the will_paginate call, and we have to pass in the users into this, which one do we pass in? Do we do @users or do we use the users decorators? Well really, as long as you're decorating at the very last moment, then you can use your @users assigned variable and know that you're operating with your database models, and so this is never interacting with Draper, and that's probably the way that you want to do this. So you would skip draper entirely for your pagination, and you're going to be able to refresh, and you will have your pages and everything will work just like normal because you aren't actually even using decorators here whatsoever. Now you could pass in decorators, in case there was some reason you needed to, but if you refresh this page, you're going to see that you're going to get a methods missing, such as total_pages, because your Draper's collection decorator is the object that you have there, it's a collection decorator, and it needs to implement some methods in order to be compatible with will_paginate.
Draper's docs have some information about this, and you actually have to go and build another decorator, called PaginatingDecorator and then later on you can say: This will be the collection decorator class. So let's go build this out and show you how that works. Inside our decorators folder, we want to create a new one,
app/decorators/paginating_decorator.rb
class PaginatingDecorator < Draper::CollectionDecorator delegate :current_page, :per_page, :offset, :total_entries, :total_pages end
And you'll notice that we have to do this decorator because there's two things that we've decorated. Number one, is that we can decorate a relation, so a set of records, we can have that we decorate, and then we also have an individual record. So a set is actually a bunch of individual records that are decorated but the collection is actually what will paginate and kaminari you're going to care about, not the individual ones, which means that we can't implement these methods inside of the individual decorator, we have to put it in the collection decorator, and so we have to define this class to specifically delegate these methods, and there is no delegate_all that we want to do here for the collection, because it's a collection, it isn't an individual record. But once that is done, and you've saved it, you can grab the
self.collection_decorator_class snippet here, and we can paste that inside of our user decorator at the bottom and keep that all nice and hidden away, and that should do the trick for us, we can get rid of this example, and if we go back to our browser, it all works again, and we are now using pagination through our decorators. Now of course, there's almost no reason that you would need to do that, but for some reason, if you didn't have the access to the models directly, with the instance variables, then you can use this as a solution for paginating your decorated collection. Now we've already gone on very long, so I'm going to cut this off here, but there's a lot of other cools stuff that Draper can do out of the box, such as deecorating associations. You might have a blog post, and you might have an association for comments; well, when you access the decorated blog post, and you call comments on it, it's actually going to call the association normally but you can also use decorates_association inside of your decorator so that when you call comments, it will actually call Draper around those comments, and then comments will be decorated as well so you can have that automatically connected and set up so that you can issue those associations in your views just like we normally would, but they would automatically be decorated, and so you can trust that you're always working with decorators in your views, and that's pretty nifty, I like that. So there's some other stuff you can do, there's finders there's contact stuff, and I would encourage you to check that out in the README if you want to learn more about it, but I think that Draper does a pretty good job of implementing this, but as you might have noticed, we got a very very long ways into building something just like Draper and we hardly wrote any special code to pull that off, so you can definitely build your decorators without a gem and do a really really easy job of implementing that. There are a lot of nicities that Draper gives you, once you get into more advanced decorator stuff that you might want to use the gem for fo you have that already ready to go when you get around to that. I hope you enjoyed this episode, let me know in the comments below, like this video if you enjoyed it and want to see more design pattern stuff, and I will talk to you in the next one. Peace
Transcript written by Miguel
Join 18,000+ developers who get early access to new screencasts, articles, guides, updates, and more. | https://gorails.com/episodes/decorators-with-draper | CC-MAIN-2018-47 | refinedweb | 3,711 | 51.96 |
."
gee... (Score:5, Insightful)
The only real crime here is that we've let ourselves be suckered by them for as long as we have.
Re:gee... (Score:4, Funny)
No, that's Government. (Wait, there's a difference?)
Re:gee... (Score:4, Funny)
That's like saying there's no difference between the organ grinder and his trained monkey. Of course, there is a difference. One of them dances around, makes monkey noises, and steals stuff from you for the benefit of the other.
Re:gee... (Score:2)
You should watch century of the self [bbc.co.uk] if you get the chance. It lays out how the psyche of people have subtly being manipulated for both commercials as policital reasons.
The documentary shocked me as I've never thought it would've been as well defined and with as clearly defined "goals".
Re:gee... (Score:3, Insightful)
But regardless of the fact that ANY software producer will hype their product (As I'm sure you've seen by reading
Re:gee... (Score:3, Insightful)
Nod32. Know it, love it.
You may laugh, but there have been several times I've had people on Linux forwards me "jokes" with Windows viruses attached.
Then that is the fault of a clueless email admin. I've setup many email servers, and I don't think a virus has ever made in past that point coming in or going out. It's quite simple really, which prompts me to call the admins in questio
Re:gee... (Score:2)
Re:gee... (Score:2)
but you know what, the entire industry isn't corrupt, there are at least 8 competing adware companies, and yes they ALL try to collect personal data, they ALL try to make the ads pay the bills. Some companies try to do it the right way. they keep the software running on their own servers, and their own products EG yahoo. some companies try to squeeze a little more out of the bottom line, and offer 'sweet deals' to opens source communities.
Re:gee... (Score:2)
Every year Symantec has a critical flaw in their software, so someone can actually be SAFER without Norton on their computer, and a
Bad title! (Score:5, Insightful)
Re:Bad title! (Score:5, Funny)
Re:Bad title! (Score:5, Funny)
Title is chillingly apropos (Score:4, Insightful)
Not really...after all, these firms have absolutely no interest in eliminating the problem, but only in treating the symptoms. That's why they continually endorse an OS that is legendary for its security holes, while spreading FUD about more secure alternatives like *nix and MacOS, which have a chance of actually fixing the underlying problem.
Re:Title is chillingly apropos (Score:4, Interesting)
What bugs me about the big guys is that they've become such gigantic products. They cause as many problems with their bloat as they fix, and they still don't fix everything (especially where Ad/Spyware is concerned). And this, of course, makes them REALLY not want to fix the underlying issue: people would start noticing that their computer starts up twice as fast and generally runs much better without some cyclopean anti-everything program.
Symantec Client Security started out as an OK little product. At the time, I was very impressed that its UI was so clean. Now, they're a complicated amalgams of firewall, AV, anti-spyware, Cuisinart and dishwasher. While I realize that they sell integration, there's no reason that integration need entail poor usability and baffling complexity. I once tried to get FTP to work on a relative's computer. I found that in Norton there was no firewall rule for FTP anywhere (or it was named something weird), yet it was blocking all traffic. My only option was to completely disable their firewall (and people get pretty mad when you tell to disable something they paid for.
The reason there's such a high pressure to integrate, of course, is that these guys make big bucks off of huge corporate licenses. Many IT or business development people I've talked to have said that they won't put anything except Norton on a desktop. I can see their point, because only dealing with one company means less IT and B2B overhead. And from Norton/Symantec's point of view, if they didn't offer a fully integrated solution, then somebody else would and they'd lose the client. So, they acquire every technology they possibly can and haphazardly jam it into their suite.
While I'm posting, I will admit that the article is least partially true. At my company [robotgenius.net], we were somewhat embarassed to admit that we were sad when the first really apocalyptic adware site we'd found went offline. This wasn't because we wanted to drum up sales, but rather because they were a great test case for our technology.
Re:Title is chillingly apropos (Score:2)
Re:Title is chillingly apropos (Score:3, Interesting)
Re:Title is chillingly apropos (Score:2)
How so? When replying, please consider that I'm Joe Sixpack, armed with the root password, just enough smarts to install stuff and not enough smarts to not install bad stuff.
Re:Title is chillingly apropos (Score:4, Interesting)
I put it this way: Windows' application integration is built on a base of executing as instructions anything it finds which can possibly be executed. Documents and help files have embedded controls to be executed by the system, to name just one example. MS has learned that this is dangerous behavior, but their ability to move away from this model is severely hampered by the need to maintain compatibility, even basic functionality, with a mountain of installed base.
Good point about "Eulaware" (Score:3, Insightful)
There are operating systems that can protect against that threat. They're not mainstream in design, and neither Linux nor OS X is among them.
>please consider that I'm Joe Sixpack
Joe Sixpack
Re:Good point about "Eulaware" (Score:2)
Ok, so I didn't mean that *I'm* Joe Sixpack, I meant something along the lines of "Explain to me how Linux or OS X can prevent me from screwing my machine over. While doing so, assume that I have the root password and am Joe Sixpack..."
*I* am actually a developer with 7 years commercial experience who's been using a variety of different computer systems over the last 23 years, from my humble little Sinclair ZX Spectr
Re:Good point about "Eulaware" (Score:2)
Not really. Consider that Firefox has had many drive-by exploits available for it, and nothing stops you installing software on Linux without root then altering startup scripts/gconf/kconfig/session manager to ensure it's always loaded. From there it's trivial to do many things, including (in the unlikely event you care) getting the root password.
Re:Good point about "Eulaware" (Score:2)
Examples? I'd really like to see scum-ware persistently infect a RAM based PuppyLinux runtime. On that note, users are going to download crap, it's what users do. However, the scum-ware author ***KNOWS*** the OS layout for Win/OS-X, there's little flexibility, they can be 99% certain when estimating the fs/lib layout that what they need is there. On Linux, that's
Re:Good point about "Eulaware" (Score:2)
What'll really blow your mind is when you realize that his UID is actually 5 digits.
Re:Good point about "Eulaware" (Score:2)
Linux protects the user better than Windows from that on at least 2 different ways: 1) It normaly comes with the dancing cursors and weather forecasting apps included, so the user won't be that tempted to install them. 2) Most software doesn't have a EULA*, so we can teach Joe Sixpack to be sispicious of software that shows it.
There are also 2 unrelated advantages: 1) Linux DEs don't ask confirmation every time for every stupid action, so the user gets used to read dialog windows. 2) Most document formats
Re:Good point about "Eulaware" (Score:2)
Re:Good point about "Eulaware" (Score:2)
Well, my wife doesn't have admin priv. on her OS-X box, so I don't have to worry too much about her installing things she shouldn't. The fact that the box is very usable for a non-admin user does help with resisting viral attacks.
Re:Title is chillingly apropos (Score:2)
"...more secure alternatives like *nix and MacOS, which have a chance of actually fixing the underlying problem." How so? When replying, please consider that I'm Joe Sixpack, armed with the root password, just enough smarts to install stuff and not enough smarts to not install bad stuff.
Well, both of those OS's have some architectural advantages, like not needing to run network services for local actions, that make automated compromises less common. They both tend to be more responsive to vulnerabilities
Re:Title is chillingly apropos (Score:2)
Sounds familiar, hmm, where have I heard that business plan before?
Not a big coincidence that the anti-malware firms are doing so well, when their business model mimics that of the (consistent) market darlings for the last two decades, big pharma.
Re:Title is chillingly apropos (Score:2)
Symantec's CEO, John Thompson, made comments that everyone ought to buy a Mac.
(Disclaimer: I work for Symantec. My opinions are my own and not necessarily reflective of my employer.)
Re:Title is chillingly apropos (Score:3, Insightful)
Not really...after all, these firms have absolutely no interest in eliminating the problem, but only in treating the symptoms.
So look who is motivated to fix the problem. MS isn't, they aren't losing market share and they've introduced their own anti-virus to milk the situation. So who is? Well alternate OS vendors are (as you mentioned), since they can use it as a differentiator, but most of them don't really have a malware problem so they haven't put much effort into a better solution. Big, enterprise
Re:Bad title! (Score:3, Insightful)
Re:Bad title! (Score:3, Insightful)
I think there's a dubious market for malware. (Okay, so my old boss might be the type to commission a new virus, but most aren't.) The anti-malware markets need a continuous set of threats to be taken seriously and though they don't write the malware themselves, it's integral to their success in business.
Advice from industry experts giving 'analysis' such as "The smarter virus writers won't deploy their security compromises until after Vista a
Re:Bad title! (Score:2)
Good thing they don't get paid for editing Slashdot. Oh, wait...
wtf? (Score:5, Insightful)
If this guy doesn't know that Symantec == Norton, I don't think I have any use for his opinions on malware companies.
Readers (Score:3, Insightful)
Re:Readers (Score:2)
Re:wtf? (Score:2)
money (Score:5, Insightful)
Re:money (Score:2)
Re:money (Score:2)
people DO believe this stuff (Score:5, Insightful)
Agree or disagree with the points of this article (I mostly agree), there is an elephant in the middle of the room everyone ignores.
From the article (emphasis mine):
"Only the stupidest dolts in the universe?" Aside from being a little insulting, it's just not true. Many intelligent people believe these reports simply because, as the article points out elsewhere, because it is repeated the lie becomes truth.
People trust "media" to the extent they don't have expertise in some subject matter. What other result would you expect? There are too many topics, too many reports, and too many things demanding attention, general consumers and lay people, appropiately (though naively), rely on integrity of reporting bodies to filter that part of their world not their specialty(ies).
Reporting organizations (e.g., CERT) have an ethical responsibility to normalize and make canonical data issued for general consumption.
Unfortunately the technology world today is Microsoft's sandbox, and seemingly if anyone wants to play, be it media, competition, and lately even government, Microsoft seems to be able to control the rules. Sigh, again.
Re:people DO believe this stuff (Score:2)
Re:people DO believe this stuff (Score:2)
What should we expect? We should expect that if something is important to you, you at least do some research into it. It isn't like the inform
Re:people DO believe this stuff (Score:2)
I don't mean to be semantic, but would not a truly "intelligent" being be able to be able to tell the truth from propaganda, exagerations, and lies? As in your mental capabilities has been fully developed to discern social engineering?
Otherwise, they wouldn't they wo
Re:people DO believe this stuff (Score:2)
Sure it's true. Assumption: the population considered includes only people who use computers and know that Linux/Unix/MacOS/Windows exists.
The stupidest dolts could be half the population if you wanted. No quantity of 'dolts' is specified, so for all it matters, the stupidest dolts could include all but the smartest dolt.
The real implication, however (and this is the part I love) is that it's logical
somewhat OT about media reliability (Score:2)
I think that's a critically important observation, and if you extrapolate a little you get to an uncomfortable realization: people look for news that reaffirms what they want to hear. With the proliferation of news sources, you can find specialized news feeds, and end up with a situation where hundreds of thousands of Americans believe we found WMD's in Iraq -- because the repeated me
Re:Mod parent up, please. (Score:2)
Demand more from the IT press. (Score:2)
Joe Barr admitted that he had done that with the claims about Apple, but he then spent time doing the research.
And the "journalists" that "report" on the IT industry have a long and colourful history of bias and willful ignorance. There is no excuse for that. And it is those reports by those "journalists" that kee
Gadzooks! (Score:5, Funny)
Oh ****! Quick, someone tell me how to upgrade to this "Windows" thing!
Re:Gadzooks! (Score:3, Funny)
There's a simple reason for the difference between general perception (at least on Slashdot) and the raw statistics above. If a vulnerability is found in openssh, it counts as a flaw for Linux, for BSD, and for any Unix flavours that ship openssh by default. If a vulnerability is found in the ssh client that ships with Windows... oh wait.
perceived standard? (Score:5, Insightful)
Re:perceived standard? (Score:3, Interesting)
Wait, why on earth would an industry that exists to correct flaws in another product lead consumers away from that product? If AV companies encouraged people to ditch Windows, actually be careful on the internet and take other measures to avoid malware, and people listened to th
Re:perceived standard? (Score:3, Interesting)
The only situation where this is not the case is where the customers
Can they be trusted? (Score:2, Funny)
OK if I install this spyware in your computer and just backup your credit card numbers for you without your permission?
Thanks.
Oh, no, that's ok, you don't have to answer. We'll do it anyway.
I trust some of the anti-malware industry (Score:3, Interesting)
Seriously, however, I never buy any peice of security software without looking for testing results and reviews.
Also, I will never use any product that makes false positives intentionally (to scare the user into using/buying the product). That's just asking for trouble.
Re:I trust some of the anti-malware industry (Score:2, Interesting)
Hmm, you make an interesting point. Ever notice that when you run one of these expensive security suites and you don't get any meaningful results, you always get a couple of "dangerous" cookies found, just to keep the results above zero?
The logic must be: Don't tell them it's clean. Use fud if necessary.
Fear and Protection Rackets (Score:5, Insightful)
If there was a solid infrastructre that was trusted the whole industry would disappear. The industry is based on the Microsoft Operating system and its designed vulnerabilities. The industry would not exist without the flaws in the Microsoft Operating systems and workflow. If Microsoft fixed its stuff, or if people migrated to a solid infrastucture the industry would disappear. I am sure the industry as a whole is looking at Linux as a big threat, it could destroy their whole reason for existing.
As a whole the Linux client is not a market for this industry. They need to make Linux/OSS users feel the threat so we will by their product.
Re:Fear and Protection Rackets (Score:2)
TFA is on the mark in terms of the vacuous ethics of computer security software press releases and scare mongering but that doesn't mean that solid, secure operating systems would elliminate the need for anti-malware products. Maybe I'm wrong but I don't think the patching mechanisms for Linux distros and Macs or are so fantastic and/or t
Re:Fear and Protection Rackets (Score:2)
AV for MacOSX: $59 -- Why? (Score:5, Informative)
Noticed a copy of AntiVirus for Mac OSX @ CompUSA last week. $59! Three questions:
1) Who buys this stuff?
2) Why so much?
3) Why?
To my knowledge there is only one virus in the wild for OSX and it never really made an impact. I understand that AV for Mac scans for the billions of Windows viruses, but considering that the Mac is extraordinarily unlikely to become infected, it's similarly unlikely a Mac will pass on a virus. I know it's part of being a good net citizen, but ultimately scanning email is your own responsibility. I don't scan for Linux or mainframe viruses, or iPaq scripts. Why should I scan for Windows viruses?
Or am I missing something?
Re:AV for MacOSX: $59 -- Why? (Score:2)
Re:AV for MacOSX: $59 -- Why? (Score:5, Interesting)
Some argue that it's not bad to have a security infrastructure in-place, even if theres very little self-propagaiting malware out there. It makes one "ready" to deal with the inevitable threats when they are discovered. It makes one confident that they will be the first ones to recognize and recover from any future infection.
That seems like a good idea until you realize that to install and remove malware means the software will need to operate with very high permissions. Installing programs like Clam or Symantec Antivirus are possibly giving hackers more potential ways to exploit your system than if you hadn't installed the anti-malware to begin with. I think there actually have been low-level, local security holes found based soleley on security software that the user has installed.
On the Mac, I think there is more harm than good done right now with anti-virus products. It's almost like feeling you must hang that lucky pair of fuzzy dice in your new car because you think it helps you not have accidents, when in fact their interference in your driving might be what causes you to have one.
Re:AV for MacOSX: $59 -- Why? (Score:2)
Re:AV for MacOSX: $59 -- Why? (Score:2)
You're thinking about practical and effective anti-virus measures. Think stupider.
Some organizations have a high-level policy that says that all machines must have up-to-date anti-virus software, and until you can certify that this is the case, you can't use the corporate network, because your MAC address will not be on the router's whitelist.
You can bribe the IT guys (probably more than $60), you can hack your MAC to an allowed one (possible MAC collision, lose your job if y
Source for the most effective AV (Score:3)
#include
#include "OStest.h";
main(){
if((is_OSX() || is_Unixey()) && !has_slashdot_flames()){
}else if(is_MS_OS())
What a stupid title (Score:2, Insightful)
Of course it can't! It's the friggin' malware industry! Their business plan centers around installing stuff on your PC that you don't want on there and didn't ask for, and abusing your PC without your permission for their own purposes. Why on God's green earth would someone like that be trusted?
Re:What a stupid title (Score:2)
Work on your public image (Score:5, Interesting)
idiots, dolts, crap. There is a lot of name calling in there. He sounds like a teenager complaining about her friends. I don't claim to be the most articulate person around, but this guy shouldn't be writing articles. People judge you by the words you use. I got so distracted by his name calling I had to post before finishing the article, and I'm wondering if I'll be able to reach the end or take his side given the tone.
Re:Work on your public image (Score:2)
Sure, it's an opinion piece, but name-calling isn't called for.
Re:Work on your public image (Score:2)
In the news (Score:5, Funny)
- Doctors poor at telling hypochondriac when there is nothing wrong with them.
- Car companies not reliable source of information about bicycles and public transit.
- Lawyers cannot be trusted to create legislation that doesn't criminalize everything.
- Politicians appear to be lying or misleading to get elected.
- Wolves unwilling to notify sheep in advance of attack.
Re:In the news (Score:2)
Hrm....
Anti-malware should stay in the people's hands (Score:2)
their motivation (Score:2)
There will always be takers. So by default we can say that the malware business will remain rotten to the core until it is not only made illegal, but also prossicuted ruthlessly until w
Old Story (Score:2)
Can the ****** industry be trusted? (Score:3, Insightful)
Yes, Rotten To The Core (Score:3, Insightful)
Anyone who is serious about security doesn't run anti-virus because it does not fix the root issues of vulnerability.
Thy key is that anti-virus can be sold on fear and, since the average computer user doesn't understand that there is nothing mystical about viruses and their vectors are easily identified, fear sells a product that actually makes your computer less secure and less usable. That said, there are some good free programs out there, like ClamAV and Spybot Search & Destroy to help you as a system administrator check out suspicious files or clean up a mess on a specific case by case basis (the latter only applying to Windows).
Re:Yes, Rotten To The Core (Score:2)
Too pejorative (Score:5, Informative)
I got to that point in the article and remembered the red ink on a paper I wrote in grad school, wherein the professor said, "too pejorative to be taken as an objective analysis of the topic."
In all things academic or reporting, if you do not really have it, then at least fake objectivity....
Counterpoint (Score:2, Insightful)
Things are never as extreme as they seem - there are good & bad guys (and in-between guys, and girls too!
Then too, we know that t
Spam filter claims are mostly bogus (Score:3, Informative)
[...] extremely low false positive rate, with less than one in one million messages being a false positive. [ironport.com]
A few years ago, Bayesian classification seemed a promising way to filter spam. [messagingpipeline.com]
[...] best recorded levels of accuracy have included 99.991% by one avid user (2 errors in 22,786) and 99.987% by the author (1 error in 7000), which is ten times more accurate than a human being! [nuclearelephant.com]
That translates to better than 99.984% accuracy, which is over ten times more accurate than human accuracy [sourceforge.net]
In the game of cat and mouse between spammers and anti-spam vendors, spammers and hackers quickly developed new techniques to "fool" the Bayesian filtering software. [spamwash.com]
File these under UFO sightings.
No! Stay vulnerable. Please. (Score:4, Insightful)
No, not really (Score:3, Insightful)
OTOH, no industry can be trusted. If it wasn't for some tireless public-minded advocates the auto industry would probably have us still driving deathtraps with engines designed in the 1950s or the pharma industry, for example, would have us growing three heads while being charged 50 bucks for a paracetamol.
Re:No, not really (Score:2)
Um, you're asking this of a bunch of people reading slashdot on company time...
Conspiracy? Maybe. Stupidity? Definitely. (Score:4, Insightful)
Can the anti-malware industry be trusted? Can microsoft be trusted? Can the IT industry be trusted?
One thing that all of this overlooks, is that it doesn't take malice for hysteria to spread.
premise: people fear what they don't understand.
premise: most people don't understand computers.
I have a friend who fancied himself a home-taught computer expert. Armed with TweakXP, a few anti-virus tools, and a small handful of other gadgets, he was always offering to "optimize" and "fix" his friends' computers.
And lo! and behold, every single computer that was ever brought to him had "a major virus" or "a serious trojan" problem on it. Of course, there is so much media hype about viruses (and people's bad browsing habits) that this was fairly believable. However, the mere consistency of his diagnoses started making me suspicious....
Sure enough, after a few in-depth conversations, it turns out that he was using bad virus-detection software: some unknown little program that he assumed was "better than all the rest" because it "always found more" (it didn't occur to him that most of them were false positives); and moreover, it turns out he didn't even have a clear understanding of what a "virus" is.
But let me tell you: he had a stream of people in and out of his apartment that were absolutely convinced that ANY time there was EVER a problem with their machine, it MUST have been because of a virus.
Re:Conspiracy? Maybe. Stupidity? Definitely. (Score:2)
But it's definitely arguable that malice (or at least extreme greed, to the point of not caring about the truth, security, safety or anything else but profit) is behind the *starting* of these rumours. Then the computer-ignorant masses believe and spread the beliefs, because, after all, the security experts said so!
Why I don't trust them at all (Score:2)
Re:Why I don't trust them at all (Score:2)
Unfortunately if they had made a public announcement about it we would probably only remember them as the brave former company that stood up to Sony and were finally and posthumously found to be correct all along - so they had to talk to Sony first in a long slow process. Commercial malware is only going to be dealt with properly by those who don't have anything to l
NO (Score:2)
Open Source software, which by definition is approaching perfection like 1-e**(-k*x) approaches unity, will never, ever be subject to malware. It's the very antithesis of everything the anti-malware industry is about.
and other fine questions (Score:2)
Hypocracy (Score:2)
The AV crowd ain't the bad guys (Score:2)
Does the car industry exaggerate the additional safety an extra airbag on every corner of the car provides?
Does the low-carb food industry exaggerate the effect low-carb food has on your weight?
Does the perfume industry exaggerate the amount of stink you produce if you don't sprinkle their 10-bucks-a-shot stuff under your arms?
Can ANY industry be trusted that they don't blow the effect of their product (or the threat of "what if you don't buy it") out
Re:The AV crowd ain't the bad guys (Score:2)
The fatal flaw... (Score:2)
I like to think of the example of Rusty Jones. In the northeast, road salt destroys cars. Back in the 70s and 80s, as soon as someone would buy a car, they would drive it to Rusty Jones and get their rustproofing service. As soo
Re:The fatal flaw... (Score:2)
Kaspersky Lab is not the anti malware industry. (Score:2)
Re:complete lame if you ask me. (Score:2)
Re:job security (Score:3, Interesting)
This SHOULD be +5 Funny! (Score:2)
Now, on to malware on Linux/Unix, and root-kits. Sure, it CAN happen, and it is quickly dealt with. I simply use hashes on files, and off-site them (tripwire).
Periodically, the hardware is refreshed with the files corresponding to the correct hash. Which ensures that the MAXIMUM time a root
Re:Got it right about SANS (Score:2) | http://slashdot.org/story/06/06/08/1416255/can-the-malware-industry-be-trusted?sdsrc=nextbtmnext | CC-MAIN-2015-11 | refinedweb | 4,828 | 61.36 |
Adds switch blocks to the Python language.
This module adds explicit switch functionality to Python
without changing the language. It builds upon a standard
way to define execution blocks: the
with statement.
from switchlang import switch def main(): num = 7 val = input("Enter a character, a, b, c or any other: ") with switch(val) as s: s.case('a', process_a) s.case('b', lambda: process_with_data(val, num, 'other values still')) s.default(process_any) def process_a(): print("Found A!") def process_any(): print("Found Default!") def process_with_data(*value): print("Found with data: {}".format(value)) main()
Simply install via pip:
pip install switchlang
You can map ranges and lists of cases to a single action as follows:
# with lists: value = 4 # matches even number case with switch(value) as s: s.case([1, 3, 5, 7], lambda: ...) s.case([0, 2, 4, 6, 8], lambda: ...) s.default(lambda: ...)
# with ranges: value = 4 # matches first case with switch(value) as s: s.case(range(1, 6), lambda: ...) s.case(range(6, 10), lambda: ...) s.default(lambda: ...)
Looking at the above code it's a bit weird that 6 appears
at the end of one case, beginning of the next. But
range() is
half open/closed.
To handle the inclusive case, I've added
closed_range(start, stop).
For example,
closed_range(1,5) ->
[1,2,3,4,5]
dict?
The biggest push back on this idea is that we already have this problem solved. You write the following code.
switch = { 1: method_on_one, 2: method_on_two, 3: method_three } result = switch.get(value, default_method_to_run)()
This works but is very low on the functionality level. We have a better solution here I believe. Let's take this example and see how it looks in python-switch vs raw dicts:
# with python-switch:))
Now compare that to the espoused pythonic way:
# with raw dicts while True: action = get_action(action) switch = { 'c': create_account, 'a': create_account, 'l': log_into_account, 'r': register_cage, 'u': update_availability, 'v': view_bookings, 'b': view_bookings, 'x': exit_app, 1: lambda: set_level(action), 2: lambda: set_level(action), 3: lambda: set_level(action), 4: lambda: set_level(action), 5: lambda: set_level(action), '': lambda: None, } result = switch.get(action, unknown_command)() print('Result is {}'.format(result))
Personally, I much prefer to read and write the one above. That's why I wrote this module. It seems to convey the intent of switch way more than the dict. But either are options.
if / elif / else?
The another push back on this idea is that we already have this problem solved. Switch statements are really if / elif / else blocks. So you write the following code.
# with if / elif / else while True: action = get_action(action) if action == 'c' or action == 'a': result = create_account() elif action == 'l': result = log_into_account() elif action == 'r': result = register_cage() elif action == 'a': result = update_availability() elif action == 'v' or action == 'b': result = view_bookings() elif action == 'x': result = exit_app() elif action in {1, 2, 3, 4, 5}: result = set_level(action) else: unknown_command() print('Result is {}'.format(result))
I actually believe this is a little better than the raw dict option. But there are still things that are harder.
else) and will result in a runtime error (but only if that case hits).
update_availabilitywill never run because it's command (
a) is bound to two cases. This is guarded against in switch and you would receive a duplicate case error the first time it runs at all.
Again, compare the if / elif / else to what you have with switch. This code is identical except doesn't have the default case bug.)) | https://openbase.com/python/switchlang | CC-MAIN-2021-39 | refinedweb | 580 | 67.45 |
OpenGL Discussion and Help Forums
>
OpenGL Developers Forum
>
OpenGL under Windows
> Convert 3D model file to .h format
PDA
View Full Version :
Convert 3D model file to .h format
Kunjesh
09-23-2016, 11:08 AM
How can i convert my 3D Model file into ".h" format ?
MensInvictaManet
10-03-2016, 03:06 PM
^ Not sure what this guy is trying to say
-----
Here's how to convert the 3D model into an H file:
1) Start a new program that loads a 3D model into memory
2) Output the memory to code text. For example, if I had a 3D model structure that took a Vector3f (a struct that has three floats) for each vertex, I would output the vertices of the 3D model data as code that looks like this: "AddVertex(Vector3f(10.0f, 12.1f, -1.5f));"
3) Make the .h file have the code you've outputted... you can even have the exporter make the entire .h file if you want by including text for the header includes and functions and such... have the code in a function with the name of the model.
4) Include the .h file in your project, and run the function... something like "MammothTank();"
Does this make sense? I've done this before on a project of mine, it's not too difficult when you understand the basic idea.
So if you had a file that had a cube model, the vertex list might look like this in data:
{ 0.0f, 0.0f, 0.0f },{1.0f, 0.0f, 0.0f),{1.0f, 1.0f, 0.0f},{0.0f, 1.0f, 0.0f},{ 0.0f, 0.0f, 1.0f },{1.0f, 0.0f, 1.0f),{1.0f, 1.0f, 1.0f},{0.0f, 1.0f, 1.0f}
And you'd take in that data and have it outputted as the following text
#include "ModelManager.h" // Whatever your model management system is... I'm imagining a system that holds and allows retrieval of Model3D class entities, and includes Vector3.h or whatever your vertex data structure is
void MammothTank()
{
AddVertex(Vector3(0.0f, 0.0f, 0.0f));
AddVertex(Vector3(1.0f, 0.0f, 0.0f));
AddVertex(Vector3(1.0f, 1.0f, 0.0f));
AddVertex(Vector3(0.0f, 1.0f, 0.0f));
AddVertex(Vector3(0.0f, 0.0f, 1.0f));
AddVertex(Vector3(1.0f, 0.0f, 1.0f));
AddVertex(Vector3(1.0f, 1.0f, 1.0f));
AddVertex(Vector3(0.0f, 1.0f, 1.0f));
}
And you would also add the rest of the data in the same way... Let me know if the general idea doesn't make sense to you. I'm going very general since you didn't mention what 3D model type you are using or how you want the data to be taken in within the .h file.
Powered by vBulletin® Version 4.2.3 Copyright © 2018 vBulletin Solutions, Inc. All rights reserved. | https://www.opengl.org/discussion_boards/archive/index.php/t-198882.html | CC-MAIN-2018-09 | refinedweb | 484 | 72.46 |
The QMenuBar class provides a horizontal menu bar. More...
#include <QMenuBar>:
Constructs a menu bar with parent parent.
Destroys the menu bar.
Returns the QAction at pt. Returns 0 if there is no action at pt or if the location has a separator.
See also addAction() and addSeparator().
Reimplemented from QWidget::actionEvent().
Returns the geometry of action act as a QRect.().().
Reimplemented from QWidget::setVisible().
Reimplemented from QWidget::sizeHint().
Reimplemented from QObject::timerEvent().
This signal is emitted when an action in a menu belonging to this menubar is triggered as a result of a mouse click; action is the action that caused the signal to be emitted.(). | http://doc.trolltech.com/main-snapshot/qmenubar.html | crawl-003 | refinedweb | 107 | 62.75 |
You can write normal Java servlets in Groovy (i.e. Groovlets).
There is also a
Here's a simple example to show you the kind of thing you can do from a Groovlet.
Notice the use of implicit variables to access the session, output & request.
import java.util.Date if (session.counter == null) { session.counter = 1 } println """ <html> <head> <title>Groovy Servlet</title> </head> <body> Hello, ${request.remoteHost}: ${session.counter}! ${new Date()} <br>src </body> </html> """ session.counter = session.counter + 1
h2 Setting up groovylets
Put the following in your web.xml:
Then all the groovy jar files into WEB-INF/lib. (You should only need to put the groovy jar and the asm jar). | http://docs.codehaus.org/pages/viewpage.action?pageId=2778 | CC-MAIN-2014-41 | refinedweb | 115 | 78.35 |
Wikiversity:WikiProject
At the Wikipedia website there is a "WikiProject" pseudonamespace that places all content development projects inside the "Wikipedia:" namespace. At Wikipedia, all content development projects have page names that start with "Wikipedia:WikiProject". Rather than use a pseudonamespace for content development projects, Wikiversity uses the "School:" and "Topic:" namespaces. For details about content development projects at Wikipedia, see w:Wikipedia:WikiProject.
Namespaces
Namespaces are a feature of the MediaWiki software that is used to power the Wikiversity website. The Wikiversity website has a large number of webpages, and namespaces provide a system for organizing them; without namespaces, these pages would be disorganized and hard to work with. Wikiversity namespaces allow webpages with related functions to be neatly separated into compartments. You can think of each namespace as a compartment that holds a large group of functionally related pages.
Project namespace[edit]
One of the "meta-level" namespaces is called the "project namespace". The name of this namespace (project) does not mean that every wiki collaboration ("project") has to be in the "project namespace". Think of the Wikiversity "project namespace" as pages that are relevant to the entire wiki project. There is one particular wiki editing project that is the concern of the Wikiversity "project namespace"; it is the Wikimedia project called "Wikiversity". The "project namespace" at Wikipedia is also called the "Wikipedia:" namespace and all of its pages have names that start with the prefix "Wikipedia:". At Wikiversity, the "project namespace" is called the "Wikiversity:" namespace. This page has the page name "Wikiversity:WikiProject" and is a page in the "Wikiversity:" namespace. Like other pages in the "Wikiversity:" namespace, this page is concerned with how the entire Wikiversity project is organized.
Wikipedia puts its content development projects in their "project namespace", but Wikiversity does not use Wikipedia's clumsy "WikiProject" pseudomnamespace approach for forcing content development projects into a single namespace. Wikiversity allows content development projects for specific academic topics to exist in a special namespace that was invented at Wikiversity.
Special namespaces for content development[edit]
If you can accept the idea of having a new namespace specifically for content development projects, then you still might ask: why does Wikiversity have two namespaces for content development projects? The main reason that Wikiversity has both a "School:" namespace and a "Topic:" namespace is because wiki community members naturally create hierarchical systems to organize content development projects. At Wikipedia there are both "higher level" "WikiProject" pages such as WikiProject Science and "lower level" "WikiProject" pages such as WikiProject Molecular and Cellular Biology.
Wikiversity has both a "School:" and a "Topic:" namespace so that the "School:" namespace can be for a relatively small number of "higher level" content development pages that function to organize a larger number of "lower level" content development projects for narrow academic topic areas. Wikipedia has a relatively narrow mission: to produce encyclopedia articles. Wikipedia has a smaller range of possible main namespace content than does Wikiversity. Wikiversity should come to have many more content development projects than does Wikipedia. The hierarchical system of "School:" and "Topic:" pages will become increasingly useful as the number of Wikiversity content development projects continues to grow (see the list of "Topic:" pages).
Wikiversity currently has mostly school pages corresponding to university schools. In the future there will be many other types of Wikiversity schools that organize many kinds of content development projects in a large number of different ways. Ultimately, the Wikiversity "Topic:" namespace may become nearly as large as Wikipedia's main namespace. When Wikipedia's main namespace got too large, the "Portal:" namespace was invented. For the same reason that it makes sense to have the "Portal:" namespace to provide "directory" pages for the main namespace, it makes sense to have the "School:" namespace to provide "directory" pages for the "Topic:" namespace.
Summary[edit]
At Wikiversity, content development projects can be called "department", "center", "institute", "WikiProject", "content development project" or anything else that participants feel is a useful way to refer to a content development project. At Wikipedia, the content development projects ("WikiProjects") are all pages in the "Wikipedia:" namespace. At Wikiversity, the content development projects are mostly in the "Topic:" namespace while some "higher level" projects exist on the "School:" namespace and function to organize large groups of related "Topic:" pages.
See also[edit]
- Schools - Wikiversity Schools organize related groups of more specialized content development projects.
- Topics - Wikiversity Topics are content development projects for specific narrow topic areas.
- Namespaces - description of the roles of all Wikiversity namespaces. | https://en.wikiversity.org/wiki/Wikiversity:WikiProject | CC-MAIN-2018-43 | refinedweb | 948 | 50.46 |
From: Beman Dawes (bdawes_at_[hidden])
Date: 2006-05-23 16:31:46
"Christopher Kohlhoff" <chris_at_[hidden]> wrote in message
news:20060522134258.91630.qmail_at_web32609.mail.mud.yahoo.com...
> Hello all,
>
> I have been thinking about how to reconcile Boost.Asio error
> handling with the system error classes defined in the TR2
> filesystem proposal N1975. I have some concerns which I'll
> outline in the context of how I plan to use them.
>
> The current plan is as follows:
>
> - Synchronous functions would by default throw a system_error
> exception.
>
> - An overload of each synchronous function would take an
> error_code& as the final argument. On failure these functions
> set the error_code to the appropriate error. On success, the
> error_code is set to represent the no error state. These
> functions do not throw exceptions.
So far, so good.
> - Callback handlers for asynchronous functions would take an
> error_code as the first parameter. For example:
>
> void handle_read(error_code ec)
> {
> ...
> }
That makes sense to me. It wouldn't be useful to throw an exception because
of the asynchronous nature of the control flow, so supplying a error_code to
the callback gives it a chance to deal with the error, or ignore it if
desired. Was that your analysis?
> - Since many error codes in socket programming are "well-known"
> and often tested for explicitly, they need to be defined
> somewhere. The current errors in the asio::error::code_type
> enum would be replaced by constants of type error_code:
>
> namespace boost {
> namespace asio {
> namespace error {
>
> const error_code access_denied = implementation_defined;
> const error_code message_size = implementation_defined;
> ...
>
> } // namespace error
> } // namespace asio
> } // namespace boost
>
> (Note that in practice boost::asio::error might actually be a
> class with static members rather than a namespace).
>
> The issues I have run into so far are:
>
> - The error_code class seems to assume that there is a single
> "namespace" for the system_error_type values. This is not
> necessarily the case for the errors in socket programming.
>
> For example on UNIX most socket functions use errno values,
> but the netdb functions (gethostbyname and friends) use a
> different set of error values starting from 1, and the
> getaddrinfo/getnameinfo functions use another set of error
> values starting from 1.
>
> Would it be possible for the error_code class to have a
> category value (which is probably an integer with
> implementation-defined values) to allow the creation of unique
> error_code values for each type of system error? A category
> value would also allow implementors and users to extend the
> system error classes for other sources of system error (e.g.
> SSL errors, application-specific errors, etc).
I haven't thought of the case of several "namespaces" or "errorspaces"
before, so please take what follows as an initial idea, not something cast
in concrete.
My initial thought is to leave error_code alone, since it should be fine for
most uses. For uses that need more information, such as asio, derive
asio_error_code from error_code, adding appropriate members. For example,
members to set/get the well known socket error codes.
That way users who don't care about the domain specific codes can just use
the error_code base member functions, while those that care about the
specific codes can use the derived asio_error_code functions.
Does that seem a bit cleaner than your approach of adding codes? What is
your reaction?
> - A common idiom in code that uses asio, when you don't care
> about a specific error type, is to simply treat the error as a
> bool:
>
> void handle_read(asio::error e)
> {
> if (e)
> {
> // take action because of error
> }
> else
> {
> // success
> }
> }
>
> Can the error_code class be made convertible-to-bool and also
> have operator! to support this style?
Yes, something like that would be very useful. I just made that mistake
(forgetting the error() function and assuming convetibility to bool) this
morning. The resulting error messages were classics of misdirection - they
went on about shared_ptr, of all things, not instantiating correctly! Took
me ten minutes to figure out the real problem.
So, yes. IIRC, there has been prior list discussion of the best way to
provide convertibility to bool. I'll google around and see if I can find it.
If anyone else remembers the discussion, please feel free to provide a
pointer or a summary.
> - Are these classes available in CVS somewhere so they can be
> reused by other boost libraries?
No. I was waiting for 1.34 to ship, but that is taking a long time so I'll
go ahead and update CVS head, hopefully tomorrow.
Thanks for trying the error_code approach with asio!
--Beman
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2006/05/105248.php | CC-MAIN-2019-22 | refinedweb | 771 | 56.05 |
Computation Graph¶
Computation Graph¶
- nnabla.forward_all(variables, bool clear_buffer=False, bool clear_no_need_grad=False, function_pre_hook=None, function_post_hook=None)¶
Performs a forward propagation up to variables specified as the 1st argument. See also
forward.
- Parameters
Clear the no longer referenced variables during forward propagation to save memory. This is usually set as True in an inference or a validation phase. Default is False. Note that starting variable and destination variable of the input graph will not be cleared, regardless of their
persistentflag. All intermediate variables will be cleared unless set explicitly as
persistent=True. For example,
forward_all([h_i, y], clear_buffer=True)
will clear all intermediate variables between
h_iand
yunless set explicitly as
persistent=True, but
h_iand
ywill not be cleared regardless of their
persistentflag.
clear_no_need_grad (bool) – Clear the unreferenced variables with need_grad=False during forward propagation. True is usually used when calling this during training. This is ignored when clear_buffer=True.
function_pre_hook (callable) – This callable object is called immediately before each function is executed. It must take
Functionas an input. The default is None.
function_post_hook (callable) – This callable object is called immediately after each function is executed. It must take
Functionas an input. The default is None.
Example
import numpy as np import nnabla as nn import nnabla.parametric_functions as PF # Create a graph which has two outputs x = nn.Variable.from_numpy_array(np.array([[1, 2], [3, 4]])) y = PF.affine(x, 4, name="y") z = PF.affine(x, 8, name="z") # Execute a forward propagation recursively up to y and z nn.forward_all([y, z], clear_buffer)
- nnabla.no_grad(no_grad_=True)[source]¶
No gradients for the whole network.
No gradients are required when creating a network, such that when the forward pass is executed, all intermediate buffers except for the leafs in the network are gone at the same time, resulting in memory optimization.
This is useful for example when an output of a pre-trained network is used for an input to another network, where the first pre-trained network does not need to be fine-tuned, but the other network is optimized.
Example:
with nn.no_grad(): output0 = <Network0>(<input0>) output1 = <Network1>(<input1>, output0) loss = <Loss>(output1, <ground_truth>) loss.forward(clear_no_need_grad=True)
This context also works in the dynamic mode.
with nn.auto_forward(), nn.no_grad(): output0 = <Network0>(<input0>)
Note
When working with the static network, the need_grad property of the input (e.g., input image) must be False and do not forget to add
<root>.forward(clear_no_need_grad=True); otherwise, all intermediate buffers are not gone as expected. | https://nnabla.readthedocs.io/en/latest/python/api/computation_graph.html | CC-MAIN-2022-27 | refinedweb | 416 | 51.65 |
Skins kapoot? --DeanGoodmanson, Fri, 07 May 2004 07:21:24 -0700 reply
Noticed on the preference pages that there's no longer an option to choose a different skin. Guess this feature didn't make it past the experimental phase?
Skins kapoot? --simon, Fri, 07 May 2004 09:33:27 -0700 reply
It was just a zwiki.org demo.. in moving from a skinned folder to a btree folder we lost that ability, the other skins were out of date/not working and not used afaik.
spam ? --simon, Sun, 09 May 2004 07:58:53 -0700 reply
Someone keeps adding these to the front page on zwiki.org & zopewiki. I think they are spam - can someone out there tell more ? .com .com
my stability problems --BillSeitz, Mon, 10 May 2004 14:09:31 -0700 reply
see bottom of BillSeitz:ZwikiFreebsdStabilityProblems
there were 53 crashes yesterday: error codes 6, 10, and 11
Ideas? (problem started mid-April - had not made any code changes - maybe hitting a threshhold of size?)
FIT work noted on Daily-URL --DeanGoodmanson, Tue, 11 May 2004 06:42:27 -0700 reply
Congrats!
my stability problems --simon, Tue, 11 May 2004 07:47:42 -0700 reply
Hi Bill.. you saw my reply, right ? I just happened to pass by yesterday.
my stability problems --BillSeitz, Tue, 11 May 2004 09:59:44 -0700 reply
your reply when/where? (imeme has boosted my memory allocation, and we'll see if that changes things after a day...)
Gadzooks, more referrals --DeanGoodmanson, Tue, 11 May 2004 22:34:02 -0700 reply
Todd Ogasawara over at O'Reilly blogs has noted Zwiki a one of his favorite Zope products. link ... and that post was also Daily-URL'd!
Gadzooks, more referrals --Simon Michael, Wed, 12 May 2004 06:41:45 -0700 reply
Hurrah! Thanks Dean.
link banning implemented --simon, Thu, 13 May 2004 23:00:19 -0700 reply
Repeated link spamming is becoming a problem on this and other wikis.. text edits containing banned urls - bbs dot qqfans dot com etc. - will now raise an error. These are regular expressions in a
banned_links lines property on the folder.
link banning implemented --DeanG, Mon, 17 May 2004 12:07:35 -0700 reply
Nice
I've seen a number of "erased" and deleted pages lately. I'd like to post these page names as request for status somewhere. Could you direct me to a page? Latest page is AdvancedEditOptions?
Top google hit for
dtml-in --DeanGoodmanson, Mon, 17 May 2004 15:48:41 -0700 reply
Nice Zwiki summary of the joys of wiki and journey of dtml by [Paul Hammond]?
Master Catalog joy --DeanGoodmanson, Mon, 17 May 2004 15:59:33 -0700 reply
I've figured out how to copy a Zwiki catalog to Zope's root server, then re-catalog and find every wiki page within all my SubWikis...and searches work.
Q: How can I get this catalog to automatically find new pages as they are added?
Master Catalog joy --Simon Michael, Mon, 17 May 2004 18:36:45 -0700 reply
If all goes well.. put your master catalog's id in a SITE_CATALOG property on the root folder and all wikis should use that one.
(runs for cover)
Top google hit for
dtml-in --Simon Michael, Mon, 17 May 2004 18:43:23 -0700 reply
Yes that is nice, thank you chief scout. We should link it somewhere.. not sure where right now..
link banning implemented --Simon Michael, Mon, 17 May 2004 18:45:51 -0700 reply
I've seen a number of "erased" and deleted pages lately. I'd like to post these page names as request for status somewhere. Could you direct me to a page? Latest page is AdvancedEditOptions?
Sorry, how do you mean ? Not RecycleBin? ?
--befuddled
link banning implemented --DeanGoodmanson, Mon, 17 May 2004 21:37:15 -0700 reply
Primarily pages that show up in the RecentChanges? that when viewed, have no content...and when browsing through the diff screens show no sign of vandalism.
Master Catalog joy --DeanGoodmanson, Mon, 17 May 2004 21:39:28 -0700 reply
That sounds OK..but I'm hesitant to give up the existing wiki-specific Catalogs. I'd like the local and master to be updated.
Rss2 page type --DeanG, Tue, 18 May 2004 10:55:53 -0700 reply
Simon - Would you turn off EPoz? on the RSS2 page? I'd like to add the summary() method within the description element but cant edit it in plain text mode.
Inspired by Ian's comment here
Rss2 page type --Simon Michael, Thu, 20 May 2004 13:37:20 -0700 reply
DeanG wrote:
Simon - Would you turn off EPoz? on the RSS2 page? I'd like to add the summary() method within the description element but cant edit it in plain text mode.
Now I see what you mean. I think that page has to be HTML mode to work. See for a different solution using a helper dtml method.. this is the most up-to-date zwiki rss work AFAIK.. I am not clear where it all is at though.
tracker tweak --simon, Sun, 23 May 2004 23:34:33 -0700 reply
I've removed the big OPEN/TOTAL counts that were at top-right of IssueTracker. Better or worse ?
tracker tweak --simon, Thu, 27 May 2004 13:50:23 -0700 reply
Also the tracker defaults to open and pending, rather than recent. Better or worse ?
alt-r shortcut --DeanGoodmanson, Thu, 27 May 2004 14:26:15 -0700 reply
Jut a quick note that the embedded RSS reader in Firefox (and Mozilla?) uses Alt+r to open the side bar. I'm not sure it's a concern, but wanted to mention it.
Re: zwiki i18n/l10n coordination --Simon Michael, Thu, 27 May 2004 14:28:27 -0700 reply
Simon Michael wrote:
How do we reach everyone involved in zwiki i18n/l10 when we need to ? How do we scale as more people get involved ?
Aside from IRC, my suggestion at the moment is that everyone should subscribe to or at least check it once in a while, if they're not already subscribed to the whole wiki. That's where I would usually post things of interest.
I'm open to any other ideas, if you have one, please post there so we can all discuss.
I'll cc this to DevDiscussion? for anyone I've missed.
Thanks! -Simon
Re: Subclassing/extending ZWikiPage --Simon Michael, Thu, 27 May 2004 18:30:28 -0700 reply
Edoardo ''Dado'' Marcora wrote:
Wouldn't it be possible by eliminating all the dependencies on self.meta_type in ZWikiPage (a other scripts) such as:
def pageObjects(self): """ Return a list of all pages in this wiki. """ return self.folder().objectValues(spec=self.meta_type)
Maybe ZWiki should consider not only ZWikiPages? but also any product that would implement a ZWikiPage-like interface. Maybe the filtering should be done on a property (defined in the interface) like .isZWikiAware or something like that.
I am not much of a programmer (I am a neurobiologists with little programming experience) but you would knowk for sure if this is at all possible.
Thanx for your interest,
Dado
----- Original Message ----- From: "Simon Michael" <simon@joyful.com> To: "Edoardo ''Dado'' Marcora" <marcora@caltech.edu> Sent: Thursday, 27 May 2004 8:23 Subject: Re: Subclassing/extending ZWikiPageHi Edoardo.. if you know how, we are certainly interested. Cheers.
editing errors --Bob McElrath?, Fri, 28 May 2004 12:37:43 -0700 reply
So someone ran /clearCache on one of my pages last night (Simon, was it you)? Which resulted in a latexwiki-generated page error. (I am having difficulty tracking down exactly what in that page generated the error...)
It seems that when a page generates an error, editing that page does not occur properly. After editing I am redirected to a page like (note trailing slash), which gives the error again. Going to (no trailing slash) gives me an old version of the page.
This is all related to pre-rendering, I think. Have others observed this? I will try to track this down this weekend perhaps.
editing errors --simon, Fri, 28 May 2004 12:49:21 -0700 reply
No, not me.. unless I was sleep-surfing again..
I am also being bugged by page rendering errors lately, eg from unquoted dtml. Yours was not a dtml error I take it..
editing errors --Bob McElrath?, Fri, 28 May 2004 12:58:44 -0700 reply
simon [zwiki-wiki@zwiki.org]? wrote:
No, not me.. unless I was sleep-surfing again.. =20 I am also being bugged by page rendering errors lately, eg from unquoted dtml. Yours was not a dtml error I take it.
No it was a regex recursion error in my code, triggered by some (as-yet-unknown) text on the page..
A more general error mechanism is probably in order. Consider how LatexWiki does it: when there is a latex error, the page is quoted (<pre>) with an error in red at the bottom.
What if we expand the concept of page-rendering errors and treat them all uniformly, whether it's a zwiki/latexwiki code error, dtml error, latex error...catch the exception and display source with an error message, rather than letting zope give its (useless) error page. Zwiki error pages should preserve the zwiki header/footer so that useful links like "edit" and "diff" are still there.
In some cases it may be possible to mark up the source to indicate the position of the error. (This is certainly possible for latex...dtml too? If you catch a dtml exception does it give a line number?)
Furthermore, comments should be treated as seperate documents so that a comment can't hose an entire page. How about edit links for each comment? Would that be cumbersome? Maybe editing the main page would show only the page and not comments?
editing errors --simon, Fri, 28 May 2004 13:04:38 -0700 reply
No it was a regex recursion error in my code, triggered by some (as-yet-unknown) text on the page.
Ah.. freebsd python bites again ? Maybe you can tweak the offending regexp to make it less expensive (or do without it).
A more general error mechanism is probably in order. Consider how LatexWiki does it: when there is a latex error, the page is quoted (< pre>) with an error in red at the bottom.
Agreed, I'll look at how you do it.
So far, storing comments as part of the page text is the simplest thing that could possibly work.. it simplifies searching for example. They do get individually rendered though, so we could just do similar error handling for each comment.
Re: Subclassing/extending ZWikiPage --simon, Fri, 28 May 2004 13:11:51 -0700 reply
Maybe ZWiki should consider not only ZWikiPages?? but also any product that would implement a ZWikiPage-like interface (eg in pageObjects)
Yes that seems like a way to go. What would we mean by the Zwiki interface, though. I think defining that would be the first task. The zope 3 zwiki implementation might be a good starting point. I have no concrete need for this at the moment, and it's hard to envision what it would bring in practice - meanwhile I'll support anyone that wants to work on it.
Re: Subclassing/extending ZWikiPage -- Fri, 28 May 2004 16:48:22 -0700 reply
In my mind I envision ZWiki to become more of a versatile knowledge-base which could contain anything (structured or not) ranging from bibliographic references to information about genes and proteins (I am biologist... therefore I am little biased ;). I envision the ability of subclassing ZWikiPage as a way of supporting structured documents/objects (e.g., bibliographic references) which also supports specialized behavior (e.g., downloading of bibliographic information from online databases). Any type of content that can be described as a Zope product could use the ZWiki-way of linking/organizing resources which I found very clever... the problem is that not everything can be efficiently represented as free-form text. Just my two cents. (p.s. I will think about the interface and get back to you!). Thanx :) | http://zwiki.org/DevDiscussion200405?subject=tracker%20tweak&in_reply_to=%3C20040523233433-0700%40zwiki.org%3E | CC-MAIN-2019-39 | refinedweb | 2,030 | 74.59 |
- NAME
- SYNOPSIS
- DESCRIPTION
- DB_HASH
- DB_BTREE
- DB_RECNO
- THE API INTERFACE
- HINTS AND TIPS
- COMMON QUESTIONS
- HISTORY
- BUGS
- AVAILABILITY
- SEE ALSO
- AUTHOR
NAME
DB_File - Perl5 access to Berkeley DB
SYNOPSIS ;
DESCRIPTION.
Please note that this module will only work with version 1.x of Berkeley DB. Once Berkeley DB version 2 is released, DB_File will be upgraded to work with it.
Berkeley DB is a C library which provides a consistent interface to a number of database formats. DB_File provides an interface to all three of the database types currently supported by Berkeley DB.
The file types are:
- DB_HASH.
- DB_BTREE
DB_RECNO allows both fixed-length and variable-length flat text files to be manipulated using the same key/value pair interface as in DB_HASH and DB_BTREE. In this case the key will consist of a record (line) number.
Interface to Berkeley DB".
Opening a Berkeley DB Database File".
Default Parameters.
In Memory Databases
Berkeley DB allows the creation of in-memory databases by using NULL (that is, a
(char *)0 in C) in place of the filename. DB_File uses
undef instead of NULL to provide this functionality.
DB_HASH..
DB_BTREE.
Changing the BTREE sort order new compare function must be specified when you create the database.
You cannot change the ordering once the database has been created. Thus you must use the same compare function every time you access the database.
Handling Duplicate Keys.
The get_dup() Method => []
Matching Partial Keys 'bval' Option.
A Simple Example
Here is a simple example that uses RECNO.
Extra Methods:
- $X->push(list) ;
Pushes the elements of
listto the end of the array.
- $value = $X->pop ;
Removes and returns the last element of the array.
- $X->shift
Removes and returns the first element of the array.
- $X->unshift(list) ;
Pushes the elements of
listto the start of the array.
- $X->length
Returns the number of elements in the array.
Another Example:
Rather than iterating through the array,
@hlike this:
foreach $i (@h)
it is necessary to use either this:
foreach $i (0 .. $H->length - 1)
or this:
for ($a = $H->get($k, $v, R_FIRST) ; $a == 0 ; $a = $H->get($k, $v, R_NEXT) )
Notice that both times the
putmethod was used the record index was specified using a variable,
$i, rather than the literal value itself. This is because
putwill return the record number of the inserted line via that parameter.
THE API INTERFACE:
The methods return a status value. All return 0 on success. All return -1 to signify an error and set
$. implement the tied interface currently make use of the cursor, you should always assume that the cursor has been changed any time the tied hash/array interface is used. As an example, this code will probably not do what you expect:
.
- $status = $X->get($key, $value [, $flags]) ;
Given a key (
$key) this method reads the value associated with it from the database. The value read from the database is returned in the
$valueparameter.
If the key does not exist the method returns 1.
No flags are currently defined for this method.
- $status = $X->put($key, $value [, $flags]) ;
Stores the key/value pair in the database.
If you use either the R_IAFTER or R_IBEFORE flags, the
$keyparameter will have the record number of the inserted key/value pair set.
Valid flags are R_CURSOR, R_IAFTER, R_IBEFORE, R_NOOVERWRITE and R_SETCURSOR.
- $status = $X->del($key [, $flags]) ;
Removes all key/value pairs with key
$keyfrom the database.
A return code of 1 means that the requested key was not in the database.
R_CURSOR is the only valid flag at present.
- $status = $X->fd ;
Returns the file descriptor for the underlying database.
See "Locking Databases" for an example of how to make use of the
fdmethod to lock your database.
- $status = $X->seq($key, $value, $flags) ;
This interface allows sequential retrieval from the database. See dbopen for full details.
Both the
$keyand
$valueparameters.
HINTS AND TIPS
Locking Databases";
Sharing Databases With C Applications ;
The untie() Gotcha
If you make use of the Berkeley DB API, it is very strongly recommended that you read "The untie Gotcha" in perltie. perltie ...
COMMON QUESTIONS
Why is there Perl source in my database?.
How do I store complex data structures with DB_File?
Although DB_File cannot do this directly, there is a module which can layer transparently over DB_File to accomplish this feat.
Check out the MLDBM module, available on CPAN in the directory modules/by-module/MLDBM.
What does "Invalid Argument" mean?
You will get this error message when one of the parameters in the
tie call is wrong. Unfortunately there are quite a few parameters to get wrong, so it can be difficult to figure out which one it is.
Here are a couple of possibilities:
Attempting to reopen a database without closing it.
Using the O_WRONLY flag.
What does "Bareword 'DB_File' not allowed" mean?.
HISTORY
- 0.1
First Release.
- 0.2
When DB_File is opening a database file it no longer terminates the process if dbopen returned an error. This allows file protection errors to be caught at run time. Thanks to Judith Grass <grass@cybercash.com> for spotting the bug.
- 0.3
Added prototype support for multiple btree compare callbacks.
- 1.0
DB_File has been in use for over a year. To reflect that, the version number has been incremented to 1.0.
Added complete support for multiple concurrent callbacks.
Using the push method on an empty list didn't work properly. This has been fixed.
- 1.01
Fixed a core dump problem with SunOS.
The return value from TIEHASH wasn't set to NULL when dbopen returned an error.
- 1.02added. Without it the resultant database file was empty.
Added get_dup method.
- 1.03
Documentation update.
DB_File now imports the constants (O_RDWR, O_CREAT etc.) from Fcntl automatically.
The standard hash function
existsis now supported.
Modified the behavior of get_dup. When it returns an associative array, the value is the count of the number of matching BTREE values.
- 1.04
Minor documentation changes.clean.
Reworked part of the test harness to be more locale friendly.
- 1.05
Made all scripts in the documentation
strictand
-wclean.
Added logic to DB_File.xs to allow the module to be built after Perl is installed.
- 1.06
Minor namespace cleanup: Localized
PrintBtree.
- 1.07
Fixed bug with RECNO, where bval wasn't defaulting to "\n".
- 1.08
Documented operation of bval.
- 1.09
Minor bug fix in DB_File::HASHINFO, DB_File::RECNOINFO and DB_File::BTREEINFO.
Changed default mode to 0666.
- 1.10
Fixed fd method so that it still returns -1 for in-memory files when db 1.86 is used.
- 1.11
Documented the untie gotcha.
- 1.12
Documented the incompatibility with version 2 of Berkeley DB.
- 1.13
Minor changes to DB_FIle.xs and DB_File.pm
- 1.14
Made it illegal to tie an associative array to a RECNO database and an ordinary array to a HASH or BTREE database.
-.
BUGS "CPAN" in perlmod.
SEE ALSO
perl(1), dbopen(3), hash(3), recno(3), btree(3)
AUTHOR
The DB_File interface was written by Paul Marquess <pmarquess@bfsec.bt.co.uk>. Questions about the DB system itself may be addressed to <db@sleepycat.com<gt>. | https://metacpan.org/pod/release/TIMB/perl5.004m3t2/ext/DB_File/DB_File.pm | CC-MAIN-2016-18 | refinedweb | 1,189 | 68.57 |
Vadim Gritsenko wrote:
> Looking at all this I see an opportunity to rewrite all SQL / LDAP /
> XMLDB / Include / I18N / Lucene / etc transformers as a single
> Dispatcher and a bunch of handlers, reacting on namespaced tags, and
> working on a SAX stream. What do you think?
>
> On the other side, it reminds me somehow about reactor pattern in the
> Cocoon 1 and that it was not good enough for some reason...
I had the exact same feeling when I saw the proposal and I would not be
happy about accepting such a donation without some previous
community-driven design phase that might outline the design problems of
the dispatcher concept.
-- | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200202.mbox/%3C3C64162B.8749A1E9@apache.org%3E | CC-MAIN-2017-39 | refinedweb | 109 | 52.53 |
Silverlight is the main application development platform for Windows Phone 7. In a previous tutorial we covered how to get your system set up for Windows Phone 7 development and then we developed a very simple Silverlight application that rotated a button around a grid. This article will introduce you to more advanced Silverlight features and will enable you to develop meaningful applications that display data in interesting and unique ways.
This article will introduce you to a number of intermediate-level Windows Phone 7 and Silverlight features including application resources, styles, data templates, and view navigation. You’ll take advantage of data binding and WP7 services that allow you to navigate between pages quickly and easily. You should have some familiarity of XAML and C# before beginning this tutorial.
Creating Your Project
In this tutorial, you’re going to create a simple Digg client that allows a user to browse stories by topic. You’ll take advantage of intermediate-level Silverlight and Windows Phone 7 features including application resources, styles, data templates, and navigation services. You’ll use data binding to display information from Digg and various WP7 services to allow users to get around your application.
To get started, make sure you have the latest Windows Phone 7 development tools installed on your computer. The tools were updated on July 12, 2010 so you may need to uninstall a previous CTP and install the tools Beta.
Open Visual Studio 2010 and click New Project in the left sidebar. In the dialog that pops up, select “Windows Phone Application” from the available templates and give your project a name like “SimpleDigg.” Make sure that the “Create directory for solution” toggle is checked and then click “OK.” Your setting should look like the following:
After your project is created, Visual Studio opens MainPage.xaml for editing. Close this file for now.
Creating Digg Data Classes
To access Digg’s data, we’ll use their official API. In particular, we’ll be using the story.getAll and topic.getAll methods. Example responses for each call can be found at the following URLs:
As you can see, story.getAll returns story items. Stories have a lot of data associated with them, but we’re going to focus on 4 pieces of information:
- Title
- Description
- Diggs
- Link
Let’s create the class to hold this data. In Visual Studio’s Solution Explorer (which is open by default in the right sidebar), right-click on your Project and choose “Add > New Folder”. Name this new folder
Digg. Right-click on your newly created folder and choose “Add > Class…”. Name your class
Story and click the Add button.
Visual Studio will open your new class for editing. Inside the class definition add four public properties like the following:
public string Title { get; set; } public string Description { get; set; } public string Link { get; set; } public int Diggs { get; set; }
Now, add the class that will hold Topic data. Right-click on your
Digg folder again and choose “Add > Class…”. Name your class
Topic and add the following properties when the file opens:
public string Name { get; set; } public string ShortName { get; set; }
At this point, you’ve created all the data classes you’ll need for this tutorial and are ready to markup the views needed for the rest of the application.
Creating Views
The SimpleDigg client has three different views that need to be created. They are:
- Topic List – Lists all topics on Digg
- Story List – Lists stories retrieved from Digg based on a particular topic
- Story Detail – Shows details about a selected story
Topics List
The Topic List will be the first screen that users see when they start your application. It comprises a list of topic names which, when one of the topic is clicked, leads to a list of stories in that topic. Since this will be the first screen that users see, we’ll go ahead and use the previously created MainPage.xaml file that already exists in the Project. Open MainPage.xaml and you should see a visual representation on the left and the markup for the view on the right.
Click on the text “My Application” in the visual representation and notice that a
TextBlock element in the XAML representation is highlighted. That
TextBlock has a
Text attribute currently occupied by the value “MY APPLICATION.” Change that value to whatever you want. I recommend “Simple Digg.” You’ll see that the visual designer updates to match your changes.
Now, repeat the process with the “page name” string. Click on the text, find the appropriate
TextBlock and modify the
Text attribute. This time, I recommend changing it to “Topics”. If you’ve done everything correctly up to this point, you should have a
StackPanel element that contains two @TextBlock@s, each with an appropriate value. The XAML looks like the following:
<StackPanel x: <TextBlock x: <TextBlock x: </StackPanel>
Now, you need to add the list container to your page. Open up the Control Toolbox (located on the left of Visual Studio) and drag a ListBox item into the big blank area on your page. You need to modify it to stretch the width and height of it’s container, so put your cursor in the XAML editor and modify the ListBox element to read like the following:
<ListBox Name="TopicsList" />
This markup removes all the styling that the visual designer introduced and renames the element so that you can access elements in it. At this point, you’ve completed the markup for the Topics List view and can now move onto the the other parts of the application
Story List
The story list view is very similar to the topic list. For organizational purposes, we’re going to put this view (and later, the Story Detail view) inside of a separate folder. Right-click on your project’s name in the Solution Explorer and choose “Add > New Folder.” Name the new folder
Views. Then, right-click on the
Views folder and choose “Add > New Item…” Select the
Windows Phone Portrait Page template and name it
Stories.xaml. Your dialog box should look like the following:
Now, as before, change the application title to “Simple Digg” and the page name to “Stories.” Next, drag a ListBox onto the blank space underneath your page title and modify it’s markup to look like the following:
<ListBox Name="StoriesList" />
At this point your story list view looks nearly identical to your topic list. The real differences will show up when you populate them with data items.
Story Details
The final view for your application is the Story Details view. The Story Details view will present the 4 pieces of data that we talked about earlier:
- Title
- Description
- Diggs
- Link
The number of Diggs and title will occupy the top of the view and the story description will follow underneath. After that, a link will will allow the user to navigate to the story in question if they wish.
As before, right-click on the
Views folder in your project and choose Add > New Item. Select the
Windows Phone Portrait Page template and name your new view
Story.xaml. Click Add and Visual Studio will create
Story.xaml and open it for editing.
Change the application title and page title textblocks to read “Simple Digg” and “Story” respectively. Now, drag a
StackPanel into the blank space underneath your page title. Drag another
StackPanel into the previous
StackPanel. This
StackPanel will contain the story title and Digg count. You want these items to align next to each other, so change the
Orientation property to
Horizontal.
Finally, drag a
TextBlock and a
Button into your first
StackPanel. The
TextBlock will contain the story description while the
Button will allow the user to visit the story source. You’re going to need to do some extensive property modification to these elements and, rather than run through them one by one, just make sure your markup looks like the following:
<StackPanel Name="StoryDetails"> <StackPanel Orientation="Horizontal" Name="StoryDetailsHeader"> <TextBlock Name="NumberDiggs" Text="10" /> <TextBlock Name="Title" Text="Story Title Goes Here" /> </StackPanel> <TextBlock Name="Description" Text="The story description will go here. This text will wrap, as you can see right here." TextWrapping="Wrap" /> <Button Content="Read Full Story" Name="Link" Width="200" /> </StackPanel>
You can see we’ve removed most explicit
Height and
Width properties and changed
Text and
Name properties to something a little bit more descriptive. It looks a bit ugly right now, but we’ll fix that up later. If you’ve got everything marked up corrrectly then your visual designer pane should look like the following:
At this point, the basics of all the necessary views are done. You can hit F5 to fire up the application to confirm that everything is working, but you’ll just get a blank screen with “Topics” at the top.
Customizing the Navigation Mapper
The next thing you need to do is make sure that you can direct users around your application. To do so, you’ll use Silverlight’s navigation mapping with a few simple rules. Open your project’s
App.xaml file and place your cursor inside of the opening
Application element and add a new namespace as follows:
xmlns:nav="clr-namespace:System.Windows.Navigation;assembly=Microsoft.Phone"
This references the Windows System Navigation namespace (a Silverlight feature) and allows you to use the various library classes that exist within it. Now, find the
Application.Resources element in
App.xaml and add the following elements:
<nav:UriMapper x: <nav:UriMapper.UriMappings> <nav:UriMapping <nav:UriMapping <nav:UriMapping </nav:UriMapper.UriMappings> </nav:UriMapper>
The code you just entered creates a variety of URI mappings for the views within your application. They correspond to the topics list, story list, and story detail views respectively. As you can see, Silverlight navigation mapping allows you to define query variables inside of your custom mappings. This will come in handy later when we go to actually populate data.
You’re not done with URI mapping, though. You need to tell your application to use this
UriMapper, so open the
App.xaml code behind by clicking the arrow next to
App.xaml and opening
App.xaml.cs. Inside of the
App method after the call to
InitializePhoneApplication() add the following statement:
RootFrame.UriMapper = Resources["UriMapper"] as UriMapper;
This statement tells your application to use the UriMapper you just defined in XAML for your phone app. Now, let’s start populating some data.
Populating the Topic List
The first thing we need to do is populate the Topic list. We’ll do this when the user first navigates to the
OnNavigatedTo method for the
MainPage.xaml. Place your cursor after the constructor and add the following code:
protected override void OnNavigatedTo(System.Windows.Navigation.NavigationEventArgs e) { base.OnNavigatedTo(e); WebClient digg = new WebClient(); digg.DownloadStringCompleted += new DownloadStringCompletedEventHandler(digg_DownloadStringCompleted); digg.DownloadStringAsync(new Uri("")); } void digg_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e) { }
You can see that inside of the
OnNavigatedTo method you create a
WebClient object, assign it an event handler for when a string is downloaded, and then instruct it to download the string from the Digg
topic.getAll method URL. We know that the string to be downloaded will be in XML format, so we need to make sure our event handler can parse the XML. For this purpose we’ll use the Linq libraries available in the .NET framework. Before we can utilize those library classes, though, we’ll have to add a reference to the library. Right-click on the “References” item in your Solution Explorer pane and choose “Add Reference…” From the list that pops up, select
System.Xml.Linq and click “OK.”
Now, you just need to fill in the event handler that you created. Change
digg_DownloadStringCompleted so it looks like the following:
void digg_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e) { if(e.Error != null) { return; } XElement topicXml = XElement.Parse(e.Result); var topics = from topic in topicXml.Descendants("topic") select new Topic { Name = topic.Attribute("name").Value, ShortName = topic.Attribute("short_name").Value }; TopicsList.ItemsSource = topics; }
First, you check to see if the download was completed successfully. If it was, then you parse the resultant string and generate a collection of topics using Linq to XML. If you’re interested, you can read more about Linq to XML at the official MSDN site.
Finally, you assign the
ItemsSource property of the
TopicsList to the topics you parsed out. If you see a squiggly line under Topic, then place your cursor after it, click the down arrow that appears under the word, and select “using SimpleDigg.Digg”. At this point, you’ve got your topics populated so fire up your phone emulator by pressing F5 and you should see something like the following:
As you can see, your list has been populated but the correct data is not being displayed. Let’s take care of that now.
Data Templates
Data template are one of the most powerful tools in your Silverlight toolkit. They allow you to define the markup that should be shown for arbitrary objects. At this point, we’ll define DataTemplates for Digg Topics and Stories. Open
App.xaml and place your cursor inside of the
Application.Resources element. Add the following element:
<DataTemplate x: <TextBlock FontSize="48" Text="{Binding Name}" /> </DataTemplate>
This DataTemplate provides contains a simple
TextBlock element which is bound to the
Name property of the object being displayed. If you remember, the
Digg.Topic class contains a
Name property which is set to the
name attribute returned from the Digg topics API call. Return to your
ListBox element. Add a new property
ItemTemplate to the
ListBox as follows:
ItemTemplate="{StaticResource TopicTemplate}"
This line of code instructs the application to use your previously created
DataTemplate resource to display the Topic objects that make up the
ListBox’s collection. If you press F5 and run your application, you’ll see that Topic names are properly displayed now:
Fetching and Displaying Stories
At this point we’re ready to start fetching stories per topic and listing them. First, we need to tell the application that when a topic title is tapped the application should navigate to the stories list. Open
ListBox element. Add the
SelectionChanged property and allow Visual Studio to create a new event handler. In
MainPage.xaml.cs, change your event handler so it reads something like the following:
private void TopicsList_SelectionChanged(object sender, SelectionChangedEventArgs e) { Topic topic = TopicsList.SelectedItem as Topic; NavigationService.Navigate(new Uri("/Topics/"+topic.ShortName, UriKind.Relative)); }
If you run your application now (by pressing F5), you can see that you navigate to the Stories page whenever you select a topic. Now, we just need to actually populate the story list and make them display appropriately. As we did earlier, we’re going to override the
OnNavigatedTo method to make that happen. Open
Views/Stories.xaml.cs and add the following code:
protected override void OnNavigatedTo(System.Windows.Navigation.NavigationEventArgs e) { base.OnNavigatedTo(e); String name; NavigationContext.QueryString.TryGetValue("Topic", out name); WebClient client = new WebClient(); client.DownloadStringCompleted += new DownloadStringCompletedEventHandler(client_DownloadStringCompleted); client.DownloadStringAsync(new Uri("" + name)); } void client_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e) { if (e.Error != null) { return; } XElement storyXml = XElement.Parse(e.Result); var stories = from story in storyXml.Descendants("story") select new Digg.Story { Title = story.Element("title").Value, Description = story.Element("description").Value, Diggs = Int32.Parse(story.Attribute("diggs").Value), Link = story.Attribute("link").Value }; StoriesList.ItemsSource = stories; }
A lot of this will look familiar. The only part that may look odd is retrieving the topic name. If you recall, you mapped
/Topics/{topic} to
/Views/Stories.xaml?Topic={topic}. That is, you allow the Topic query string variable to be passed in a friendly format. When we navigated from the topics list, we passed the topic shortname in the relative Uri. In the code above, when the stories list is being navigated to we retrieve this variable and use it to call the Digg API URL with a specific topic.
We know that if we fire up our application at this point we’re not going to get the kind of appearance we want for our story listing. Let’s define another DataTemplate to use in this view. Open up
App.xaml and add the following code to your
Application.Resources element.
<DataTemplate x: <StackPanel Orientation="Horizontal" Margin="5"> <Grid Background="Yellow"> <TextBlock FontSize="48" Foreground="DarkGray" Height="60" TextAlignment="Center" Text="{Binding Diggs}" VerticalAlignment="Center" Width="60" /> </Grid> <TextBlock FontSize="24" Margin="10,0,0,0" MaxWidth="400" Padding="5" Text="{Binding Title}" TextWrapping="Wrap" /> </StackPanel> </DataTemplate>
Now, open up
Views/Stories.xaml and modify your
ListBox element so it reads as follows:
<ListBox ItemTemplate="{StaticResource StoryTemplate}" Name="StoriesList" ScrollViewer.
Run your application by pressing F5 and click on a topic name. Wait a moment, and you’ll see your stories appear. The next thing we have to do is display story details on the detail page.
Displaying Story Details
In order to display story details, we need to first allow navigation to the story detail page and then we’ll handle displaying data. In the story list, we have a number of story items. When one of them is selected we want to store that
Story object somewhere and then use it on the story details page. To do so, we’ll add an event handler to the
SelectionChanged event as follows:
private void StoriesList_SelectionChanged(object sender, SelectionChangedEventArgs e) { PhoneApplicationService.Current.State["Story"] = StoriesList.SelectedItem; NavigationService.Navigate(new Uri("/Story", UriKind.Relative)); }
Here, you’re storing the selected story in the
PhoneApplicationService class’s
State property as recommended by the execution model best practices. If you have a red squiggly line under
PhoneApplicationService then place your cursor inside the word and then select the dropdown that appears and choose “using Microsoft.Phone.Shell”.
Now, we need to retrieve this on the other end. Open up your
Views/Story.xaml.cs and add the following code that overrides
OnNavigatedTo:
protected override void OnNavigatedTo(System.Windows.Navigation.NavigationEventArgs e) { base.OnNavigatedTo(e); Digg.Story story = PhoneApplicationService.Current.State["Story"] as Digg.Story; this.DataContext = story; }
Here, you intercept the navigation to the story details view, retrieve the story stored in the
PhoneApplicationService’s
State property, and then remove the story from the
PhoneApplicationService’s
State collection. You then set the
DataContext for the view to the story retrieved. This is key, as we’ll use this binding to make display the appropriate data.
Open your markup for the story details view in
Views/Story.xaml. Modify it to use bindings as follows:
<StackPanel Name="StoryDetails"> <StackPanel Name="StoryDetailsHeader" Orientation="Horizontal"> <Grid Background="Yellow" Margin="5"> <TextBlock FontSize="48" Foreground="DarkGray" Height="60" TextAlignment="Center" Text="{Binding Diggs}" VerticalAlignment="Center" Width="60" /> </Grid> <TextBlock Margin="5" Name="Title" Text="{Binding Title}" TextWrapping="Wrap" /> </StackPanel> <TextBlock Margin="10" Name="Description" Text="{Binding Description}" TextWrapping="Wrap" /> <Button Content="Read Full Story" Name="Link" /> </StackPanel>
If you launch your application now (press F5) you will be able to drill down from the topic list, to the story list, to full story details. The story details view should look something like the following:
There’s only one last thing to do. Add a click event handler to the Link button in
Views/Story.xaml as follows:
<Button Content="Read Full Story" Name="Link" Click="Link_Click" />
Change your event handler,
Link_Click, to read like the following:
private void Link_Click(object sender, RoutedEventArgs e) { WebBrowserTask task = new WebBrowserTask(); task.URL = (this.DataContext as Digg.Story).Link; task.Show(); }
If you see a red squiggly line under
WebBrowserTask, then place your cursor over the class and then select “using Microsoft.Phone.Tasks” from the dropdown that appears. This code launches the Windows Phone 7 web browser when clicking on the button and navigates it to the story’s URL.
Finishing Up
You have a fully functioning, albeit simple, Digg client at this point. You can browse stories by topics, view story details and visit the full story in the WP7 web browser. In this tutorial we’ve:
- Created classes to store Digg data
- Created and customized application views using the visual designer
- Customized navigation URIs and used the Windows Phone 7 Navigation service
- Implemented DataTemplates and Styles to display stories and topics
- Overrode the OnNavigatedTo and OnNavigatedFrmo event handlers to provide appropriate functionality for each page
- Used the Windows Phone 7 tasks to launch a web browser
Some of the topics we covered are far to in-depth to cover in a simple tutorial so you’ll probably want to find out more about them. The following resources should help you get started:
- Data Templates
- Styles
- Windows Phone 7 Programming
I hope you’ve enjoyed this tutorial. If you have any questions or want to see something different in a future Windows Phone 7 tutorial, let me know in the comments.
Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
| http://code.tutsplus.com/tutorials/using-silverlight-to-create-a-digg-client-for-windows-phone-7--mobile-2127 | CC-MAIN-2016-36 | refinedweb | 3,503 | 55.34 |
Enviado por AmandaS (Intel) el
Compiler Methodology for Intel® MIC Architecture
Unlike the IA-32 and Intel® 64 architectures, the Intel® MIC Architecture requires all data accesses to be properly aligned according to their size, otherwise the program may behave unpredictably.
For example, an integer variable, which requires four bytes of storage, has to be allocated on an address that is a multiple of four. Likewise, a double-precsion floating point variable or a pointer variable, which requires eight bytes of storage, has to be allocated on an address that is a multiple of eight.
Structures and unions assume the alignment of their most strictly aligned component. Each member is assigned to the lowest available offset with the appropriate alignment. The size of any object is always a multiple of the object‘s alignment.
Note that removing the misaligned accesses on IA-32 and Intel® 64 architectures (through appropriate source changes) will likely lead to improved performance there too.
1. Here is a Fortran example that is not ABI-compliant on the Intel® MIC Architecture - note the use of the “sequence” keyword inside an object.
Consider, the following structure:
type, public :: GridEdge_t
sequence
integer :: head_face ! needed if head vertex has shape (i.e. square)
integer :: tail_face ! needed if tail vertex has shape (i.e. square)
integer :: head_ind !
integer :: tail_ind !
type (GridVertex_t),pointer :: head ! edge head vertex
type (GridVertex_t),pointer :: tail ! edge tail vertex
logical :: reverse
end type GridEdge_t
Adding up the sizes of the individual fields, the size of this object is 36 bytes. Since the sequence keyword is used, they are contiguous in memory. If we had an array of these objects, array elements are packed without padding bytes. So after the first element, subsequent elements would no longer be aligned when trying to access the fields head or tail. According to the ABI requirements, the fields head and tail should be 8-bytes aligned, so alignment of a GridEdge_t should be 8 bytes, and sizeof GridEdge_t should be a multiple of 8, viz. 40. If the SEQUENCE keyword is removed, the compiler automatically creates GridEdge_t wth the correct size of 40 bytes.
2. Here is a simple synthetic example in C that violates the ABI:
#include <malloc.h>
int main(int argc, char **argv)
{
char *blob = (char *)malloc(100); // malloc returns 8-byte aligned pointer
float *ptr = (float *)(blob + argc); // Assume program is invoked with no arguments, argc=1
for(int i = 0; i < argc; i++)
{
ptr[i] = 0; // GP fault here since floating point data is not aligned at 4-bytes
}
return 0;
}
This kind of access violation may happen from a user written memory allocation routine. Its not uncommon for users to write their own memory allocation routines, which could inadvertently result in unaligned allocated memory. This can lead to runtime errors due to the ABI requirements on the Intel® MIC Architecture and should be fixed by the user by making appropriate changes in the source code. | https://software.intel.com/es-es/articles/element-wise-alignment-requirements-for-data-accesses-to-be-abi-compliant-on-the-intel-mic | CC-MAIN-2016-07 | refinedweb | 491 | 52.29 |
in the nested class.
Note that T is available to the nested Node class. When GenericList<T> is instantiated with a concrete type — for example as a GenericList<int> — each occurrence of T will be replaced with int.
// type parameter T in angle brackets
public class GenericList<T>
{
    // The nested class is also generic on T
    private class Node
    {
        // T used in non-generic constructor
        public Node(T t)
        {
            next = null;
            data = t;
        }

        private Node next;
        public Node Next
        {
            get { return next; }
            set { next = value; }
        }

        // T as private member data type
        private T data;

        // T as return type of property
        public T Data
        {
            get { return data; }
            set { data = value; }
        }
    }

    private Node head;

    // constructor
    public GenericList()
    {
        head = null;
    }

    // T as method parameter type:
    public void AddHead(T t)
    {
        Node n = new Node(t);
        n.Next = head;
        head = n;
    }

    public IEnumerator<T> GetEnumerator()
    {
        Node current = head;
        while (current != null)
        {
            yield return current.Data;
            current = current.Next;
        }
    }
}
The following code example shows how client code uses the generic GenericList<T> class to create a list of integers. Simply by changing the type argument, the code below could easily be modified to create lists of strings or any other custom type:
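The original listing is not reproduced here; a minimal sketch of such client code looks like this:

class TestGenericList
{
    static void Main()
    {
        // int is the type argument
        GenericList<int> list = new GenericList<int>();

        for (int x = 0; x < 10; x++)
        {
            list.AddHead(x);
        }

        foreach (int i in list)
        {
            System.Console.Write(i + " ");
        }
        System.Console.WriteLine("\nDone");
    }
}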
On Mon, 14 Sep 2009 21:44:02 -0400, David Manura <dm.lua@math2.org> wrote:
> environment=[[ require("lanes"); return lanes ]]

If anyone (other than I) starts using Darwin, the pattern above will probably be a common one because it not only allows the re-use of an existing Lua module, contained in a file of code, but the structure definition uses 'require' to find that file. 'require' is (a re-implementation of) the standard Lua 'require' function, so if ' require("lanes") ' works in your code today, then it will work when used as in the example above.

> Why isn't the native Lua search path sufficient?

It is sufficient, but I was going for 'distinct', in order to accommodate the scenario in which the Darwin structure writer is setting up a user environment for a population of users. Let me explain.

My own plans for using Lua are to embed it (no surprise) and to allow users to write their own scripts (again, no surprise). But how much power should I give my users? On the "low power" end of the spectrum, I could prevent them from doing io and from interacting with the os. At the other end of the spectrum, "full power" would give the user the power to write Lua scripts that did anything, including loading modules from arbitrary places in the file system (and running os.execute("luarocks ...") -- a scary thought!).

In my project, I need to set up the environment in which user scripts run. I can precisely define the user environment using Darwin (in darwin/initialstructures.lua). I plan to allow my users to load modules. When I create the user environment, I may want my own search path for loading files that is distinct from the Lua package.path. That gives me the opportunity to write structure declarations that load code from anywhere in the file system, according to how I've set the Darwin path. (The Darwin path applies only to files listed in the 'files' clause.) I'll load "special" code from a non-standard place to avoid polluting the namespace of modules that can be loaded from the standard places on the Lua path. "Special" code could be, e.g. code that wraps the io package to restrict the kinds of io that users can do. But in reality, it just lets me separate the code that I wrote to "set up the system" from the code that user scripts will load using 'require' (really: package.loaders).

My thoughts on this part of the design are still evolving. I considered giving Darwin a mirror of Lua's package table (which has a wonderfully flexible design) in order to provide the full power of loaders, preload, etc. In the end, I opted for 'keep it simple' instead, and just implemented a search path, mainly because my desire was only to have my "set up the system" code live in a directory that is not on the Lua search path.

Jim
In this tutorial we’ll create two .NET projects:
- The WCF service that will use LINQ to SQL and talk with a database.
- A web application i.e. the client of the above WCF service.
Creating WCF Service:
Alright then, let’s get started!
- Start Visual Studio 2010 (2008 will also work)
- Select New Project from the top left options
- In the New Project dialogue box, you’ll see a left hand side panel called Installed Templates, expand Visual C# in the tree view and select WCF in it
- This will change the available options in the middle panel, select WCF Service Application in the middle panel. Your New Project dialogue box should look something like this:
- Give a decent name and path to the project and hit OK.
- When you’ll click OK, the Visual Studio 2010 will create a WCF project for you with a default service added to it. Now we’ll quickly add all the files that we need in the solution before we start coding
- First we’ll add a DBML object in our project that will link with the database. We’ll use SQL Server’s database. So right click the project file in the solution explorer and select Add New Item. In the Add new dialogue box, you’ll see a left hand panel called Installed Templates, select Data in it. The middle panel will get populated with all the available objects for Data related tasks. Select LINQ to SQL Classes, give a decent name (I usually give the same name as database that this object is going to connect with) to it and click Add button. The dialogue box will look something like this:
- When you’ll click Add in the above dialogue box, a dbUniversity.dbml object will get added in your solution. Double click that object to open it in the design view. In the design view you’ll see a link to Server Explorer. Click it to open the Server Explorer. In the Server Explorer you can add a connection to a SQL Server and from there you can add database objects in your project. For the purpose of this tutorial, I created a database in my local SQL Server with the name University. I also added two tables namely Teacher and Department in the database. I was successfully able to create a connection with the database and add the two tables by drag n’ drop on the design panel of the dbUniversity.dbml. The design view should look something like this:
- The obvious relationship between Department and Teacher is one-to-many. Now before we start writing LINQ code. I want to finish adding some other files that are required for this project. So let’s quickly add a new WCF service in our project. We know there is already a default service in the project, but we’ll leave it as it is and add a new one so that we can see what the complete process of adding a new service is. So let’s right click the project in Solution Explorer one more time and select Add New Item. In the New Item dialogue box select Web in the Installed Templates panel and select WCF Service in the middle panel. Give your service a decent name and select Add button in the bottom. The dialogue box will look like this:
- Now when you’ll click Add button in the above dialogue box, you will notice two new files will get added in your solution a). MyService1.svc b).IMyService1.cs Why the two? If you open MyService1.svc.cs you’ll find your answer. Basically MyService1 class extends – I mean – implements the IMyService1 interface. So yeah, IMyService1 is the interface of our service and that’s why it contains keywords like ServiceContracts and OperationContracts. Who are these contracts with? With the clients of our service. See that’s how it works, when our service is published, it signs a contract with all of the prospective client applications that these are my operational methods, and these are the signatures of those methods with their return types. Clients will connect with our service assuming that our service will abide by this contract. As long as our service abides by its contract the communication between client and the service will continue. If at any point service changes its functionality to the extent that it changes the names of its operational methods or their signatures, the communication will become erratic. The complete Service Oriented Architecture concepts are obviously beyond the scope of our little tutorial, so we’ll pass on that and move forward on adding our one last piece of code file in our project
- Right click the project in the Solution Explorer and add a simple CS class file, name it DAOTeacher.cs
- Ok now we’ve added all the required files in our project, before we start coding, we’ll take a quick look at our Solution Explorer to see if everything looks good. Our Solution Explorer should look something like this:
Of course the most important files are in the bottom of the list. Anyways, so now we’ve got everything that we needed and we’re ready to start coding and we’ll start with the hardest part first i.e. DAOTeacher.cs. In DAOTeacher class we just want to see how we can query against a database using LINQ to SQL classes and also we want to explore the new language syntax of LINQ class libraries. Therefore, we’ll create a new method called GetTeacherName, pass in a teacher numeric ID to it and query against the database to get the name of the teacher. The code of DAOTeacher.cs looks like as follows:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Runtime.Serialization;
using System.Web;

namespace prjWCFTest1
{
    public class DAOTeacher
    {
        public string GetTeacherName(int id)
        {
            /* 01 */
            dbUniversityDataContext db = new dbUniversityDataContext();
            string teacherName = string.Empty;
            try
            {
                /* 02 */
                teacherName = (from vTeachers in db.Teachers
                               where (vTeachers.teacher_id == id)
                               select vTeachers.teacher_name).ToList().FirstOrDefault();
            }
            catch (Exception exc)
            {
                teacherName = exc.ToString();
            }
            return teacherName;
        }
    }
}
In line # 01, we added a database context. This is a new class that .NET created for us with the prefix of the same name that we gave to our DBML object. The data context class contains all the information that we dragged n’ dropped in the design view of the DBML object. The data context has different overloaded functions that we can use, e.g. we can pass the database connection string at the run-time to instantiate the context object, a feature which is sometimes useful if you’re deploying the same application from Dev to QA or Production environments and want to pass the database connection string dynamically (again, this discussion is beyond the scope of the tutorial). So in line # 02 we’re basically querying against the database. The line is pretty much similar to the following SQL query:
SELECT TOP 1 teacher_name FROM Teachers WHERE teacher_id = id
But the beauty of the line # 02 is you get intelliSense here! And since IDE compiles it, you rule out all the chances of writing erroneous SQL queries. Though in this particular example, the query is pretty simple, but don’t think that LINQ is capable of handling only simple ones. I’ve written some very complex LINQ queries lately, and trust me, with all those inner and outer joins, LINQ made it very easy for me as compared to it if I had to write them in SQL as inline queries. Once you’ll get your hands on LINQ queries, you’ll find them really very interesting and easy to use. OK so we’re done with the hardest part, really, that was the hardest part in this project. Now we’ll quickly write code in the MyService1.svc.cs as follows:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.Text;

namespace prjWCFTest1
{
    // NOTE: You can use the "Rename" command on the "Refactor" menu to change the class name "MyService1" in code, svc and config file together.
    public class MyService1 : IMyService1
    {
        private DAOTeacher daoTeacher = new DAOTeacher();

        public string GetTeacherName(int id)
        {
            return daoTeacher.GetTeacherName(id);
        }
    }
}
So simple!
Instantiated our DAO class and called its GetTeacherName() in a public method of the service. Now the only piece missing is we haven’t declared the service’s public method in the Interface. Let’s quickly do that, IMyService1’s code will look like as follows:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.Text;

namespace prjWCFTest1
{
    // NOTE: You can use the "Rename" command on the "Refactor" menu to change the interface name "IMyService1" in both code and config file together.
    [ServiceContract]
    public interface IMyService1
    {
        [OperationContract]
        string GetTeacherName(int id);
    }
}
Alrighty, so we’re all set! Let’s build and run our WCF service. Another cool feature with Visual Studio 2010 is that they’ve attached WCF Test Client with the debugger now. WCF Test Client was already there since they introduced WCF in 2008, but now having it automatically attached with our service is one extra benefit that we can enjoy. Ok so when I ran my project I got the following window, I even entered a teacher ID to invoke the method and it returned me the expected results, take a look:
Before we end discussion on this project, we need to make one more change in it, so that it can be used from client project. The change is we need to make this project run as a website from IIS. In order to do that, we need follow few easy steps as described in the following image:
Creating WCF Client:
Well, this part is really very simple, so I’ll just quickly go through it. Here are the steps we need to follow to create a WCF client:
- Create a web application project in Visual Studio 2010 and name it, let’s say prjWCFClient1
- Right click the project in the Solution Explorer and select Add Service Reference option in the popup menu
- In the Add Service Reference dialogue box, enter the localhost’s address of the WCF service that we just created, give the service reference a decent name and click OK. The dialogue box will look like this:
And that’s all we need to add in this project. Now let’s code our web page, in Default.aspx write the following code:
<%-- Note: the control IDs and event wiring below are reconstructed to match the code-behind that follows; the page directive attributes are the Visual Studio 2010 web application defaults. --%>
<%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true"
    CodeBehind="Default.aspx.cs" Inherits="prjWCFClient1._Default" %>

<asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent">
</asp:Content>
<asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent">
    <h2>
        Welcome to WCF Client!
    </h2>
    <p>
        Enter Teacher ID to search:
        <asp:TextBox ID="txtTeacherID" runat="server"></asp:TextBox>
        <asp:LinkButton ID="lbtnSearch" runat="server" OnClick="lbtnSearch_Click">Search</asp:LinkButton><br />
        <asp:Label ID="lblTeacherName" runat="server"></asp:Label>
    </p>
</asp:Content>
Note that the master page and Content blocks were added automatically by Visual Studio 2010. The code that I added was only inside the BodyContent block. Since the code is very simple and no explanation is needed for it, I’ll jump to the CS part of it. Write the following code in the Default.aspx.cs file:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace prjWCFClient1
{
    public partial class _Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
        }

        protected void lbtnSearch_Click(object sender, EventArgs e)
        {
            /* 01 */
            MyService1.MyService1Client client = new MyService1.MyService1Client();
            /* 02 */
            this.lblTeacherName.Text = client.GetTeacherName(int.Parse(this.txtTeacherID.Text));
        }
    }
}
In line # 01 we are creating the client object. The MyService1Client class has been automatically generated by .NET. This is the object that will actually take our request and bring the result back from the service. It has a few overloaded constructors, one of which can be used by passing End Point URL; a feature useful while the web application is being deployed from Dev to QA or Production environments. Once client object is created you can use its intelliSense to find out what web methods are available for the application.
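As a rough sketch of that overload, and only as a sketch, the endpoint configuration name and URL below are placeholders rather than values from this project; check the endpoint generated in web.config before using them:

// Hypothetical values for illustration only.
MyService1.MyService1Client client =
    new MyService1.MyService1Client(
        "BasicHttpBinding_IMyService1",                   // endpointConfigurationName
        "http://qa-server/prjWCFTest1/MyService1.svc");   // runtime endpoint address
this.lblTeacherName.Text = client.GetTeacherName(5);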
Now our client application is ready to run, and that’s how it looks like:
So that’s all from my side today. Keep me posted, folks. And don’t hesitate if you want source code of the two projects. Until next time, enjoy programming!
Thanks for Step by step description , easy for beginner to get hands on WCF
Comment by Sandhya — November 26, 2012 @ 5:57 pm
Very detailed explanation.Thanks a lot 🙂
Comment by Harika — April 18, 2014 @ 5:48 am
ISC BIND 9.5.0 is now available
From: Evan Hunt <Evan_Hunt <at> isc.org>
Subject: ISC BIND 9.5.0 is now available
Newsgroups: gmane.network.dns.bind.announce
Date: 2008-05-29 15:15:04 GMT
BIND 9.5.0 is now available.

BIND 9.5.0 is a feature release for BIND 9. BIND 9.5 has a number of new features over previous versions, including:

- GSS-TSIG support (RFC 3645).
- DHCID support.
- Experimental http server and statistics support for named via xml.
- More detailed statistics counters, compatible with the ones supported in BIND 8.
- Faster ACL processing.
- Use of Doxygen to generate internal documentation.
- Efficient LRU cache cleaning mechanism.
- NSID support (RFC 5001).

BIND 9.5.0 contains the following security fixes:

2305. [security] inet_network() buffer overflow. CVE-2008-0122.3. [security] Query id generation was cryptographically weak. [RT # 16915]
2202. [security] The default acls for allow-query-cache and allow-recursion were not being applied. [RT #16960]
2126. [security] Serialize validation of type ANY responses. [RT #16555]
2124. [security] It was possible to dereference a freed fetch context. [RT #16584]
2112. [security] Warn if weak RSA exponent is used. [RT #16460]

+---------------------------------------------------------------------+
| If you are running a version of BIND without these changes you      |
| are advised to upgrade as soon as possible to one of BIND 9.3.5,    |
| BIND 9.4.2, or BIND 9.5.0.                                           |
+---------------------------------------------------------------------+

BIND 9.5.0 can be downloaded from
The PGP signature of the distribution is at
The signature was generated with the ISC public key, which is available at <>.
A binary kit for Windows 2000, Windows XP and Windows 2003 is at
The PGP signature of the binary kit for Windows 2000, Windows XP and Window 2003 is at

Changes since 9.5.0a1:

--- 9.5.0 released ---

2374. [bug] "blackhole" ACLs could cause named to segfault due to some uninitialized memory. [RT #1809]

--- 9.5.0rc1 released ---]

--- 9.5.0b3 released ---

2360. [bug] Fix a condition where we release a database version (which may acquire a lock) while holding the lock.
2359. [bug] Fix NSID bug. [RT #17942]
2358. [doc] Update host's default query description. [RT #17934]]

--- 9.5.0b2 released ---

2324. [bug] Fix IPv6 matching against "any;". [RT #17533]
2323. [port] tru64: namespace clash. [RT #17547]
2322. [port] MacOS: work around the limitation of setrlimit() for RLIMIT_NOFILE. [RT #17526]288. [port] win32: mark service as running when we have finished loading. [RT #17441]
2287. [bug] Use 'volatile' if the compiler supports it. [RT #17413]
<at>-response.

--
Evan Hunt -- evan_hunt <at> isc.org
Internet Systems Consortium, Inc.
Review: Needs Fixing
1. To maintain ABI compatibility, you must only add new virtual functions to the end of public classes.
2. I don't like that this mechanism is so different to the way you block non-internal modules (using a URIMapper with DENY_ACCESS). While it would be a bit less efficient, it would be more consistent to call static_context::resolve_uri() at translator.cpp line 2840 solely for the purpose of seeing if zerr::ZXQP0029_URI_ACCESS_DENIED is thrown.
Hawala
How Does Hawala Work?
Hawala works by transferring money without actually moving it. In fact money transfer
without money movement is a definition of hawala that was used, successfully, in a
hawala money laundering case.
An effective way to understand hawala is by examining a single hawala transfer. In this
scenario, which will be used throughout this paper, Abdul is a Pakistani living in New
York and driving a taxi. He entered the country on a tourist visa, which has long since
expired. From his job as a taxi driver, he has saved $5,000 that he wants to send to his
brother, Mohammad, who is living in Karachi.
Even though Abdul is familiar with the hawala system, his first stop is a major bank. At
the bank, he learns several things:
The bank would prefer that he open an account before doing business with them;
The bank will sell him Pakistani rupees (Rs) at the official rate of 31 to the dollar; and
The bank will charge $25 to issue a bank draft.
This will allow Abdul to send Mohammad Rs 154,225. Delivery would be extra; an
overnight courier service (surface mail is not always that reliable, especially if it contains
something valuable) can cost as much as $40 to Pakistan and take as much as a week to
arrive. Abdul believes he can get a better deal through hawala, and talks to Iqbal, a fellow
taxi driver who is also a part-time hawaladar.
Iqbal offers Abdul the following terms:
A 5% commission for handling the transaction;
35, instead of 31, rupees for a dollar; and
Delivery is included.
This arrangement will allow Abdul to send Mohammad Rs 166,250 (5,000 × 35 = Rs 175,000, less the 5% commission). As we will see, the
delivery associated with a hawala transaction is faster and more reliable than in bank
transactions. He is about to make arrangements to do business with Iqbal when he sees
the following advertisement in a local Indo-Pak newspaper (such advertisements are
very common):
Abdul calls the number, and speaks with Yasmeen. She offers him the following deal: A fee of 1 rupee for each dollar transferred; 37 rupees for a dollar; and Delivery is included.
Under these terms, Abdul can send Mohammad Rs 180,000 (5,000 × 37 = Rs 185,000, less the fee of Rs 5,000). He decides to do business with Yasmeen.
MUSIC BAZAAR AND TRAVEL SERVICES AGENCY
• Cheap tickets to India, Pakistan, Bangladesh, Sri Lanka, Dubai
• Great rupee deals (service to India and Pakistan)
• Large movie rental selection
• Video conversions
• Latest Bollywood hits on CD and cassette
• Prepaid international calling cards
• Pager and cellular activations (trade-ins welcome)
• Conveniently located in Jackson Heights
(718) 555-1111 ask for Nizam or Yasmeen
(718) 555-2222 [fax] (718) 555-2121 [pager]

… and the payment is almost always made in person. Finally, in some scenarios, he trusts her to repay him the equivalent of either $5,000 or Rs 180,000.
As was stated above, hawala works through connections. These connections allow for the
establishment of a network for conducting the hawala transactions. In this transaction,
Yasmeen and Ghulam are part of the same network. There are several possible ways in
which this network could have been constructed.
The first possibility is that Yasmeen and Ghulam are business partners (or that they just
do business together on a regular basis). For them, transferring money is not only another
business in which they are engaged but a part of their normal business dealings with one
another. Another possibility is that, for whatever reason, Ghulam owes Yasmeen money.
Since many countries make it difficult to move money out of the country, Ghulam is
repaying his debt to Yasmeen by paying her hawala customers; even though this is a very
informal relationship, it is quite typical for hawala. A third (and by no means the final)
possibility is that Yasmeen has a rupee surplus and Ghulam is assisting her in disposing
of it.
In the last two cases, Ghulam does not need to recover any money; he is either repaying
an existing debt to Yasmeen, or he is handling money that Yasmeen has entrusted to him,
but is unable to move out of the country. In the first case, where Yasmeen and Ghulam
are partners, a more formal means of balancing accounts is needed.
One very likely business partner scenario is an import/export business. Yasmeen might
import CDs and cassettes of Indian and Pakistani music and 22 carat gold
jewelry from Ghulam, and export telecommunications devices to Ghulam.
In the context of such a business, invoices can be manipulated to conceal the movement of money.
(via the US Treasury, FinCen, Interpol) | https://www.epjresearchroom.com/2016/04/hawala.html | CC-MAIN-2020-50 | refinedweb | 766 | 58.72 |
Adding Custom Galleries
August 2007
Whether you want to add your own custom controls to the default Office Fluent Ribbon or reuse one of the many built-in controls, you use a combination of XML and programming code.
Adding Controls with XML
XML provides a hierarchical, declarative model of the Office Fluent Ribbon. You add controls, such as galleries and buttons, to the Office Fluent Ribbon by using XML elements to specify the type of component. For example, you add a single button by using the button element. You assign property values to the controls by using attributes such as the label attribute.
<customUI xmlns="http://schemas.microsoft.com/office/2006/01/customui" loadImage="LoadImage">
  <ribbon startFromScratch="false">
    <tabs>
      <tab id="tab1" label="Gallery Demo" keytip="x">
        <group id="group1" label="Demo Group">
          <gallery id="gallery1"
                   columns="2" rows="2"
                   getEnabled="GetEnabled"
                   getScreentip="GetScreenTip"
                   supertip="This is the super tip."
                   getKeytip="GetKeyTip"
                   getShowImage="GetShowImage"
                   getShowLabel="GetShowLabel"
                   getLabel="GetLabel"
                   getSize="GetSize"
                   image="internetconnection.bmp"
                   getItemCount="GetItemCount"
                   getItemHeight="GetItemHeight"
                   getItemWidth="GetItemWidth"
                   getItemImage="GetItemImage"
                   getItemLabel="GetItemLabel"
                   getItemScreentip="GetItemScreenTip"
                   getItemSupertip="GetItemSuperTip"
                   onAction="galleryOnAction">
            <item id="item1" />
            <item id="item2" />
            <item id="item3" />
            <item id="item4" />
            <button id="button1" getLabel="GetLabel" onAction="buttonOnAction" imageMso="HappyFace" />
          </gallery>
        </group>
      </tab>
    </tabs>
  </ribbon>
</customUI>
This sample adds a custom tab titled Gallery Demo to the Office Fluent Ribbon by assigning text to the tab element's label attribute. This tab contains the Demo Group group, which contains a gallery control named gallery1. In addition to the items contained in the gallery, the code defines a button named button1. The gallery and button have properties defined for them by using attributes such as columns, rows, and supertip. These properties are assigned explicitly by setting the attribute equal to a string, such as the supertip attribute, or indirectly by pointing to a programming code procedure such as the getEnabled attribute. The following figure shows the result of applying this XML to the Office Fluent Ribbon in Microsoft Office Excel 2007:
When you see an attribute prefixed by the word get, this tells you that the attribute points to a callback procedure. However, an attribute does not have to be prefixed by get to reference a callback procedure. One example is the onAction attribute that will be discussed later.
You can differentiate built-in components from custom components based on the Mso suffix. Looking at the sample, you see that some attributes have the Mso suffix, such as the imageMso attribute for the button, and some attributes do not. Attributes that do not have the Mso qualifier, such as the id attribute of the gallery element, are custom controls. Attributes with the Mso suffix refer to built-in controls, commands, and images.
Looking at the other attributes of the gallery control, the columns and rows attributes specify the number of columns and rows, respectively, for the list of items in the gallery control's drop-down list.
Next, you see the getEnabled attribute. This attribute points to a callback procedure that returns True to indicate that the gallery is available for use. Setting the attribute to False grays out the control, indicating that the gallery is not active. Callback procedures are described in the Assigning Functionality to Ribbon Components section.
Next, you see the getScreentip attribute. ScreenTips are the small boxes that appear when you move the mouse pointer over an item on the Office Fluent Ribbon. They provide brief context-sensitive help about the item. Likewise, the supertip attribute (or the getSupertip attribute if you are pointing to a callback procedure) provides additional information about the object.
The supertip attribute illustrates another aspect of control attributes. When an attribute lacks the get prefix, this typically indicates that you assign text to the attribute explicitly. So in the case of the supertip attribute, the text is assigned directly instead of by a callback procedure. There are exceptions such as the onAction attribute, where the attribute is not prefixed by the word get. To find more information about which attributes are assigned explicitly and which attributes point to callbacks, see the article Customizing the 2007 Office Fluent Ribbon for Developers (Part 1 of 3).
The getKeytip attribute points to a callback procedure that assigns a KeyTip for the gallery control. KeyTips are also called access keys or keyboard shortcuts. KeyTips indicate which key combination to press to access program functionality from the keyboard. To use a KeyTip for a control on a custom tab, you first set a KeyTip for the tab or use the default KeyTip assigned by Microsoft Office (You can see the KeyTip for the tabs by pressing the ALT key). Then you assign a KeyTip for the control. For example, in the XML code sample, the tab1 tab has a KeyTip equal to x. The gallery1 gallery control resolves to a KeyTip equal to GL. When the Office Fluent Ribbon is displayed, pressing the key combination ALT displays the access keys for the tabs. Then pressing x brings the focus to the custom tab. And finally, pressing the key combination G+L shifts the focus to the gallery.
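A sketch of what such a callback could look like (the KeyTip strings here are illustrative, not from the original sample) is:

public string GetKeyTip(IRibbonControl control)
{
    // Return the access-key text for the control; "GL" matches the
    // gallery example described above.
    switch (control.Id)
    {
        case "gallery1":
            return "GL";
        default:
            return "Z1";
    }
}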
The getShowLabel attribute and the getShowImage attribute point to callback procedures that resolve to Boolean values. For example, setting the getShowLabel attribute to true causes the label for the gallery to appear when the control is displayed. If you want to set the value of the attribute explicitly, you use the showLabel attribute.
The getLabel attribute also points to a callback procedure that returns the label that is displayed for the gallery. Next, the image attribute for gallery1 specifies a custom image for the control. Likewise, the imageMso attribute for button1 specifies a built-in image for the button. For a spreadsheet of built-in images, see 2007 Office System Add-In: Icons Gallery.
You can use the image and imageMso attributes with the loadImage attribute of the customUI element. When you specify an image with the image attribute, the LoadImage callback procedure is called to load the image. The image is then displayed on the Ribbon.
In this procedure, a bitmap image is returned to Microsoft Office. When the procedure is called, the image is retrieved as an assembly resource and assigned to a Stream object. Finally the image is returned as a Bitmap object.
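The listing for that callback is not shown above; a sketch consistent with the description (and with the GetItemImage listing later in this article) would be:

public Bitmap LoadImage(string imageName)
{
    // Called for each image named in an image attribute, for example
    // "internetconnection.bmp"; the resource-name prefix is a placeholder.
    Assembly assembly = Assembly.GetExecutingAssembly();
    Stream stream = assembly.GetManifestResourceStream("<your project name here>." + imageName);
    return new Bitmap(stream);
}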
Returning to the XML sample, all of the attributes that contain the word Item relate to the items that are displayed when you click the arrow on the gallery control; these attributes are self-explanatory. For example, the getItemCount attribute specifies the number of items that are displayed in the drop-down list when you click the gallery control. The getItemHeight and getItemWidth attributes specify the height and width dimensions of the items in the drop-down list. And the getItemScreentip and getItemSupertip attributes specify screen tips and super tips for the drop-down items.
The onAction attribute points to a callback procedure that is triggered when you click an item in the gallery. This procedure is discussed in the next section.
And finally, the button element adds a button to the gallery. You can tell that the button uses an image that is built into Microsoft Office because of the use of the imageMso attribute.
Assigning Functionality to Ribbon Components
In the previous XML sample, several of the attributes point to callback procedures. For example, the gallery element has an onAction attribute. When the user clicks an item in the drop-down, the OnAction method, or callback procedure, is called. The code in the OnAction method gives the gallery its functionality. These procedures are called callbacks because when a user clicks an item in the drop-down list, the action alerts Microsoft Office that the control needs its attention. Microsoft Office then calls back to the method defined by the onAction attribute and performs whatever action is contained in the method. The following paragraphs describe some of these callback procedures.
To get an idea how a basic callback works, look at the GetLabel function.
When Microsoft Office calls the GetLabel procedure, an IRibbonControl object representing the gallery is passed in. The procedure tests the Id property of the object and, depending on its value, returns the text label of the control. Microsoft Office then displays that text label with the control.
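A sketch of such a callback (the label strings are illustrative, not taken from the original sample) is:

public string GetLabel(IRibbonControl control)
{
    // Return the text that Office displays next to the control.
    switch (control.Id)
    {
        case "gallery1":
            return "Insert Device Text";
        case "button1":
            return "More Devices...";
        default:
            return string.Empty;
    }
}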
The GetShowLabel and GetShowImage callback procedures return a boolean value to Microsoft Office that specifies whether to display the respective object (a label or an image) when the custom tab is displayed.
The getItemImage attribute points to the GetItemImage callback procedure.
public Bitmap GetItemImage(IRibbonControl control, int itemIndex)
{
    string imageName;
    switch (itemIndex)
    {
        case 0:
            imageName = "camera.bmp";
            break;
        case 1:
            imageName = "video.bmp";
            break;
        case 2:
            imageName = "mp3device.bmp";
            break;
        default:
            imageName = "camera.bmp";
            break;
    }
    Assembly assembly = Assembly.GetExecutingAssembly();
    Stream stream = assembly.GetManifestResourceStream("<your project name here>." + imageName);
    return new Bitmap(stream);
}
In this procedure, a bitmap is returned to Microsoft Office. When the procedure is called the image is assigned to a variable. That image file is then retrieved as an assembly resource stream and assigned to a Stream object. Finally, the image is returned as a Bitmap object.
The GetSize procedure is called from the getSize attribute and determines the size of the control.
public RibbonControlSize GetSize(IRibbonControl control)
{
    RibbonControlSize ctrlSize;
    switch (control.Id)
    {
        case "gallery1":
            ctrlSize = RibbonControlSize.RibbonControlSizeLarge;
            break;
        case "button1":
            ctrlSize = RibbonControlSize.RibbonControlSizeRegular;
            break;
        default:
            ctrlSize = RibbonControlSize.RibbonControlSizeRegular;
            break;
    }
    return ctrlSize;
}
The procedure tests the Id property of the calling control and depending on its value, sets its size from one of the choices from the RibbonControlSize enumeration.
Next, the galleryOnAction callback procedure is called when you click an item in the gallery. When triggered, Microsoft Office passes in the control object, the id of the item that you clicked, and the index number of the item. The procedure tests the selectedIndex property of the gallery and inserts text specific to that control into the A1 cell in the worksheet.
public void galleryOnAction(IRibbonControl control, string selectedId, int selectedIndex)
{
    switch (selectedIndex)
    {
        case 0:
            applicationObject.get_Range("A1:A1", missing).Value2 = "You clicked a camera.";
            break;
        case 1:
            applicationObject.get_Range("A1:A1", missing).Value2 = "You clicked a video player.";
            break;
        case 2:
            applicationObject.get_Range("A1:A1", missing).Value2 = "You clicked an mp3 device.";
            break;
        case 3:
            applicationObject.get_Range("A1:A1", missing).Value2 = "You clicked a cell phone.";
            break;
        default:
            applicationObject.get_Range("A1:A1", missing).Value2 = "There was a problem with your selection.";
            break;
    }
}
When you click the button, Microsoft Office calls the buttonOnAction procedure, which then displays a dialog box.
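The listing for this handler is not included above; a minimal sketch (the message text is an assumption, and a reference to System.Windows.Forms is required) would be:

public void buttonOnAction(IRibbonControl control)
{
    // Display a simple dialog box when the gallery's button is clicked.
    System.Windows.Forms.MessageBox.Show("You clicked the gallery button.");
}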
The remaining callback procedures are similar to those previously discussed and are included, along with variable declarations, in Table 1. You should add them to your project before running the project in the next section:
Table 1. Additional callback procedures and variable declarations to add to the project.

By using an add-in, you get application-level customization. This second option means that the customized Ribbon applies to the entire application regardless of which document is open.
Creating a customized Ribbon by using an Open XML file is not complicated.
To create a custom Office Fluent Ribbon from an add-in (the approach used in the rest of this article), you provide one method called GetCustomUI. You use this method to return the XML Ribbon customization code to Microsoft Office. Then, add programming procedures that give the custom Ribbon its functionality.
In the following procedure, you add a custom tab containing a custom group and a custom gallery to the default Office Fluent Ribbon in Office Excel 2007. Clicking the gallery inserts text into the worksheet. The gallery also contains a button that, when clicked, displays a dialog box. At design time, this also makes it easier for you to install and uninstall the add-in.
To interact with Excel 2007 and the Ribbon object model, you must add a reference to a type library.
Next, you.
Next, you need to create an instance of Excel and add the Ribbon interface.
To access the host application, add a comma at the end of the public class Connect : statement, based on the programming language you are using. The GetCustomUI method tests the Id property of the control and inserts the text specific to that control into the worksheet at cell A1.
Because the gallery uses a custom image, add it to the project's resources: drop the image file onto the Resources tab.
Now you are ready to run the project.
To test the add-in, run the project. Excel starts and the Gallery Demo tab appears. You also see the Demo Group group containing the gallery. Notice the custom image for the gallery control.
Click the down arrow in the control. Notice the four items and the button with the built-in image.
Click the Video Player item.
Excel inserts the text into the worksheet at cell A1 as displayed in Figure 2.
Figure 2. Click an item in the gallery to insert text into cell A1
Click the gallery again and then click the button. The dialog box is displayed as shown in Figure 3.
Figure 3. Clicking the button displays a dialog box
Exit Excel.
In Visual Studio, in Solution Explorer, right-click RibbonDemoSetup, and then click Uninstall.
There are a number of resources on customizing the Office Fluent user interface.
FYI, when I go to the given URL and click on "view source", I get
Internet Explorer cannot open the Internet site.
Could not complete the operation due to error 800c0007.
Yes, it's IE 4.01, but I thought you guys fixed this bug in one of the betas.
This was an Apache 1.3.x bug. We thought it was fixed
earlier, but the previous fix apparently left a lot to be
desired. Dean Gaudet has come up with a fix for the fix.
If you are running a bleeding-edge Apache server look for a
commit from Dean shortly.
Automatic comment from SVN on behalf of aharvey
Revision:
Log: Fix infinite recursion in namespaces FAQ example #8, per note 111006.
"Elementary" One-liners Overview (Part II)
Hi!
In the last Weekly Overview we looked at first ten missions from the "Elementary" island.
Today we've got the second part of our Weekly Review for your reading pleasure.
Count Inversions
In this mission you need to count the number of inversions in a sequence of numbers.
And we're opening up with @veky's solution "Gallery".
Here we see an interesting use of a double loop inside a comprehension.
count_inversion = lambda s: sum(a > b for i, b in enumerate(s) for a in s[:i])
The end of other
In this task, you are given a set of words in lower case.
We must check whether there is a pair of words, such that one word is the end of another (a suffix of another).
Here's a simple and clear one-liner from @Apua "just one liner".
checkio=lambda S:any(a!=b and a.endswith(b) for a in S for b in S)
Days Between
How to find the difference in days between the given dates.
"oneliner" by @DiZ with the "import" variation which is designed specifically to write one-liners ;-)
days_diff=lambda f,t,d=__import__('datetime').date:abs(d(*f)-d(*t)).days
Pangram
Check if a sentence is a pangram or not.
And here's the typical type of "import" in @ciel's solution.
import string;check_pangram=lambda t:string.ascii_uppercase in str().join(sorted(list(set(t.upper()))))
Binary count
Convert a number to its binary representation and count how many ones (1) appear in it.
Everything is short, simple and clear in "lambda" solution by @mastak.
checkio = lambda n: bin(n).count('1')
Number Base
You are given a positive number as a string along with the radix for it.
Your function should convert it into decimal form.
Yes, this is really obvious to do in Python. So let's look at the short solution by @shiracamus
checkio=lambda s,r:int(('-1',s)[int(max(s),36)<r],r)
Common Words
You are given two strings with words separated by commas. Try to find what is common between these strings.
Yep, @somnambulism didn't use sets in his "oneliner".
checkio = lambda a, b: ','.join([x for x in sorted(a.split(',')) if x in b.split(',')])
Absolute sorting
An array (a tuple) has various numbers. You should sort it, but sort it by absolute value in ascending order.
And again we have an "Obvious" solution by @nickie.
checkio=lambda a:sorted(a,key=abs)
Building Base
This is not a basic mission: players should write a class that satisfies the given requirements.
Here I've not found a formal one-liner, but I think this solution with the simple title "zzdgnczfgdmksjdgfjs" by @samulih can be counted as a one-liner.
class Building:
    __repr__ = lambda s: '%s%s' % (s.__class__.__name__, s.d)
    def __init__(s, *args, methods=('area', 'volume', 'corners')):
        s.d, ns, we, r = (args + (10,))[:5], ('sou', 'nor'), ('we', 'ea'), (0, 1)
        s.__dict__.update({k: lambda n=n: ({'%sth-%sst' % (ns[i], we[j]): [s.d[0]+s.d[3]*i, s.d[1]+s.d[2]*j] for i in r for j in r}) if n>1 else s.d[2]*s.d[3]*s.d[4]**(n&1) for n, k in enumerate(methods)})
Friends
And again a mission where you need to write a class.
Yes, this is also a not strictly a one-liner, but a set of one-liners in "lambda" solution by @jcg.
class Friends(set):
    __init__ = lambda self, connections: self.update(map(frozenset, connections))
    add = lambda self, connection: not(connection in self or super().add(frozenset(connection)))
    remove = lambda self, connection: connection in self and not super().discard(frozenset(connection))
    names = lambda self: set.union(*map(set,self))
    connected = lambda self, name: set().union(*filter(lambda x:name in x, self))-{name}
What next?
We finished our Elementary island with 20 missions in 29 lines (the last two missions broke the one-line pattern).
If you have ideas for next week's solution overview -- feel free to let us know.
That's all for today, folks. Bye!
Unless you’ve spent the last 18 months under a literal rock, you’ve no doubt heard about Lightning and how it can help you build apps faster. We’ve spoken with many developers in the Salesforce community over the past 18 months about their transition to Lightning, and have received great feedback that we are using to deliver tools and features that make it easier for you to use Lightning Components. That is why I am excited by the new Base Lightning Components that we delivered in the Winter ’17 release. Over the next few weeks, I will be writing several posts to introduce you to this new resource.
Before we dive into Base Lightning Components, let’s talk about why we created them. That story goes all the way back to the early pilot releases of Lightning where a common story we heard from the pilot participants was: “that UI looks cool, how do we make our apps look like that?”
Having worked with CSS for the better part of 20 years, I’ve formed an opinion that I am sure you will agree with: the “C” in CSS often stands for confusing! This is especially true when you are trying to reverse engineer someone else’s CSS – and even more so when the CSS is the core of a large enterprise application. That is the reason we created the Salesforce Lightning Design System (SLDS) and its CSS framework. Using the Lightning Design System, you can simply find the components that you need for your Lightning application – complete with markup and CSS. It could not be simpler — or so we thought.
In reality, modifying the markup for specific component states is not always easy due to the lack of JavaScript, a side-effect of the framework-agnostic nature of SLDS. In speaking with developers about this challenge, we’ve heard that an “SLDS version of element X” would help make this easier. Base Lightning Components were born to address not only this request but also to help streamline Lightning development and make it even faster. In building out the Base Lightning Components, the team also took the opportunity to refactor code for optimum performance, and leverage some of the intrinsic browser capabilities such as client-side validation.
Deciding which components to build first was a matter of looking at the common patterns across both internal and external teams. It should come as no surprise that the building blocks for forms, such as inputs, radio buttons, and checkboxes, were literally everywhere, and therefore a logical first choice. However, there were also building blocks for grouping information and defining application layout.
With Winter ’17, we’ve published 18 Base Lightning Components that provide building blocks for form-based components, structural components like tabs and cards, and finally, two layout components based on the responsive grid system of SLDS. You can find the complete list in the Lightning Components Developer Guide, in the Component Reference area. As you look at the list, it is important to remember that, at least initially, these components will not “do everything under the sun,” but should address approximately 80% of use cases. We encourage feedback as you begin to build with them so that we can better understand functional gaps and plan for enhancements.
So how do you go about using Base Lightning Components when building Lightning applications? The first step is to familiarize yourself with the individual components using the Lightning Component Reference docs. You will quickly notice that each of the components follows a similar pattern. First, each component is identified by the lightning: namespace and the name of the component. From there, every component has a list of attributes, just like any standard HTML element. For example, a button might look like this:
<lightning:button variant="brand" label="Submit" />
Notice the “variant” attribute. A variant is a version of the component that simply looks different from the default. In addition to the documentation for the Base Lightning Components, it can be helpful to explore the SLDS component list to see visual representations of components and their variants.
Moving forward, the list of components from the SLDS site will be the basis for new Base Lightning Components. In other words, if it is in SLDS, you can have a reasonable expectation that you will see a corresponding Base Lightning Component in a future release.
Next time, we will take a closer look at some of the individual Base Lightning Components and demonstrate how you can use them to build a holistic Lightning application.
Last updated in July 2020.
Follow along as we mock-up, design and lay out a sales dashboard with native React components from KendoReact, complete with a responsive grid, data, charts and more.
Building web apps can be challenging, even with modern frameworks like React. Fortunately, UI libraries can make this easier. In this tutorial, we are going to use KendoReact, a library of professional UI components built for React. If you have used component libraries from Progress, you will feel right at home with KendoReact. However, if you have not, this tutorial will demonstrate how to work with our KendoReact components, how to wrap them in containers and provide data to them.
Source code for this tutorial can be found at: Github.com/Telerik/kendo-react-build-a-sales-dashboard. This repo provides step-by-step commits that follow each section of this tutorial!
What we will be making: Below is a screenshot of the final dashboard. My goal is to show you step by step how to take a wireframe mockup and turn it into working HTML using a combination of custom HTML and CSS and KendoReact components.
Our sales dashboard will show quarterly data for top selling products of our fictitious company. I will introduce the data needed for each component as we build them and we will utilize a responsive grid from Bootstrap to aid with responsive layout changes.
We will use Create React App to setup a React project within minutes.
A lot of line of business applications are mocked up using simple sketches. I have used a tool called Balsamiq to create a mockup for our dashboard. This tutorial will get our charts, grids, graphs and other items laid out in a dashboard fashion each component driven and controlled by JSON data.
We will use a Material Design theme to give us good looking type and polished UI styles with minimal effort.
From our mock-up I have created an outline that I will use to arrange my rows and columns. This will guide me in structuring my <div> elements and creating classes I will need to achieve the specific layout I want.
Below is the typical outline I would have created given the mock-up above. We have two rows, the first containing the heading to the left and buttons to the right. Everything else will go in a new row underneath it. The second row is split up into two columns. The first (or left) column will contain our Panel Bar component. Inside the second (or right) column will be two rows, the first having three columns and the next having just one column spanning the full width of its parent container. From this description, I now have a basic idea of how to structure my HTML.
Now that we have these sketches we can create our markup using <div> elements and assigning bootstrap-grid classes indicating how many of the maximum 12 columns each <div> will take up. We will use the Bootstrap Grid's responsive column classes to help us achieve our desired layout.
We need to ensure that we have Node installed, version 10 or higher, as the latest version of Create React App makes this a requirement. Having Node installed will allow us to use npm to download Yarn Package Manager. If you are new to Create React App, you can brush up on the latest with this article, Hello, Create React App!, written to get folks up to speed creating React applications using zero configuration.
Yarn is used as the default package manager in Create React App. Install it using:
$ npm install yarnpkg -g
If you have any issues installing Yarn on Windows, just download and run the msi installer here.
$ npx create-react-app sales-dashboard
$ cd sales-dashboard
$ yarn start
Once Create React App is started you can check what our app looks like in the browser:
Great, the app is working. Your page will look funny for a few minutes until we add the HTML and CSS.
We need a few packages installed from npm in order to get the basic layout for our dashboard working. KendoReact has a Material theme that we can pull in as a package for layout. We will also need to bring in a few KendoReact buttons, which will give you an idea of how easy it is to pull the bits and pieces in to get started. Since Create React App uses yarn, so will we. Let's install the few packages we need from KendoReact:
yarn add @progress/kendo-theme-material @progress/kendo-react-layout @progress/kendo-react-pdf @progress/kendo-drawing @progress/kendo-react-buttons @progress/kendo-react-ripple
Considering the layout we saw above, I have created a hierarchy of div elements, each given a className in the traditional “12 column responsive grid” fashion, and simplified that idea in a visual aid seen below. This is just to give an idea of what we need to create. The HTML I will have you copy from the Github Gist below has some additional classes for each breakpoint, xs through xl.
Tags like "<GridContainer />" are just placeholders for the KendoReact components we will add. Hopefully the diagram above gives you an idea of our HTML structure.
Copy the code below into your App.js page.
Copy the CSS below into your App.css.
Right now, our layout is not as we intend because we have not loaded bootstrap yet. Let's use the Bootstrap 4 Grid, which provides a CSS file that only includes styles from Bootstrap Grid and none of the other Bootstrap styles. This will ensure we are not loading additional CSS that we are not using. I use this specific package because it has a decent amount of weekly downloads and the project seems maintained, but there are many others just like it. We will add the package first:
yarn add bootstrap-4-grid
Next we will add an import for the bootstrap-4-grid CSS, which we will load from the node_modules/bootstrap-4-grid/css directory. This import should go at the top of the App.js file.
import 'bootstrap-4-grid/css/grid.min.css';
I have a piece of CSS I would like to add just to give us an idea of the boundaries of our Bootstrap Grid. The following CSS styles will render a one pixel black line around every row and column of our Bootstrap 4 Grid. We should see a resemblance to the mockup from earlier.
.container .row div { outline: solid 1px black; }
Once added to the App.css file, we will get a trace of our layout.
We can see the boundaries of each box on the page, we also see some column gutters around the percentages. If we wanted we could inspect the page using the Chrome DevTools and get a better understanding of the padding on each section of the grid.
Since we are using Bootsrap, we can change the layout at different page widths (breakpoints). With the classes that we have added, you will see a clear change in the layout when you cross the small to medium breakpoint boundary. We can open Chrome DevTools and toggle the device toolbar allowing us to resize the page. If we drag from appx 700px to 800px range, we will see a clear change in the layout when we cross 768 pixels. Try it out or just watch me do it!
We already have a few buttons on the page, but we want to replace them with KendoReact buttons. It's a great way to get acquainted with working with KendoReact components, which take advantage of the Material theme we have installed. We already have the dependencies added. Let's go into our App.js file and add the following imports, including our stylesheet for the material theme:
import React, { Component } from 'react';
import ReactDOM from 'react-dom';
import { Button } from '@progress/kendo-react-buttons';
import { savePDF } from '@progress/kendo-react-pdf';
import '@progress/kendo-theme-material/dist/all.css';
import './App.css';
import 'bootstrap-4-grid/css/grid.min.css';
We can wire up the Export to PDF button. To do this, we simply need to find the two buttons we have in our HTML and change both <button> tags to use title casing:
<Button>Share</Button> <Button>Export to PDF</Button>
This will render a KendoReact Button complete with its style. A KendoReact Button has a prop named primary which we can use to add a distinguishing feature to our button - it's the same as adding the class primary. We just need to pass the value true to this prop. Behind the scenes, our component takes that true value and then renders a primary class.
<Button primary={true}>Share</Button>
Let's use a class that will give our buttons spacing. It's already defined in the styles we have added to the App.css file. On the div that surrounds our buttons, add buttons-right to the className. The buttons and their containing div should now look like this:
<div className="col-xs-6 col-sm-6 col-md-6 col-lg-6 col-xl-6 buttons-right">
  <Button primary={true}>Share</Button>
  <Button>Export to PDF</Button>
</div>
Now you should see your buttons taking on a Material Design style.
I noticed something missing when I clicked on our new buttons. The Material Design frameworks I have worked with in the past utilize a droplet effect on certain UI elements when pressed. Buttons definitely show this ripple effect and I am not seeing it on ours. This is because KendoReact provides this as a separate package (KendoReact Ripple), which I think is a good idea because I may or may not want it in my project. Let's import the <Ripple> as a component and we will wrap it around whatever portion of our application we want to apply it to:
yarn add @progress/kendo-react-ripple
With that done, you can now import
Ripple into the
App.js page just below the savePDF import:
import { Ripple } from '@progress/kendo-react-ripple';
Next, we want to add a
<Ripple /> container around the
<div> element of the
app-container so that all
Button and other components will get the ripple effect applied to them as a child of
<Ripple />:
class App extends Component {
  render() {
    return (
      <Ripple>
        <div className="bootstrap-wrapper">
          { /* ... */ }
        </div>
      </Ripple>
    );
  }
}

export default App;
To see the ripple live in our application without triggering the actual button handler, press a button, drag outside the button's hit area, and then release.
A lot of times we simply want the user to be able to print everything on the page to a PDF file. In order to do this, we can use KendoReact's PDF Export to do all the heavy lifting.
Add the following code to your App Component Class in
App.js:
constructor(props) {
  super(props);
  this.appContainer = React.createRef();
}

handlePDFExport = () => {
  savePDF(ReactDOM.findDOMNode(this.appContainer), { paperSize: 'auto' });
}
With that code in place, we need to point this.appContainer at the HTML element that contains the area we want to print to PDF.
Because we want to print the entire sales dashboard, we will place a
ref attribute on an outer
<div> in our JSX. I'm going to use the one with the className:
app-container
<div className="app-container container" ref={(el) => this.appContainer = el}>
The
ref attribute allows us to assign an
HTMLDivElement, representing the contents of the
<div> element it is placed on, to a local property.
Next, we will want to ensure that we are calling the
handlePDFExport() function from the
onClick event. Let's also disable the other button for the time being.
<Button onClick={this.handlePDFExport}>Export to PDF</Button>
Let's now test our button to ensure everything is working. When the button is pressed, you should get a prompt to download a PDF file. Upon opening the PDF you should see the entire contents of our page. You can imagine what would happen if we put this attribute on another
<div> in our page. At that point the button would only print the contents of the
<div> element. We will revisit this idea once we get the Grid working and create a button that only prints the data grid.
Let's wire up the Share button now. In a real production application, this would talk to a service that sends an email to share the dashboard link, but we are just going to print to the console. The KendoReact Dialog is one of the more important and widely used components in the KendoReact toolkit; it communicates specific information and prompts users to take action through a modal overlay.
In the constructor for our
App.js file, let's create an object to hold state. React treats this state object as special and, under the hood, handles it differently from ordinary properties.
constructor(props) {
  super(props);
  this.appContainer = React.createRef();
  this.state = {
    showDialog: false
  };
}
Let's create a function inside the
App class, underneath the
handlePDFExport() function. As I mentioned, React state objects are special; they have an API used specifically for interacting with them. If we want to change the state in any way, we should not access the object directly and assign new values. Instead, we use the setState method to update the state. This schedules an update to the component's state object, and when the state changes, the component responds by re-rendering.
handleShare = () => { this.setState({ showDialog: !this.state.showDialog }) }
PRO TIP: To execute a function or to verify that the state updates correctly, we can pass a function as a second argument (a callback) to setState(); the callback will be executed once the state has been updated. Find out more in the React docs on state.
handleShare = () => { this.setState({ showDialog: !this.state.showDialog }, () => console.log(this.state)) }
We also need to update the button to use this function.
<Button primary={true} onClick={this.handleShare}>Share</Button>
So this button toggles a boolean value in our state object, which is typically a good way to hide and show modals, popups, or hidden areas of the page. We now need to create a hidden area that will reveal itself when this button is clicked. As we saw from our setState callback, each time we press the Share button that value is flipped. Find the placeholder HTML block below in our markup; it should be replaced by the code that follows:
<h4 style={{display : 'none'}}>Dialog Shown/Hidden with Logic</h4>
Replace with the following code:
{this.state.showDialog &&
  <Dialog title={"Share this report"} onClose={this.handleShare}>
    <p>Please enter the email address/es of the recipient/s.</p>
    <Input placeholder="example@progress.com" />
    <DialogActionsBar>
      <Button primary={true} onClick={this.handleShare}>Share</Button>
      <Button onClick={this.handleShare}>Cancel</Button>
    </DialogActionsBar>
  </Dialog>
}
Let's unpack what we just added: we brought in a new KendoReact component called
<Dialog>, which is wrapped in an expression that will hide or show the area based on the
state.showDialog value being flipped. The best way to think of this is that our
<Dialog> component equates to a truthy value. It's similar to saying:
{ this.state.showDialog && true }
So because it's paired up with the
this.state.showDialog, if both equate to true, the Dialog displays. However, if
this.state.showDialog is false, the output of the
<Dialog> component is not revealed. Again this is just a way to think about this statement if for any reason it looks weird to you.
The
<Dialog></Dialog> component will not work without importing it from the
kendo-react-dialogs package, so let's get that added and imported:
yarn add @progress/kendo-react-dialogs @progress/kendo-react-inputs @progress/kendo-react-intl
And we'll also import those packages in our
App.js. Our imports should now look like this:
import React, { Component } from 'react';
import ReactDOM from 'react-dom';
import { Dialog, DialogActionsBar } from '@progress/kendo-react-dialogs';
import { Input } from '@progress/kendo-react-inputs';
import { Button } from '@progress/kendo-react-buttons';
import { savePDF } from '@progress/kendo-react-pdf';
import { Ripple } from '@progress/kendo-react-ripple';
import '@progress/kendo-theme-material/dist/all.css';
import './App.css';
import 'bootstrap-4-grid/css/grid.min.css';
I'd like to start bringing in the
Chart component. It has the least amount of data associated with it, so it's a logical next step and easy to implement.
Let's add a directory for all of our container components that will wrap our individual KendoReact components. We will call the directory
components. Inside, create our first container component named:
DonutChartContainer.js.
We will need KendoReact Charts for this feature. We will also install HammerJS, which is required for Chart events.
yarn add @progress/kendo-react-charts hammerjs
Next, I was able to pretty much copy and paste from the KendoReact chart documentation to get what we need for
DonutChartContainer.js, which you can copy from the Gist below:
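The Gist itself isn't embedded here, so below is a minimal sketch of what DonutChartContainer.js could look like, based on the donut series example in the KendoReact Charts documentation. The chart title, label colors, and the relative import path for the data are assumptions rather than the article's exact code; donutChartData is the export we create in appData.js in the next step.

import React from 'react';
import 'hammerjs';
import {
  Chart,
  ChartTitle,
  ChartLegend,
  ChartSeries,
  ChartSeriesItem,
  ChartSeriesLabels
} from '@progress/kendo-react-charts';
import { donutChartData } from '../data/appData';

// Label template: start by showing just the category name for each slice
const labelTemplate = (e) => (e.category);

export const DonutChartContainer = () => (
  <Chart>
    <ChartTitle text="Q4 2018 Sales by Category" />
    <ChartSeries>
      <ChartSeriesItem
        type="donut"
        data={donutChartData}
        categoryField="foodType"
        field="percentSold"
      >
        <ChartSeriesLabels content={labelTemplate} background="none" color="#fff" />
      </ChartSeriesItem>
    </ChartSeries>
    <ChartLegend visible={false} />
  </Chart>
);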
The KendoReact Charts have many different series types. If you go to the KendoReact Charts documentation, you will see a subsection called "Series Types." One of these series types is "Donut," and that is where I found the StackBlitz demo I copied the code from.
The KendoReact charts provide a vast set of features for building rich data visualizations. To learn more about them, feel free to check out the KendoReact Charts API.
The first thing we want to create for the
Chart is some dummy data. Like I said before, all of our components will need data. Let's create a directory named
data as a sibling to our
components directory. Inside that directory create a file named:
appData.js.
Remember, the idea is to show what percentage of food (by category) has sold in Q4. That specific data is what we will use to populate the donut chart. We want to display a label (foodType) and percentage value (percentSold).
foodType: category of foods sold in Q4 at all stores
percentSold: percentage, represented as a decimal, sold in all stores in Q4
Copy the code below into the
appData.js file:
export const donutChartData = [
  { 'foodType': 'Beverages', 'percentSold': 16.5 },
  { 'foodType': 'Condiments', 'percentSold': 24 },
  { 'foodType': 'Produce', 'percentSold': 13 },
  { 'foodType': 'Meat/Poultry', 'percentSold': 16.5 },
  { 'foodType': 'Seafood', 'percentSold': 20 },
  { 'foodType': 'Other', 'percentSold': 10 }
];
We need to add the import to
App.js for the
DonutChartContainer:
import { DonutChartContainer } from './components/DonutChartContainer';
And replace the
<h4>DonutChartContainer</h4> element with:
<DonutChartContainer />
Now our component should be working. I want to show you how to format the label of the Donut Chart. Right now we are only displaying the category because we specified that in our component configuration. Inside the
DonutChartContainer.js file, change the
labelTemplate function to:
const labelTemplate = (e) => (e.category + '\n'+ e.value + '%');
Here is our beautiful Donut; it even looks tasty! When we use the Donut Chart, we interact with a <ChartSeriesLabels> component. Its content input accepts a function that returns a string; it's that simple. It fills each section (categories in our case) with rich goodness. Using just what we know about JavaScript, we can achieve better formatting, and I think we may want to use e.percentage instead of e.value. You can get details on the fields we can tap into in the ChartSeriesLabels documentation.
I have modified the template function to use percentage, which is more accurate for this type of chart. Even if the data doesn't add up to 100, each part will still represent its share of the whole.
const labelTemplate = (e) => (e.category + '\n' + (e.percentage*100) +'%');
With that, we're now using
percentage instead of
value.
We will use a KendoReact Bar Chart, which will represent a monthly breakdown of the percentages from each individual month of Q4 2018. The Donut
Chart showed the average percentage over the entire quarter, but our bar chart will show each month of that quarter. Below is the data we need to add to our
appData.js file. You will notice that our data corresponds to the Donut Chart as well, so the user can easily see the relationship.
export const barChartQ4Months = ['October', 'November', 'December'];

export const barChartMonthlyPercentages = [
  { name: 'Beverages', data: [14, 16, 19.5] },
  { name: 'Condiments', data: [24, 23.5, 24.5] },
  { name: 'Produce', data: [12.5, 12.5, 14] },
  { name: 'Meat/Poultry', data: [16, 18, 17] },
  { name: 'Seafood', data: [21.5, 20, 17] },
  { name: 'Other', data: [7, 12, 11] },
];
With the data in place, we can add a new container component to our
components directory. Create a file named
BarChartContainer.js, and copy the code below into that file:
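The Gist for this component isn't embedded here either, so the following is a rough sketch of a BarChartContainer.js, assuming we render one column series per food category and use the months as categories. The chart title and the type="column" choice are illustrative, not necessarily the exact code from the Gist.

import React from 'react';
import 'hammerjs';
import {
  Chart,
  ChartTitle,
  ChartLegend,
  ChartCategoryAxis,
  ChartCategoryAxisItem,
  ChartSeries,
  ChartSeriesItem
} from '@progress/kendo-react-charts';
import { barChartQ4Months, barChartMonthlyPercentages } from '../data/appData';

export const BarChartContainer = () => (
  <Chart>
    <ChartTitle text="Q4 2018 Monthly Breakdown" />
    <ChartLegend position="bottom" orientation="horizontal" />
    <ChartCategoryAxis>
      {/* One category per month: October, November, December */}
      <ChartCategoryAxisItem categories={barChartQ4Months} />
    </ChartCategoryAxis>
    <ChartSeries>
      {/* One series per food category; each data array holds one value per month */}
      {barChartMonthlyPercentages.map((item, index) => (
        <ChartSeriesItem
          key={index}
          type="column"
          data={item.data}
          name={item.name}
          tooltip={{ visible: true }}
        />
      ))}
    </ChartSeries>
  </Chart>
);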
Add the import to
App.js for the
BarChartContainer:
import { BarChartContainer } from './components/BarChartContainer';
And replace the
<h4>BarChartContainer</h4> element with:
<BarChartContainer />
Check to ensure that your bar chart series use the same colors as the Donut Chart slices for each category. Everything should line up because the data for each chart is in the same order. If you were building an API to serve this data, that ordering is something you would want to keep in mind.
Have you noticed how crazy simple it is to use these components? We still want to have a wrapper or container component around the KendoReact component so that we have that layer if needed.
We have an array of months; each of those months translates into a category on the bar chart. We also have an array of objects, each with a name field that corresponds to one of our food categories and a data field holding one value per month. For each month (a category on the bar chart), the chart takes the value at that month's index from every series' data array and builds a bar whose height corresponds to that value.
My tip to anyone working with this chart is to take that example and become familiar with how each tag inside the
<Chart> component plays into the bigger picture. We have a Legend, ChartCategoryAxis & Items, ChartSeries & Items, ChartValueAxis & Items and of course the encompassing component, the Chart itself.
To do more hacking on charts, check out this article on Data Visualizations with Kendo UI for some really cool ideas on using different charts.
The
Grid container is by far one of our most used and requested components. Our grid will be a list of products. To populate it, we'll copy the gist below and paste it into
appData.js. This will serve as the top 10 products of Q4, which are the heart of the data we are building the dashboard around. In a more advanced situation, the
Grid could be populated by clicking on a particular month and we would filter a larger set of products, but in order to just get a prototype created and a Grid on the page, we are going to use this dummy data. We will do some processing of that data, and I can show you how that is done in just a few moments when we add the Sparkline chart to our Grid as an enhancement.
We need to add a few packages before using the
Grid. For info on why each dependency is needed, check out the KendoReact Grid Dependencies section in our documentation:
yarn add @progress/kendo-data-query @progress/kendo-react-dateinputs @progress/kendo-react-dropdowns @progress/kendo-react-grid @progress/kendo-react-inputs @progress/kendo-react-intl @progress/kendo-react-data-tools
I listed all of the dependencies to show what is required for the grid, but a few of these we already installed during a previous component - that's because KendoReact components sometimes share the same dependencies. There is no harm in running the install again.
Next, let's add the data to our
appData.js file:
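The data Gist isn't reproduced here. As a rough idea of the shape the Grid expects, the entries could look something like the snippet below; the field names follow the Northwind-style naming used later in this section (ProductID, ProductName, Category.CategoryName), and the values are placeholders rather than the real top-10 data.

export const gridData = [
  {
    'ProductID': 1,
    'ProductName': 'Chai',
    'UnitPrice': 18.0,
    'UnitsInStock': 39,
    'Discontinued': false,
    'Category': { 'CategoryID': 1, 'CategoryName': 'Beverages' }
  },
  {
    'ProductID': 2,
    'ProductName': 'Chang',
    'UnitPrice': 19.0,
    'UnitsInStock': 17,
    'Discontinued': false,
    'Category': { 'CategoryID': 1, 'CategoryName': 'Beverages' }
  }
  // ...eight more products to round out the top 10
];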
Looking at the data, the most important fields are the product id, name, category, price, in-stock, and discontinued fields. I brought in a little more data than we need in case you want to play around with the grid on your own and experiment. For now, though, we will just use those specific fields.
The main components for a KendoReact
Grid are the actual
<Grid> element which contains child
<Column> components, each mapping to a specific field from our data object.
I want to give you a quick visual of the code for this component, so if I only wanted to display the id, name and category from our data set, I could very easily and almost from memory build that component:
<Grid style={{height:'300px'}} data={gridData}>
  <Column field="ProductID" title="ID" />
  <Column field="ProductName" title="Name" />
  <Column field="Category.CategoryName" title="Category Name" />
</Grid>
And that would look like this rendered on the page:
Implementing the
Grid is that simple. In our project, we are going to use a few more properties and some more column sizing than you saw in the example above. Copy the entire component from the gist below and put it into a new file named
GridContainer.js:
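Since the Gist isn't embedded here, this is a minimal sketch of what the starting GridContainer.js could look like; the exact column titles and widths in the real Gist may differ.

import React from 'react';
import { Grid, GridColumn as Column } from '@progress/kendo-react-grid';
import { gridData } from '../data/appData';

export const GridContainer = () => (
  <Grid style={{ height: '300px' }} data={gridData}>
    <Column field="ProductID" title="ID" width="70px" />
    <Column field="ProductName" title="Name" width="250px" />
    <Column field="Category.CategoryName" title="Category" />
    <Column field="UnitPrice" title="Price" width="90px" />
    <Column field="UnitsInStock" title="In Stock" width="90px" />
    <Column field="Discontinued" title="Discontinued" width="120px" />
  </Grid>
);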
Add the import to
App.js for the
GridContainer:
import { GridContainer } from './components/GridContainer';
And replace the
<h4>GridContainer</h4> element with:
<GridContainer />
Now that we have the basic grid working and using our data, we will add some code that processes the data by adding random numbers to an array so that we can create a fake sparkline chart for each product. In a real product or application we would need to use real historical data, but for the purposes of this demo, we'll fake it. Let's create the function and add it just below the imports in our
GridContainer.js file:
const processData = (data) => {
  data.forEach((item) => {
    item.PriceHistory = Array.from({ length: 20 }, () => Math.floor(Math.random() * 100));
    return item;
  });
  return data;
}
The property
PriceHistory is now available when the
Grid is rendered. We can see this by placing a
debugger; statement before the
return data; line in our new function, then opening the Chrome DevTools (F12) and inspecting that
data object. Now we just need a
Sparkline chart that can use the new
PriceHistory property.
We are going to create a Sparkline Chart component inside of this
GridContainer.js file. When a component or function will only be used in conjunction with one specific component, it's okay to keep it in the same file. We will add a function and component just under the current imports of the
GridContainer component, for use only in this grid:
import { Sparkline } from '@progress/kendo-react-charts';

const SparkLineChartCell = (props) => <td><Sparkline data={props.dataItem.PriceHistory} /></td>;
Next, add the new column to the
Grid component, just above the discontinued column:
<Column field="PriceHistory" width="130px" cell={SparkLineChartCell} />
We also need to update the Grid component to use
processData:
<Grid style={{ height: '300px' }} data={processData(gridData)}>
Also, if you have not already done so 😆, we should comment out the Grid Outline code from the
App.css page.
.container .row div { outline: solid 1px black; }
Just in case you have any issues, I have created a gist for GridContainer.js showing what the code should look like at this point (a sketch of it also appears below). We have now added a KendoReact component within another component. That's cool! It's a Sparkline rendering inside of a column from our Grid. I wanted to highlight this because you can compose KendoReact components lego-style if you wish. When in doubt, just try it!
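Here is a rough sketch of how GridContainer.js could look at this point, with the processData helper, the Sparkline cell, and the new column in place. Column titles and widths are illustrative, not the Gist's exact values.

import React from 'react';
import { Grid, GridColumn as Column } from '@progress/kendo-react-grid';
import { Sparkline } from '@progress/kendo-react-charts';
import { gridData } from '../data/appData';

// Fake a 20-point price history for each product so the Sparkline has data to draw
const processData = (data) => {
  data.forEach((item) => {
    item.PriceHistory = Array.from({ length: 20 }, () => Math.floor(Math.random() * 100));
  });
  return data;
};

// Custom cell that renders a Sparkline from the PriceHistory array
const SparkLineChartCell = (props) => (
  <td><Sparkline data={props.dataItem.PriceHistory} /></td>
);

export const GridContainer = () => (
  <Grid style={{ height: '300px' }} data={processData(gridData)}>
    <Column field="ProductID" title="ID" width="70px" />
    <Column field="ProductName" title="Name" width="250px" />
    <Column field="Category.CategoryName" title="Category" />
    <Column field="UnitPrice" title="Price" width="90px" />
    <Column field="UnitsInStock" title="In Stock" width="90px" />
    <Column field="PriceHistory" title="Price History" width="130px" cell={SparkLineChartCell} />
    <Column field="Discontinued" title="Discontinued" width="120px" />
  </Grid>
);

If your file roughly matches this, each row of the grid should render its fake price history as a tiny Sparkline.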
The KendoReact PanelBar is a component in the KendoReact Layout package. We should already have this installed, so we can skip this command.
yarn add @progress/kendo-react-layout
Copy the data below into the appData.js file. The data has two top-level nodes containing arrays as values.
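Because the data Gist isn't shown here, the snippet below only sketches the general shape: two top-level nodes, each holding an array. The node names (teamMates, events), the fields, and the values are placeholders; substitute the actual data from the Gist.

// Shape only: two top-level nodes, each holding an array.
// All names and values below are placeholders.
export const panelBarData = {
  teamMates: [
    { id: 1, name: 'Jane Doe', position: 'Regional Sales Manager', img: 'img/jane.png' },
    { id: 2, name: 'John Smith', position: 'Account Executive', img: 'img/john.png' }
    // ...the rest of the team
  ],
  events: [
    { id: 1, title: 'Q4 Kickoff Meeting', date: '2018-10-01' },
    { id: 2, title: 'Holiday Promotion Review', date: '2018-12-10' }
  ]
};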
Let's bring in some additional styles for the Teammates section of the
PanelBarContainer. Copy this Gist to the bottom of the
App.css page:
Now we just need to copy the Gist below and paste it into our
PanelBarContainer.js component:
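The PanelBarContainer Gist isn't embedded here, so the following is a sketch of how such a component could be put together with the KendoReact PanelBar and PanelBarItem. The section titles, class names, and the panelBarData shape all follow the placeholder data sketched above rather than the article's exact code; adjust them to match the Gist and the styles you added to App.css.

import React from 'react';
import { PanelBar, PanelBarItem } from '@progress/kendo-react-layout';
import { panelBarData } from '../data/appData';

export const PanelBarContainer = () => (
  <PanelBar>
    <PanelBarItem expanded={true} title="Teammates">
      <div className="team-container">
        {panelBarData.teamMates.map((member) => (
          <div className="team-member" key={member.id}>
            {/* Images live in public/img, so they are served from the site root */}
            <img src={member.img} alt={member.name} />
            <span className="team-name">{member.name}</span>
            <span className="team-position">{member.position}</span>
          </div>
        ))}
      </div>
    </PanelBarItem>
    <PanelBarItem title="Upcoming Events">
      <ul className="event-list">
        {panelBarData.events.map((event) => (
          <li key={event.id}>{event.title} ({event.date})</li>
        ))}
      </ul>
    </PanelBarItem>
  </PanelBar>
);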
Now add the import to
App.js for the
PanelBarContainer:
import { PanelBarContainer } from './components/PanelBarContainer';
And replace the
<h4> element for the
PanelBarContainer:
<PanelBarContainer />
We will also need to add some profile images for each team member. I have created a small zip file that has some images already sized correctly that you can use:
profile_images.zip.
After you have downloaded those images, add them to a public/img directory in your project. The public directory is the right place for static files like logos, graphics, and images.
Our new component should look like this:
We have some semblance of a dashboard going on here. It's laid out in a manner that will look decent on medium and large sized screens (960px and up). Obviously this is nothing to ship to production, but it gets you started and working with the KendoReact components, which is the point.
A few things we could do to expand this demo: add some interactivity, refactor it to work with Redux, or build an API to serve up our data. I'd like to encourage you to explore these options and let us know what you think about our components in the comments section. Also let us know if you would like to see this demo taken further with more advanced concepts.