Introduction
MSBuild 4.0 has all sorts of features for targeting different .NET Framework versions. The idea is that you can use MSBuild 4.0 to build all your legacy (pre-4.0) project types, as well as new projects that just target a downlevel version of the .NET Framework. In addition, you can mix and match the target framework versions for different projects under a single invocation of MSBuild.exe. Unfortunately, in Beta 2, we've had reports of a problem with building unit test projects that contain the special generated accessor assemblies, otherwise known as "test references". If you have a VS 2008 unit test project with test references, you may find that MSBuild 4.0 gives you an error like:
Caught a BadImageFormatException saying "Could not load file or assembly 'C:\Build\Project\Sources\ProjectName\ProjectName.UnitTests\obj\Debug\ProjectName_Accessor.dll' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded.".
Update: This should be fixed for the final release of VSTS 2010.
The Problem
MSBuild 4.0 uses the .NET 4.0 CLR. All the MSBuild tasks, including third party and custom tasks, are loaded under the 4.0 runtime. In most cases this isn't bad, because the 4.0 runtime should be perfectly capable of running code that was targeted for the 2.0 runtime. However, in the case of the VS 2008 Team Test BuildShadowTask, the code creates an assembly using the System.Reflection.Emit namespace. That namespace, when provided by the 2.0 CLR, emits assemblies targeting the 2.0 CLR. When the namespace is provided by the 4.0 CLR it emits assemblies targeted at the … drum roll please … 4.0 CLR! So, when you build a VS 2008 unit test project (targeting the 3.5 framework, which uses the 2.0 CLR) using MSBuild 4.0, you end up with accessor assemblies that target the 4.0 CLR. Oops. Now downlevel applications trying to load the assembly will blow up with the above exception. You'll see it sometimes during your build with ResGen.exe, or when you go to run your tests from MSTest.exe.
The Workaround
To get around this, all you need to do is run the BuildShadowTask from MSBuild 3.5, which is the version VS 2008 projects were meant to be built with. However, if you are using the build features in TFS 2010 Beta 2, you are in trouble, since out of the box it only supports MSBuild 4.0. Sure, you could write your own build activity to run MSBuild 3.5, but MSBuild 4.0 is *supposed* to work. So rather than blaming the TFS Build team (of which I am a part), I'll address the real issue. Several teams are pushing to resolve this bug before shipping, but there is no guarantee. Frankly, this bug is minor in the scope of the entire product. My team is pushing heavily for it to get fixed, but only time will tell.
Just as an extra note, the problem is actually in the VS 2008 Microsoft.TeamTest.targets file, or the BuildShadowTask that it uses (depending on how you look at it). They generate an assembly targeting the wrong framework version. So that is a natural place to fix the issue. Instead of modifying the .targets file directly (you could, but think of all the things that could go wrong!) I decided to add a new .targets file, and then modify my unit test projects to load it instead of the original. The new .targets will replace the ResolveTestReferences target that Microsoft.TeamTest.targets exposes.
Fast Track
If you just want to get it done and don’t care to know how it works, do this:
- Download the attached file and place the .targets file in %ProgramFiles%\MSBuild\Microsoft\VisualStudio\v9.0\TeamTest (or %ProgramFiles(x86)%\MSBuild\Microsoft\VisualStudio\v9.0\TeamTest on an x64 system).
- Add the following property to your unit test project (.csproj or .vbproj) files near the beginning (it needs to be before the import of Microsoft.CSharp.targets).
<MsTestToolsTargets>$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v9.0\TeamTest\Microsoft.TeamTest.4.targets</MsTestToolsTargets>
The Details
I am aiming to replace the old behavior. Microsoft.Common.targets uses the MsTestToolsTargets property to decide where to load the test tools .targets file from, which gives me an easy hook for doing that. Let's first look at the existing Microsoft.TeamTest.targets file.
<UsingTask TaskName="BuildShadowTask" AssemblyFile="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v9.0\TeamTest\Microsoft.VisualStudio.TestTools.BuildShadowsTask.dll"/>
If you are familiar with MSBuild project files, this is easy to understand. If not, just know that the UsingTask element is a way to import a coded MSBuild task. In this case, the BuildShadowTask being imported is what generates the accessor assembly. Well, sort of. It links to Publicize.exe, which does the actual assembly generation, but as far as MSBuild is concerned, the task’s implementation is irrelevant.
<PropertyGroup>
<ResolveReferencesDependsOn>
$(ResolveReferencesDependsOn);
ResolveTestReferences
</ResolveReferencesDependsOn>
</PropertyGroup>
In this section, the ResolveReferencesDependsOn property, which is used to determine the targets that the ResolveReferences target depends on, is modified to include the ResolveTestReferences target. The ResolveTestReferences target is the target that invokes BuildShadowTask, which you’ll see below. This is a standard way to add your custom targets to the common target dependency chain.
<Target Name="ResolveTestReferences" Condition="'@(Shadow)'!=''">
  <!-- BuildShadowTask invocation: passes the Shadow items, the reference
       lists, and the related properties to the task, and captures its
       outputs back into the ReferencePath and ReferenceCopyLocalPaths items. -->
</Target>
Here is the meat of the .targets file. It is a single target that leaves the bulk of the logic to the custom task. It passes all the relevant properties and items to the task, and assigns a couple of outputs. What you need to take from this is that when I re-implement this target, I need to maintain the same behavior. I looked at invoking Publicize.exe to accomplish this, but it turns out that Publicize.exe doesn’t have very many command line parameters. It could be done, but you’d need a custom task or a lot of MSBuild logic to accomplish the same as the standard BuildShadowTask. So, instead, I cheated. My solution was to shell out to MSBuild 3.5 to do this part of the build process. That means I need to be able to target the ResolveTestReferences target alone, which took a little extra work.
Let’s jump into the new .targets file. Remember, this replaces the above logic, it doesn’t add onto it.
<UsingTask TaskName="BuildShadowTask" AssemblyFile="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v9.0\TeamTest\Microsoft.VisualStudio.TestTools.BuildShadowsTask.dll"/>
Here I didn't make any changes to the original .targets file. I import the BuildShadowTask as before. I will, in fact, use it, just in a round-about way (you'll see what I mean in a bit).
<PropertyGroup>
<CoreResolveTestReferencesTarget Condition=" '$(CoreResolveTestReferencesTarget)' == '' ">ExternResolveTestReferences</CoreResolveTestReferencesTarget>
</PropertyGroup>
Here is where some of the magic happens. Remember that I said my solution is to shell out to MSBuild 3.5. The .targets file that I'm creating will be included in both the MSBuild 4.0 and MSBuild 3.5 cases. That is, the initial build and the shelled-out build will both use this same logic. I differentiate the two scenarios with this property. The condition on the property states "if the property isn't set"; that is, I only set the property if it wasn't previously set. It won't be set when the .targets file is loaded from MSBuild 4.0, so it defaults to ExternResolveTestReferences there. When I shell out to MSBuild 3.5, I'll set it explicitly to CoreResolveTestReferences.
<PropertyGroup>
<ResolveReferencesDependsOn>
$(ResolveReferencesDependsOn);
$(CoreResolveTestReferencesTarget)
</ResolveReferencesDependsOn>
</PropertyGroup>
Like the original .targets file, I needed to fit my targets into the dependency chain for the common build targets. Here the target added is the one named by the CoreResolveTestReferencesTarget property, which I defined above.
<PropertyGroup>
<ResolveTestReferencesDependsOn>
ResolveReferences;
$(CoreResolveTestReferencesTarget)
</ResolveTestReferencesDependsOn>
</PropertyGroup>
I also added a standard dependency property for the ResolveTestReferences target. This is because when you use the original .targets file, you can’t build just the ResolveTestReferences target, because it doesn’t define its dependencies. I needed to call the target directly, so I added the required dependencies.
<Target Name="ResolveTestReferences" Condition="'@(Shadow)'!=''" DependsOnTargets="$(ResolveTestReferencesDependsOn)" />
The original ResolveTestReferences contained all the logic to call out to the BuildShadowTask. Since I need to use different logic between MSBuild 4.0 and MSBuild 3.5 I added the target dependencies and removed the logic. The condition just states that “if there are Shadow items”. In other words, only run the target if items are defined as Shadow, which are the test reference (.accessor) files.
<Target Name="CoreResolveTestReferences" Condition="'@(Shadow)'!=''">
The CoreResolveTestReferences target is the one called from MSBuild 3.5. It has the same condition as the ResolveTestReferences target to avoid unnecessary invocations. This target will look a lot like the original ResolveTestReferences target. In fact the next bit is a direct copy:
  <!-- BuildShadowTask invocation, a direct copy of the one in the original ResolveTestReferences target -->
Just like in the original ResolveTestReferences, a call out to the BuildShadowTask is made. The difference is in the next couple of lines:
<WriteLinesToFile
File="$(IntermediateOutputPath)ReferencePath.txt"
Lines="@(ReferencePath)"
Overwrite="true" />
<WriteLinesToFile
File="$(IntermediateOutputPath)ReferenceCopyLocalPaths.txt"
Lines="@(ReferenceCopyLocalPaths)"
Overwrite="true" />
All I'm doing here is writing the values of the ReferencePath and ReferenceCopyLocalPaths items out to text files. Note that there is a potential for data loss here if the items have metadata attached to them; in my tests, I didn't see any issues. The reason I write the values of these items out to a file is that I want to read them back into MSBuild 4.0 space. It was the easiest way I could think of to replicate the original behavior.
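As a toy model of the metadata caveat just mentioned, here is a short sketch (plain Python, nothing to do with the actual build; all names are illustrative) of why flattening items to lines of text drops their metadata:

```python
import os
import tempfile

def flatten_items(items, path):
    """Write each item's identity (its path) to a file, one per line.

    This mirrors what WriteLinesToFile does: only the identity survives,
    any attached metadata is simply not written out.
    """
    with open(path, "w") as f:
        for item in items:
            f.write(item["Identity"] + "\n")

def read_items(path):
    """Read the lines back, as ReadLinesFromFile would: identities only."""
    with open(path) as f:
        return [{"Identity": line.strip()} for line in f if line.strip()]

items = [{"Identity": "obj/Debug/Example_Accessor.dll", "CopyLocal": "true"}]
tmp = os.path.join(tempfile.mkdtemp(), "ReferencePath.txt")
flatten_items(items, tmp)
round_tripped = read_items(tmp)
print(round_tripped[0].get("CopyLocal"))  # None: the CopyLocal metadata was lost
```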
</Target>
And, of course, this just closes the CoreResolveTestReferences target. Now onto the MSBuild 4.0 logic.
<Target Name="ExternResolveTestReferences" Condition="'@(Shadow)'!=''">
The ExternResolveTestReferences target contains the logic for MSBuild 4.0 to shell out to MSBuild 3.5. It uses the same condition as seen on the ResolveTestReferences target.
<Exec Command="&quot;$(MSBuildBinPath)\MSBuild.exe&quot; /t:ResolveTestReferences &quot;$(MSBuildProjectFullPath)&quot; /p:Configuration=&quot;$(Configuration)&quot;;Platform=&quot;$(Platform)&quot;;OutDir=&quot;$(OutDir)&quot;;CoreResolveTestReferencesTarget=CoreResolveTestReferences" />
Here is where we shell out to MSBuild 3.5. In MSBuild 4.0, I found that the MSBuildBinPath property is set to the target framework’s bin path, not the 4.0 bin path, so I was able to use that to invoke the correct version of MSBuild.exe.
If you are not familiar with the MSBuild command line arguments, /t indicates the target or targets to run and /p defines a property. Since I set up the dependency chain for the ResolveTestReferences target, we can just tell MSBuild 3.5 to build that target. After the /t argument you'll see that I pass the full path to the MSBuild project that is currently being built, which should be the unit test project. The rest of the arguments are properties that need to be set to maintain behavior. This is where you might find the need to do some customization. I'll go over some of the properties and why I included them.
The Configuration property is the configuration that you want to build. A configuration is typically ‘Release’ or ‘Debug’ for .NET projects, but it can be customized. Anyway, here I just wanted to pass the currently building configuration on to MSBuild 3.5 so it builds to the right folders, etc.
The Platform property is the platform that you are targeting, for example x86 or x64. For .NET projects, it will typically be 'Any CPU'. Just like the Configuration property, I wanted to pass the same value through to MSBuild 3.5, since it is often used to construct output file paths.
The OutDir property is set because it is a commonly overridden property in TFS Build. This problem was initially found by a customer who was trying to use TFS 2010 Beta 2 to build legacy projects. TFS Build overrides the OutDir property in order to get all your projects to put their outputs into a single location. It is needed for project references and such to resolve correctly, although no outputs are placed there by my invocation of MSBuild 3.5.
Finally, the CoreResolveTestReferencesTarget is set to CoreResolveTestReferences. This is the key to making sure that the invoked MSBuild 3.5 uses the CoreResolveTestReferences target, which does the actual BuildShadowTask call. Don’t change this.
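As an illustration only, the command line that the Exec above expands to can be sketched as an argument list; the MSBuild path and project path below are hypothetical placeholders:

```python
def build_resolve_command(msbuild_exe, project, configuration, platform, out_dir):
    """Compose the MSBuild 3.5 invocation described above:
    /t picks the target, each /p entry passes a property through unchanged."""
    return [
        msbuild_exe,
        "/t:ResolveTestReferences",
        project,
        "/p:Configuration=" + configuration,
        "/p:Platform=" + platform,
        "/p:OutDir=" + out_dir,
        "/p:CoreResolveTestReferencesTarget=CoreResolveTestReferences",
    ]

cmd = build_resolve_command(
    r"C:\Windows\Microsoft.NET\Framework\v3.5\MSBuild.exe",  # hypothetical path
    r"C:\src\UnitTests\UnitTests.csproj",                    # hypothetical project
    "Debug",
    "Any CPU",
    r"C:\drops\UnitTests" + "\\",
)
print(cmd[1])   # /t:ResolveTestReferences
print(cmd[-1])  # /p:CoreResolveTestReferencesTarget=CoreResolveTestReferences
```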
OK, now the last thing to do is read back in those files that we wrote from the CoreResolveTestReferences target in MSBuild 3.5.
<ReadLinesFromFile File="$(IntermediateOutputPath)ReferencePath.txt">
<Output TaskParameter="Lines" ItemName="ReferencePath" />
</ReadLinesFromFile>
<ReadLinesFromFile File="$(IntermediateOutputPath)ReferenceCopyLocalPaths.txt">
<Output TaskParameter="Lines" ItemName="ReferenceCopyLocalPaths" />
</ReadLinesFromFile>
Simply put, I read the lines right back into the items that they were written from in MSBuild 3.5, but in MSBuild 4.0 space. With any luck, any other targets will already have the expected values in these items. This was the final key to maintaining the same behavior as the original ResolveTestReferences target.
<Delete Files="$(IntermediateOutputPath)ReferencePath.txt" />
<Delete Files="$(IntermediateOutputPath)ReferenceCopyLocalPaths.txt" />
For good measure, I go ahead and delete the intermediate files, since they mean nothing for incremental building.
</Target>
And, of course, this ends the ExternResolveTestReferences target.
All that code is attached as Microsoft.TeamTest.4.targets. You’ll want to drop that file into your %ProgramFiles%\MSBuild\Microsoft\VisualStudio\v9.0\TeamTest folder. On an x64 system, put it in %ProgramFiles(x86)%\MSBuild\Microsoft\VisualStudio\v9.0\TeamTest.
Now, to get your unit test projects to pick up this .targets file instead of the original, you need to add one property to them. If you have a large project structure, then perhaps you have a single place that you can add it. If you have a large project structure and aren’t using .targets files for your common customizations (properties, custom build processes, etc) then I suggest you start looking into it.
The key is to get this property defined before you import Microsoft.CSharp.targets. If you are at all familiar with MSBuild project files, you won’t have too much trouble. For the rest of you, I’ll show you how to add it to a plain ol’ unit test project. Open the unit test project file (.csproj or .vbproj) in notepad or VS using the XML editor. Find the first set of properties, something like:
<PropertyGroup>
<Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
<Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
<ProductVersion>9.0.30729</ProductVersion>
<SchemaVersion>2.0</SchemaVersion>
<ProjectGuid>{9C571CAA-F9D2-4103-8B1E-5AB41E34C187}</ProjectGuid>
<OutputType>Library</OutputType>
<AppDesignerFolder>Properties</AppDesignerFolder>
<RootNamespace>UnitTests</RootNamespace>
<AssemblyName>UnitTests</AssemblyName>
<TargetFrameworkVersion>v3.5</TargetFrameworkVersion>
<FileAlignment>512</FileAlignment>
</PropertyGroup>
Add this property somewhere in that property group:
<MsTestToolsTargets>$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v9.0\TeamTest\Microsoft.TeamTest.4.targets</MsTestToolsTargets>
Save the file, and you’re done. Go build from MSBuild 4.0. You should see that the generated accessor assemblies are linked to the 2.0 CLR and not the 4.0 CLR. You can verify by opening the assembly in ILDasm.exe. Open the MANIFEST node.
What you should see:
.assembly extern mscorlib
{
.publickeytoken = (B7 7A 5C 56 19 34 E0 89 ) // .z\V.4..
.ver 2:0:0:0
}
What you should not see:
.assembly extern mscorlib
{
.publickeytoken = (B7 7A 5C 56 19 34 E0 89 ) // .z\V.4..
.ver 4:0:0:0
}
.assembly extern mscorlib as mscorlib_2
{
.publickeytoken = (B7 7A 5C 56 19 34 E0 89 ) // .z\V.4..
.ver 2:0:0:0
}
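If you would rather not open ILDasm, the same check can be scripted: the runtime version string ("v2.0.50727" vs "v4.0.30319") is stored in the assembly's CLI metadata root. The sketch below follows the PE/COFF and ECMA-335 layouts; it is a rough illustration rather than a hardened parser, and the DLL path in the comment is hypothetical.

```python
import struct

def metadata_version(root):
    """Parse the version string out of a CLI metadata root (ECMA-335 II.24.2.1)."""
    signature, = struct.unpack_from("<I", root, 0)
    if signature != 0x424A5342:  # "BSJB"
        raise ValueError("not a CLI metadata root")
    length, = struct.unpack_from("<I", root, 12)          # version buffer length
    raw = root[16:16 + length]                            # null-padded UTF-8
    return raw.split(b"\x00", 1)[0].decode("utf-8")

def assembly_runtime_version(path):
    """Walk the PE headers to the CLI header, then to the metadata root."""
    data = open(path, "rb").read()
    e_lfanew, = struct.unpack_from("<I", data, 0x3C)
    num_sections, = struct.unpack_from("<H", data, e_lfanew + 6)
    opt_size, = struct.unpack_from("<H", data, e_lfanew + 20)
    opt = e_lfanew + 24                                   # optional header
    magic, = struct.unpack_from("<H", data, opt)
    dirs = opt + (96 if magic == 0x10B else 112)          # data directories
    clr_rva, = struct.unpack_from("<I", data, dirs + 14 * 8)  # COM descriptor entry
    sections = opt + opt_size

    def rva_to_offset(rva):
        for i in range(num_sections):
            base = sections + i * 40
            vsize, vaddr, rsize, roff = struct.unpack_from("<4I", data, base + 8)
            if vaddr <= rva < vaddr + max(vsize, rsize):
                return roff + (rva - vaddr)
        raise ValueError("RVA not mapped to any section")

    clr = rva_to_offset(clr_rva)
    md_rva, = struct.unpack_from("<I", data, clr + 8)     # MetaData directory RVA
    return metadata_version(data[rva_to_offset(md_rva):])

# A hand-built metadata root, standing in for a real accessor DLL:
demo = struct.pack("<IHHII", 0x424A5342, 1, 1, 0, 12) + b"v2.0.50727\x00\x00"
print(metadata_version(demo))  # v2.0.50727
# On a real file (hypothetical path):
# assembly_runtime_version(r"obj\Debug\ProjectName_Accessor.dll")
```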
Conclusion
You can in fact build 2.0 linked accessor assemblies using MSBuild 4.0 Beta 2. However, this is only a workaround. You may need to customize it to your environment or needs.
I hope this helps some people out. Please do leave feedback if you find errors or if you just want to give a big thumbs up (or down).
Downloads: Microsoft.TeamTest.4.zip
Hi!
Will MSBUILD 4.0 be able to handle VC++ 9.0 project as well? Are there some known limitations about building unmanaged VC++ assemblies?
Thanks!
-Christian
It does build the legacy unmanaged projects, but it requires VS 2008 to be installed (with the C++ features, of course). The reason is that in VS 2008, C++ projects used VCBuild, not MSBuild. In VS 2010, the C++ projects are MSBuild projects.
Adam
Hi Adam,
Was this fixed in 2010 RTM?
Best, Jason
Yes. However, the fix was targeted for TFS 2010 Build, not for client builds. This means that if you install VSTS 2010 and try to build one of these projects using MSBuild 4, you will still see the broken behavior. The reason is a file called Microsoft.TeamTest.OverrideTasks is only installed with TFS 2010. You can, however, manually create the file, or copy it from a machine with TFS 2010 installed. The file lives in the framework directory (make sure you grab it for both the 32 and 64 bit framework directories).
Adam

https://blogs.msdn.microsoft.com/adamroot/2009/12/10/building-vs-2008-unit-test-projects-in-msbuild-4-0-beta-2/
Introduction
Should I take this tutorial?
This tutorial examines the validation of XML documents using either Document Type Definitions (DTDs) or XML Schema. It is aimed at developers who need to control the types and content of the data in their XML documents, and assumes that you are familiar with the basic concepts of XML. (You can get a basic grounding in XML itself through the Introduction to XML tutorial.) It also assumes a basic familiarity with XML Namespaces. (You can pick up the basics of namespaces in the Understanding DOM tutorial.)
This tutorial demonstrates validation using the Java language from the command line, but the principles and concepts of validation are the same for any programming environment, so experience with Java technology is not required to gain a thorough understanding. DTDs and XML Schema, in particular, are language- and platform-independent.
In the creation of a database, using a data model in conjunction with integrity constraints can ensure that the structure and content of the data meet the requirements. But how do you enforce that kind of control using XML, when your data is just text in hand-editable files? Fortunately, validating files and documents can ensure that data fits constraints. In this tutorial, you'll learn what validation is, and you'll learn how to check a document against a Document Type Definition (DTD) or an XML Schema document.
DTDs were originally defined in the XML 1.0 Recommendation and are a carryover from the original Standard Generalized Markup Language (SGML), the precursor to HTML. Their syntax is slightly different from XML, which is one drawback to using them. They also have limitations in how they can be used, which led developers to seek an alternative in the form of XML schemas. However, DTDs are still in use in a significant number of environments, so an understanding of them is important.
The primary alternative to DTDs is the XML Schema Recommendation, maintained by the World Wide Web Consortium (W3C). (Throughout the course of the tutorial, "XML Schema" should be considered synonymous with "W3C XML Schema.") Schemas, which are also XML documents, provide a more familiar and more powerful environment in which to create the constraints on the data that can exist in an XML document.
By the end of this tutorial you will learn how to create both a DTD and an XML Schema document. You'll also learn the concepts of using them to validate an XML document.
The examples in this tutorial, should you decide to try them out, require that you install the following tools and make sure they are working correctly. Running the examples is not a requirement for understanding the content provided.
- A text editor: XML files, DTDs, and XML Schema documents are simply text. To create and read them, a text editor is all you need.
- You can manipulate and validate XML in any language where a validating parser is available. The bulk of the tutorial deals with the creation of documents, but you will also see how to build an application that uses a validating parser. XML support has been built into the latest version of Java (available at), so you won't need to install any separate classes. (If you're using an earlier version, such as Java 1.3.x, you'll also need an XML parser such as the Apache project's Xerces-Java (available at), or Sun's Java API for XML Parsing (JAXP), part of the Java Web Services Developer Pack (available at).)
If you have a different set of tools installed, you can use them instead. Just check the documentation for instructions about turning on validation. You can download C++ and Perl implementations of Xerces from the Apache Project at.

http://www.ibm.com/developerworks/xml/tutorials/x-valid/
Hi

As you may have noticed, the beloved target of many flamewars, the NEW queue [1], has been reduced to an average of less than 10 packages. Packages are processed within days, sometimes even within hours. In order to slow it right back down again, so you don't get used to it too much, I decided to become insane and do even more. I plan to add some very rough quality checks - nothing should surprise you, but all of these problems were already seen in the queue. Not everything that makes a package RC-buggy is listed, but it will also count as a reason for a reject - iff we notice it. Of course this does not take away the developer's responsibility to do their own QA before upload. It's the maintainer who is responsible for everything that happens with a bad package, not the ftpteam!

NEW checking is about three things, in order of priority: trying to keep the archive legal, trying to keep the package namespace sane, and trying to reduce the number of bugs in Debian.

To avoid flames from surprised maintainers, I have decided to post a little list of things we look for, so you can fix your packages before uploading. All items are things that *really* should never happen anyway, but exist in some packages nonetheless. You can find this list at in the future and that one will also be updated if we need to. This is purely an informational list and is from the top of my head; there may be more reasons. I mark reasons added with this mail with a star, so you can quickly see what's new. Most of the added points are just quality issues.

While I'm at it: if you want to make it easy for us, please state *why* you've added a NEW binary package or renamed the source. A simple "New upstream" or "Fixed bug" isn't great.

Serious violations (direct rejects even if we only find one point):

- You have a GPL program linking with OpenSSL. This is only ok if upstream gave a license exception for this; otherwise the two licenses are incompatible. Visit or/and for more information.
- You have a cdbs package and for whatever reason turned on the "play with my debian/control in a bad way" option. See #311724 for a long text on that matter; a small overview is in footnote [2].

- You have a PHP addon package (any php script/"app"/thing, not PHP itself) and it is licensed only under the standard PHP license. That license, up to the 3.x which is actually out [3], is not really usable for anything else than PHP itself. I've mailed [4] our -legal list about that and got only one response, which basically supported my view on this. Basically this license talks only about PHP, the PHP Group and "includes Zend Engine", so it's not applicable to anything else. And even worse, older versions include the nice ad-clause. One good solution here is to suggest a license change to your upstream, as they clearly wanted a free one. LGPL or BSD seems to be what they want.

- Be sure that you correctly document the license of the package. We often find packages having a GPL COPYING file in the source, but if one goes and looks at every file there are a few here and there having different licenses in them, sometimes as bad as "You aren't allowed to do anything with this file, and if you do we will send our lawyers to you". Of course it's hard to check a tarball with thousands of files (think about X, KDE, Kernel or similar big packages), but most of the tarballs aren't that big. Also not nice is a package, itself being GPL, having documentation licensed with a non-free license, like the CC licenses. That makes the original tarball non-free; this is one of the cases where you need to repackage it (look in the archive for examples, mostly having .dfsg. in their tarball's name).

- You "break" a transition. Well, right at the moment it's the nice C++ ABI thing, but in the future it may be something else.
For this C++ one it's basically: if you link against libstdc++5 you are out, *EXCEPT* if you declare a special Build-Depends on the right compiler to get this, which most people haven't done and it's nearly always not part of the build environment. Also, having dependencies on non-transitioned C++ libraries gets you rejected. I think most important libs are done, but until recently a nice candidate was libqt[whatever]c102. Watch out for c102 names in your Depends lines if you want to be sure.

- Also not good is to build against a version of a library only in experimental, but uploading to unstable. We may not detect all of that, but if it happens the package will be rejected. :)

- You try to hijack a package of an active maintainer (or team). Don't do that. :)

-* Your package "fails to build from source in a sane way". A good example is behind #300683, but I can think of more creative ways to do it. Basically you need to be able to build all [5] .debs with only one call to debian/rules. Not two, not three. Not an extra script.

- Your debian/copyright file must have at least the minimum needed stuff. A good overview of what you need is in the thread starting at

- Your upstream tarball needs to include a copy of the license, or debian/copyright needs to explain where it can be found (eg. the upstream website). In the past there were uploads where one couldn't find the license statement in the tarball or on the website from upstream, which is bad.

- Do not include a license that is in /usr/share/common-licenses into your debian/copyright. That's a waste of space.

- If the license is not in /usr/share/common-licenses it must be copied verbatim into debian/copyright; a pointer to the package or a website is not enough.

- If you want your package in main it must *not* (Build-)Depend: or Recommend: a package which isn't in main too. That's what we have contrib for.

- You most probably want to have some content in your package.
From time to time we find packages which don't have any. An example is a rename of a lib* package, but missing the update of the place where the files are installed.

- It should really not violate policy, for example violating the FHS: like installing libFOO.so in /usr/share, creating random new top-level path entries or any other obvious policy violation.

- If you rebuild the orig.tar.gz (in the few cases this is needed; most of the time it is NOT), be sure to not include .so/.a/other compiled files. You should also exclude .cvs/.svn/.whatever_your_version_control_system_has directories you added.

- Lintian errors and warnings, without a good reason to ignore them, can get your package a reject. Sometimes there are valid reasons, but then you should either file a bug against lintian if it's generally wrong or include an override in your package, giving a reason in the changelog for it.

- Watch out if you include an rpath in your binaries. Most of the time this is not what you want, but it's easy to fix. See for more details.

- Use the right package names. A lib should start with lib, a perl module with lib and end with -perl, etc.

Minor issues, sometimes also named "good packaging behaviour". Not a single one is enough to get you a REJECT, but if you collect multiple ones the probability rises:

* Have a good description for your package, both short and long. You should know what your package does, but your users don't. So write something that they will understand *before* they install it. A simple "XY, it does stuff" is not something that should get uploaded.

* completely doesn't care for that.

* Be sure to look for native/non-native build. It's easy to build a package native because you had a spelling error in the upstream tarball's name. Simple solution: whenever you have a -X in your version number (ie a debian-revision), look if you have a .diff.gz before you upload.

* Do not add the third version of a lib/program. Get RID of them instead.
Always try to not have more than two versions of a library/program in, and that also only if it's needed due to the hard transition to a newer one.

* Write manpages. Yes. Really. Write them. It's basically: if your program/tool has a --help and --version command line option you can simply run help2man and have a working start. Such easy ones are close to a reject. Yes, there are things in the archive where it's hard to write manpages.

* Have a clear debian/rules file. A bad example are the dh-make templates. As the name says: they are templates. Edit them, test your package and then delete the whole bunch of commented-out commands that make it hard to read and do not help. If you later need anything: type dh_[TAB][TAB] to see what's available.

* Remove the .ex files from debian/. They are examples and not meant to stay around.

* Standards-Version should be the latest available one, not any ancient one that may happen to get in with a template you use.

* Having an "INSTALL_PROGRAM += -s" setting if the nostrip option is set while using dh_strip later. Useless, read man dh_strip please. :)

[1]

[2] The "DEB_AUTO_UPDATE_DEBIAN_CONTROL" option turned on modifies Build-Depends, which doesn't affect the build that's running. But it affects all following builds, which can be NMU, buildd builds and, worst case, security builds. You *DO NOT* want to have such a build getting a different result (except for the small changes intended with the build) just because there is now another thing in the build-depends. Two solutions for this: a.) Think about it and set the Build-Depends yourself. That's *easy* and you can check them in pbuilder. b.) Do this only in a special target in debian/rules that is NEVER called automagically, so you can check what it did before you start the real build.

[3] But many people still use older ones, like 2.xx.

[4]

[5] This only includes .debs that should get uploaded. Stuff that's only built by users for their local system doesn't matter.
--
bye Joerg

I think there's a world market for about five computers.
 -- attr. Thomas J. Watson (Chairman of the Board, IBM), 1943

http://lists.debian.org/debian-devel-announce/2005/08/msg00011.html
Kotlin introduction
Kotlin is a statically typed language, which means it does most of its checks at compile time rather than depending on the runtime. The language spent about six years in development, and when it was released it was production-ready right from the beginning. It's quite similar to the Scala language, and it also runs on the JVM, hence it's completely interoperable with Java.
What makes kotlin a better choice?
Kotlin has overcome issues that programmers face with Java. The biggest issue is the null pointer exception: this language claims that it will never produce a null pointer exception unless it's the programmer's fault. How it does that, we will see later.
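As a quick preview of that null safety (a minimal sketch): a Kotlin type can only hold null if it is explicitly marked nullable with ?, and the compiler then refuses unsafe accesses:

```kotlin
fun main(args: Array<String>) {
    var name: String = "kotlin"
    // name = null                // compile-time error: the type is non-nullable

    var nickname: String? = null  // '?' marks the type as nullable
    // println(nickname.length)   // compile-time error: unsafe access on a nullable type
    println(nickname?.length)     // safe call: prints "null" instead of crashing
    println(nickname?.length ?: 0) // Elvis operator: fall back to a default, prints 0
}
```

So the only way to get a null pointer exception is to opt in explicitly (for example with the !! operator), which is why a crash is considered the programmer's fault.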
Features of kotlin:
- Extension function
- High order function
- Inter-operable with java
- Null safety
- Smart cast
- Default and named arguments
- Multi-value return from function
- Data class
We will see examples for all those features and how to use them, but what is important to know first is how to use Kotlin itself, as the syntax varies a lot compared to Java. Programmers who have learned Scala will find it effortless, as Kotlin closely resembles it.
Just like program execution starts with public static void main(…) in Java, in Kotlin it starts with fun main(…). Variable declarations follow the syntax variableName: DataType.
like for example to declare the variable of integer type the
java syntax is: int name;
kotlin syntax is: var name:Int=10; //in case the value is mutable
kotlin syntax is: val name:Int=10; //in case value is immutable
Now, as you can see, I initialized the variable at the time of declaration itself. In that kind of scenario it's permissible to omit :Int and directly write var name = 10; based on the type of the value, the compiler will automatically infer the data type of the variable. Following is an example of a correct implementation:
fun main(args: Array<String>) {
var name:Int=10; // with the data type
var varName = 20; //without mentioning data type //both the methods are correct so no error here.
println(name);
println(varName);
}
Following is the example of wrong implementation
fun main(args: Array<String>) {
var name:Int=10;
var varName;
varName = 20; // As data type is not defined so it's an error.
println(name);
println(varName);
}
In case you are wondering what fun means: it's the way to define a function. Just like in PHP you write the function keyword, here it's the short form fun.
How to define a function?
Well the syntax of function is
fun functionName(optionalArg....) {
.....
}
fun is the keyword to define a function. Special care has to be taken while using parameters with a function; have a look at the example below:
fun check(name:String) {
println(name);
}
If you noticed something, it's the keyword var or val which is missing here. That is intended: var and val are not allowed in function parameters. The following is a wrong implementation, as var is used:
fun check(var name:String) {
println(name);
}
Create class in kotlin
To create a class, just use the class keyword; it's exactly the same way as in Java, but the similarity ends there.
class Employee{
}
If you have only one constructor, that constructor is called the primary constructor, and it is declared with the name of the class itself.
class Employee constructor(id: Int, name:String) {
}
In case the constructor does not have any annotations or visibility modifiers, the keyword constructor can be omitted.
class Employee(id: Int, name:String) {
}
If the constructor needs initialization logic, that has to go in an init block:
class Employee(id:Int, name: String) {
init {
println(name);
}
}
Note that all classes are not inheritable by default; if you want to inherit from a class, declare that class as open.
open class Base {
.....
}
class Derived() : Base(){
.....
}
The way to use inheritance is quite similar to C++ class inheritance.
There is quite a long list of features for classes, but this article mainly focuses on getting familiar with Kotlin, so let's see the feature that will make you wonder why you haven't used Kotlin until now.
Model classes are often created with the boilerplate of getters and setters, but once you use Kotlin you can forget about that boilerplate: your model class is now just one line of code. Have a look below at how it's done.
Using java
public class Employee {
    private int id;
    private String name;

    public Employee(int id, String name) {
        this.id = id;
        this.name = name;
    }

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
Same model class created using kotlin
data class Employee(val id:Int, val name:String);
Adding data in front of the class will automatically turn it into a model class. To access the members, just call object.name_of_variable, for example:
fun main(args: Array<String>) {
val emp = Employee(10,"name");
println(emp.id);
}
Note here that no new keyword is needed; directly use the class name to declare a variable. Also, if the variables are mutable, use var instead of val.
That’s it for the glimpse of kotlin a single article will make it too long to be read so wait for the next article to see about the remaining features and how to use them.
See the following article to read about Kotlin's best features:
The best features of Kotlin!
pg has asked for the wisdom of the Perl Monks concerning the following question:
I run a chat room on my web site. To make the content more attractive, and also to visually confirm to the user that the content has refreshed when they click "listen", I change the background color each time randomly.
Now if the foreground color stays the same all the time, it could be too close to the background color, and the user will not be able to read. So I came up with this piece of code to generate two colors that (I believe) are well contrasted to each other:
sub random_color {
    my ($r, $g, $b) = (int(rand(256)), int(rand(256)), int(rand(256)));
    my $color1 = sprintf("#%02x%02x%02x", $r, $g, $b);
    my $color2 = sprintf("#%02x%02x%02x", ($r + 128) % 256, ($g + 128) % 256, ($b + 128) % 256);
    return ($color1, $color2);
}

and I use it in this way:

my ($c1, $c2) = random_color();
$msg .= $cgi->table({-border => 1, bgcolor => $c1, style => "Color: $c2;"},
                    $cgi->Tr($table_data));
So far, it seems to me to work very well, but I am not sure whether this really guarantees good contrast. Maybe someone can enlighten me, or come up with some other interesting way to generate the colors, and give a different look and feel.
Code :
public boolean same_State (Search_State s2) {
    Jugs_State js2 = (Jugs_State) s2;
    return ((j1 == js2.get_j1()) && (j2 == js2.get_j2()));
}
I just want to know what this part does:

Code :
Jugs_State js2 = (Jugs_State) s2;
I know you can create an object by e.g. Classname object = new Classname() but
what is this "(Jugs_State)" in the brackets and why is there no "new" and why is
there a new variable "s2" after that.
Finally, what is the return doing at the end?
Thanks a lot, I just need a general answer of what the code is doing
not exactly what my classes do (that's why I haven't included all the code
it's long). | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/16969-what-does-short-part-code-do-printingthethread.html | CC-MAIN-2017-34 | refinedweb | 113 | 79.4 |
Ionic 3 Performance Boost with Observable using VirtualScroll and WKWebView
- The unoptimized case
- The WKWebView solution
- The Ionic 3 VirtualScroll solution
Let's fire up our Ionic 3 app:
ionic start smoothapp blank --v2
The same home.ts file will be used for each case:
import { Component } from "@angular/core";
import { Observable } from "rxjs";

@Component({
  selector: "page-home",
  templateUrl: "home.html"
})
export class HomePage {
  displayedImages;

  constructor() {}

  ngOnInit() {
    const baseImg = "";
    const imgArray = Array(1000).fill(baseImg);
    this.displayedImages = Observable.of(imgArray);
  }
}
All the work is done in the ngOnInit hook.
I haven't found a free service that returns 1000 images on demand, so we go back to our good old lorempixel website.
First, we add 1000 image urls to a temporary array named imgArray.
Once this is done, an Observable is returned. A HTTP request now returns an Observable, that's why we return one to simulate this case (you can have a reminder on Observables there).
Using the of method from Observable, the image array will be transformed into an Observable sequence.
Just a good old ngFor
Ionic advises us to use the <ion-img> Component, however, it has some issues with the current VirtualScroll:
So the good old <img> tag will be used to keep the tests congruency.
In the home.html:
<ion-content padding>
  <ion-list>
    <ion-item *ngFor="let displayedImage of displayedImages | async">
      <ion-avatar item-left>
        <img src="{{displayedImage}}">
      </ion-avatar>
    </ion-item>
  </ion-list>
</ion-content>
The images are displayed inside an <ion-list>.
The ngFor Directive here creates one <ion-item> for each image we have to display.
The asyncPipe is added because we are not looping on a simple array, but on an Observable sequence.
Finally the images are displayed inside an <ion-avatar>.
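What the asyncPipe does can be sketched in plain TypeScript. The names below (TinyObservable, asyncValue) are hypothetical, and this is not Angular's actual AsyncPipe implementation; it only shows the core idea of subscribing and keeping the latest emitted value so the template can render it:

```typescript
// Minimal sketch of the asyncPipe idea: subscribe to an Observable
// and hold on to the latest value it emitted.
type Observer<T> = (value: T) => void;

class TinyObservable<T> {
  constructor(private producer: (observer: Observer<T>) => void) {}

  subscribe(observer: Observer<T>): void {
    this.producer(observer);
  }

  // Like RxJS Observable.of: emit the given value immediately.
  static of<T>(value: T): TinyObservable<T> {
    return new TinyObservable<T>((observer) => observer(value));
  }
}

// The "pipe": return the last value the Observable emitted,
// so a template could render it synchronously.
function asyncValue<T>(source: TinyObservable<T>): T | undefined {
  let latest: T | undefined;
  source.subscribe((value) => (latest = value));
  return latest;
}

const images = TinyObservable.of(["a.jpg", "b.jpg"]);
console.log(asyncValue(images)); // → [ 'a.jpg', 'b.jpg' ]
```

Angular's real pipe additionally unsubscribes when the component is destroyed and triggers change detection whenever a new value arrives.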
The benchmark:
The initial part is the Ionic 3 loading screen with a stable 10 fps.
Once the application is loaded, the fps become unstable, every time a scroll is triggered, a fps drop appears.
It's looking very bad and that's not how a smooth application's benchmark should look like.
Let's fix this ;).
Introducing WKWebView
The WKWebView API was introduced in iOS 8.
It has a better HTML5 support (added IndexedDB and ObjectStore ArrayBuffer) and has far better performances than its predecessor UIWebView.
The whole installation process is available there:
From here, 10k images will be displayed. Why? Because we can!
The results are really impressive:
After the initial Ionic 3 loading (the constant 10fps).
The app is working smoothly averaging 55 fps!
All of that without modifying the source code. That's a quick and clean fix!
The Bleeding Edge: VirtualScroll
Finally the Ionic homemade solution: the VirtualScroll.
Unlike the other cases, the VirtualScroll only renders a limited number of rows.
Even though 10K images are available:
Only a small number of them are in the DOM.
This is just enough to give the user the impression that all the images are there.
The only changes are:
<ion-list [virtualScroll]="displayedImages | async">
  <ion-item *virtualItem="let displayedImage">
    . . .
The <ion-list> and the <ion-item>.
We attach the virtualScroll attribute. The displayedImages are required, alongside the asyncPipe (still using our Observable). The virtualItem Structural Directive will do its job but requires the creation of a displayedImage variable.
And that's it!
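The windowing idea behind any virtual scroller can be sketched independently of Ionic. The helper below is hypothetical (not Ionic's actual implementation) and assumes fixed-height rows: given the scroll offset, viewport height, and row height, only a small window of indices ever needs to be rendered.

```typescript
// Sketch of the windowing math behind virtual scrolling.
// buffer = extra rows rendered above/below the viewport to avoid blank flashes.
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  totalRows: number,
  buffer: number = 3
): { start: number; end: number } {
  const first = Math.floor(scrollTop / rowHeight);       // first row in view
  const visible = Math.ceil(viewportHeight / rowHeight); // rows that fit on screen
  const start = Math.max(0, first - buffer);
  const end = Math.min(totalRows, first + visible + buffer);
  return { start, end };
}

// 10,000 rows, but only ~20 are ever in the DOM:
console.log(visibleRange(5000, 600, 50, 10000));
// → { start: 97, end: 115 }
```

On each scroll event the range is recomputed and only the rows inside it are kept in the DOM, which is why the frame rate stays flat no matter how long the list is.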
The performances:
The application is running smoothly while using the good old UIWebView API.
However, this solution comes with some other issues:
Conclusion
We are damn lucky.
Many solutions and improvements have been created for us through the past years.
The VirtualScroll is very interesting because it allows us to virtually display thousands of images without breaking a sweat by only rendering a hundred; however, it does come with its own small issues that can be buzz killers.
On the other side, the WKWebView API is already there since iOS 8 and very stable for some great performances.
Always remember: | https://javascripttuts.com/ionic-3-performance-boost-with-observable-using-virtualscroll-and-wkwebview/ | CC-MAIN-2019-26 | refinedweb | 637 | 57.37 |
> From arj@cam-orl.co.uk Fri Jul 2 08:23:04 1993
> Date: Fri, 2 Jul 93 14:20 BST
> Tom..
>
The MIPS and SGI boxes configured the MIPS chips for big-endian. DEC
configured the MIPS chips in the DECstations as little-endian, to match
the VAX. Which also matches the PDP-11. It so happens that Intel made
the 8085/8088/80386 little-endian as well. So linux is little-endian,
and we may want to make linux/MIPS little-endian, to reduce portability
problems (file system structures, etc).
Making linux/MIPS little-endian makes cross-compiling from SGI an
interesting configuration problem, but presumably no more difficult
than generating linux/386 code from a SunOS/SPARC environment.
Try this:
/*
 * end.c
 *
 * Test a target processor and compiler
 * for size of integers and endian orientation.
 */
#include <stdio.h>

int main(void)
{
    long int test_long;
    int test_int;
    char test_8;
    char *ptr_8 = 0;

    test_long = 0x01020304;
    test_int = 0x01020304;

    printf("%d bit native integers.\n", (int)(sizeof(int) * 8));

    if (sizeof(long) == 4) {
        ptr_8 = (char *)&test_long;
    }
    if (sizeof(int) == 4) {
        ptr_8 = (char *)&test_int;
    }

    test_8 = *ptr_8;
    printf("This is a %s-endian machine.\n", test_8 == 1 ? "big" : "little");
    return 0;
}
It is a common problem that people want to import code from IPython Notebooks. This is made difficult by the fact that Notebooks are not plain Python files, and thus cannot be imported by the regular Python machinery.
Fortunately, Python provides some fairly sophisticated hooks into the import machinery, so we can actually make IPython notebooks importable without much difficulty, and only using public APIs.
import io, os, sys, types
from IPython import get_ipython
from IPython.nbformat import current
from IPython.core.interactiveshell import InteractiveShell
Import hooks typically take the form of two objects:
- a Module Finder, which figures out whether a module might exist, and tells Python which Loader to use
- a Module Loader, which takes a fully qualified name (e.g. 'IPython.display'), and returns a Module
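To make the finder/loader contract concrete before diving into notebooks, here is a toy pair that serves a single in-memory module. Note it uses the modern find_spec/exec_module protocol from importlib.abc; the notebook code in this article uses the older find_module/load_module API that was current when it was written.

```python
import sys
import types
import importlib.abc
import importlib.util

# Source of a "virtual" module that exists only in memory.
VIRTUAL_SOURCE = "answer = 42"

class VirtualLoader(importlib.abc.Loader):
    def create_module(self, spec):
        return None  # use Python's default module creation

    def exec_module(self, module):
        # Execute the source in the new module's namespace.
        exec(VIRTUAL_SOURCE, module.__dict__)

class VirtualFinder(importlib.abc.MetaPathFinder):
    def find_spec(self, fullname, path, target=None):
        if fullname == "virtualmod":
            return importlib.util.spec_from_loader(fullname, VirtualLoader())
        return None  # let the normal machinery handle everything else

sys.meta_path.append(VirtualFinder())

import virtualmod
print(virtualmod.answer)  # → 42
```

The notebook importer below has exactly the same shape: the finder checks whether a matching .ipynb file exists, and the loader turns that file into a module object.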
def find_notebook(fullname, path=None):
    """find a notebook, given its fully qualified name and an optional path

    This turns "foo.bar" into "foo/bar.ipynb" and tries turning
    "Foo_Bar" into "Foo Bar" if Foo_Bar does not exist.
    """
    name = fullname.rsplit('.', 1)[-1]
    if not path:
        path = ['']
    for d in path:
        nb_path = os.path.join(d, name + ".ipynb")
        if os.path.isfile(nb_path):
            return nb_path
        # let import Notebook_Name find "Notebook Name.ipynb"
        nb_path = nb_path.replace("_", " ")
        if os.path.isfile(nb_path):
            return nb_path
Here we have our Notebook Loader. It's actually quite simple - once we figure out the filename of the module, all it does is:
Since IPython cells can have extended syntax, the IPython transform is applied to turn each of these cells into their pure-Python counterparts before executing them. If all of your notebook cells are pure-Python, this step is unnecessary.
class NotebookLoader(object):
    """Module Loader for IPython Notebooks"""
    def __init__(self, path=None):
        self.shell = InteractiveShell.instance()
        self.path = path

    def load_module(self, fullname):
        """import a notebook as a module"""
        path = find_notebook(fullname, self.path)

        print("importing IPython notebook from %s" % path)

        # load the notebook object
        with io.open(path, 'r', encoding='utf-8') as f:
            nb = current.read(f, 'json')

        # create the module and add it to sys.modules
        # if name in sys.modules:
        #     return sys.modules[name]
        mod = types.ModuleType(fullname)
        mod.__file__ = path
        mod.__loader__ = self
        mod.__dict__['get_ipython'] = get_ipython
        sys.modules[fullname] = mod

        # extra work to ensure that magics that would affect the user_ns
        # actually affect the notebook module's ns
        save_user_ns = self.shell.user_ns
        self.shell.user_ns = mod.__dict__

        try:
            for cell in nb.worksheets[0].cells:
                if cell.cell_type == 'code' and cell.language == 'python':
                    # transform the input to executable Python
                    code = self.shell.input_transformer_manager.transform_cell(cell.input)
                    # run the code in the module
                    exec(code, mod.__dict__)
        finally:
            self.shell.user_ns = save_user_ns
        return mod
The finder is a simple object that tells you whether a name can be imported, and returns the appropriate loader. All this one does is check, when you do:
import mynotebook
it checks whether mynotebook.ipynb exists.
If a notebook is found, then it returns a NotebookLoader.
Any extra logic is just for resolving paths within packages.
class NotebookFinder(object):
    """Module finder that locates IPython Notebooks"""
    def __init__(self):
        self.loaders = {}

    def find_module(self, fullname, path=None):
        nb_path = find_notebook(fullname, path)
        if not nb_path:
            return

        key = path
        if path:
            # lists aren't hashable
            key = os.path.sep.join(path)

        if key not in self.loaders:
            self.loaders[key] = NotebookLoader(path)
        return self.loaders[key]
Now we register the NotebookFinder with sys.meta_path:
sys.meta_path.append(NotebookFinder())
After this point, my notebooks should be importable.
Let's look at what we have in the CWD:
ls nbpackage
So I should be able to import nbimp.mynotebook.
Here is some simple code to display the contents of a notebook with syntax highlighting, etc.
from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import HtmlFormatter

from IPython.display import display, HTML

formatter = HtmlFormatter()
lexer = PythonLexer()

# publish the CSS for pygments highlighting
display(HTML("""
<style type='text/css'>
%s
</style>
""" % formatter.get_style_defs()
))
def show_notebook(fname):
    """display a short summary of the cells of a notebook"""
    with io.open(fname, 'r', encoding='utf-8') as f:
        nb = current.read(f, 'json')
    html = []
    for cell in nb.worksheets[0].cells:
        html.append("<h4>%s cell</h4>" % cell.cell_type)
        if cell.cell_type == 'code':
            html.append(highlight(cell.input, lexer, formatter))
        else:
            html.append("<pre>%s</pre>" % cell.source)
    display(HTML('\n'.join(html)))

show_notebook(os.path.join("nbpackage", "mynotebook.ipynb"))
So my notebook has a heading cell:
import shutil
from IPython.utils.path import get_ipython_package_dir

utils = os.path.join(get_ipython_package_dir(), 'utils')
shutil.copy(os.path.join("nbpackage", "mynotebook.ipynb"),
            os.path.join(utils, "inside_ipython.ipynb")
)
and import the notebook from IPython.utils:
from IPython.utils import inside_ipython
inside_ipython.whatsmyname()
This approach can even import functions and classes that are defined in a notebook using the %%cython magic.
Introduction: How to Augmented Reality App for Beginners With Unity 3D, Vuforia, and User Defined Targets
This tutorial is designed for anyone beginning with mobile development and augmented reality. We will use the Unity 3D video game engine as well as the Vuforia augmented reality plugin to animate some Imperial Walkers so they can take a stroll around your living room. We will go through how to modify Vuforia's sample scene for user defined targets. Vuforia requires the use of a fiducial marker or image target in order to augment a 3D object. The image it requires lets your mobile device know where the augmentation is to occur, and is often used to create a ground plane so you can place objects on top of it. The app we are going to make with this tutorial will allow the user to take a picture of any image they want, and the augmentation will occur on top of that image. This is much less cumbersome than creating a predefined image that the user must print out in order to use the app. This tutorial does not require any previous experience with Unity or Vuforia so it should be very quick and easy to implement.
All code and game assets can be found here:....
Step 1: Download Everything!
-Download Unity 3D:...
-Download the Vuforia SDK for Unity:
-Download the Imperial Walker 3D Model and the game assets folder from here:....
-Download the Vuforia sample package for Unity here:
Now, open Unity and create a new project, you can call it whatever you want.
Go to File -> Build Settings, and switch the build platform to iOS or Android depending on what type of phone you have. If you don't already have the appropriate modules installed, there will be a button next to each platform to install what you need.
Drag the Vuforia SDK into your assets folder, and then do the same thing with the Vuforia sample package.
Unzip the game assets folder and drag in the Imperial Walker .obj file as well.
Step 2: Prepare the Scene.
First, go to Vuforia and create a new app in the developer portal. Copy your app license key to your clipboard.
Go back to Unity, go to the samples folder, and find the scenes folder. Double click the user defined target scene to open it.
This scene should already work. If you click play in the editor and hold up a magazine or other image to your webcam you should be able to press the camera button with your mouse and a teapot should appear on top of it.
Click play again and lets make this teapot an Imperial Walker.
Under user defined target in the hierarchy you will see a teapot; delete that and drag in the Imperial Walker .obj file on top of where it says user defined target. This will make it a child.
Now we have to color the walker. Expand the AT-AT walker and go to the game assets folder. You should see a silver texture in there. Drag that silver texture onto every piece of the walker.
Step 3: Lets Make Our Walker Move.
Right-click the walker and add a new component off to the right. Add a new C# script and call it atScript.cs
Paste in this code:
using UnityEngine;
using System.Collections;

public class atScript : MonoBehaviour {

    // Use this for initialization
    void Start () {
    }

    // Update is called once per frame
    void Update () {
        transform.Translate (Vector3.forward * .01f * Time.deltaTime);
        transform.Rotate (Vector3.up * 3f * Time.deltaTime);
    }
}
This will cause our walker to move in a constant forward direction while turning at the same time.
Step 4: Add an Animation.
Now that our walker is moving, we need to add an animation so it doesn't look like it is gliding across the ground. We need to animate all 4 legs.
Select the walker and change its x, y, and z scale to .2 so its size is a little more manageable.
With the walker selected, go to the animation tab and click to create an animation.
These animations work on key frames and Unity has a system built in that will smoothly interpolate between them.
Go to the 1 second mark and move the feet and legs of the left front area to the position of one step (you will need to change position and rotation of each piece).
You will notice that the default position gets created for you at the 0 mark. Copy those keyframes and paste them in at the 2 second mark.
Now if you click play you will see the first leg doing a complete and smooth step.
Repeat this process for the other 3 legs.
You will now have an animation file in your assets folder. Click that and go up to the top right where you will see 3 lines. Click that and change normal to debug. Tick the check box for legacy.
Go back to your walker and add an animation component to it. Drag in the animation that you just created and change wrap mode to loop.
If you click play your walker will be...walking.
Duplicate this walker and reposition it as many times as you like, in the demo video I did it 3 times.
Step 5: Let's Add Some Snowy Dusty Stuff.
If you have seen the Battle of Hoth you will know it's in some kind of frozen wasteland. So let's create a frozen tundra type effect.
Go to assets at the top and import standard assets -> particle systems.
Go to the prefabs folder in there and find dust storm mobile.
Drag that onto the user defined target making it a child and click its check box on the top right to turn it on. Click the simulate button to see it in action.
Change its scale to .0001 across the board and move it into whatever position you see fit.
With the dust storm mobile prefab still selected, change its start and end color to white in order to give it a more snowy effect. If you want a darker scene change either one of those colors to black.
Step 6: Add the Cockpit and Make It Shoot.
Click on target builder UI and create a new UI image.
Find the cockpit image from the game assets folder and click on it.
Look up to the top right and change it from a texture to a sprite.
Click on the UI image you just created and drag the cockpit image into its empty slot.
Resize it so it spans the entire width and height of the canvas.
Right-click the target builder UI again and create a new button. Change its sprite to a circular image and change its color to red (or whatever color you want). Delete its child text.
Reposition that button as well as the build button to wherever you want.
Right-click the target builder UI and add a new C# script. Call it controllerScript.cs.
Paste in this code:
using UnityEngine;
using System.Collections;
using UnityEngine.UI;

public class controllerScript : MonoBehaviour {

    public Button fireButton;

    // Use this for initialization
    void Start () {
        fireButton.onClick.AddListener (OnButtonDown);
    }

    // Update is called once per frame
    void Update () {
    }

    void OnButtonDown () {
        GameObject bullet = Instantiate(Resources.Load("bullet", typeof(GameObject))) as GameObject;
        Rigidbody rb = bullet.GetComponent<Rigidbody>();
        bullet.transform.rotation = Camera.main.transform.rotation;
        bullet.transform.position = Camera.main.transform.position;
        rb.AddForce(Camera.main.transform.forward * 500f);
        bullet.GetComponent<AudioSource>().Play ();
        Destroy (bullet, 3);
    }
} // end of file
Step 7: Make It Shoot.
Before we can make this shoot we need to create a reference for the button we created.
Save and build the script you just added. Go back to the scene and on the target builder UI you will see an empty space inside the script we just made where it says public button. Drag in the button we previously created.
The script we added contains instructions for what to do when the button is pressed. Those instructions include firing a bullet and playing the laser blaster sound.
First lets create the bullet.
Right click anywhere in the hierarchy and create a sphere. Resize it so that it is shaped more like a bullet by extending its z scale and shrinking its x and y scale.
Right click in your assets folder and create a new material. Change the material to red (or whatever other color you want the bullet to be).
Drag that material onto the bullet.
Add a Rigidbody component to the bullet and uncheck Use Gravity. Also, remove the bullet's Sphere Collider component.
Add an audio source component onto the bullet and drag the laser blaster .wav file from the game assets folder into the audio clip slot.
Finally, rename the sphere "bullet" and drag it into your resources folder.
Step 8: Build It Out to iOS or Android.
Now when you click play everything should be working, all that is left is to get the app onto your phone.
Plug your phone into your computer via USB.
For iOS, go to Build Settings -> Player Settings and change the landscape orientation to left.
Change the bundle identifier to something like com.YourName.YourAppName
Make sure there is some kind of description inside where it says "Camera Usage Description"!
8 Discussions
Mathew I finally worked my way through this and finished late Friday. I had a case sensitivity issue with scripts. I renamed the public class names and that fixed both scripts.
This is an awesome tutorial. I had no previous experience in Unity 3d or other similar software. I'm a stay at home mom determined to teach myself how to build augmented reality applications. This was a great learning experience. Thanks so much.
Just replied to your message before I saw this, sorry it took so long I am not on here much. Thats soo good to here though, if you have any other questions I can be reached at matthewhallberg@gmail.com and let me know if there is anything specific you want to learn and I will try to incorporate it into a video.
Also, I plan to build out to my Android phone. What packages do I need to install on my phone? Anything specific? What are those steps to install? Thanks so much.
I'm working on this. A couple of issues that are different from your video. These may or may not be an issue. 1) When I make the script it pops up visual studio for me. Looks different than your environment. I copied pasted the script saved it and ran Build. 2) The next step to create the animation, my Unity environment does not have the tab for animation. See the attached photo. How do I add that tab or why is it missing? 3) Also note at the bottom of this image, there are some error messages. These appeared upon the import of the packages. Will these cause problems with getting this to run properly on my Android? If so how do I fix them. Thanks so much.
can you put this on the app store?
I don't think I can because if licensing issues, I think it might get taken down. Not sure though.
thats sad its pretty cool
thanks so much! | https://www.instructables.com/id/Star-Wars-Augmented-Reality-App-for-Beginners/ | CC-MAIN-2018-34 | refinedweb | 1,896 | 75.5 |
Dart’s Web UI package offers live, two-way data binding implemented with an efficient observable engine. The new observable system propagates changes proportional to the number of changes, instead of the number of watched expressions. This results in less work per event loop and more responsive web UIs.
This article covers Web UI’s observables and how to use them with interactive web apps. To learn more about Web UI, try our tutorials or articles.
Web UI helps you efficiently bind application data to HTML, and vice versa, with observables and observers. Observables are variables, fields, or collections that can be observed for changes. Observers are functions that run when an observable changes.
When an observable and an observer are connected, they are bound. For example, an HTML input field can be bound to some application data. There are two kinds of bindings with Web UI: one-way and two-way. A one-way binding updates some HTML text when application data changes. A two-way binding joins an input field to application data, such that whenever the application data or the field value changes, the other side of the binding also changes.
Any variable, class field, class, setter, getter, or collection can be observable. Use the @observable metadata annotation to tell Web UI "please observe this, other things care when it changes." The Web UI compiler (dwc) looks for @observable and generates the necessary boilerplate code to track the changes for you.
Instead of asking every possible observable "Did you change?" on every event loop over and over, Web UI has an efficient mechanism to notify only the right observers at the right time. When data is read from or written to an observable, Web UI stores a small record of the interaction. At the end of the event loop, Web UI updates only the observers that correspond to each recorded interaction.
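As a rough illustration of that bookkeeping, here is a small Python sketch (purely a model for this article; Web UI itself is implemented in Dart, and all names below are invented): each write to an observable field appends a change record, and flushing at the end of the "event loop" notifies only the observers of fields that actually changed.

```python
# Toy model of change-record batching: work done per flush is proportional
# to the number of recorded changes, not to the number of observers overall.
change_records = []   # (object id, field name) pairs recorded since last flush
observers = {}        # (object id, field name) -> list of callbacks

class Observable:
    def __setattr__(self, name, value):
        change_records.append((id(self), name))   # record the interaction
        object.__setattr__(self, name, value)

def observe(obj, field, callback):
    observers.setdefault((id(obj), field), []).append(callback)

def flush():
    # End of the "event loop": notify only observers of changed fields.
    seen = set()
    while change_records:
        key = change_records.pop(0)
        if key not in seen:
            seen.add(key)
            for callback in observers.get(key, []):
                callback()

class Person(Observable):
    pass

log = []
p = Person()
observe(p, "name", lambda: log.append(p.name))
p.name = "Alice"   # recorded, but the observer runs only at flush()
flush()            # log is now ["Alice"]
```

Note that repeated writes to the same field within one loop collapse into a single notification, which is one reason this scales with the number of changes.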
Web UI can observe a variable, and update HTML whenever the variable points to a new object.
Here is a simple example of data binding between a top-level Dart variable and inline HTML text. This example also includes declarative event binding.
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Hello World</title>
    <link rel="stylesheet" href="App.css">
  </head>
  <body>
    <h1>Hello Web UI</h1>
    <p>Web UI is {{ superlative }}</p>
    <button id="change-it" on-click="changeIt()">Change</button>
    <script type="text/javascript" src="dart.js"></script>
    <script type="application/dart" src="hello_world.dart"></script>
  </body>
</html>
A variable wrapped with {{ and }} is observed, and its object is converted to a string and inserted into the DOM. Any time the variable is changed, the HTML is updated.
Here is the Dart code that supports this simple app:
library hello_world;

import 'package:web_ui/web_ui.dart';

@observable
String superlative = 'awesome';

const List<String> alternatives = const <String>['wicked cool', 'sweet',
    'fantastic', 'wonderful'];

int _alternativeCount = 0;

String get nextAlternative =>
    alternatives[_alternativeCount++ % alternatives.length];

changeIt() {
  superlative = nextAlternative;
}

main() { }
The superlative top-level variable is the same variable that is observed in the HTML page with {{ superlative }}. The @observable metadata tells Web UI to observe this variable for changes.
Every time the button is clicked, the changeIt() function is run and the superlative variable is pointed to a different string. Because superlative changed, the HTML page changes, too. This is data binding in action!
You don't have to write code to deal with propagating changes, finding HTML elements, or updating HTML elements. All of the mechanics for observing changes and updating HTML are provided by Web UI.
This example shows how to observe a variable, but not actually the object a variable points to. Read the next section to learn how to observe an object’s state.
Web UI can also observe the internal state of objects. An observable variable only signals when the variable points to another object, but an observable class signals when any field on the object changes. This distinction is subtle but very important.
Consider this example, which includes a Person class. Here is a one-way binding between the expression person.name and HTML:
<p>Hello {{person.name}}!</p>
<p><button on-click="newName()">Change Name</button></p>
Here is the corresponding Dart code:
library observe_object;

import 'package:web_ui/web_ui.dart';

@observable
class Person {
  String name;
  Person(this.name);
}

final Person person = new Person('Bob');

const List<String> names = const <String>['Sally', 'Alice', 'Steph'];
int _nextCounter = 0;
String get nextName => names[_nextCounter++ % names.length];

newName() {
  person.name = nextName;
}

main() {}
We don’t need to observe the variable person because it always refers to the same object (it is marked as final). We need to observe the Person class because the name field changes. Therefore, @observable moved from the variable to the class.
Marking a class as @observable is the same as marking all of its fields as @observable. You can choose to mark individual fields, individual getters and setters, or the entire class.
To observe a List, Map, Set, or Iterable, wrap it with toObservable(). A change to the collection, such as an add, remove, or clear, is intercepted by the observable wrapper and recorded. Only changes to the collection, and not changes to the items themselves, will signal a change event.
Here is an example of an observable list. In the HTML, a template is rendered with the contents from the timestamps list.
<p>
  <button on-click="addTimestamp()">Add Timestamp</button>
  <button on-click="clear()">Clear</button>
</p>
<ul>
  <li template iterate="ts in timestamps">{{ts}}</li>
</ul>
Here is the corresponding Dart code:
library observe_list;

import 'package:web_ui/web_ui.dart';

final List<DateTime> timestamps = toObservable([]);

void addTimestamp() {
  timestamps.add(new DateTime.now());
}

void clear() {
  timestamps.clear();
}

main() {}
Notice how timestamps is really a wrapped observable list. Any change to the list, such as an add, remove, or clear, triggers the template in the HTML to be re-rendered.
The toObservable() wrapper is shallow, in that it doesn’t make all the individual contents of a collection observable. If you want to observe the changes to the items themselves, use @observable on the classes of the items stored in the list.
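To make that shallowness concrete, here is a hedged Python sketch (invented names, not the actual Dart wrapper): the wrapper notices structural changes such as add, remove, and clear, but a mutation inside a stored item passes through unrecorded.

```python
class ShallowObservableList:
    """Toy shallow wrapper: records add/remove/clear, but cannot see
    mutations happening inside the stored items themselves."""

    def __init__(self, items=None):
        self._items = list(items or [])
        self.changes = []                      # recorded structural changes

    def add(self, item):
        self._items.append(item)
        self.changes.append(("add", len(self._items) - 1))

    def remove(self, item):
        self._items.remove(item)
        self.changes.append(("remove", item))

    def clear(self):
        self._items.clear()
        self.changes.append(("clear", None))

    def __getitem__(self, index):
        return self._items[index]

    def __len__(self):
        return len(self._items)

people = ShallowObservableList()
people.add({"name": "Bob"})      # seen by the wrapper: a structural change
people[0]["name"] = "Alice"      # NOT seen: the item itself was mutated
```

This is why observing the items themselves requires marking their classes observable, as the next example shows.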
Consider the case of a Person class that has a nested Address class. Use @observable on both classes to ensure all fields on person and address are observable.
Here is the Dart code:
import 'package:web_ui/web_ui.dart';

@observable
class Person {
  String name;
  Address address;
}

@observable
class Address {
  String city;
}

@observable
Person person;

main() {
  person = new Person()
    ..name = 'Clark Kent'
    ..address = (new Address()..city = 'Metropolis');
}
And here is the HTML:
<p>
  Name: <input type="text" bind-value="person.name">
</p>
<p>
  City: <input type="text" bind-value="person.address.city">
</p>
<p>Person's name: {{person.name}}</p>
<p>Person's Address's city: {{person.address.city}}</p>
We’ve talked about how the observable system tracks changes to variables, fields, and collections. This is only half the story. To actually respond to the change, you also need observers. An observer observes an observable expression and runs a function when the expression changes.
Most of the time, you don’t need to explicitly create observers because the dwc transformer generates them for you (through bindings between HTML and observed expressions). We don’t recommend that you manually recreate the functionality of bindings, but you can create your own observers when you need some code to respond to changes.
To illustrate how observers work, let’s manually recreate some of what the dwc does for you.
Here is some HTML with a placeholder span element that gets updated when a button is clicked.
<p>The time is <span id="msg"></span></p>
<p><button on-click="updateMsg()">Update</button></p>
Here is the corresponding Dart code.
library manual_watching;

import 'dart:html';
import 'package:web_ui/web_ui.dart';

@observable
String msg;

updateMsg() {
  msg = new DateTime.now().toString();
}

main() {
  observe(() => msg, (_) {
    querySelector('#msg').text = msg;
  });
}
A new observer is created to watch the msg object. When msg changes, a callback is run to update the span with a new message.
When a new observer is created, it first runs the expression to be observed. It will detect each and every Observable field that was read by that expression, and listen for changes to that field. If the final value is Observable, it listens for any changes. This direct connection is one reason why Web UI’s observables are so efficient.
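That "record every field read while the expression runs" trick can be sketched in Python as well (again an invented model, not Web UI's API): while the observed expression is evaluated, a global slot holds the current observer, and every field read during that window subscribes it.

```python
current_observer = None   # set only while an observed expression is running

class Field:
    def __init__(self, value):
        self.value = value
        self.subscribers = set()

    def get(self):
        if current_observer is not None:
            self.subscribers.add(current_observer)   # the read is detected
        return self.value

    def set(self, value):
        self.value = value
        for callback in list(self.subscribers):      # only exact readers run
            callback(value)

def observe(expression, on_change):
    global current_observer
    current_observer = on_change
    try:
        expression()   # every Field.get() inside subscribes on_change
    finally:
        current_observer = None

msg = Field("start")
seen = []
observe(lambda: msg.get(), lambda value: seen.append(value))
msg.set("later")   # seen is now ["later"]
```

Because the subscription is built from actual reads, fields that the expression never touches carry no observers at all.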
Web UI includes a transformer, named Dart Web Compiler (dwc), that converts your code into vanilla Dart and dart:html code so that it can ultimately be compiled to vanilla JavaScript and HTML. The dwc looks for @observable metadata, and generates the appropriate getters and setters that record changes to fields, variables, etc.
For example, here is a class Person that has @observable metadata:
@observable
class Person {
  String name;
  Person(this.name);
}
Here is the generated Person class:
import 'package:web_ui/web_ui.dart';

@observable
class Person extends Observable {
  String __$name;

  String get name {
    if (__observe.observeReads) {
      __observe.notifyRead(this, __observe.ChangeRecord.FIELD, 'name');
    }
    return __$name;
  }

  set name(String value) {
    if (__observe.hasObservers(this)) {
      __observe.notifyChange(this, __observe.ChangeRecord.FIELD, 'name',
          __$name, value);
    }
    __$name = value;
  }

  Person(name) : __$name = name;
}
Notice how the original name field is converted into setters and getters that integrate the observable mechanics. This is where Web UI gets much of its performance from, because it knows exactly what fields changed. Instead of asking every single watched expression, “Did you change?”, it knows exactly what happened.
Thanks to Dart’s getters and setters, consumers of the Person class are unaware it changed. Getters and setters look like regular fields in Dart. Here is an example:
main() {
  var person = new Person('Bob');
  person.name = 'Alice'; // goes through the setter, which records the change
}
As another example, here is @observable applied to a top-level variable.
library foo;

import 'package:web_ui/web_ui.dart';

@observable
String superlative;
Here is the generated code:
import 'package:web_ui/web_ui.dart';

String __$superlative = 'awesome';

String get superlative {
  if (__observe.observeReads) {
    __observe.notifyRead(__changes, __observe.ChangeRecord.FIELD,
        'superlative');
  }
  return __$superlative;
}

set superlative(String value) {
  if (__observe.hasObservers(__changes)) {
    __observe.notifyChange(__changes, __observe.ChangeRecord.FIELD,
        'superlative', __$superlative, value);
  }
  __$superlative = value;
}
Notice how you can have top-level getters and setters, too. Cool!
When an observable variable, field, or collection is changed, the following steps occur immediately:
Then, right before the end of the current event loop (in theory, before painting), the following actions occur:
The change propagation logic attempts to detect infinite loops to avoid scenarios where one change triggers another change in a cycle.
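One common way to guard against such cycles (a hedged sketch, not the actual Web UI algorithm) is to cap the number of propagation rounds: keep flushing work that observers schedule, but abort once the round count suggests a change → observer → change loop.

```python
def propagate(queue, max_rounds=100):
    """Flush scheduled callbacks; each callback may return newly scheduled
    callbacks. Bail out if propagation never settles."""
    rounds = 0
    while queue:
        rounds += 1
        if rounds > max_rounds:
            raise RuntimeError("possible cyclic binding detected")
        batch, queue = queue, []
        for callback in batch:
            queue.extend(callback())   # follow-up work scheduled by observers

def settled():
    return []          # an observer that triggers no further changes

def looping():
    return [looping]   # an observer that keeps re-triggering itself

propagate([settled])   # completes quietly
try:
    propagate([looping])
    outcome = "no cycle detected"
except RuntimeError as error:
    outcome = str(error)
```

A fixed round limit is a blunt instrument, but it turns an infinite loop into a diagnosable error.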
The Web UI package provides an efficient way to observe variables, classes, fields, getters and setters, and even collections. The dwc generates code from @observable metadata, so that the system can track exactly what data was changed. Observables and observers are used together to implement one-way and two-way data bindings.
Learn more about Web UI, follow a tutorial, or join the Web UI mailing list. Dart’s Web UI is open source on Github, participation most welcome! | https://www.dartlang.org/web-ui/observables/ | CC-MAIN-2014-15 | refinedweb | 1,821 | 57.16 |
PICA::Record - Perl module for handling PICA+ records
version 0.584
To get a deeper insight into the API, have a look at the documentation, the examples (directory examples), and the tests (directory t).
Here are some additional two-liners:
# create a field
my $field = PICA::Field->new( "028A",
    "9" => "117060275",
    "d" => "Martin",
    "a" => "Schrettinger"
);

# create a record and add some fields (note that fields can be repeated)
my $record = PICA::Record->new();
$record->append(
    '044C', 'a' => "Perl",
    '044C', 'a' => "Programming",
);

# read all records from a file
my @records = PICA::Parser->new->parsefile( $filename )->records();

# read one record from a file
my $record = readpicarecord( $filename );

# read one record from a string
my ($record) = PICA::Parser->parsedata( $picadata, Limit => 1 )->records();

# get two fields of a record
my ($f1, $f2) = $record->field( 2, "028B/.." );

# extract some subfield values
my ($given, $surname) = ($record->sf(1,'028A$d'), $record->sf(1,'028A$a'));

# read records from STDIN and print to STDOUT if field 003@ exists
PICA::Parser->new->parsefile( \*STDIN, Record => sub {
    my $record = shift;
    print $record if $record->field('003@');
    return;
});

# print record in normalized format and in HTML
print $record->normalized;
print $record->html;

# write some records in XML to a file
my $writer = PICA::Writer->new( $filename, format => 'xml' );
$writer->write( @records );
PICA::Record is a module for handling PICA+ records as Perl objects.
This module includes and installs the scripts parsepica, picaimport, and winibw2pica. They provide most functionality on the command line without having to deal with Perl code. Have a look at the documentation of these scripts! More examples are included in the examples directory - maybe the application you need is already included, so have a look!
Character encoding is an issue of permanent confusion both in library databases and in Perl. PICA::Record treats character encoding the following way: Internally all strings are stored as Perl strings. If you directly read from or write to a file that you specify by filename only, the file will be opened with binmode utf8, so the content will be decoded or encoded in UTF-8 Unicode encoding.
If you read from or write to a handle (for instance a file that you have already opened), binmode utf8 will also be enabled unless you have already specified another encoding layer:
open FILE, "<$filename";
$record = readpicarecord( \*FILE );  # implies binmode FILE, ":utf8"

open FILE, "<$filename";
binmode FILE, ':encoding(iso-8859-1)';
$record = readpicarecord( \*FILE );  # does not imply binmode FILE, ":utf8"
If you read or write from Perl strings, UTF-8 is never implied. This means you must explicitly enable utf8 on your strings. As long as you read and write PICA record data from files and other sources or stores you should not need to do anything, but if you modify records in your scripts, use utf8.
If you download PICA+ records with the WinIBW3 client software, you may first need to convert the records to valid PICA+ syntax. For this reason this module contains the script winibw2pica.
PICA+ is the internal data format of the Local Library System (LBS) and the Central Library System (CBS) of OCLC, formerly PICA. Similar library formats are the MAchine Readable Cataloging format (MARC) and the Maschinelles Austauschformat für Bibliotheken (MAB). In addition to PICA+ in CBS there is the cataloging format Pica3 which can be losslessly converted to PICA+ and vice versa.
PICA::Record is a Perl package that provides an API for PICA+ record handling. The package contains a parser interface module PICA::Parser to parse PICA+ (PICA::PlainParser) and PICA XML (PICA::XMLParser). Corresponding modules exist to write data (PICA::Writer and PICA::XMLWriter). PICA+ data is handled in records (PICA::Record) that contain fields (PICA::Field). To fetch records from databases via SRU or Z39.50 there is the interface PICA::Source and to access a record store via CWS webcat interface there is PICA::Store.
You can use PICA::Record for instance to:
Base constructor for the class. A single string will be parsed line by line into PICA::Field objects; empty lines and start record markers will be skipped. More than one or non-scalar parameters will be passed to append, so you can use the constructor in the same way:
my $record = PICA::Record->new('037A','a' => 'My note');
If no data is given then it just returns a completely empty record. To load PICA records from a file, see PICA::Parser, to load records from a SRU or Z39.50 server, see PICA::Source.
If you provide a file handle or IO::Handle, the first record is read from it. Each of the following four lines has the same result:
$record = PICA::Record->new( IO::Handle->new("< $filename") );

($record) = PICA::Parser->parsefile( $filename, Limit => 1 )->records();

open (F, "<:utf8", $plainpicafile);
$record = PICA::Record->new( \*F );
close F;

$record = readpicarecord( $filename );
Returns a clone of a record by copying all fields.
$newrecord = $record->copy;
Returns a list of PICA::Field objects with tags that match the field specifier, or in scalar context, just the first matching field.
You may specify multiple tags and use regular expressions.
my $field  = $record->field("021A","021C");
my $field  = $record->field("009P/03");
my @fields = $record->field("02..");
my @fields = $record->field( qr/^02..$/ );
my @fields = $record->field("039[B-E]");
If the first parameter is an integer, it is used as a limitation of response size, for instance to get only two fields:
my ($f1, $f2) = $record->field( 2, "028B/.." );
The last parameter can be a function to filter returned fields in the same way as a field handler of PICA::Parser. For instance you can filter out all fields with a given subfield:
my @fields = $record->field( "021A", sub { $_ if $_->sf('a'); } );
Shortcut method to get subfield values. Returns a list of subfield values that match or in scalar context, just the first matching subfield or undef. Fields and subfields can be specified in several ways. You may use wildcards in the field specifications.
These are equivalent (in scalar context):
my $title = $pica->field('021A')->subfield('a');
my $title = $pica->subfield('021A','a');
You may also specify both field and subfield separated by '$' (don't forget to quote the dollar sign) or '_'.
my $title = $pica->subfield('021A$a');
my $title = $pica->subfield("021A\$a");
my $title = $pica->subfield("021A$a");  # $ not escaped
my $title = $pica->subfield("021A_a");  # _ instead of $
You may also use wildcards like in the field() method of PICA::Record and the subfield() method of PICA::Field:
my @values = $pica->subfield('005A', '0a');     # 005A$0 and 005A$a
my @values = $pica->subfield('005[AIJ]', '0');  # 005A$0, 005I$0, and 005J$0
If the first parameter is an integer, it is used as a limitation of response size, for instance to get only two fields:
my ($f1, $f2) = $record->subfield( 2, '028B/..$a' );
Zero or negative limit values are ignored.
Same as subfield but always returns an array.
Returns an array of all the fields in the record. The array contains a PICA::Field object for each field in the record. An empty array is returned if the record is empty.
Returns the number of fields in this record.
Returns the occurrence of the first field of this record. This is only useful if the first field has an occurrence.
Get the main record (level 0, all tags starting with '0').
Get a list of local records (holdings, level 1 and 2) or the local record with given ILN. Returns an array of PICA::Record objects or a single holding. This method also sorts level 1 and level 2 fields.
Get an array of PICA::Record objects with fields of each copy/item included in the record. Copy records are located at level 2 (tags starting with '2') and differ by tag occurrence.
Return true if the record is empty (no fields or all fields empty).
Get or set the identifier (PPN) of this record (field 003@, subfield 0). This is equivalent to $self->subfield('003@$0') and always returns a scalar or undef. Pass undef to remove the PPN.
Get zero or more EPNs (item numbers) of this record, which is field 203@/.., subfield 0. Returns the first EPN (or undef) in scalar context or a list in array context. Each copy record (get them with method items) should have only one EPN.
Get zero or more ILNs (internal library numbers) of this record, which is field 101@$a. Returns the first ILN (or undef) in scalar context or a list in array context. Each holdings record is identified by its ILN.
Appends one or more fields to the end of the record. Parameters can be PICA::Field objects or parameters that are passed to PICA::Field->new.
my $field = PICA::Field->new( '037A','a' => 'My note' ); $record->append( $field );
is equivalent to
$record->append('037A','a' => 'My note');
You can also append multiple fields with one call:
my $field = PICA::Field->new('037A','a' => 'First note');
$record->append( $field, '037A','a' => 'Second note' );

$record->append(
    '037A', 'a' => '1st note',
    '037A', 'a' => '2nd note',
);
Please note that passed PICA::Field objects are not be copied but directly used:
my $field = PICA::Field->new('037A','a' => 'My note');
$record->append( $field );
$field->update( 'a' => 'Your note' );  # Also changes $record's field!
You can avoid this by cloning fields or by using the appendif method:
$record->append( $field->copy() );
$record->appendif( $field );
You can also append copies of all fields of another record:
$record->append( $record2 );
The append method returns the number of fields appended.
Optionally appends one or more fields to the end of the record. Parameters can be PICA::Field objects or parameters that are passed to PICA::Field->new.
In contrast to the append method, this method always copies values; it ignores empty subfields and empty fields (that are fields without subfields or with empty subfields only), and it returns the resulting PICA::Record object.
For instance this command will not add a field if $country is undef or "":
$r->appendif( "119@", "a" => $country );
Replace a field. You must pass a tag and a field. If you pass a code reference, the code will be called for each field and the field is replaced by the result unless the result is undef.
Please do not use this to replace repeatable fields because they would all be set to the same values.
Delete fields specified by tags and return the number of deleted fields. You can also use wildcards and compiled regular expressions as tag selectors.
Sort the fields of this record. Respects level 0, 1, and 2.
Add header fields to a PICA::Record. You must specify two named parameters (eln and status). This method is experimental. There is no test whether the header fields already exist. This method may be removed in a later release.
Returns a string representation of the record for printing. See also PICA::Writer for printing to a file or file handle.
Returns record as a normalized string. Optionally adds prefix data at the beginning.
print $record->normalized();
print $record->normalized("##TitleSequenceNumber 1\n");
See also PICA::Writer for printing to a file or file handle.
Write the record to an XML::Writer or return an XML string of the record. If you pass an existing XML::Writer object, the record will be written with it and nothing is returned. Otherwise the passed parameters are used to create a new XML writer. Unless you specify an XML writer or an OUTPUT parameter, the resulting XML is returned as string. By default the PICA-XML namespaces with namespace prefix 'pica' is included. In addition to XML::Writer this methods knows the 'header' parameter that first adds the XML declaration and the 'xslt' parameter that adds an XSLT stylesheet.
Returns an HTML representation of the record for browser display. See also the pica2html.xsl script to generate a more elaborate HTML view from PICA-XML.
Write a single record to a file or stream and end the output. You can pass the same parameters as known to the constructor of PICA::Writer. Returns the PICA::Writer object that was used to write the record. You can check the status of the writer with a simple boolean check.
The functions readpicarecord and writepicarecord are exported by default. On request you can also export the function picarecord which is a shortcut for the constructor PICA::Record->new and the functions pgrep and pmap. To export all functions, import the module via:
use PICA::Record qw(:all);
Evaluates the COND for each field of $record (locally setting $_ to each field) and returns a new PICA::Record containing only those fields that match. Instead of a PICA::Record you can also pass any values that will be passed to the record constructor. An example:
# all fields that contain a subfield 'a' which starts with '2'
pgrep { $_ =~ /^2/ if ($_ = $_->sf('a')); } $record;

# all fields that contain a subfield '0' in level 0
pgrep { defined $_->sf('0') } $record->main;
Evaluates the COND for each field of
$record (locally setting $_ to each field), treats the return value as PICA::Field (optionally passed to its constructir), and returns a new record build if this fields. Instead of a PICA::Record field you can also pass any values that will be passed to the record constructor.
Read a single record from a file. Returns a non-empty PICA::Record object or undef. Shortcut for:
PICA::Parser->parsefile( $filename, Limit => 1 )->records();
In array context you can use this method as shortcut to read multiple records if you specify a
Limit parameter. use
Limit=>0 to read all records from a file. The following statements are equivalent:
@records = readpicarecord( $filename, Limit => 0 ); @records = PICA::Parser->parsefile( $filename )->records()
Write a single record to a file or stream. Shortcut for
$record->write( [ $output ] [ format => $format ] [ %options ] )
as described above - see the constructor of PICA::Writer for more details. Returns the PICA::Writer object that was used to write the record - you can use a simple if to check whether an error occurred.
Shortcut for PICA::Record->new( ... )
At CPAN there are the modules MARC::Record, MARC, and MARC::XML for MARC records and Encode::MAB2 for MAB records. The deprecated module Net::Z3950::Record also had a subclass Net::Z3950::Record::MAB for MAB records. You should now better use Net::Z3950::ZOOM which is also needed if you query Z39.50 servers with PICA::Source.. | http://search.cpan.org/~voj/PICA-Record-0.584/lib/PICA/Record.pm | CC-MAIN-2017-26 | refinedweb | 2,410 | 61.97 |
Tag Helper Syntax for View Components
Tag Helper Syntax for View Components
Calling view components in ASP.NET Core views may lead to long and ugly code lines containing generic type parameter of view component and anonymous type for...
Join the DZone community and get the full member experience.Join For Free
Calling view components in ASP.NET Core views may lead to long and ugly code lines containing generic type parameter of view component and anonymous type for
InvokeAsync()method parameters. Something my readers have been interested in for a long time is the tag helper syntax for view components.
ASP.NET Core Pager Example
Suppose you have pager implemented as view component and there's support for multiple views.
public class PagerViewComponent : ViewComponent { public async Task<IViewComponentResult> InvokeAsync(PagedResultBase pagedResult, string viewName = "Default") { var action = RouteData.Values["action"].ToString(); pagedResult.LinkTemplate = Url.Action(action, new { page = "{0}" }); return await Task.FromResult(View(viewName, pagedResult)); } }
This is how you call pager view component in views. Arguments are given as anonymous objects that are matched by an argument list of the component's
InvokeAsync() |method.
@(await Component.InvokeAsync<PagerViewComponent>(new { pagedResult = Model, viewName = "PagerSmall" }))
The nice thing is that you don't need such a long call in your views. ASP.NET Core supports a special tag helper syntax for view components. There's a special namespace called "vc" and through this we can call view components using the tag helper syntax. You can replace the call above with shorter one shown here.
<vc:pager
NB! Make sure you add your view components name space to the _ViewImports.cs file that is located in the Views folder. If you don't, then ASP.NET Core cannot find your view components.
Wrapping Up
View components and tag helpers are powerful features in ASP.NET Core. Tag helpers have nice and clean syntax that view components lack when included in views. There's special "vc" tag prefix in ASP.NET Core that brings tag helper syntax for view components to Razor views. Instead of anonymous object with arguments we can use tag attributes to give values to
InvokeAsync() method of view component. This leads us to shorter and cleaner views that are easier to handle.
Related Articles by Gunnar Peipman:
Published at DZone with permission of Gunnar Peipman , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/tag-helper-syntax-for-view-components?utm_medium=feed | CC-MAIN-2019-26 | refinedweb | 413 | 57.87 |
ICANN Names New CEO, Will Pay Him $800,000 To Run the Internet 141
darthcamaro writes "ICANN has officially hired a new CEO to replace the Rob Beckstom. ICANN industry unknown Fadi Chehade is taking the top job — but there is a catch. He can't start for another 90 days, even though ICANN has been looking for a new CEO for months. Even better is Chehade's salary. ICANN will pay him $800,000 a year. Is the CEO of ICANN one of the highest paying jobs in the Internet governance landscape?"
Hey, I'll do it for half that. (Score:3)
I can't imagine I'm the only one.
Re: (Score:3, Funny)
I'd definitely do it for half than that
... but if they insist on paying me 800k, I will not complain lol
Re: (Score:1)
Or hire a woman and pay her $616,000.
Re: (Score:2)
half is the bribe (Score:2)
half is the bribe
Re: (Score:3, Insightful)?
Bitch about $800k all you want, but at $400k I think
Re: (Score:1)
Re: (Score:2)
Re: (Score:1)
I agree with what you are saying.
I am curious, though. Why doesn't the Internet community form its own domain system, and not worry about ICANN?
Re: (Score:2)
Re: (Score:2, Insightful)
Bitch about $800k all you want, but at $400k I think we get a $400k run internet.
Indeed, back when the internet was free of commercial interests ICANN was not needed, as people could (and would) get stuff done because they wanted to, for no charge.
Re: (Score:2)?
Hell, yes I'd do it better. It wouldn't be hard to. Currently, the TLD thing shows how corrupt the system is. And I really don't understand why you are talking about WenJiaBao, or the Dalai Lama in the same sentence when we are talking about ICANN: you must have smoked weed or something. But when we're at it: that's the problem, politics should not be involved in something that should stay neutral.
Bitch about $800k all you want, but at $400k I think we get a $400k run internet. Pay for performance is a world-wide metric. Do I want someone to do it for free? No, because that is what I will get in return.
At the beginning of the Internet, it wasn't run by a company, and it was working well. The way ICANN runs thing
Re: (Score:2)
"Seriously, everyone here has an opinion but do you have the expertise to run it?"
Why yes. I did this from 1996 to 2006. Cost: $0.
Postel used to do it as a part time $15K/yr "task".
It really reallly isn't hard. If you'd tried or done it you'd know that. They make it look scary to justify the gold plated nonsen$e they get away with.
Re: (Score:2)
It's simply too much money for the job, smart people do great jobs for considerably less. What they are going to get for that kind of money is a MBA drone who talks in buzzwords, has no imagination, and only cares about acquiring as much personal wealth as possible.
People that get paid more just get more greedy and more not less susceptible to bribery and other types of corruption. You can't pay someone enough to change their human nature.
Re:Hey, I'll do it for half that. (Score:5, Informative)
Pay for performance is a world-wide metric.
Please stop perpetuating lies. It's an old wives tale that has absolutely no scientific backing. Evidence shows the opposite: high compensation has a detrimental effect on productivity of creative white collar employees. (This does not apply to manual labor workers) [ted.com]
So yeah, I would like the guy getting paid $100k instead, and use the remaining $700k to add new fiber infrastructure.
Re: (Score:2)
Re: (Score:2)
Bitch about $800k all you want, but at $400k I think we get a $400k run internet.
Exactly, thats why you pay CEOs multi million dollar bonuses to succed in the market like Bear Sterns, Fannie Mae, Freddie Mac, Lehman Brothers, Bear Stearns and Merrill Lynch, Morgan Stanley, Goldman Sachs etc
Re:Hey, I'll do it for half that. (Score:4, Insightful)
Postel did it for $15K/yr and the names, numbers and infrastructure grew more under Jon that at any other time. 250 tlds in a few years.
ICANN: half a billion dollars and we got
.coop
The thing is, Jon knew how to configure a nameserver. How many of the six figure guys in ICANN do you suppose can do this?
Re: (Score:2)
How to run it? Are you kidding?
First of all the CEO is never in the office. Second, ICANN has been around for a decade and has burned though ten of millions of dollars. Show me the deliverables. If any other government agency acted like this there'd be charges.
Keep in mind Beckstrom succeeded a guy that lied to congress about how much money he made (it's on youtube!) and was quickly sent home back to Australia.
I mean, that much money to run THIS: [vrx.net]
Re: (Score:2)
makes enough doing so that bribery is not a major source of income.
After a certain point, higher salaries actually tend to be indicative of a greater likelihood of bribeability. Someone who is interested in the work first, and money only as a means to an end, is relatively unbribeable. Someone who is only willing to do the work if the price is right is willing to do anything if the price is right. And there's always someone who can offer more than the salary, no matter how high you set it.
Re: (Score:2)
Woman at the bar to man next to her, "$100? I wouldn't sleep with you for $200!"
"Okay," says the man. "How about $300? We already know you're a whore, now we can haggle price."
Please do note nothing on talent or skill is addressed.
I dunno, but ICANN seems to have been screwing up by the numbers of late. I realize non-profits can be a great dodge for the upper crust, but $800k to facilitate extortion for silly domains? And competence is still not addressed.
More like pay for bullshit (Score:2)
Pay for performance is only a world wide metric if you consider the song and dance performances corporate officers give to justify their obscene salaries.
Seriously their only real job skill is selling their own self worth to their bosses, or board of directors.
Re: (Score:2)
... and I'll start tomorrow!
Except I'd tell the UN, with their grabby little paws, to get stuffed so probably I don't qualify. Oh, and the offices are moving from LA to Bermuda or Grand Cayman.
Oh, and I outsource registration for the new TLD's to existing registrars with good track records, so people don't need to use ICA clients to virtual machines with dubious availability to fill out forms.
OK, one more: I reduce the ICANN fees by 75%, but only after there's enough money to build a 50' monument to Jon
Re: (Score:2)
Based on your low slashdot ID you will do a better job too.
This guy is being paid way too much. ICANN is behaving irrationally by paying this much money, this reeks of corruption.
Re: (Score:2)
I used to do it for free.
CEO Pay (Score:5, Interesting)
Re:CEO Pay (Score:5, Informative)
Yeah, I mean, he has to run the business unit, ensure that sales and marketing are doing their jobs and that products are delivered to stores. Oh wait, HE AIN'T running a business at ALL!
People keep trying to rationalize these salaries as if there is some CEO shortage. Really it all about the good ole boy network and I will pad your salary and you pad mine. I remember after the banking crash in 2008 and they had someone reviewing salaries at banks. Every banking officer claimed that they were above average and deserved a raise!
I tell you what, lets set some goals for this guy as a CEO and if it meets them then he can have his huge salary. Otherwise this is just a welfare check to the overpaid.
Re:CEO Pay (Score:5, Funny)
Re: (Score:1)
I'm curious: how much do you think that someone who runs the internet should be paid?
Keep in mind that many parts of it are now the commercial/communication/entertainment hub to the world.
Do you think it should be less than priceline.com ($50M), Qualcomm ($36M), Viacom ($31M), Time Warner ($20M), and eBay ($15M). Presumably, he has the skillset to do most of these jobs. Microsoft clocks in at $1.4M, so he is making roughly half of that...
He isn't exactly getting stock options to sweeten the deal...
I find
Re:CEO Pay (Score:4, Insightful)
> I'm curious: how much do you think that someone who runs the internet should be paid?
How much is enough?
Keep in mind: Greed has no limit.
Re: (Score:2)
Same for any job. Enough to be immune to luring away (or bribing) from the competition. Enough to afford for the individual's set of skills. Enough to be commensurate with responsibility..
I would, however, say that the role of ICANN is more important, requires a higher skillset, and is more likely to be bribed than
Re: (Score:3)
Enough to be immune to luring away (or bribing) from the competition.
Greed knows no limit. Somebody demanding an $800k per year salary could easily be bribed for a few million if they are prone to being bribed. What's really silly is just how little people can be bribed for, people who could afford the things they are being bribed with..
The list just shows that CEOs are overpaid. Nothing new there. A CEO just needs to be a competent manager, and there are plenty of them around.
Re:CEO Pay (Score:4, Informative)
"A CEO just needs to be a competent manager, and there are plenty of them around."
Funniest thing ever said on slashdot!!
Re: (Score:1)
Re: (Score:2)
Despite the Dilbert mentality techies like to have about managers, I've known several good managers who were smart people and good at what they did. I've known some bad ones, too, but the same could be said about my tech colleagues.
Re: (Score:2)
The problem is that the management competence is not enough to be a CEO. You must have a close relationship with big players in finance that will set the goal of the amount of money there expect to extract from your business. The CEO salary, from there point of view, is just a small return on the massive profit there expect to get.
So if you don't have a plan to be very profitable, you will not be selected as CEO. If you ever get selected, this is because you promise massive profit and naturally expect a big
Re: (Score:2)
There ARE plenty of competent managers around. They just never get to the position of CEO, since they don't spend enough of their time backstabbing their coworkers.
Re: (Score:2)
If you're hiring someone you feel you need to make 'immune to luring away (or bribing)' you're hiring someone you already know is bribable and who you know would leave you for a larger salary. That rather sounds like a mistake.
Frankly I'd wager it's more a case of paying him enough that he'll pay you or your friends more when he's a member on the board of the company in which you're applying for the CEO position.
Most high paying corporate jobs have less to do with skills than with membership in the boys clu
Re:CEO Pay (Score:5, Insightful)
2.718 times the average industrial wage.
I believe that's somewhere in the region of $135,000, but I don't have exact figures for the median US wage. The multiplier is obvious.
Yes. Moreover, I think that such salaries should not be permitted in publicly listed, limited liability companies.
A screaming money casting its dung around the office probably has the skillset to run run them as well, since running them into the ground appears to be the only thing modern CEOs actually do in return for their compensation. That and engage in crime, but I digress.
Then doubtless you will be comfortable with the corresponding increase in your tax bill required to pay for it and the multitude of linked salaries. Moreover, you will of course be perfectly contented in seeing your own wages decrease in value of in real terms to support the increasingly bloated and unearned salaries of the class you so admire. Enjoy your banana republic.
Re: (Score:2, Insightful)
Nothing. No one should run the internet. But if I HAD to pick someone? It wouldn't be someone making 800k; it would be RMS or someone from the EFF, who, I suspect, would happily do so for free or nearly so.
Re:CEO Pay (Score:5, Insightful)
You had my interest until you suggested RMS running the internet. 800k is a bargain if that is the compitition.
Re: (Score:3)
Dude, he doesn't "run the internet". His job, apparently, is nothing more than finding new ways of polluting the gTLD namespace. If he didn't turn up for work for the next three months, the internet would not suddenly collapse.
Re: (Score:2)
Re: (Score:2)
" If he didn't turn up for work for the next three months"
You think the CEO actually turns up at the office? Hahahaha, you haven't checked, have you?
Congress was surprised to find this wasn't true either: [youtube.com]
Re: (Score:2)
I'm curious: how much do you think that someone who runs the internet should be paid?
Someone who runs the Internet? A hell of a lot! And there should be multiple redundant copies of him!
The head of ICANN? Much less.
Re: (Score:2)
"I'm curious: how much do you think that someone who runs the internet should be paid?"
He doesn't run the Internet. He heads up the company that for a decade has blocked the development of new top level domains for the trademark lobby/mpaa/riaa. This could have all been finished in 1998/1999.
Notice the Internet ran fine - and grew - before ICANN? And that since the there have been very few innovations and development. Just the way the intellectual property lobby wants it.
ICANN. Doesn't. Actually. Do. Anythi
Re:CEO Pay (Score:5, Interesting)
Competition for qualified talent is difficult at the CEO level.
Says who?
Re: (Score:2)
Says the people deciding on the pay checks, obviously.
Re:CEO Pay (Score:4, Funny)
It's so difficult that most corporations never manage it and yet they continue year after year
Re: (Score:3)
$800,000 isn't even all that much, when you're talking about executive pay. That's probably less than 10 times what an engineer at ICANN would make. In contrast, the average CEO made 380 times [cnn.com] what the average worker made in 2011.
Re:CEO Pay (Score:5, Informative)
the average CEO made 380 times what the average worker made in 2011.
No. The average CEO made far, far less than that. The figure you quote is only for CEOs of 300 of the largest public corporations. It doesn't include the millions of smaller corporations.
Re:CEO Pay (Score:4, Informative)
Apparently, you're right if you count only the salary, in which case a CEO makes roughly 90 times the worker salary. If you include all income, it is close to 500 times the worker salary. Sorry, but I don't see how this can be justified in terms of productivity or management skills or whatever..
From here [payscale.com]
Re: (Score:1)
Re: (Score:2)
Don't forget the 4X a year first class flights to five star hotels for a week for "meetings".
Re: (Score:2)
Re: (Score:2)
$800,000 isn't even all that much, when you're talking about executive pay.
We're talking about an honorific position here, of someone who should be listening to the world. We aren't talking about the CEO of a big corp/bank making billions. Oh and when we're at it: even bankers shouldn't get that much. Let's put it this way: for ANY position, this is too much money, yet even more in the case of ICANN, where it should be a charge rather than a gift.
Re:CEO Pay (Score:5, Insightful)
You should have used sarcasm tags.
The thing I've seen over the years is that the good CEOs make a big difference for their companies. With the effects of their decisions being as important as they are they can swing billions of dollars one way or another.
The bad ones can ruin a company, or at least drive it into the gutter.
The problem is that both the good and bad get extremely high pay, and only the good ones are worth it.
The way CEO incentives work is all wrong.
Then the way boards and CEOs interact is often broken too.
Re: (Score:2)
Re: (Score:2)
Sadly, the "qualification" is not the ability to competently run something, but rather, the membership in the special club which may or may not correlate with actual competence. For instance, Carly Fiorina....
Do you really believe, that in a nation of 300M people, that we couldn't find people willing and capable to do almost any administrative job for far less than 800k in total remuneration?
Re:CEO Pay (Score:4, Insightful)
The "hire a great CEO" problem is very similar to the "hire a great programmer" problem.
The real deal in both roles commands a huge salary, and is totally worth it. The trouble is, if your company doesn't already have one, it has no expertise to judge if the person they want to hire is worth the money.
The second-rate software company that hires a $500/hour consultant is no different from the big firm that hires a $5000/hour CEO. They have little ability to judge skills, and so tend to get suckered by a smooth, well-groomed candidate.
The firms with the expertise in place (e.g. Google for technical hires, Goldman Sachs for management hires) do not make this mistake, and shell out big bucks for the people who are actually worth it.
Re: (Score:2)
Re: (Score:2)
Speaking of "technical", how wise do you think this all Miscroft shop really is?
This came out just now: [icann.org]
If you think ICANN is the "best and the brightest" or "runs the Internet" then you don't understand life on this planet.
Re:CEO Pay (Score:5, Interesting)
John Postel used to do the same job of ICANN CEO and an entire swathe of their senior management for free. That was only 20 years ago.
While the net may have increased in scale since then, its complexity has not, and its has not grown to the point where someone needs to be paid $800,000 a year plus bonuses etc just to keep it all ticking over.
As for the "competition" at the CEO level; while there is indeed a worldwide race to the very bowels of vapidity, fecklessness, and incompetence in this field, again, the cream of this crop are not worth paying $800,000 a year for.
Re: (Score:3)
20 years ago most of the management was goverment or education organizations. Postel was employed by the University of Southern California.
Re: (Score:2)
Wasn't free, it was a $15K/yr "part time task".
Biggest mistake Jon ever made, because when the government gave him this money that was all the excuse they needed to claim aegis over it.
That's what happened. I was there. I saw this.
Re: (Score:2)
Competition for qualified talent is difficult at the CEO level.
That's why we should have a H1-B program for CEOs.
I'm sorry, but GM's CEO is not 34 times more talented or more qualified than Toyota's CEO. I say we outsource all our CEO positions to Japan.
If someone can run ICANN they can run a lot of other stuff too.
Yes, they can probably run Fannie Mae.
It must be really difficult to run a government-granted monopoly as if it were your own private domain. With an almost unlimited budget, it must be really difficult to hire consultants who are going to do all the work for you.
Re: (Score:2)
Really?
Sir/Madam, you give these people far too much credit. You should talk to some actual CEO's sometime. In 99% of the cases, they are nothing special at all. At best, they are simply well connected.
That has some value, but "talent" is not something I would call it.
Reason (Score:5, Funny)
Q: Why do you pay your CEO so much?
A: Because ICANN.
On top of that.. (Score:1)
Dude, the guy doesn't fucking 'run the internet.' Nobody runs the internet. It's like a flame, it's a manifestation.
Re: (Score:3)
Considering the rules of the UN, that's mostly the only thing they CAN do. The security council would nearly every other thing the general assembly tried to do. It's hard to find a project that none of the permanent members are against.
Re: (Score:2)
LOL (Score:1)
I just changed my computer to OpenNIC's DNS servers and now I see this
Only fair (Score:5, Insightful)
They got all those millions selling useless TLD's, they have to spend it somewhere.
Re: (Score:3, Interesting)
To to be honest, I always thought that it would be a good idea if you could have a carname.gm or carname.ford or item.microsoft, or routername.cisco, siri.apple instead of
.com. It just makes sense outside of a tech circle. Doesn't it make sense when you think about it? Governments should have a country.gov though, and same for countries. Yeah it might seem like a pain in the ass, and it is. But for the average person it's simple, it makes sense.
But then again, for duplicate companies and all that the
Re: (Score:1)
People can easily set their own bookmarks, so why can't they be given tech that allows them to safely modify their own hosts files, or maybe given the addresses to private name servers?
I agree with you. I honestly don't get why we all have to use the same naming scheme.
I would prefer countries be given gov.country, as opposed to the other way around.
Re: (Score:2)
Oh, well, that makes the new TLD's far more sensible then.
Re: (Score:2)
Yep. And that either makes me either 10 years behind the times. Or 25 years ahead of my time. Either one is possible.
Re: (Score:2)
They got all those millions selling useless TLD's, they have to spend it somewhere.
That might have something to do with it. They are a non-profit who used their granted monopoly to make millions ( at least ) by selling text strings for obscene amounts of cash.
Isn't it time we replaced the whole messed up DNS setup and got these nasty people off our Internet?
Low (Score:4, Insightful)
Average pay for a S&P 500 CEO was 10.7 million in 2007.
This guy has all those tubes to worry about.
Re: (Score:1)
Re: (Score:2)
This guy has all those tubes to worry about.
I think 800k is quite above average for the CEO of a plumbing company!
Ruin the internet? (Score:5, Funny)
Or am I the only one who first read it that way?
Why do we even need ICANN? (Score:5, Interesting)
WTF is he doing?
Re: (Score:1)
Re: (Score:2)
1. Come up with useless retarded ways to extort money from businesses and/or people.
2. Hire a CEO and funnel the money into his pockets to draw attention from the ridiculous salaries people who don't even come into work make.
SOP for businesses these days. It's about time we cut the legs out from under ICANN, and maybe even remove the head, too. ICANN has such an insignificantly infinitesimal role in Internet management, they could be shut-down today and
wait (Score:4, Funny)
What does ICANN actually do, anyway?
I mean, besides supporting various tourist economies with their biweekly meetings in exotic locations...?
Talk about overpaid... (Score:4, Interesting)
Re: (Score:2)
By chance did they hire the pointy-haired boss from Dilbert?
Couldn't have. His ineptitude is at least entertaining.
Re: (Score:2)
By chance did they hire the pointy-haired boss from Dilbert?
Couldn't have. His ineptitude is at least entertaining.
The ineptitude of the bosses of ICANN can come across as entertaining - until of course you realize that the idiots making the unbelievably stupid and short-sighted decisions are supposed to be "experts".
How much do you get paid for inventing it . . . ? (Score:2)
Running the Internet is easy. All the work is done on remote routers during the early morning hours on weekends by a race of Nibelung living their moms' basements. Or so the ancient Saga claims.
Inventing the Internet must have been a bitch and a half. We all probably owe someone gazillions in Intellectual Poopery fees for doing that.
s/run/ruin (Score:2)
higher income are harmful to economy (Score:2, Insightful)
Some people here seems to agree that 800 kUSD is a reasonable income. The problem is that excessivve income are hamrful to real economy. The sums required to pay such a salary are taken from real economy and are too big to return to it as good consumption or real economy investements. A lot of this money will end up in financial economy, feeding bubbles and preparing the next burst.
Moreover, who can claim his work cannot be done by someone else for less than 800 kUSD? Such an income is not necessary to ge
harmful to companies (Score:2)
Excessive CEO salaries are primarily harmful to the companies that pay them; the "real economy" reacts by buying cheaper products (say, from China).
These salaries only become harmful if they are coupled with government-granted monopolies that destroy market mechanisms, as they do for example for ICANN (goverment license to fiddle with the Internet), and Apple/Microsoft (patents).
Re: (Score:2)
Excessive CEO salaries are primarily harmful to the companies that pay them; the "real economy" reacts by buying cheaper products (say, from China).
That runs small SMB out of business and destroy jobs, I think this is indeed a damage to real economy.
Vint Cerf (Score:2)
Hey Vint (Just in case your reading)
I don't know his renumeration, but between pension, shares and the rest, he's gotta be coming close to this number.
There was a number of
.com packages going around, and in all honesty to the larger companies 800k isn't a lot of money (Level Crossing, I'm looking at you). You negotiate the right deals at the right level and look after the company you work for and all of a sudden you've paid for your pay cheque a couple of times over.
I've seen people in purchasing (I don't
It's all about the bribery. (Score:3)
You don't pay the heads of powerful regulatory agencies big bucks because their job is difficult. You pay them well to ensure that they are difficult to bribe.
It's true that some of the corporations who'd stand to gain from bribing this guy can drop $800K like it's pocket change, but the larger the bribe, the harder it is to hide.
Re: (Score:2)
"You pay them well to ensure that they are difficult to bribe."
You have't looked at Twomey's record in much depth have you. Codeword: Tina.
Re: (Score:2)
You're missing the point. You're right that for a given sized bribe, the larger your salary the easier it is to hide it. But you can't give someone earning $800k the same bribe you'd offer someone making $80K -- you'll need a bribe probably 10 times bigger to make it worthwhile. And the bigger bribe will be harder to hide on a company's accounting books, harder to convert into cash, physically bulkier and harder to conceal, and difficult to spend without attracting attention.
For example, suppose the guy
Is it wrong that I mistakenly read the headline as (Score:1)
Re: (Score:2)
This may come as a bit of a surprise, but I think you're doing it wrong.
Nowadays there are many easy to install DVD or CD images you can burn that won't involve fecal matter or ejaculate in the slightest.
I'd also suggest diversifying your acquaintances a little, maybe mixing with some normal people.
Glad to see Linux has another devotee! | https://tech.slashdot.org/story/12/06/22/2226202/icann-names-new-ceo-will-pay-him-800000-to-run-the-internet?sdsrc=next | CC-MAIN-2016-40 | refinedweb | 4,943 | 73.47 |
I am trying to find the longest common subsequence of 3 or more strings. The Wikipedia article has a great description of how to do this for 2 strings, but I'm a little unsure of how to extend this to 3 or more strings.
There are plenty of libraries for finding the LCS of 2 strings, so I'd like to use one of them if possible. If I have 3 strings A, B and C, is it valid to find the LCS of A and B as X, and then find the LCS of X and C, or is this the wrong way to do it?
I've implemented it in Python as follows:
import difflib

def lcs(str1, str2):
    sm = difflib.SequenceMatcher()
    sm.set_seqs(str1, str2)
    matching_blocks = [str1[m.a:m.a+m.size] for m in sm.get_matching_blocks()]
    return "".join(matching_blocks)

print reduce(lcs, ['abacbdab', 'bdcaba', 'cbacaa'])
This outputs "ba"; however, it should be "baa".
Just generalize the recurrence relation.
For three strings:
dp[i, j, k] = 1 + dp[i - 1, j - 1, k - 1]                              if A[i] = B[j] = C[k]
dp[i, j, k] = max(dp[i - 1, j, k], dp[i, j - 1, k], dp[i, j, k - 1])   otherwise
Should be easy to generalize to more strings from this.
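For concreteness, here is a sketch of that recurrence for three strings in Python (names are my own), with a backtracking pass over the finished table to recover the subsequence itself rather than just its length:

```python
def lcs3(A, B, C):
    # dp[i][j][k] = length of the LCS of A[:i], B[:j], C[:k]
    la, lb, lc = len(A), len(B), len(C)
    dp = [[[0] * (lc + 1) for _ in range(lb + 1)] for _ in range(la + 1)]
    for i in range(1, la + 1):
        for j in range(1, lb + 1):
            for k in range(1, lc + 1):
                if A[i - 1] == B[j - 1] == C[k - 1]:
                    dp[i][j][k] = 1 + dp[i - 1][j - 1][k - 1]
                else:
                    dp[i][j][k] = max(dp[i - 1][j][k],
                                      dp[i][j - 1][k],
                                      dp[i][j][k - 1])
    # Walk back through the table to recover one LCS string.
    out, (i, j, k) = [], (la, lb, lc)
    while i and j and k:
        if A[i - 1] == B[j - 1] == C[k - 1]:
            out.append(A[i - 1])
            i, j, k = i - 1, j - 1, k - 1
        elif dp[i - 1][j][k] == dp[i][j][k]:
            i -= 1
        elif dp[i][j - 1][k] == dp[i][j][k]:
            j -= 1
        else:
            k -= 1
    return "".join(reversed(out))

print(lcs3('abacbdab', 'bdcaba', 'cbacaa'))  # prints: baa
```

On the strings from the question this prints "baa", the answer the pairwise `reduce` approach misses. Note the table uses O(|A|·|B|·|C|) memory, so this is only practical for fairly short strings.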
To find the Longest Common Subsequence (LCS) of 2 strings A and B, you can traverse a 2-dimensional array diagonally as shown in the link you posted. Every element in the array corresponds to the problem of finding the LCS of the substrings A' and B' (A cut by its row number, B cut by its column number). This problem can be solved by calculating the value of all elements in the array. You must be certain that when you calculate the value of an array element, all sub-problems required to calculate that given value have already been solved. That is why you traverse the 2-dimensional array diagonally.
This solution can be scaled to finding the longest common subsequence between N strings, but this requires a general way to iterate an array of N dimensions such that any element is reached only when all sub-problems the element requires a solution to has been solved.
Instead of iterating the N-dimensional array in a special order, you can also solve the problem recursively. With recursion it is important to save the intermediate solutions, since many branches will require the same intermediate solutions. I have written a small example in C# that does this:
string lcs(string[] strings) { if (strings.Length == 0) return ""; if (strings.Length == 1) return strings[0]; int max = -1; int cacheSize = 1; for (int i = 0; i < strings.Length; i++) { cacheSize *= strings[i].Length; if (strings[i].Length > max) max = strings[i].Length; } string[] cache = new string[cacheSize]; int[] indexes = new int[strings.Length]; for (int i = 0; i < indexes.Length; i++) indexes[i] = strings[i].Length - 1; return lcsBack(strings, indexes, cache); } string lcsBack(string[] strings, int[] indexes, string[] cache) { for (int i = 0; i < indexes.Length; i++ ) if (indexes[i] == -1) return ""; bool match = true; for (int i = 1; i < indexes.Length; i++) { if (strings[0][indexes[0]] != strings[i][indexes[i]]) { match = false; break; } } if (match) { int[] newIndexes = new int[indexes.Length]; for (int i = 0; i < indexes.Length; i++) newIndexes[i] = indexes[i] - 1; string result = lcsBack(strings, newIndexes, cache) + strings[0][indexes[0]]; cache[calcCachePos(indexes, strings)] = result; return result; } else { string[] subStrings = new string[strings.Length]; for (int i = 0; i < strings.Length; i++) { if (indexes[i] <= 0) subStrings[i] = ""; else { int[] newIndexes = new int[indexes.Length]; for (int j = 0; j < indexes.Length; j++) newIndexes[j] = indexes[j]; newIndexes[i]--; int cachePos = calcCachePos(newIndexes, strings); if (cache[cachePos] == null) subStrings[i] = lcsBack(strings, newIndexes, cache); else subStrings[i] = cache[cachePos]; } } string longestString = ""; int longestLength = 0; for (int i = 0; i < subStrings.Length; i++) { if (subStrings[i].Length > longestLength) { longestString = subStrings[i]; longestLength = longestString.Length; } } cache[calcCachePos(indexes, strings)] = longestString; return longestString; } } int calcCachePos(int[] indexes, string[] strings) { int factor = 1; int pos = 0; for (int i = 0; i < indexes.Length; i++) { pos += indexes[i] * factor; factor *= strings[i].Length; } return pos; }
My code example can be optimized further. Many of the strings being cached are duplicates, and some are duplicates with just one additional character added. This uses more space than necessary when the input strings become large.
On input: "666222054263314443712", "5432127413542377777", "6664664565464057425"
The LCS returned is "54442" | https://pythonpedia.com/en/knowledge-base/5057243/longest-common-subsequence-of-3plus-strings | CC-MAIN-2020-45 | refinedweb | 744 | 65.42 |
13:55:15 <LeeF> AxelPolleres, you don't need to do rrsagent & zakim by hand in the future - you can just use "trackbot, start meeting" 13:56:56 <OlivierCorby> Hello, I am Olivier Corby from INRIA Sophia Antipolis, new member of the WG 13:57:02 <AxelPolleres> ah right, but I don't need to say it again now, do I? 13:57:21 <AxelPolleres> Welcome Olivier! 13:57:50 <LeeF> hi OlivierCorby, good to have you 13:58:05 <LeeF> AxelPolleres, right, no need to repeat it now, though trackbot does do other things like date and stuff 13:58 14:05:06 <LukeWM> AxelPolleres: remember to rejoin the group 14:05:28 <LukeWM> AxelPolleres: or ask your AC rep to do it. Will check this later. 14:05:47 <LukeWM> ericP: email problems due to recharting & not complicated enough tooling. 14:05:54 <AxelPolleres> 14:05:58 <LeeF> q+ to ask once more for people to fill out 14:06:06 <LeeF> ack me 14:06:48 <AxelPolleres> PROPOSED to accept 14:07:01 <LukeWM> LeeF: remember to fill in F2F2 attendance. 14:07:04 <ericP> -> F2F wiki 14:07:11 <AxelPolleres> RESOLVED to accept minutes 14:07:52 <LukeWM> AxelPolleres: next scribe is bijan if he doesn't still have telephone difficulties. 14:08:10 14:10:19 <LukeWM> ericP: we might end up needing to work with RIF 14:10:35 <LukeWM> ericP: because of collisions between our functions and operators. Perhaps a common document. 14:10:39 <AxelPolleres> chime, can you scribe next time? 14:10:50 <chimezie> sure 14:11:00 ) 14:11:37 <LukeWM> AxelPolleres: action for ericP to draft project expressions. 14:11:51 <LukeWM> ericP: that's done, but perhaps someone wants something more, otherwise, lets remove it. 14:12:07 <LukeWM> AxelPolleres: can we just close it? 14:12:18 <LukeWM> ericP: yes 14:12:35 <LukeWM> AxelPolleres: investigating issue 33 & creating trac links for update issues 14:12:46 <LukeWM> AxelPolleres: for Lee, is that done? 14:12:47 <AxelPolleres> Lee? 14:13:56 <LukeWM> AxelPolleres: chime, update on Aggregates issue. 
14:14:18 <LukeWM> chimezie: there was something about groups, but I don't think this is part of the action. 14:14:38 <LukeWM> chimezie: leave the action open and I'll investigate. 14:14:52 <LukeWM> AxelPolleres: leaves an action for himself open. 14:14:56 <LeeF> ack me 14:15:08 <LukeWM> AxelPolleres: synchronising errata with Andy? 14:15:16 <LukeWM> LeeF: can't close that yet. 14:15:29 <LukeWM> LeeF: I'll go through and close the ones that need to be. 14:16:00 <LukeWM> AxelPolleres: look into xml spec for SPARQL query with Andy - action on ericP. 14:16:21 <LukeWM> ericP: ported SPARQL 1.1 to xml spec & update document to xml spec. 14:16:33 <LukeWM> ericP: I don't know if Andy is using it. 14:16:42 <LeeF> q+ to give mailing list heads up 14:16:54 <SimonS> q+ re xmlspec 14:16:55 <LukeWM> ericP: there's no point in leaving this action open. 14:17:24 <LukeWM> SimonS: a comment on XML spec. 14:17:36 <LukeWM> SimonS: can we have our own copy. 14:17:38 <Prateek> Prateek has joined #sparql 14:18:15 <LukeWM> ericP: we're free to copy it, but we ought to use an existing one to ensure minimum differences. 14:18:41 <LukeWM> SimonS: we have marked it up with a special class. 14:18:54 <LukeWM> ericP: we can just tweak the XSLT to make it all visible. 14:19:01 <AxelPolleres> q? 14:19:08 <LeeF> ack ericP 14:19:11 <LeeF> ack me 14:19:25 <SimonS> q- 14:19:55 <LukeWM> LeeF: there have been some hiccups with our mailing lists, and web archives haven't caught up. 14:20:21 <LukeWM> LeeF: we're working on it, and will keep you up to date. 14:20:47 <LukeWM> LeeF: w3c is being marked as spam by spamcop 14:20 mailing list archives at are now up to date 14:22:58 <LeeF> so follow along there! 14:23:10 <LukeWM> AxelPolleres: first internal draft next week 14:23:16 <LukeWM> AxelPolleres: choose reviewers today 14:23:23 <AxelPolleres> q? 14:23:35 <LukeWM> AxelPolleres: decide to publish on October 13th 14:24:02 <LukeWM> AxelPolleres: lets pick the reviewers when we go through the documents. 
14:24:24 <LukeWM> AxelPolleres: how are we with respect to schedule, any issues? 14:24:29 <LeeF> ack me 14:24:51 <AxelPolleres> 14:25:00 <LukeWM> LeeF: can editors paste the links of the documents before discussing them. 14:24:35 <LukeWM> topic: status of Sparql Query document? 14:25:55 <LukeWM> SteveH: Aggregate functions, subqueries, negation, project expressions 14:26:13 <LukeWM> SteveH: negation, project expressions 80 -90% complete. 14:26:41 <LukeWM> SteveH: requires more work on Aggregate functions and subqueries, but probably OK for review. 14:27:09 <LukeWM> SteveH: Aggregate functions and project expressions aren't complete enough yet. 14:27:33 <BirteGlimm> I would like to review it 14:27:39 <LukeWM> AxelPolleres: there will be something to review next week? 14:27:45 <LeeF> at least 2 :) 14:27:53 <LukeWM> SteveH: yes 14:27:57 <LeeF> I'd like to review all of them, actually 14:28:05 <LeeF> Yes. 14:28:15 <LeeF> but would love 2 in addition to me ;) 14:28:25 <AxelPolleres> Reviewers for query: birte, Lee 14:28:28 <ivan++ sounds like it makes sense in the status of the document 14:29:28 <LukeWM> AxelPolleres: nobody disagrees 14:29:48 <AxelPolleres> ACTION: Steve to make a comment making it clear that we intend to merge this content with the old document. 14:29:48 <trackbot> Created ACTION-92 - Make a comment making it clear that we intend to merge this content with the old document. [on Steve Harris - due 2009-09-29]. 14:30:27 <LukeWM> AxelPolleres: anything urgent regarding this document? Aggregates is on agenda for next time. Is anything else needed? 14:30:39 <LukeWM> SteveH: we need to decide on the scope of the group expressions. 14:31:01 <LukeWM> SteveH: the algebra only allows group by variables, rather than expressions. 14:31:04 <LeeF> ISSUE: Does GROUP BY allow variables or expressions, and does it allow mutiple expressions? 14:31:04 <trackbot> Created ISSUE-41 - Does GROUP BY allow variables or expressions, and does it allow mutiple expressions? 
; please complete additional details at . 14:31:40 <LukeWM> AxelPolleres: can you summarise it on the mailing list. 14:32:05 <SteveH> ivanher> q- 14:32:43 <AxelPolleres> ACTION: Steve to talk with AndyS on ISSUE-41, 14:32:43 <trackbot> Created ACTION-93 - talk with AndyS on ISSUE-41, [on Steve Harris - due 2009-09-29]. 14:33:00 <LukeWM> AxelPolleres: another reviewer? 14:33:07 <LeeF> ack me 14:33:38 <LukeWM> AxelPolleres: I'll be a reviewer, but next time I'll pick a victim. 14:33:47 <AxelPolleres> Reviewers 14:34:26 <AxelPolleres> 14:34:30 <AxelPolleres> q? 14:34:42 <SteveH> I see a lot of things that looks like XML errors 14:34:46 <LukeWM> AxelPolleres: SimonS, can you summarize the update document. 14:35:01 <LukeWM> SimonS: nearly done, good enough for review. 14:35:15 <LukeWM> SimonS: ericP is on the XML issues, so they should be fixed soon. 14:36:00 <LukeWM> ericP: we'll end up with a better stylesheet & dtd if you bear with me. 14:36:03 <SteveH> q+ to ask about grammar syntax 14:36:30 <LukeWM> ericP: I want to work out the minimal diff between a live version of the stylesheet and ours first. 14:37:04 <LukeWM> SimonS: we proposed to have a separate grammar document with an overlapping part but didn't have much response. 14:37:23 <LukeWM> SimonS: Andy's response wasn't pro or con, so would like other opinions. 14:37:29 Glimm> Hm, so you can identify what parts apply only to update and which parts apply also to standard SPARQL? 14:38:32 <AxelPolleres> the person typing, pls mute! 14:38:33 <LukeWM> SteveH: where did the html that does the grammar come from. 14:39:01 <LukeWM> ericP: pasted grammar into Yakker??? and got that to produce the HTML. 14:39:08 <ivan: e.g. it has the issues starting at the beginning. Perhaps we should reorder. 14:41:56 <LukeWM> AxelPolleres: Simon, can it be done? 14:42:07 <LukeWM> SimonS: yes, any preference? 14:42:27 <LukeWM> ivanherman: we should be as close to Steve's document in structure. 
14:42:48 <LukeWM> SteveH: we had an item on the TODO list to reorder the original SPARQL document to make it easier to read. 14:43:09 <LukeWM> ivanherman: my point is that update & query documents should follow the same structure. 14:43:13 <LukeWM> SteveH: agreed. 14:43:31 <LukeWM> ivanherman: can we follow the query document. 14:43:34 <AxelPolleres> ACTION: SimonS to agree with SteveH to order sections to reflect better similar structure . 14:43:34 <trackbot> Created ACTION-94 - Agree with SteveH to order sections to reflect better similar structure . [on Simon Schenk - due 2009-09-29]. 14:45:23 <LukeWM> AxelPolleres: there are overlaps between the grammars, so to avoid redundancy, we should have a separate grammar document with the intersection between the grammars. 14:45:27 <SteveH> q+ 14:45:44 <ericP> q- ivanherman 14:46:05 <LukeWM> AxelPolleres: or we can just link from the query document to the update document 14:46:24 <LukeWM> SteveH: describing update in terms of query results in a double-headed grammar. 14:46:29 <AxelPolleres> q? 14:46:35 <SteveH> ack me 14:46:49 <LukeWM> ericP, I didn't catch your point. 14:47:04 <LukeWM> AxelPolleres: if you link, it still results in 2 grammars. 14:47:14 <LukeWM> SteveH: in all cases you end up with 2 grammars. 14:48:04 <LukeWM> ericP: conceptually, it's nice if they both reference their intersection. But from a tool perspective, it's easier to have one big piece. 14:48:21 <LukeWM> SteveH: ericP's master grammar with annotations for each spec is a good idea. 14:49:05 <LukeWM> AxelPolleres: Can we do the joint grammar document in time? What do the editor's say, is it doable? 14:49:32 <LukeWM> SteveH: we aren't merging grammars. 14:49:48 <AxelPolleres> non need to decide now. 14:49:52 <LukeWM> AxelPolleres: do we need to decide yet? 14:50:06 <SteveH> I'll review 14:50:13 <LukeWM> SteveH: no, differences can be describe in terms of 1.0 grammar. 
14:50:28 <LukeWM> I can do it too 14:50:41 <LeeF> Not concerned with that 14:50:53 <AxelPolleres> reviewers for SPARQL/Update: Steve, Luke, Lee 14:51:19 <LukeWM> topic: status of RESTful update document 14:51:50 <LukeWM> chimezie: nothing to show, just trying to collect consensus, hopefully next week there'll be something worth reviewing. 14:52:15 <AxelPolleres> update-protocol-1.0? RESTful-update-1.0? 14:52:18 <LukeWM> AxelPolleres: what is the short name for the document? 14:52:18 <LeeF> chimezie, is there a URL for where the draft will go yet? 14:52:34 <SteveH> http-update? 14:52:35 <LukeWM> chimezie: lets not use REST in the name, something like RDF Update. 14:52:39 <SteveH> +1 to not using REST 14:52:41 <AxelPolleres> RDF-update? http-update? 14:53:00 <AlexPassant> +1 for not using REST but using HTTP in the title 14:53:03 <LukeWM> AxelPolleres: any preferences, RDF-update or http-update? 14:53:14 <BirteGlimm> +1 to http-update 14:53:21 <ivanherman> sparql-http-update? 14:53:27 <LukeWM> LeeF: we just need to distinguish it from SPARQL/update sufficiently. 14:53:31 <AxelPolleres> +1 to http-update 14:53:42 <chimezie> +1 http-rdf-update or http-update 14:53:47 <LukeWM> AxelPolleres: lets have a quick straw poll. 14:53:58 <LukeWM> +1 to rdf-http-update 14:54:04 <SteveH> +1 http-rdf-update or http-update 14:54:09 <SimonS> +1 http-rdf-update 14:54:12 <ivanherman> 'http-update' might not say that this is related to rdf or sparql 14:54:16 <AlexPassant> +1 http-rdf-update 14:54:17 <kasei> +1 http-rdf-update 14:54:19 <LeeF> yeah, http-update doesn't make any sense to me :) 14:54:25 <ivanher? 14:56:04 <LukeWM> AxelPolleres: SimonKJ, can you report on the protocol document 14:56:42 <LukeWM> SimonKJ: I need to send CVS keys to ericP, but haven't got going with it. 14:56:44 <ivanher> q+ 15:00:29 <LeeF> sparql-service-description ? 
15:00:31 <LukeWM> kasei: discovery stuff is settled, but vocabulary still needs to be sorted 15:00:39 <LukeWM> kasei, that was you speaking wasn't it? 15:01:02 <LukeWM> AxelPolleres: short name? 15:01:11 <kasei> LukeWM, yes 15:01:11 <LukeWM> ivanherman: all short names should start the same. 15:01:12 <AxelPolleres> All short names should start with sparql- or rdf- 15:01:16 <LukeWM> cool 15:01:37 <LukeWM> AxelPolleres: will it be possible to review next week. 15:02:03 <LukeWM> kasei: I don't have anything yet, but will try for something next week. It isn't as deep in scope as some of the others. 15:02:15 <LukeWM> I can review it when it's done. 15:02:17 <SimonKJ> I'll do that one as well 15:02:19 <ericP> ivanher 15:03:23 <AxelPolleres> 15:03:29 <LukeWM> AxelPolleres: anything with respect to update for F & R ? Will there be a new version, will things be added for time allowed features. 15:03:50 <LukeWM> LeeF: I'm keen to hear about the status of entailment. Lets talk about this later. 15:05:08 <AxelPolleres> reviewers for F&R: chime, Lee 15:05:25 <BirteGlimm> 15:05:51 <LukeWM> AlexPassant, I didn't catch what you said about the F& R status, could you put it into IRC please. 15:06:11 <AlexPassant> AlexPassant: Will update FR with more content on allowed feature by next week 15:06:24 <LukeWM>Glimm: it isn't worth doing SPARQL RDFS because you don't gain much. 15:07:47 <LeeF> let's try one or two out first : 15:07:48 <LeeF> :) 15:07:50 <LukeWM> AxelPolleres: is there any RIF in there? 15:08:00 <LukeWM>Glimm: yes, I think it will work. 15:09:23 <chimezie> does the fact that we don't have an allocated editor for RIF-related entailment put us at risk for that regime? 15:09:59 <LeeF> chimezie, i think we've always been at risk for that, given that it was 3rd or 4th in line for a time-permitting feature in the first place 15:10:00 <kasei> shouldn't that have legitimate answers if you aren't using d-entailment and you have a resource with that type? 
15:10:09 <AxelPolleres> q? 15:10:13 <LeeF> ...but i also hope that Birte and Bijan can edit that into the /Entailment document once we get to that 15:10:16 <LukeWM> AxelPolleres: talks with BirteGlimm about queries which are legal but don't have a legal answer. 15:10:18 <LeeF> ack LeeF 15:10:28 <LeeF> with help from our RIF-heads 15:10:29 <BirteGlimm> according to the spec no answers 15:10:43 <chimezie> okay 15:10:54 <AxelPolleres> ACTION: Axel to go over entailment doc to put in at least hooks for RIF/OWL RL entailments. 15:10:54 <trackbot> Created ACTION-95 - Go over entailment doc to put in at least hooks for RIF/OWL RL entailments. [on Axel Polleres - due 2009-09-29]. 15:11:57 <BirteGlimm> sure 15:12:05 <AxelPolleres> ACTION: Axel to contact Eric to setup CVS access for new editors. 15:12:05 <trackbot> Created ACTION-96 - Contact Eric to setup CVS access for new editors. [on Axel Polleres - due 2009-09-29]. 15:12:32 <LukeWM> LeeF: please send parts of drafts to the mailing list when you have them. 15:12:37 <LukeWM> AxelPolleres: any more discussion? 15:12:45 <LukeWM> AxelPolleres: reviewers? 15:12:52 <LukeWM> chimezie: I'll volunteer. 15:13:21 <AxelPolleres> reviewers for entailment: chime, lee 15:13:23 <LukeWM> AxelPolleres: anyone else, Lee? 15:13:32 <BirteGlimm> LeeF, That's still useful ;-) 15:13:42 <chimezie> i have to go unfortunately 15:13:47 <LukeWM> LeeF: I can look at completeness but not technical content. 15:13:52 <AxelPolleres> topic: function library TF 15:14:21 <AxelPolleres> 15:14:25 <LukeWM> AxelPolleres: function library TF has had a teleconference with Andy and Lee regarding the starting point. 15:15:01 <LukeWM> AxelPolleres: we have agreed to propose a list of functions & operators based on Xquery 15:15:09 <LukeWM> AxelPolleres: Andy has a minimal list already. 15:15:22 <AxelPolleres> 15:15:46 <LukeWM> AxelPolleres: all operators must have URL 15:16:34 <LukeWM> AxelPolleres: issues around namespace, whether to reuse fn or not. 
15:16:42 <LukeWM> AxelPolleres: aggregates aren't covered yet. 15:17:20 <LukeWM> AxelPolleres: need to wait for aggregate accessibility discussion 15:17:28 <BirteGlimm> Should we say anything about test cases for different entailment regimes? 15:17:29 <SteveH> +1 to including in query 1.1 doc 15:17:30 <LukeWM> AxelPolleres: this should be part of the query document. 15:17:43 <SteveH> q+ 15:17:48 <SteveH> ack me 15:17:52 <LukeWM> AxelPolleres: should this be a comment? 15:18:19 <LukeWM> SteveH: yes, we should say we plan to include a function library but say it isn't defined yet. To save time. 15:18:31 <LukeWM> AxelPolleres: an editor's note that points to the wiki? 15:18:55 <LukeWM> SteveH: we should be careful pointing to the wiki 15:19:03 <AxelPolleres> ACTION: steveh to include commment on extended function library in current sparql/query-1.1 draft 15:19:03 <trackbot> Created ACTION-97 - Include commment on extended function library in current sparql/query-1.1 draft [on Steve Harris - due 2009-09-29]. 15:19:07 <LukeWM> SteveH: I will include the comment 15:19:35 <SimonS> 15:19:43 <AxelPolleres> topic: federated query TF 15:20:37 <LukeWM> SimonS: design page on the wiki hasn't changed yet. The algebra operator syntax missing, but it doesn't seem to do anything - will send a mail to discuss this. 15:21:02 <LukeWM> SimonS: one issue was that it should be an optional feature for security reasons. 15:21:03 <BirteGlimm> +1 to optional feature 15:21:19 <LukeWM> SimonS: we should add a comment to the query document stating that it is optional. 15:21:29 <kasei> optional features can mesh with service descriptions 15:21:49 <LukeWM> SimonS: it could be allowed for update if we choose a more complex form, but frankly I don't think it's a good idea for FPWD. 15:22:14 <SteveH> q+ 15:22:29 <LukeWM> AxelPolleres: we should add comments in query and service description, Steve & Greg. 
15:22:56 <LukeWM> SteveH: the FPWD shouldn't mention this as it's time permitting - I forgot that the function library was also time permitting. 15:23:24 <SteveH> +1 to LeeF 15:23:26 <LukeWM> LeeF: lets not make a decision now, in general I agree with Steve. We need to ask ivanherman. 15:23:30 <SteveH> it's safer not to metion it, probably 15:23:42 <SteveH> can I lose my action to add it, for now 15:23:43 <AxelPolleres> decision on whether FPWD should mention time permitting features postponed. 15:23:46 <LukeWM> AxelPolleres: I'll ask ivanherman 15:24:19 <AxelPolleres> ACTION: Axel to ask ivanherman/eric whether we need to mention time permitting features in FPWD. 15:24:19 <trackbot> Created ACTION-98 - Ask ivanherman/eric whether we need to mention time permitting features in FPWD. [on Axel Polleres - due 2009-09-29]. 15:24:26 <LukeWM> LeeF: I don't know how issues around IP exclusions work, anyone else? 15:24:38 <LukeWM> LeeF, I hope that was a fair characterisation. 15:25:10 <AxelPolleres> topic: property paths | http://www.w3.org/2009/sparql/wiki/Chatlog_2009-09-22 | CC-MAIN-2015-48 | refinedweb | 3,428 | 72.56 |
The ReactiveConf 2017 took place in Bratislava on the 26th and 27th of October. In my view CSSinJS, and optimization techniques in general, were the major themes of this year’s conference.
Given that I regularly attend more “popular” conferences, I have to admit that I was really surprised by the high quality of the talks.
If you want to watch the whole conference there are video recordings of day 1 and day 2. What follows are summaries of my notes from selected talks.
- Evan You: Compile-time Optimizations in JavaScript Applications
- Igor Minar: Let’s tree-shake it…
- CSS Panel
- Igor Minar: AMA (Ask me anything) Session
- Tiago Forte: The React Productivity Revolution
- Jack Franklin: Migrating Complex Software
- Richard Feldman: CSS as Bytecode
- Jared Forsyth: Reason: JavaScript-flavored OCaml
- Robin Frischmann: CSS in JS – The Good & Bad Parts
- David Nolen: Out of the Tarpit, Revisited
Evan You: Compile-time Optimizations in JavaScript Applications
Evan You, creator of Vue.js, ended the first day with an overview of JavaScript compilation techniques in general. He pointed out that web frameworks are integrating more and more compilers. The reason is quite obvious: it is all about improving the performance.
He underlined that minifiers can be seen as kinds of compilers.
Modern frameworks compile HTML templates. In Vue.js, templates are translated into JavaScript render functions by transforming the template into an AST (Abstract Syntax Tree) first.
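The idea can be illustrated with a toy compiler, far simpler than Vue's real one: it only handles {{ name }} interpolation and emits a render function that returns a string instead of virtual DOM nodes, but the template-to-function pipeline is the same.

```javascript
// Toy illustration of template-to-render-function compilation.
// Real compilers like Vue's build a full AST first; this sketch
// only handles {{ name }} interpolation to show the principle.
function compileTemplate(template) {
  // Split the template into static text and interpolation tokens.
  const parts = template.split(/(\{\{\s*\w+\s*\}\})/);
  const code = parts
    .map((part) => {
      const match = part.match(/^\{\{\s*(\w+)\s*\}\}$/);
      return match
        ? `state.${match[1]}` // dynamic binding
        : JSON.stringify(part); // static text
    })
    .join(" + ");
  // The compiled render function just concatenates strings;
  // a real one would produce virtual DOM nodes.
  return new Function("state", `return ${code};`);
}

const render = compileTemplate("<p>Hello, {{ name }}!</p>");
console.log(render({ name: "ReactiveConf" }));
// → "<p>Hello, ReactiveConf!</p>"
```

Compiling once and rendering many times is what makes this an optimization: the per-render work is a plain function call with no template parsing left in it.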
One can also see a trend that AOT (Ahead-of-Time Compilation) instead of JIT (Just-in-Time Compilation) is becoming more and more the default way of handling compilation.
CSS gets compiled to JavaScript by applying CSS rules only to the component it is attached to. That overcomes the feared problem of inheriting undesired styles by cascading.
Facebook’s Prepack, one of the newer approaches, optimizes by partial evaluation. One must keep in mind that it is quite a new technology.
Igor Minar: Let’s tree-shake it…
Igor Minar, a member of Angular’s core team, talked about reducing the size of an application’s bundle by using tree-shaking and code elimination in webpack. He encourages the use of the visualization tools webpack-bundle-analyzer and source-map-explorer. Both are good starting points for identifying areas where code removal can be useful.
Tree shaking works at the module level. Since dead code within modules cannot be detected, one should avoid "naked imports" – modules imported purely for their side effects, without specifying the actual exports, like `import 'rxjs/add/operator/map';`.
If possible one should always go with ES Modules.
With CommonJS or UMD webpack has problems analyzing the code.
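Why the import style matters can be shown with a toy model of what a bundler does: it keeps only the exports that are actually referenced by name, and a "naked" side-effect import gives it no names to work with. The `treeShake` helper below is purely illustrative — webpack analyses ES module syntax statically rather than inspecting objects at runtime.

```javascript
// Named ES imports tell the bundler exactly which exports are used:
//   import { map } from 'rxjs/operators';
// A "naked" side-effect import gives it nothing to analyse:
//   import 'rxjs/add/operator/map';

// Toy model of tree shaking: retention works at export granularity.
function treeShake(moduleExports, usedNames) {
  const kept = {};
  for (const name of Object.keys(moduleExports)) {
    if (usedNames.has(name)) kept[name] = moduleExports[name];
  }
  return kept;
}

// A module exposing two named exports:
const mathModule = {
  double: (x) => 2 * x,
  // never imported anywhere, so a bundler can drop it entirely
  triple: (x) => 3 * x,
};

const bundle = treeShake(mathModule, new Set(["double"]));
console.log(Object.keys(bundle)); // → [ 'double' ]
```

Note what the toy model cannot do: if `double` itself contained an unreachable branch, it would still be kept whole — dead code inside a retained export is the minifier's job, not the tree shaker's.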
Module concatenation is useful since it reduces the code amount. Note: it has to be enabled in the current version of webpack explicitly.
To reduce your application’s critical path, you should do code splitting. This loads only the parts required by the critical path. Everything else can load afterwards.
Dead code elimination means removal of code that is never executed. Usually this is done by Uglify. Since it can't rely on static code analysis alone, it is harder than tree shaking.
CSS Panel
An interesting panel on CSS was held in a relaxed atmosphere.
One question that came up was how to integrate designers’ input into CSSinJS where only JavaScript is used. Panelists suggested splitting it into presentational and container components (concepts in React). The designers could then have access to the presentational components. The open question left by the panelists is whether designers know enough technical details to produce maintainable code.
Another interesting discussion addressed applying to CSS the principle of static types, as used in TypeScript or Flow. The problem here is that CSS is strongly coupled to HTML. A use case for the "statically typed" approach might be to ensure accessible colour contrast. For that one has to know the underlying background colour, which means you have to analyze the associated HTML.
This was followed by a conversation about the advantages and disadvantages of CSSinJS which, unlike common technologies like Sass or Less, generates the CSS at runtime.
Runtime generation eliminates the need for the duplicated CSS code that increases the size of your app. It also enables things like state-driven styles and certain kinds of animation that require information that is only available at runtime. For example, one can pass the state of a component to a JavaScript function and make it more flexible in terms of CSS generation.
Theming is one of the major strengths of CSSinJS. One can create nearly an infinite number of themes very easily, requiring only a single JavaScript function.
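The state-driven and theming capabilities described above boil down to styles being ordinary functions. A minimal sketch follows, with made-up names and a made-up theme shape; real libraries such as styled-components or Fela differ in the details.

```javascript
// Minimal sketch of CSSinJS: a style is a plain function of theme
// and component state, evaluated at runtime.
const darkTheme = { primary: "#222", text: "#eee" };

function buttonStyle(theme, state) {
  return {
    backgroundColor: theme.primary,
    color: theme.text,
    // state-driven rule: depends on information only known at runtime
    opacity: state.disabled ? 0.5 : 1,
  };
}

// Turn the style object into a CSS declaration block.
function toCss(style) {
  return Object.entries(style)
    .map(([prop, value]) =>
      `${prop.replace(/[A-Z]/g, (c) => "-" + c.toLowerCase())}: ${value};`)
    .join(" ");
}

console.log(toCss(buttonStyle(darkTheme, { disabled: true })));
// → "background-color: #222; color: #eee; opacity: 0.5;"
```

Swapping `darkTheme` for another theme object is all it takes to restyle every component that uses the function — which is the theming argument made on the panel.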
On the downside, existing pre-processors and pre-check mechanisms are only available at compile time.
An audience member asked for the panelists’ opinions of CSS Grids. Surprisingly, the consensus was negative.
When discussing the global vs. local styling problem, the panelists quickly agreed that the problem of hierarchical properties is the main issue. This is when styling information from a parent leaks into its child components in an unexpected way. On the one hand, one can use the CSS3 property “all: initial” that removes all inherited styles. On the other hand, it would be best if CSS is only kept local. The exception to this rule: global styles for values like corporate colours.
Usage of selectors over class-names is seen as bad practice, since their original intended target was pure documents and not today’s components.
In the round-up at the end the panelists said that one should not be petrified by all the new technologies popping up. This is especially valid in the npm environment.
Igor Minar: AMA (Ask me anything) Session
Igor Minar participated in an “Ask me anything” session and was of course confronted with an Angular vs. React question. His response was quite diplomatic. He observed that Angular provides a full package whereas React presents you with many choices. That can be a problem, he suggested, since you have to know how the different elements interact together.
In general, Minar sees that some problems are similar to problems that have already been solved in languages like Java or C.
Minar mentioned Google’s Closure compiler which, in his opinion, is the best tool currently available for eliminating dead code. It is even better when there is type information. It makes a good match when TypeScript is used, for example.
In terms of upgrading Angular 1 there is now a new option with ng-upgrade that disables intercommunication between Angular and AngularJS. Large companies requested this feature. They usually transform complete parts of their applications to Angular and don't want to couple them with AngularJS.
On performance compared to React, he says Angular uses the superior approach: change detection in React takes place at the view level, whereas Angular looks at the model level.
One should not fully trust optimization techniques like tree shaking, Minar warned; one still has to test and measure. He had reflected on this in his talk on optimization techniques, where he showed the issues with "naked imports" such as those in rxjs.
Angular Elements is a new library that lets Angular components be used in non-Angular applications. Common use cases are content management systems.
The top feature in the upcoming release of Angular 5, Minar said, will let service workers become first-class citizens. That will improve issues with network latency.
Minar concluded by mentioning that Angular’s advantage is having the right default settings. If you have frameworks where you can choose, you have to know the best practices in each field.
Tiago Forte: The React Productivity Revolution
Tiago Forte gave a very unusual talk on “The React Productivity Revolution”. With no software development experience, Forte defines himself as a software anthropologist. He studies the industry, how programmers work and what factors influence productivity. His findings are then redefined for non-programmers, so that other industries might benefit.
He sees the so-called "Flow" as a major productivity criterion. Surprisingly, modern research shows that things like avoiding interruptions or striving for time boxes or single tasks are not required to bring a person into the "Flow". Forte sees these procedures even as counterproductive, since they eliminate personal communication and take people away from the business side.
He emphasized that most people believe, mistakenly, that one requires a specific amount of time to get into the “Flow”. It has nothing to do with time. What the best productivity requires are statefulness, encapsulation, reusability and composability.
Jack Franklin: Migrating Complex Software
Jack Franklin talked about his team’s year-long experience migrating an AngularJS-based application to React. The very “context-specific” reason for switching to React instead of upgrading to Angular was very simple: the original developers left and the new ones simply had more experience with React.
The core migration strategy was not to go with a big bang release. Instead they migrated incrementally, component by component, starting with the components located at the bottom of the logical component tree. ngReact allowed them to embed React into Angular.
Acceptance tests were very useful. They aren‘t coupled that much to the application code and can therefore stay untouched even if the underlying technology changes.
To pick out the components to work on first, they took the ones with the highest churn rate. These are the ones that are changed very often. For larger components, Franklin suggested, one should try to split them into smaller pieces. They used feature branches which were merged regularly.
Sharing knowledge within a small team in markdown turned out to be a good thing, and he recommends doing that in every project. A migration is also a good time to modernize the tooling systems.
A big challenge was to convince non-technical people that a migration was required at all.
Finally one also has to accept that things fail in live systems. One has to live with it and not panic.
The content of his talk is available on JSPlayground.
Richard Feldman: CSS as Bytecode
Richard Feldman, a popular figure in the Elm community, presented Elm's solution for CSS. He identified the use of technologies originally designed to create simple documents as one of the major problems in modern web development. He acknowledged that things got better with the ongoing improvements in the CSS, JavaScript and HTML standards. Still, they are all bound to their original roots, having been designed for documents in the first place.
Elm provides a programming language, elm-style, designed up front for creating user interfaces, i.e. modern web applications. It compiles down to JavaScript, but the developer does not have to carry JavaScript's legacy burden.
Feldman went on to show via live coding how easy it is to create designs, like vertical alignments, that are usually hard with native CSS.
Jared Forsyth: Reason: JavaScript-flavored OCaml
Jared Forsyth presented the type-safe, immutable programming language Reason, which is based on OCaml and maintained by Facebook. The main design goals of Reason are being easy to start with and easy to maintain.
The interoperability with vanilla JavaScript is worse than in TypeScript but better than ClojureScript or Elm. The latter two don’t allow fetching dependencies from npm.
He made a good statement on static typing that I'd like to quote: "Unit tests cover whatever you can think of; types cover the things you forgot."
Reason is just starting to be used in production. Forsyth expects at least two more years until it is ready for use in large projects.
Robin Frischmann: CSS in JS – The Good & Bad Parts
CSSinJS creates CSS out of JavaScript functions, providing more possibilities, more predictability, and a lower risk of accidentally overwriting styles. At least, that is according to Robin Frischmann, who tried to convince the participants of his talk to give CSSinJS a try.
He warned that this technology is quite new: a lot of libraries exist and, since they "are moving fast", things break a lot.
It was quite a motivating talk since his arguments were sound and comprehensible.
David Nolen: Out of the Tarpit, Revisited
David Nolen talked about the problems you face when managing state and what a good job frameworks like Redux or GraphQL do in handling it. He mentioned the term "Place Oriented Programming", or simply PLOP, which was new to me.
In the second half of his talk he presented Datomic. This is an immutable database in Clojure allowing real "time travel": you can actually query the database at any state it had in the past. Its principles can be compared to git.
Nolen also recommended reading the paper "Out of the Tar Pit".
I have a program that creates a double array for 50 states, storing the last 10 years of tax rates for each state. Every tax rate is less than .06 and is randomly generated to keep the program simple. This program is just to help me learn to access multidimensional arrays.
I need to create the following:
A method returning an array of indexes of the states that have had at least one year with a tax rate less than 0.001
I already created methods that return the maximum and the minimum tax rate across all states and years, along with their indexes.
However, I do not know how to return an array of indexes to satisfy this question.
Here's my class so far:
Code Java:
import java.text.DecimalFormat;

public class SalesTax {

    private double[][] rates;

    public SalesTax() {
        rates = new double[50][10];
        for (int x = 0; x < rates.length; x++) {
            for (int i = 0; i < rates[x].length; i++) {
                double z = Math.random();
                while (z > .06) {
                    z = Math.random();
                }
                rates[x][i] = z;
            }
        }
    }

    DecimalFormat newForm = new DecimalFormat("0.000000");

    public void get_largest_rate() {
        double max = rates[0][0];
        int tmpI = 0;
        int tmpX = 0;
        for (int x = 0; x < rates.length; x++) {
            double[] inner = rates[x];
            for (int i = 0; i < rates[x].length; i++) {
                if (inner[i] > max) {
                    max = inner[i];
                    tmpX = x;
                    tmpI = i;
                }
            }
        }
        System.out.println(newForm.format(max));
        System.out.println("The index of the highest rate is: (" + tmpX + "," + tmpI + ")");
    }

    public void get_less_tax() {
        double leastTax = rates[0][0];
        int tmpX = 0;
        int tmpI = 0;
        for (int x = 0; x < rates.length; x++) {
            double[] inner = rates[x];
            for (int i = 0; i < rates[x].length; i++) {
                if (inner[i] < leastTax) {
                    leastTax = inner[i];
                    tmpX = x;
                    tmpI = i;
                }
            }
        }
        System.out.println(newForm.format(leastTax));
        System.out.println("The index of the lowest rate is: (" + tmpX + "," + tmpI + ")");
    }
}
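Since the question asks how to return an array of indexes, here is one possible approach (not from the original thread, and the class and method names are made up for illustration): collect the qualifying state indexes in an ArrayList while scanning, then copy them into a plain int[]. Inside the poster's SalesTax class this would naturally be an instance method reading the rates field instead of taking the array as a parameter.

```java
import java.util.ArrayList;
import java.util.List;

class LowRateFinder {

    // Returns the indexes of all states (rows) that had at least one
    // year (column) with a tax rate below the given threshold.
    public static int[] statesBelow(double[][] rates, double threshold) {
        List<Integer> hits = new ArrayList<Integer>();
        for (int x = 0; x < rates.length; x++) {
            for (int i = 0; i < rates[x].length; i++) {
                if (rates[x][i] < threshold) {
                    hits.add(x);
                    break; // one qualifying year is enough for this state
                }
            }
        }
        // Copy the list into a plain int[] to return
        int[] result = new int[hits.size()];
        for (int j = 0; j < result.length; j++) {
            result[j] = hits.get(j);
        }
        return result;
    }
}
```

The break matters: without it, a state with several qualifying years would appear in the result more than once.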
Launchers overview¶
About Launchers¶
Launchers let you turn your analyses into self-service web forms that less technical colleagues can interact with. They are great for creating templatized reports and analyses, so stakeholders can answer questions without bothering data scientists.
You can build a Launcher to let consumers of your project upload data and specify parameters that are neatly consumed by your code to produce consumer-visible results. For example, you could give your users something like this:
That would produce for them something like this:
Basics¶
A Launcher is essentially a web form on top of any script you could run in Domino. Any arguments your script or executable expects can be exposed as UI elements in the web form.
The command¶
When you create a Launcher, you specify a command that runs under the hood. This command serves as a template: when an end-user runs the Launcher through the web form, parameters in your command template will be replaced with the user’s input values, and the resulting command will run.
Outputs¶
When your code runs, it runs just like anything else in Domino. Namely, Domino will detect new files your code produces and treat those as the results. Whoever runs your Launcher will get a link to those results to view on the web, and they’ll get an email when the results are ready. Your code can produce rich images, even interactive HTML dashboards, dynamically based on a user’s input.
Parameters¶
Like any other script you run through Domino, your command can take
parameters/arguments. Anything in your command of the form
${param_name} will be treated as a parameter in the Launcher.
Your parameters can be of the following types:
- Text: normal text field
- Select: drop down where you can select one value from a list
- File: button to select and upload files
- Multiselect: list where you can select multiple values
An end user would see those parameters rendered like this in the final web form:
Note:
When parameter values are passed to a Job started by the launcher, those values are encased in single quotes. This preserves spaces and other special shell characters in the values; however, it means that such values cannot be used to expand environment variables.
Writing your code to process parameter values¶
When an end-user runs your Launcher through the web form, the user’s input values will be passed into your command in place of the corresponding placeholders you specified, and that final command will be run as though it were a command-line executable. That means your underlying code can access parameters using any standard method for reading command-line inputs. The most common techniques would be:
Python argument handling¶
import sys

p1 = sys.argv[1]
p2 = sys.argv[2]

# a file upload parameter
with open(sys.argv[3], 'r') as f:
    print(f.readline())

# a multi-select parameter
for part in sys.argv[4].split(","):
    print(part)
R argument handling¶
Use the commandArgs function.
args <- commandArgs(trailingOnly=TRUE)

p1 <- args[1]
p2 <- args[2]

# a file upload parameter
print(readLines(args[3], n = 1))

# a multi-select parameter
for (each in strsplit(args[4], ",")) {
  cat(each, sep="\n")
}
Note that file parameters will be passed into your file as the path to the file. Multi-select parameters will be passed in as the comma-separated list of all selected choices.
Full examples in R and Python¶
R example¶
Our simple R example lets users input two numbers, and behind the scenes,
we run some R code that adds them and prints the sum. Our script is in a file called
launcher.R, and we
could tell Domino to run
launcher.R 10 20, so our Launcher’s command will be
launcher.R ${A} ${B}.
args <- commandArgs(trailingOnly = TRUE)
a <- as.integer(args[1])
b <- as.integer(args[2])

if (is.na(a)) {
  print("A is not a number")
} else if (is.na(b)) {
  print("B is not a number")
} else {
  paste("The sum of", a, "and", b, "is:", a + b)
}
The example Launcher is set up to take A and B as parameters.
When operational, the user will see:
Python¶
This Python example uses a script that creates an interactive scatter plot, using Bokeh, from a CSV file that anyone can upload using a web form. The user provides (a) a file, (b) the columns to put on the X and Y axes, and (c) information on how to color the data points. The Python script generates an interactive Bokeh scatterplot.
Writing the script¶
This is the complete code of the Python script itself.
from bokeh.plotting import show, output_file
from bokeh.charts import Scatter
import pandas as pd
import sys

output_file("scatter.html")

data = pd.read_csv(sys.argv[1])
scatter = Scatter(data, x = sys.argv[2], y = sys.argv[3],
                  color = sys.argv[4], legend = "top_left")
show(scatter)
Building the Launcher¶
The Launcher itself needs 4 parameters.
Their exact names aren’t important for the script. The parameter names are human-readable guidance for the
users of the Launcher. The order of the parameters is directly
linked to the script. Rename them to
File, X, Y, Color.
After renaming the parameters, you should change their types. In the overview of the parameters, click File and use the dropdown next to Type to select Upload File. X, Y and Color can remain type Text.
Make sure to put in a clear name for the Launcher. Your result should look something like this:
Using the Launcher¶
Click Launchers from the project menu then click Run.
As a sample dataset, use scanvote.csv containing the percentage of a population voting “Yes” per district in Finland, Sweden and Norway.
With this data I would like to create a scatter plot with the population (Pop) as X, the "Yes" vote percentage (Yes) as Y, and the points colored based on the country (Country).
Putting this in the launcher form will give the picture below. Click Run and wait for the result,
| https://docs.dominodatalab.com/en/4.3.2/reference/publish/launchers/Launchers_overview.html | CC-MAIN-2021-25 | refinedweb | 1,000 | 64 |
Python Data Structures Tutorial
Data structures are a way of organizing and storing data so that they can be accessed and worked with efficiently. They define the relationship between the data and the operations that can be performed on the data. There are many kinds of data structures defined that make it easier for data scientists and computer engineers alike to concentrate on the big picture of solving larger problems, rather than getting lost in the details of data description and access.
In this tutorial, you'll learn about the various Python data structures and see how they are implemented:
- Abstract Data Type and Data Structures
- Primitive Data Structures
- Non-Primitive Data Structures
In DataCamp's free Intro to Python for Data Science course, you can learn more about using Python specifically in the data science context. The course gives an introduction to the basic concepts of Python. With it, you'll discover methods, functions, and the NumPy package.
Abstract Data Type and Data Structures
As you read in the introduction, data structures help you to focus on the bigger picture rather than getting lost in the details. This is known as data abstraction.
Now, data structures are actually an implementation of Abstract Data Types or ADT. This implementation requires a physical view of data using some collection of programming constructs and basic data types.
Generally, data structures can be divided into two categories in computer science: primitive and non-primitive data structures. The former are the simplest forms of representing data, whereas the latter are more advanced: they contain the primitive data structures within more complex data structures for special purposes.
Primitive Data Structures
These are the most primitive or basic data structures. They are the building blocks for data manipulation and contain pure, simple values of data. Python has four primitive variable types:
- Integers
- Float
- Strings
- Boolean
In the next sections, you'll learn more about them!
Integers
You can use an integer to represent numeric data, and more specifically, whole numbers from negative infinity to infinity, like 4, 5, or -1.
Float
"Float" stands for 'floating point number'. You can use it for rational numbers, usually ending with a decimal figure, such as 1.11 or 3.14.
Take a look at the following DataCamp Light Chunk and try out some of the integer and float operations!
Note that in Python, you do not have to explicitly state the type of a variable or your data. That is because Python is a dynamically typed language: the type of a value is checked at runtime, and a name can be rebound to values of different types.
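For example, the same name can be rebound to values of different types, and type() always reports the type of the value the name currently refers to:

```python
x = 10
print(type(x))   # <class 'int'>

x = "ten"        # the same name now refers to a string
print(type(x))   # <class 'str'>
```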
String
Strings are collections of alphabets, words or other characters. In Python, you can create strings by enclosing a sequence of characters within a pair of single or double quotes. For example:
'cake',
"cookie", etc.
You can also apply the
+ operation to two or more strings to concatenate them, just like in the example below:
x = 'Cake'
y = 'Cookie'
x + ' & ' + y
'Cake & Cookie'
Here are some other basic operations that you can perform with strings. For example, you can use
* to repeat a string a certain number of times:
# Repeat
x * 2
'CakeCake'
You can also slice strings, which means that you select parts of strings:
# Range Slicing
z1 = x[2:]
print(z1)

# Slicing
z2 = y[0] + y[1]
print(z2)
ke Co
Note that strings can also contain alphanumeric characters, and that the
+ operation is still used to concatenate strings.
x = '4'
y = '2'
x + y
'42'
Python has many built-in methods or helper functions to manipulate strings. Replacing a substring, capitalising certain words in a paragraph, finding the position of a string within another string are some common string manipulations. Check out some of these:
- Capitalize strings
str.capitalize('cookie')
'Cookie'
- Retrieve the length of a string in characters. Note that the spaces also count towards the final result:
str1 = "Cake 4 U" str2 = "404" len(str1)
8
- Check whether a string consists of only digits
str1.isdigit()
False
str2.isdigit()
True
- Replace parts of strings with other strings
str1.replace('4 U', str2)
'Cake 404'
- Find substrings in other strings; Returns the lowest index or position within the string at which the substring is found:
str1 = 'cookie'
str2 = 'cook'
str1.find(str2)
0
'cook' is found at the start of 'cookie'. As a result, you refer to the position within 'cookie' at which you find that substring. In this case, 0 is returned because you start counting positions from 0!
str1 = 'I got you a cookie' str2 = 'cook' str1.find(str2)
12
'cook' is found at position 12 within 'I got you a cookie'. Remember that you start counting from 0 and that spaces count towards the positions!
You can find an exhaustive list of string methods in Python here.
Boolean
This built-in data type can take the values True and False, which often makes them interchangeable with the integers 1 and 0. Booleans are useful in conditional and comparison expressions, just like in the following examples:
x = 4
y = 2
x == y
False
x > y
True
x = 4
y = 2
z = (x == y)  # Comparison expression (evaluates to False)

if z:  # Conditional on the truth value of 'z'
    print("Cookie")
else:
    print("No Cookie")
No Cookie
Data Type Conversion
Sometimes, you will find yourself working on someone else's code and you'll need to convert an integer to a float or vice versa, for example. Or maybe you find out that you have been using an integer when what you really need is a float. In such cases, you can convert the data type of variables!
To check the type of an object in Python, use the built-in
type() function, just like in the lines of code below:
i = 4.0
type(i)
float
When you change the type of an entity from one data type to another, this is called "typecasting". There are two kinds of data conversion: implicit, termed coercion, and explicit, often referred to as casting.
Implicit Data Type Conversion
This is an automatic data conversion and the compiler handles this for you. Take a look at the following examples:
# A float
x = 4.0

# An integer
y = 2

# Divide `x` by `y`
z = x/y

# Check the type of `z`
type(z)
float
In the example above, you did not have to explicitly change the data type of y to perform float value division. The compiler did this for you implicitly.
That's easy!
Explicit Data Type Conversion
This type of data type conversion is user defined, which means you have to explicitly inform the compiler to change the data type of certain entities. Consider the code chunk below to fully understand this:
x = 2 y = "The Godfather: Part " fav_movie = y + x
--------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-51-b8fe90df9e0e> in <module>() 1 x = 2 2 y = "The Godfather: Part " ----> 3 fav_movie = y + x TypeError: Can't convert 'int' object to str implicitly
The above example gave you an error because the compiler does not understand that you are trying to perform concatenation or addition, because of the mixed data types. You have an integer and a string that you're trying to add together.
There's an obvious mismatch.
To solve this, you'll first need to convert the
int to a
string to then be able to perform concatenation.
Note that it might not always be possible to convert a data type to another. Some built-in data conversion functions that you can use here are:
int(),
float(), and
str().
x = 2 y = "The Godfather: Part " fav_movie = (y) + str(x) print(fav_movie)
The Godfather: Part 2
Non-Primitive Data Structures
Non-primitive types are the sophisticated members of the data structure family. They don't just store a value, but rather a collection of values in various formats.
In the traditional computer science world, the non-primitive data structures are divided into:
- Arrays
- Lists
- Files
Array
First off, arrays in Python are a compact way of collecting basic data types; all the entries in an array must be of the same data type. However, arrays are not all that popular in Python, unlike in other programming languages such as C++ or Java.
In general, when people talk of arrays in Python, they are actually referring to lists. However, there is a fundamental difference between them and you will see this in a bit. For Python, arrays can be seen as a more efficient way of storing a certain kind of list. This type of list has elements of the same data type, though.
In Python, arrays are supported by the
array module, which needs to be imported before you start initializing and using them. The elements stored in an array are constrained in their data type. The data type is specified during array creation using a type code, which is a single character like the
I you see in the example below:
import array as arr

a = arr.array("I", [3, 6, 9])
type(a)
array.array
The Python array documentation page provides more information about the various type codes available and the functionality provided by the
array module.
List
Lists in Python are used to store collections of heterogeneous items. They are mutable, which means that you can change their content without changing their identity. You can recognize lists by their square brackets [ and ] that hold elements separated by commas. Lists are built into Python: you do not need to import them separately.
x = []  # Empty list
type(x)
list
x1 = [1, 2, 3]
type(x1)
list
x2 = list([1, 'apple', 3])
type(x2)
list
print(x2[1])
apple
x2[1] = 'orange'
print(x2)
[1, 'orange', 3]
Note: as you have seen in the above example with
x1, lists can also hold homogeneous items and hence satisfy the storage functionality of an array. This is fine unless you want to apply operations specific to that kind of collection.
Python provides many methods to manipulate and work with lists. Adding new items to a list, removing some items from a list, sorting or reversing a list are common list manipulations. Let's see some of them in action:
- Add 11 to the list_num list with append(). By default, this number will be added to the end of the list.
list_num = [1, 2, 45, 6, 7, 2, 90, 23, 435]
list_char = ['c', 'o', 'o', 'k', 'i', 'e']

list_num.append(11)  # Add 11 to the list; by default adds at the last position
print(list_num)
[1, 2, 45, 6, 7, 2, 90, 23, 435, 11]
- Use insert() to insert 11 at index or position 0 in the list_num list
list_num.insert(0, 11)
print(list_num)
[11, 1, 2, 45, 6, 7, 2, 90, 23, 435, 11]
- Remove the first occurrence of 'o' from list_char with the help of remove()
list_char.remove('o')
print(list_char)
['c', 'o', 'k', 'i', 'e']
- Remove the item at index -2 from list_char
list_char.pop(-2)  # Removes the item at the specified position
print(list_char)
['c', 'o', 'k', 'e']
list_num.sort()  # In-place sorting
print(list_num)
[1, 2, 2, 6, 7, 11, 11, 23, 45, 90, 435]
list_num.reverse()  # In-place reversal
print(list_num)
[435, 90, 45, 23, 11, 11, 7, 6, 2, 2, 1]
If you want to know more about Python lists, you can easily walk through the 18 Most Common Python List Questions tutorial!
Arrays versus Lists
Now that you have seen lists in Python, you may be wondering why you need arrays at all. The reason is that they are fundamentally different in terms of the operations you can perform on them. With arrays, you can easily perform an operation on every item at once, which may not be the case with lists. Here is an illustration:
array_char = arr.array("u", ["c", "a", "t", "s"])
array_char.tostring()
print(array_char)
array('u', 'cats')
You were able to apply the
tostring() function to
array_char because Python is aware that all the items in an array are of the same data type and hence the operation behaves the same way on each element. Thus, arrays can be very useful when dealing with a large collection of homogeneous data. Since Python does not have to remember the data type of each element individually, arrays can be faster and use less memory than lists for some uses.
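As a rough illustration of the memory point (a sketch, not part of the original tutorial), you can compare the size of a list and an array holding the same 1000 integers with sys.getsizeof(). Note that getsizeof() on a list does not even count the separately allocated int objects the list points to, so the real gap is larger than shown:

```python
import array
import sys

nums_list = list(range(1000))
nums_array = array.array("i", range(1000))   # 'i' = signed int

# The array stores raw machine ints in one buffer; the list stores
# pointers to separately allocated int objects.
print(sys.getsizeof(nums_list))
print(sys.getsizeof(nums_array))
print(sys.getsizeof(nums_array) < sys.getsizeof(nums_list))  # True
```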
It is also worthwhile to mention the NumPy array while we are on the topic of arrays. NumPy arrays are very heavily used in the data science world to work with multidimensional arrays. They are more efficient than the array module and Python lists in general. Reading and writing elements in a NumPy array is faster, and they support "vectorized" operations such as elementwise addition. Also, NumPy arrays work efficiently with large sparse datasets. To learn more, check out DataCamp's Python Numpy Array Tutorial.
Here is some code to get you started on NumPy Array:
import numpy as np

arr_a = np.array([3, 6, 9])
arr_b = arr_a/3  # Performing vectorized (element-wise) operations
print(arr_b)
[ 1. 2. 3.]
arr_ones = np.ones(4)
print(arr_ones)
[ 1. 1. 1. 1.]
multi_arr_ones = np.ones((3, 4))  # Creating a 2D array with 3 rows and 4 columns
print(multi_arr_ones)
[[ 1. 1. 1. 1.] [ 1. 1. 1. 1.] [ 1. 1. 1. 1.]]
Traditionally, the list data structure can be further categorised into linear and non-linear data structures.
Stacks and
Queues are called "linear data structures", whereas
Graphs and
Trees are "non-linear data structures". These structures and their concepts can be relatively complex but are used extensively due to their resemblance to real world models. You will get a glimpse of these topics in this tutorial.
Note: in a linear data structure, the data items are organized sequentially or, in other words, linearly. The data items are traversed serially one after another and all the data items in a linear data structure can be traversed during a single run. However, in non-linear data structures, the data items are not organized sequentially. That means the elements could be connected to more than one element to reflect a special relationship among these items. All the data items in a non-linear data structure may not be traversed during a single run.
Stacks
A stack is a container of objects that are inserted and removed according to the Last-In-First-Out (LIFO) concept. Think of a dinner party where there is a stack of plates: plates are always added to or removed from the top of the pile. In computer science, this concept is used for evaluating expressions and syntax parsing, scheduling algorithms/routines, etc.
Stacks can be implemented using lists in Python. When you add elements to a stack, it is known as a push operation, whereas when you remove or delete an element it is called a pop operation. Note that you actually have a
pop() method at your disposal when you're working with stacks in Python:
# Bottom -> 1 -> 2 -> 3 -> 4 -> 5 (Top)
stack = [1, 2, 3, 4, 5]
stack.append(6)  # Bottom -> 1 -> 2 -> 3 -> 4 -> 5 -> 6 (Top)
print(stack)
[1, 2, 3, 4, 5, 6]
stack.pop()  # Bottom -> 1 -> 2 -> 3 -> 4 -> 5 (Top)
stack.pop()  # Bottom -> 1 -> 2 -> 3 -> 4 (Top)
print(stack)
[1, 2, 3, 4]
Queue
A queue is a container of objects that are inserted and removed according to the First-In-First-Out (FIFO) principle. An excellent example of a queue in the real world is the line at a ticket counter where people are catered according to their arrival sequence and hence the person who arrives first is also the first to leave. Queues can be of many different kinds.
Lists are not efficient for implementing a queue. While
append() and
pop() at the end of a list are fast, inserting at or deleting from the beginning of a list is slow, because it requires shifting all the other elements by one position, which incurs a memory movement cost.
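The usual remedy in Python (not covered above) is collections.deque, a double-ended queue with fast appends and pops on both ends, which makes it a natural fit for FIFO queues:

```python
from collections import deque

queue = deque(["apple", "banana", "cherry"])
queue.append("date")     # enqueue at the right end

first = queue.popleft()  # dequeue from the left end in O(1)
print(first)             # apple
print(queue)             # deque(['banana', 'cherry', 'date'])
```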
Graphs
A graph in mathematics and computer science is a network consisting of nodes, also called vertices, which may or may not be connected to each other. The line or path that connects two nodes is called an edge. If the edges have a particular direction of flow, it is a directed graph, and a directed edge is called an arc.
Here, you will find a simple graph implementation using a Python Dictionary to help you get started:
graph = { "a" : ["c", "d"], "b" : ["d", "e"], "c" : ["a", "e"], "d" : ["a", "b"], "e" : ["b", "c"] } def define_edges(graph): edges = [] for vertices in graph: for neighbour in graph[vertices]: edges.append((vertices, neighbour)) return edges print(define_edges(graph))
[('a', 'c'), ('a', 'd'), ('b', 'd'), ('b', 'e'), ('c', 'a'), ('c', 'e'), ('e', 'b'), ('e', 'c'), ('d', 'a'), ('d', 'b')]
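For instance, checking whether a path exists between two nodes can be done with a small breadth-first search over the same adjacency dictionary (a sketch that is not part of the original tutorial):

```python
from collections import deque

graph = {
    "a": ["c", "d"],
    "b": ["d", "e"],
    "c": ["a", "e"],
    "d": ["a", "b"],
    "e": ["b", "c"],
}

def path_exists(graph, start, goal):
    # Visit nodes level by level, remembering what we have already seen.
    seen = {start}
    frontier = deque([start])
    while frontier:
        node = frontier.popleft()
        if node == goal:
            return True
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append(neighbour)
    return False

print(path_exists(graph, "a", "e"))  # True
```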
You can do some cool stuff with graphs, such as finding whether there exists a path between two nodes, finding the shortest path between two nodes, or determining cycles in the graph.

Trees

A tree is a hierarchical structure in which every node (except the root) has exactly one parent node. The nodes with the same parent are called siblings. Do you see why this is also called a family tree?
Trees help in modeling real-world scenarios and are used everywhere, from the gaming world to designing XML parsers; the design of PDF is also based on trees. In data science, decision-tree-based learning actually forms a large area of research. Numerous famous methods like bagging and boosting use tree models to generate predictive models. Games like chess build a huge tree of all possible moves to analyse and apply heuristics to decide on an optimal move.
You can implement a tree structure using and combining the various data structures you have seen so far in this tutorial. However, for the sake of simplicity, this topic will be tackled in another post.
class Tree:
    def __init__(self, info, left=None, right=None):
        self.info = info
        self.left = left
        self.right = right

    def __str__(self):
        return (str(self.info) + ', Left child: ' + str(self.left) +
                ', Right child: ' + str(self.right))

tree = Tree(1, Tree(2, 2.1, 2.2), Tree(3, 3.1))
print(tree)
1, Left child: 2, Left child: 2.1, Right child: 2.2, Right child: 3, Left child: 3.1, Right child: None
You have learnt about arrays and also seen the list data structure. However, Python provides many different flavours of data collection mechanisms, and although they might not be included in traditional data structure topics in computer science, they are worth knowing, especially with regard to the Python programming language:
Tuples
Tuples are another standard sequence data type. The difference between tuples and lists is that tuples are immutable: once defined, you cannot delete, add or edit any values inside them. This might be useful in situations where you want to pass control to someone else without letting them manipulate the data in your collection; they can still read it, or perform operations on a separate copy of the data.
Let's see how tuples are implemented:
x_tuple = 1, 2, 3, 4, 5
y_tuple = ('c', 'a', 'k', 'e')
x_tuple[0]
1
y_tuple[3]
x_tuple[0] = 0  # Cannot change values inside a tuple
--------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-74-b5d6da8c1297> in <module>() 1 y_tuple[3] ----> 2 x_tuple[0] = 0 # Cannot change values inside a tuple TypeError: 'tuple' object does not support item assignment
Dictionary
Dictionaries are exactly what you need if you want to implement something similar to a telephone book. None of the data structures that you have seen before are suitable for a telephone book.
This is when a dictionary can come in handy. Dictionaries are made up of key-value pairs: the
key is used to identify the item, and the
value holds, as the name suggests, the value of the item.
x_dict = {'Edward': 1, 'Jorge': 2, 'Prem': 3, 'Joe': 4}
del x_dict['Joe']
x_dict
{'Edward': 1, 'Jorge': 2, 'Prem': 3}
x_dict['Edward'] # Prints the value stored with the key 'Edward'.
1
You can apply many other built-in functions to dictionaries:
len(x_dict)
3
x_dict.keys()
dict_keys(['Prem', 'Edward', 'Jorge'])
x_dict.values()
dict_values([3, 1, 2])
Sets
Sets are a collection of distinct (unique) objects. They are useful for creating lists that only hold unique values in a dataset. A set is an unordered but mutable collection, which is very helpful when going through a huge dataset.
x_set = set('CAKE&COKE')
y_set = set('COOKIE')

print(x_set)
{'A', '&', 'O', 'E', 'C', 'K'}
print(y_set) # Single unique 'o'
{'I', 'O', 'E', 'C', 'K'}
print(x_set - y_set)  # All the elements in x_set but not in y_set

{'A', '&'}
print(x_set|y_set) # Unique elements in x_set or y_set or both
{'C', '&', 'E', 'A', 'O', 'K', 'I'}
print(x_set & y_set) # Elements in both x_set and y_set
{'O', 'E', 'K', 'C'}
Files
Files are traditionally a part of data structures. And although big data is commonplace in the data science industry, a programming language without the capability to store and retrieve previously stored information would hardly be useful. You still have to make use of all the data sitting in files across databases, and you will learn how to do this.
The syntax to read and write files in Python is similar to other programming languages but a lot easier to handle. Here are some of the basic functions that will help you to work with files using Python:
open() to open files in your system; the filename is the name of the file to be opened;
read() to read an entire file;
readline() to read one line at a time;
write() to write a string to a file, returning the number of characters written; and
close() to close the file.
# File modes (2nd argument): 'r' (read), 'w' (write), 'a' (append), 'r+' (both read and write)
f = open('file_name', 'w')

# Writes the string to the file, returning the number of characters written
f.write('Add this line.')
f.close()

f = open('file_name', 'r')

# Reads the entire file
f.read()

# Reads one line at a time
f.readline()

f.close()
The second argument in the
open() function is the file mode. It allows you to specify whether you want to read (
r), write (
w), append (
a) or both read and write (
r+).
To learn more about file handling in Python, be sure to check out this page.
You Did It!
Hurray! You reached the end of this tutorial! This gets you one topic closer to your dreams of conquering the data science world.
If you're interested, DataCamp's two-part Python Data Science Toolbox dives deeper into functions, iterators, lists, etc.
Take a break and when you are ready, head over to one of the recommended tutorials to continue your journey! | https://www.datacamp.com/community/tutorials/data-structures-python | CC-MAIN-2018-39 | refinedweb | 3,826 | 60.24 |
A simple query interface for tabular data.
Some Examples
The examples below will query a CSV file containing the following data (example.csv):
To begin, we load the CSV file into a Select object:
import squint select = squint.Select('example.csv')
Installation
The Squint package is tested on Python 2.7, 3.4 through 3.8, PyPy, and PyPy3; and is freely available under the Apache License, version 2.
The easiest way to install squint is to use pip:
pip install squint
To upgrade an existing installation, use the “--upgrade” option:
pip install --upgrade squint
The development repository for squint is hosted on GitHub.
Freely licensed under the Apache License, Version 2.0
Copyright 2015 - 2020 National Committee for an Effective Congress, et al.
These notebooks convert Abjad original examples to well... notebook format ;) .
To have abjad output (i.e. the scores) inlined in a notebook, you will need my code to bridge the two.
After installation and importing, its usage is 100% transparent against abjad's standard dialect. Remember that abjad tends to use:
from abjad import *
Then you only have to do
%run ../src/abjad-nb.py
And everything will be inlined on the notebooks. This works by redefining the show function. The new function will also convert multi-page output.
This was tested on Python 3 with abjad from github (not the current stable - 2.14 - which does not support Python 3). If you want to use Python 2 and encounter any problems, please contact me. The code should work with 2.14, but the examples below will not.
I believe that the interactive/collaborative environment of IPython notebook is a perfect match for score writing.
Ferneyhough: Unsichtbare Farben
Mozart: Musikalisches Würfelspiel
Pärt: Cantus in Memory of Benjamin Britten
Ligeti: Désordre (this one is still missing the explanatory text; it is just the barebones code)
I just converted the examples, the content authorship rests with abjad's authors.
I did not convert most of the diacritics (just to avoid potential problems with folks who do not have software prepared for this), but after this stabilizes I will do that. | http://nbviewer.jupyter.org/github/tiagoantao/abjad-ipython/blob/master/notebooks/Index.ipynb | CC-MAIN-2018-09 | refinedweb | 227 | 66.13 |
The QContactSaveRequest class allows a client to asynchronously request that certain contacts be saved to a contacts store. More...
#include <QContactSaveRequest>
Inherits: QContactAbstractRequest.
This class was introduced in Qt Mobility 1.0.
The QContactSaveRequest class allows a client to asynchronously request that certain contacts be saved to a contacts store.
For a QContactSaveRequest, the resultsAvailable() signal will be emitted when either the individual item errors (which may be retrieved by calling errorMap()), or the resultant contacts (which may be retrieved by calling contacts()), are updated, as well as if the overall operation error (which may be retrieved by calling error()) is updated.
Please see the class documentation of QContactAbstractRequest for more information about the usage of request classes and ownership semantics.
Constructs a new contact save request whose parent is the specified parent
Frees any memory used by this request
Returns the list of contacts which will be saved if called prior to calling start(), otherwise returns the list of contacts with their ids set appropriately (successfully saved new contacts will have an id assigned).
This function was introduced in Qt Mobility 1.0.
See also setContacts().
Returns the list of definitions that this request will operate on.
If the list is empty, the request will operate on all details.
See also setDefinitionMask().
Returns the map of input contact list indices to errors which occurred
This function was introduced in Qt Mobility 1.0.
Sets the contact to be saved to contact. Equivalent to calling:
setContacts(QList<QContact>() << contact);
This function was introduced in Qt Mobility 1.0.
Sets the list of contacts to be saved to contacts
This function was introduced in Qt Mobility 1.0.
Set the list of definitions to restrict saving to definitionMask. This allows you to perform partial save (and remove) operations on new and existing contacts.
If definitionMask is empty (the default), no restrictions will apply, and the passed in contacts will be saved as is. Otherwise, only details whose definitions are in the list will be saved. If a definition name is present in the list, but there are no corresponding details in the contact passed into this request, any existing details in the manager for that contact will be removed.
This is useful if you've used a fetch hint to fetch a partial contact, have changed some of its details, and want to save only those details back. If the manager does not support partial saving natively, the Contacts framework will emulate the functionality (fetching the whole contact, applying the new restricted details, and saving the contact back).
See also definitionMask(). | http://doc.trolltech.com/qtmobility-1.2/qcontactsaverequest.html | crawl-003 | refinedweb | 405 | 53.92 |
Feature #5445
Need RUBYOPT -r before ARGV -r
Description
Libraries given by -r options in RUBYOPT should be loaded before ones in direct command line arguments.
My custom loader is too large to include here, so I will simply demonstrate the problem with simple sample code:
$ cat req.rb
p "Custom Require"
module Kernel
alias :require0 :require
def require(*a)
  puts "Kernel#require"
  p a
  require0(*a)
end

class << self
  alias :require0 :require
  def require(*a)
    puts "Kernel.require"
    p a
    require0(*a)
  end
end
end
If we load this via RUBYOPT, the result is:
$ RUBYOPT=-r./req.rb ruby -rstringio -e0
Custom Require
But if we load via -r the result is:
$ ruby -r./req.rb -rstringio -e0
Custom Require
Kernel#require
["stringio"]
I would ask that the output of both invocations to be identical.
(Note, the -T option should still allow RUBYOPT to be omitted regardless.)
History
#1
Updated by Nobuyoshi Nakada over 2 years ago
- Tracker changed from Bug to Feature
=begin
A patch follows.
diff --git i/ruby.c w/ruby.c
index 2e6751f..3c8bb3a 100644
--- i/ruby.c
+++ w/ruby.c
@@ -1251,9 +1251,13 @@ process_options(int argc, char **argv, struct cmdline_options *opt)
     VALUE src_enc_name = opt->src.enc.name;
     VALUE ext_enc_name = opt->ext.enc.name;
     VALUE int_enc_name = opt->intern.enc.name;
+    long req_count = opt->req_list ? RARRAY_LEN(opt->req_list) : 0;

     opt->src.enc.name = opt->ext.enc.name = opt->intern.enc.name = 0;
     moreswitches(s, opt, 1);
+    if (req_count && RARRAY_LEN(opt->req_list) > req_count) {
+        rb_ary_rotate(opt->req_list, req_count);
+    }
     if (src_enc_name)
         opt->src.enc.name = src_enc_name;
     if (ext_enc_name)
=end
#2
Updated by Yusuke Endoh over 2 years ago
Hello,
2011/10/14 Thomas Sawyer transfire@gmail.com:
Libraries given by -r options in RUBYOPT should be loaded before ones in direct command line arguments.
How about adding a new environment variable, such as RUBYOPT_BEFORE?
I don't think it is a good idea to change the order because of compatibility.
--
Yusuke Endoh mame@tsg.ne.jp
#3
Updated by Thomas Sawyer over 2 years ago
My thought is that it makes sense that RUBYOPT comes first, b/c it defines starting environment.
In production is -r even used? Likelihood of compatibility issue is probably extremely low.
#4
Updated by Yusuke Endoh about 2 years ago
- Status changed from Open to Assigned
- Assignee set to Yukihiro Matsumoto
#5
Updated by Yusuke Endoh over 1 year ago
- Target version set to next minor
Also available in: Atom PDF | https://bugs.ruby-lang.org/issues/5445 | CC-MAIN-2014-15 | refinedweb | 408 | 57.98 |
On 13.07.2021 03:59, Bobby Eshleman wrote:
> --- a/xen/arch/x86/gdbstub.c
> +++ b/xen/arch/x86/gdbstub.c
> @@ -18,7 +18,9 @@
> * You should have received a copy of the GNU General Public License
> * along with this program; If not, see <>.
> */
> -#include <asm/debugger.h>
> +#include <asm/uaccess.h>
> +#include <xen/debugger.h>
> +#include <xen/gdbstub.h>
Here and in at least one more case below: Our usual pattern is to
have xen/ ones before asm/ ones. And (leaving aside existing
screwed code) ...
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -58,7 +58,7 @@
> #include <asm/hvm/trace.h>
> #include <asm/hap.h>
> #include <asm/apic.h>
> -#include <asm/debugger.h>
> +#include <xen/debugger.h>
> #include <asm/hvm/monitor.h>
> #include <asm/monitor.h>
> #include <asm/xstate.h>
... we also try to avoid introducing any mixture. Plus ...
> --- a/xen/arch/x86/hvm/vmx/realmode.c
> +++ b/xen/arch/x86/hvm/vmx/realmode.c
> @@ -14,7 +14,7 @@
> #include <xen/sched.h>
> #include <xen/paging.h>
> #include <xen/softirq.h>
> -#include <asm/debugger.h>
> +#include <xen/debugger.h>
... we strive to have new insertions be sorted alphabetically. When
the existing section to insert into isn't suitably sorted yet, what
I normally do is try to find a place where at least the immediately
adjacent neighbors then fit the sorting goal.
Sorry, all just nits, but their scope is about the entire patch.
> --- /dev/null
> +++ b/xen/include/xen/debugger.h
> @@ -0,0 +1,81 @@
> +/******************************************************************************
> + * Generic hooks into arch-dependent Xen.
Now that you move this to be generic, I think it better also would
indeed be. See <>.
> + *
> + *
Nit: No double blank (comment) lines please.
> + * Each debugger should define three functions here:
> + *
> + * 1. debugger_trap_entry():
> + * Called at start of any synchronous fault or trap, before any other work
> + * is done. The idea is that if your debugger deliberately caused the trap
> + * (e.g. to implement breakpoints or data watchpoints) then you can take
> + * appropriate action and return a non-zero value to cause early exit from
> + * the trap function.
> + *
> + * 2. debugger_trap_fatal():
> + * Called when Xen is about to give up and crash. Typically you will use
> this
> + * hook to drop into a debug session. It can also be used to hook off
> + * deliberately caused traps (which you then handle and return non-zero).
> + *
> + * 3. debugger_trap_immediate():
> + * Called if we want to drop into a debugger now. This is essentially the
> + * same as debugger_trap_fatal, except that we use the current register
> state
> + * rather than the state which was in effect when we took the trap.
> + * For example: if we're dying because of an unhandled exception, we call
> + * debugger_trap_fatal; if we're dying because of a panic() we call
> + * debugger_trap_immediate().
> + */
> +
> +#ifndef __XEN_DEBUGGER_H__
> +#define __XEN_DEBUGGER_H__
> +
> +/* Dummy value used by ARM stubs. */
> +#ifndef TRAP_invalid_op
> +# define TRAP_invalid_op 6
> +#endif
To avoid the need to introduce this, please flip ordering with the
subsequent patch.
> +#ifdef CONFIG_CRASH_DEBUG
> +
> +#include <asm/debugger.h>
> +
> +#else
> +
> +#include <asm/regs.h>
> +#include <asm/processor.h>
> +
> +static inline void domain_pause_for_debugger(void)
> +{
> +}
> +
> +static inline bool debugger_trap_fatal(
> + unsigned int vector, const struct cpu_user_regs *regs)
I'm afraid the concept of a vector may not be arch-independent.
> +{
> + return false;
> +}
> +
> +static inline void debugger_trap_immediate(void)
> +{
> +}
> +
> +static inline bool debugger_trap_entry(
> + unsigned int vector, const struct cpu_user_regs *regs)
> +{
> + return false;
> +}
> +
> +#endif /* CONFIG_CRASH_DEBUG */
> +
> +#ifdef CONFIG_GDBSX
> +unsigned int dbg_rw_mem(unsigned long gva, XEN_GUEST_HANDLE_PARAM(void) buf,
> + unsigned int len, domid_t domid, bool toaddr,
> + uint64_t pgd3);
> +#endif /* CONFIG_GDBSX */
I'm afraid this whole construct isn't arch independent, at least as long
as it has the "pgd3" parameter, documented elsewhere to mean "the value of
init_mm.pgd[3] in a PV guest" (whatever this really is in a 64-bit guest,
or in a non-Linux one).
But I don't see why this needs moving to common code in the first place:
It's x86-specific both on the producer and the consumer. | https://lists.xenproject.org/archives/html/xen-devel/2021-07/msg01009.html | CC-MAIN-2021-49 | refinedweb | 651 | 51.24 |
Created on 2018-05-16 00:12 by rad164, last changed 2018-12-03 10:17 by vstinner.
I just reported a bug about email folding at issue 33524, but this issue is more fatal in some languages like Chinese or Japanese, which do not insert spaces between words.
Python 3.6.5 has this issue, while 3.6.4 does not.
Create an email with a header longer than the max_line_length set by its policy, where the header contains non-ASCII characters but no whitespace.
When you try to fold it, Python gets stuck and eventually the system hangs. There is no output unless I stop it with Ctrl-C.
^CTraceback (most recent call last):
  ...
  File "/usr/lib/python3.6/email/_header_value_parser.py", in _fold_as_ew
    ew = _ew.encode(first_part)
  File "/usr/lib/python3.6/email/_encoded_words.py", line 215, in encode
    blen = _cte_encode_length['b'](bstring)
  File "/usr/lib/python3.6/email/_encoded_words.py", line 130, in len_b
    groups_of_3, leftover = divmod(len(bstring), 3)
KeyboardInterrupt
Code to reproduce:
from email.message import EmailMessage
from email.policy import default
policy = default # max_line_length = 78
msg = EmailMessage()
msg["Subject"] = "á"*100
policy.fold("Subject", msg["Subject"])
No problems in following cases:
1. If the header is shorter than max_line_length.
2. If the header can be split with spaces and the all chunk is shorter than max_line_length.
3. If the header is fully composed with ascii characters. In this case, there is no problem even if it is very long without spaces.
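As a sanity check of case 3 (and a contrast with the hanging reproducer above), the same API handles a long, ASCII-only header without spaces just fine:

```python
from email.message import EmailMessage
from email.policy import default

msg = EmailMessage()
msg["Subject"] = "a" * 100                       # long, ASCII-only, no spaces
print(default.fold("Subject", msg["Subject"]))   # returns promptly, no hang
```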
I tried the test case on master branch. I ran the test case on 1GB RAM Linux based digitalocean droplet to have the script killed. Please find the results as below :
# Python build
➜ cpython git:(master) ✗ ./python
Python 3.8.0a0 (heads/bpo33095-add-reference:9d49f85, Jun 17 2018, 07:22:33)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
# Test case
➜ cpython git:(master) ✗ cat foo.py
from email.message import EmailMessage
from email.policy import default
policy = default # max_line_length = 78
msg = EmailMessage()
msg["Subject"] = "á"*100
policy.fold("Subject", msg["Subject"])
# Test case execution
➜ cpython git:(master) ✗ time ./python foo.py
[2] 13637 killed ./python foo.py
./python foo.py 387.36s user 3.85s system 90% cpu 7:11.94 total
# I tried to do Ctrl + C after 2 minutes to stop and the stack trace is as below :
➜ cpython git:(master) ✗ time ./python foo.py
^CTraceback (most recent call last):
File "foo.py", line 7, in <module>
policy.fold("Subject", msg["Subject"])
File "/root/cpython/Lib/email/policy.py", line 183, in fold
return self._fold(name, value, refold_binary=True)
File "/root/cpython/Lib/email/policy.py", line 205, in _fold
return value.fold(policy=self)
File "/root/cpython/Lib/email/headerregistry.py", line 258, in fold
return header.fold(policy=policy)
File "/root/cpython/Lib/email/_header_value_parser.py", line 144, in fold
return _refold_parse_tree(self, policy=policy)
File "/root/cpython/Lib/email/_header_value_parser.py", line 2650, in _refold_parse_tree
part.ew_combine_allowed, charset)
File "/root/cpython/Lib/email/_header_value_parser.py", line 2728, in _fold_as_ew
ew = _ew.encode(first_part, charset=encode_as)
File "/root/cpython/Lib/email/_encoded_words.py", line 226, in encode
qlen = _cte_encode_length['q'](bstring)
File "/root/cpython/Lib/email/_encoded_words.py", line 93, in len_q
return sum(len(_q_byte_map[x]) for x in bstring)
File "/root/cpython/Lib/email/_encoded_words.py", line 93, in <genexpr>
return sum(len(_q_byte_map[x]) for x in bstring)
KeyboardInterrupt
./python foo.py 131.41s user 0.43s system 98% cpu 2:13.89 total
Thanks
Since it's a denial of service which can be triggered by a user, I mark this issue as a security issue.
I can be wrong, but it seems like Python 2.7 isn't affected: Lib/email/_header_value_parser.py was added by bpo-12586 (commit 0b6f6c82b51b7071d88f48abb3192bf3dc2a2d24). Python 2.7 doesn't have this file nor policies. | https://bugs.python.org/issue33529 | CC-MAIN-2019-09 | refinedweb | 634 | 54.59 |
One of the smartest guys in the ASP.NET world, Nikhil Kothari, just wrote a short piece and is sharing code for a Facebook app framework. Sweet. Check it out here:.
It occurred to me fairly recently that these systems provide a level of abstraction that makes sense in terms of data storage, but you still have to do a fair amount of work in your providers to make sure you aren't pounding the data store. That means caching, of course, but you still might be creating objects in several places (handlers, the page, user controls, etc.), and each time you're going to the well, whether it be by way of the caching mechanism you create or going to the data store.
So why cache at all? Why not get the data once, early in the request cycle, and go back to it when you need it? Oddly enough, it was good old fashioned FormsAuth that made me think of this, where in ASP.NET v1.x we would look up the user and roles, probably in an HttpModule, and access that data from where ever.
Yes, you can do this today with a well-written Membership provider, but it's more complicated than it needs to be. Not only is the interface fairly large, but then you have to create the various objects during the page lifecycle which in turn call the various pieces of the provider. It just seems like a lot of work. It means that in every piece, you have to call something like Membership.GetUser(User.Identity.Name), invoking the provider methods.
What if you did something a little more simple. Heck, use the Membership API if you want to, but do it once. Create an HttpModule that goes something like this:
public class UserContext : IHttpModule { public void Init(HttpApplication application) { application.AuthenticateRequest += new EventHandler(application_AuthenticateRequest); } private void application_AuthenticateRequest(object sender, EventArgs e) { HttpContext context = HttpContext.Current; MembershipUser user = Membership.GetUser(context.User.Identity.Name); context.Items.Add("membershipUser", user); } public static MembershipUser Current { get { return (MembershipUser)HttpContext.Current.Items["membershipUser"]; } } public void Dispose() { } }
Then all you need is to call UserContext.Current from your code and you've got access to all that your user object has to offer. I used Membership here, but you could roll your own too. In fact, you could make this an abstract class and leave the implementation of AuthenticateRequest to a derived class, specific to your app.
At first glance, this seems like a very small win, but how much do you use user objects around your page? Furthermore, how much plumbing have you written to implement caching? When you eliminate all of that, suddenly your code gets a lot more simple, and a lot easier to manage. If I had a half-dozen user controls on the page that all had to access my user, and maybe manipulate it, at least it only has to read once from the data store, and I'm free to not write a bunch of caching code for subsequent object creation.
One caveat here is that you do need to keep thread safety in mind. If you don't use asynchronous page or handler methods, you're OK, but keep that in mind. | http://weblogs.asp.net/jeff/archive/2007/07.aspx | CC-MAIN-2014-15 | refinedweb | 544 | 63.19 |
C# features from 2.0 to 5.0

GENERICS
.” by MSDN
Generics are a very easy feature to learn, and an essential one. It is easy to pick up the basics of the C# language (iteration, classes, structures, methods, properties, …), but you will have to learn the other parts of the language before you can say that you really know C#.
So let’s start with basic examples. The most basic example of generic class is List. I believe that everyone has used List at least once. If no, let’s start with the example.
List<type> _list = new List<type>();

What have we done here? Nothing special. We have just declared a new List and named it _list. This is type-safe code because it is generic. That is the basic meaning of generics in C#.
List is one of the many collections in C#/.NET so we have to declare a type of that List. Let’s say it’s a string type. That means that we would insert unkown numbers of string in our List. We are declaring that by <> brackets.
How we can declare our own generic class? This is the basic syntax:
public class OurClass<T> { ... }

where T is the type parameter we will use in the code that follows. It is also important to know that we can declare more than one type parameter in a generic.
For more about generics look at MSDN.
ANONYMOUS METHODS
Anonymous methods are used when we want to pass a code block as a delegate parameter. It is also a very simple to learn.
Here is the basic example of anonymous method:
button1.Click += delegate(Object sender, EventArgs e)
{
    // code that runs when button1 is clicked
};

Of course, there is a way to write similar code without using the delegate keyword (using a lambda expression), but that is not for this post.
From this example you can write any event handler you want (and that is the part of your code).
Be aware that if you want to use the code from the example, you will have to write it inside the main part of the code (which differs between project templates: WinForms, WPF, …) if you want to avoid errors, because otherwise IntelliSense will not recognize the object whose event you are attaching to.
For more information about Anonymous methods look at MSDN.
NULLABLE TYPES
Nullable types are the easiest feature of C# 2.0 to learn. We know that the Int32 type holds integer values in the range from -2147483648 to 2147483647. But what if we want to declare a new integer (an Int32 in this case) that is initialized yet holds no value? In that case we declare it as nullable (int? number = null;) and assign it the null value, which does not represent zero.
It is good practice to use nullable types for this, so I recommend doing so in your code in the future.
For more information about the nullable types look at MSDN.
Fill all 2D contours to create polygons. More...
#include <vtkContourTriangulator.h>
Fill all 2D contours to create polygons.
vtkContourTriangulator will generate triangles to fill all of the 2D contours in its input. The contours may be concave, and may even contain holes i.e. a contour may contain an internal contour that is wound in the opposite direction to indicate that it is a hole.
Definition at line 46 of file vtkContourTriangulator.h.
Definition at line 50 of file vtkContourTriangulator.h.
Check if there was a triangulation failure in the last update.
Generate errors when the triangulation fails.
Note that triangulation failures are often minor, because they involve tiny triangles that are too small to see.
A robust method for triangulating a polygon.
It cleans up the polygon and then applies the ear-cut triangulation. A zero return value indicates that triangulation failed.
Given some closed contour lines, create a triangle mesh that fills those lines.
The input lines must be single-segment lines, not polylines. The input lines do not have to be in order. Only numLines starting from firstLine will be used.
This is called by the superclass.
This is the method you should override.
Reimplemented from vtkPolyDataAlgorithm.
Definition at line 97 of file vtkContourTriangulator.h.
Definition at line 98 of file vtkContourTriangulator.h. | https://vtk.org/doc/nightly/html/classvtkContourTriangulator.html | CC-MAIN-2019-47 | refinedweb | 215 | 53.37 |
Building custom tags for Django templates has gotten much easier over the years, with decorators provided that do most of the work when building common, simple kinds of tags.
One area that isn't covered is block tags, the kind of tags that have an opening and ending tag, with content inside that might also need processing by the template engine. (Confusingly, there's a block tag named "block", but I'm talking about block tags in general).
A block tag can do pretty much anything, which is probably why there's not a simple decorator to help write them. In this post, I'm going to walk through building an example block tag that takes arguments that can control its logic.
Django Documentation
There are a couple of pages in the Django documentation that you should at least scan before continuing, and will likely want to consult while reading:
- Custom template tags and filters is the place to start. It documents the general approach and the machinery we use for writing custom tags.
- The Django template language: for Python programmers has two sections of interest to us: Rendering a context and Playing with Context objects.
What our example tag will do
Let's write a tag that can make simple changes to its content, changing occurrences of one string to another. We'll call it replace, and usage might look like this:
{% replace old="dog" new="cat" %} My dog is great. I love dogs. {% endreplace %}
which would end up rendered as My cat is great. I love cats..
We'll also have an optional numeric argument to limit how many times we do the replacement:
{% replace 1 old="dog" new="cat" %} My dog is great. I love dogs. {% endreplace %}
which we'll want to render as My cat is great. I love dogs..
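Before building any tag machinery, note that the rendering work itself is just Python's built-in str.replace, whose optional count argument gives us the limit behavior for free (a count of -1 means unlimited):

```python
text = "My dog is great. I love dogs."

print(text.replace("dog", "cat", 1))   # My cat is great. I love dogs.
print(text.replace("dog", "cat", -1))  # My cat is great. I love cats.
```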
Parsing the template
The first thing we'll write is the compilation function, which Django will call when it's parsing a template and comes across our tag. Conventionally, such functions are called do_<tagname>. We tell Django about our new tag by registering it:
from django import template

register = template.Library()

def do_replace(parser, token):
    pass

register.tag('replace', do_replace)
We'll be passed two arguments, parser which is the state of parsing of the template, and token which represents the most recently parsed token in the template - in our case, the contents of our opening template tag. For example, if a template contains {% replace 1 2 foo='bar' %}, then token will contain "replace 1 2 foo='bar'".
To parse that token, I ended up writing the following method as a general-purpose template tag argument parser:
from django.template.base import FilterExpression, kwarg_re

def parse_tag(token, parser):
    """
    Generic template tag parser.

    Returns a three-tuple: (tag_name, args, kwargs)

    tag_name is a string, the name of the tag.

    args is a list of FilterExpressions, from all the arguments that didn't
    look like kwargs, in the order they occurred, including any that were
    mingled amongst kwargs.

    kwargs is a dictionary mapping kwarg names to FilterExpressions, for all
    the arguments that looked like kwargs, including any that were mingled
    amongst args.

    (At rendering time, a FilterExpression f can be evaluated by calling
    f.resolve(context).)
    """
    # Split the tag content into words, respecting quoted strings.
    bits = token.split_contents()

    # Pull out the tag name.
    tag_name = bits.pop(0)

    # Parse the rest of the args, and build FilterExpressions from them so that
    # we can evaluate them later.
    args = []
    kwargs = {}
    for bit in bits:
        # Is this a kwarg or an arg?
        match = kwarg_re.match(bit)
        kwarg_format = match and match.group(1)
        if kwarg_format:
            key, value = match.groups()
            kwargs[key] = FilterExpression(value, parser)
        else:
            args.append(FilterExpression(bit, parser))

    return (tag_name, args, kwargs)
Let's work through what that does.
Calling split_contents() on the token is like calling .split(), but it's smart about quoted parameters and will keep them intact. We get back bits, a list of strings representing the parts of the template tag invocation, very much like sys.argv gives us for running a program, except that no quotation marks have been stripped away.

The first element in bits is our template tag name itself. We remove it because we don't really need it for parsing the args, but save it for generality.
Next we work through the arguments, using the same regular expression as Django's template library to decide which arguments are positional and which are keyword arguments.
The regular expression for keyword arguments also splits on the =, so we can extract the keyword and the value.
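For illustration, Django's kwarg_re amounts to the following pattern (reimplemented here so the snippet is self-contained; the real compiled regex lives in django.template.base):

```python
import re

# An optional "name=" prefix, captured separately from the value.
kwarg_re = re.compile(r"(?:(\w+)=)?(.+)")

print(kwarg_re.match('old="dog"').groups())  # ('old', '"dog"')
print(kwarg_re.match('1').groups())          # (None, '1')
```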
We'd like our argument values to support literal values, variables, and even applying filters. We can't actually evaluate our arguments yet, since we're just parsing the template and don't have any particular template context yet where we could look for things like variables. What we do instead is construct a FilterExpression for each one, which parses the syntax of the value, and uses the parser state to find any filters that are referred to.
When all that is done, this method returns a three-tuple: (<tagname>, <args>, <kwargs>).
Our replace tag has two required kwargs and an optional arg. We can check that now:
from django.template import TemplateSyntaxError

# ...

def do_replace(parser, token):
    tag_name, args, kwargs = parse_tag(token, parser)

    usage = '{{% {tag_name} [limit] old="fromstring" new="tostring" %}} ... {{% end{tag_name} %}}'.format(tag_name=tag_name)
    if len(args) > 1 or set(kwargs.keys()) != {'old', 'new'}:
        raise TemplateSyntaxError("Usage: %s" % usage)
Note again how we haven't hardcoded the tag name.
Let's pull our limit argument out of the args list:
if args:
    limit = args[0]
else:
    limit = FilterExpression('-1', parser)
If no limit was supplied, we default to -1, which will indicate later that there's no limit. We wrap it in a FilterExpression so we can just call limit.resolve(context) without having to check whether limit is a FilterExpression or not.
We can't check the values here. They might depend on the context, so we'll have to check them at rendering time.
This is all similar to what we might do if we were writing a non-block tag without using any of the helpful decorators that hide some of this detail. But now we need to deal with some unknown amount of template following our opening tag, up to our closing tag. We need to ask the template parser to process everything in the template until we get to our closing tag:
nodelist = parser.parse(('endreplace',))
We get back a NodeList object (django.template.NodeList), which represents a list of template "nodes" representing the parsed part of the template, up to but not including our end tag.
We tell the parser to just ignore our end tag, which is the next token:
parser.delete_first_token()
Now we're done parsing the part of the template from our opening tag to our closing tag. We have the arguments to our tag in limit and kwargs, and the parsed template between our tags in nodelist.
Django expects our function to return a new node object that stores that information for us to use later when the template is rendered. We haven't written the code for our node object yet, but here's how our parsing function will end:
return ReplaceNode(nodelist, limit=limit, old=kwargs['old'], new=kwargs['new'])
Reviewing what we've done so far
Each time Django comes across {% replace ... %} while parsing a template, it calls do_replace(). We parse all the text from {% replace ... %} to {% endreplace %} and store the result in an instance of ReplaceNode. Later, whenever Django renders the parsed template using a particular context, we'll be able to use that information to render this part of the template.
The node
Let's start coding our template node. All we need it to do so far is to store the information we got from parsing part of the template:
from django import template

class ReplaceNode(template.Node):
    def __init__(self, nodelist, limit, old, new):
        self.nodelist = nodelist
        self.limit = limit
        self.old = old
        self.new = new
Rendering
As we've seen, the result of parsing a Django template is a NodeList containing a list of node objects. Whenever Django needs to render a template with a particular context, it calls each node object, passing the context, and asks the node object to render itself. It gets back some text from each node, concatenates all the returned pieces of text, and that's the result.
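The render-and-concatenate cycle can be modeled in a few lines of plain Python (TextNode here is a made-up toy, not Django's class of the same name):

```python
# Toy model of NodeList.render: each node renders itself against the
# context, and the results are concatenated into the final output.
class TextNode:
    def __init__(self, text):
        self.text = text

    def render(self, context):
        # Substitute context values into the node's text.
        return self.text.format(**context)

nodes = [TextNode("Hello "), TextNode("{name}!")]
result = "".join(n.render({"name": "world"}) for n in nodes)
assert result == "Hello world!"
```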
Our node needs its own render method to do this. We can start with a stub:
class ReplaceNode(template.Node):
    ...
    def render(self, context):
        ...
        return "result"
Now, let's look at those arguments again. We've mentioned that we couldn't validate their values before, because we wouldn't know them until we had a context to evaluate them in.
When we code this, we need to keep in mind Django's policy that in general, render() should fail silently. So we program defensively:
class ReplaceNode(template.Node):
    ...
    def render(self, context):
        # Evaluate the arguments in the current context
        try:
            limit = int(self.limit.resolve(context))
        except (ValueError, TypeError):
            limit = -1
        from_string = self.old.resolve(context)
        to_string = conditional_escape(self.new.resolve(context))
        # Those should be checked for stringness. Left as an exercise.
Also note that we conditionally escape the replacement string. That might have come from user input, and can't be trusted to be blindly inserted into a web page due to the risk of Cross Site Scripting.
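Django's conditional_escape skips strings already marked safe, but the underlying escaping is the same HTML entity substitution the stdlib's html.escape performs, which shows what happens to a hostile replacement string:

```python
from html import escape

# Hostile user input that must not land in the page verbatim.
to_string = "<script>alert('xss')</script>"

# After escaping, the browser renders it as inert text.
assert escape(to_string) == "&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;"
```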
Now we'll render whatever was between our template tags, getting back a string:
content = self.nodelist.render(context)
Finally, do the replacement and return the result:
content = mark_safe(content.replace(from_string, to_string, limit))
return content
We've escaped our own input, and the block contents we got from the template parser should already be escaped too, so we mark the result safe so it won't get double-escaped by accident later.
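The -1 default chosen during parsing works because of how str.replace treats its count argument (no Django needed to check this):

```python
# str.replace's optional third argument caps the number of
# substitutions; a negative count (our -1 default) replaces
# every occurrence.
content = "spam spam spam"
assert content.replace("spam", "eggs", 1) == "eggs spam spam"
assert content.replace("spam", "eggs", -1) == "eggs eggs eggs"
```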
Conclusion
We've seen, step by step, how to build a custom Django template tag that accepts arguments and works on whole blocks of a template. This example does something pretty simple, but with this foundation, you can create tags that do anything you want with the contents of a block.
If you found this post useful, we have more posts about Django, Python, and many other interesting topics. | https://www.caktusgroup.com/blog/2017/05/01/building-custom-block-template-tag/ | CC-MAIN-2018-39 | refinedweb | 1,725 | 64.3 |
How do I make a M2Web call to get a list of ewons for a particular pool on a PRO account?
anontywdfuin #1
The following api call will do what you are looking for.
Note the &pool=poolname parameter at the end. Just update it with your pool name.
anontywdfuin #4
Thanks.
Do you have any sample python code to take the reply from this webcall and get the list into some sort of array?
Below is a simple python program that will return a list of all ewons in the defined pool when you call getEwonByPool.
Please note: This code is provided as is
import requests

accountInfo = ""
devicesInfo = ""
account = "Your account"
username = "Your username"
password = "Your password"
developerId = "Your developerId"
poolName = "Pool you want"  # This is case sensitive!

# Retrieve the pool id required by the api call based on
# the pool name.
# returns int
def getPoolId(name):
    r = requests.post(accountInfo, data={
        't2maccount': account,
        't2musername': username,
        't2mpassword': password,
        't2mdeveloperid': developerId
    })
    data = r.json()
    for pool in data['pools']:
        if pool['name'] == name:
            return pool['id']

# Retrieve the eWON list by the pool id
# returns JSON array of eWONs in the pool
def getEwonByPool():
    poolid = getPoolId(poolName)
    r = requests.post(devicesInfo, data={
        't2maccount': account,
        't2musername': username,
        't2mpassword': password,
        't2mdeveloperid': developerId,
        'pool': poolid
    })
    return r.json()

# Test our call
print(getEwonByPool())
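If you then want just the names as a plain Python list, the JSON reply can be flattened with a comprehension. (The "ewons" key and the reply shape below are assumptions for illustration — check them against the actual reply you get back.)

```python
import json

# A made-up reply with the assumed shape of the device-list response.
raw = '{"success": true, "ewons": [{"name": "ewon1"}, {"name": "ewon2"}]}'
reply = json.loads(raw)

# Flatten the list of device objects into a list of names.
names = [e["name"] for e in reply.get("ewons", [])]
assert names == ["ewon1", "ewon2"]
```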
I hope this helps!
Repl.it sandbox
anontywdfuin #6
Thanks Jordan
| https://forum.hms-networks.com/t/how-do-i-make-a-m2web-call-to-get-a-list-of-ewons-for-a-particular-pool-on-a-pro-account/5109 | CC-MAIN-2019-30 | refinedweb | 251 | 64.91 |
05 September 2006 22:36 [Source: ICIS news]
HOUSTON (ICIS news)--Dow Chemical is just ahead of the industry curve in its decision to shut down five plants around the world, an analyst with Citigroup said on Tuesday.
“Dow’s new CEO [chief executive officer] Andrew Liveris is making his mark on the company by pro-actively shutting down five plants around the world,” said chemicals analyst PJ Jukevar. Dow’s actions reflect its strategic shift of commodity assets to low-cost sites in the
The shutdowns will reduce North American capacities for chlor-alkali by 3.1%, polystyrene by 3.9% and low-density polyethylene by 2.2%, Jukevar said. They will also reduce global capacity for toluene diisocyanate by 6%.
“What is strikingly different about these shutdowns compared [with] the previous ones back in 2002, is that these actions are being taken during the good times rather than waiting for the bad times,” Jukevar said. “Management didn’t have to shut down these assets, but opted to do so, especially the chlor-alkali plant in
The decision to close the chlor-alkali plant in
“So although the caustic molecule was very profitable, the EDC trade with
The broader issue surrounding the shutdowns involves
“This raises an important question for the broader industry regarding many | http://www.icis.com/Articles/2006/09/05/1089312/dow-ahead-of-the-curve-in-shutdowns-citigroup.html | CC-MAIN-2015-14 | refinedweb | 219 | 50.77 |
#include "ltkrn.h"
#include "ltclr.h"
L_LTCLR_API L_INT L_LoadICCProfile(pszFilename, pICCProfile, pLoadOptions)
Loads an ICC profile saved/embedded in an image file.
Character string containing the name of the file from which to load the ICC profile.
Pointer to an ICCPROFILEEXT structure to be updated with the loaded ICC profile. Initialize the structure pointed to by pICCProfile first by calling L_InitICCProfile. If L_LoadICCProfile succeeds, free the ICC profile by calling L_FreeICCProfile. In fact, when any ICCPROFILEEXT structure initialized by L_InitICCProfile is no longer needed, the memory must be freed by calling L_FreeICCProfile.
To save an ICC Profile to an image file, call L_SaveICCProfile.
Call L_FillICCProfileStructure
Win32, x64.
For an example, refer to L_InitICCProfile.
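Pending that example, the call sequence can be sketched as follows. This is an illustrative fragment only — it will not compile without the LEADTOOLS SDK, and the exact parameter list of L_InitICCProfile should be checked against the SDK headers:

```
ICCPROFILEEXT IccProfile;
L_INT nRet;

/* Initialize the structure before use (verify the exact
   arguments of L_InitICCProfile in the SDK headers). */
L_InitICCProfile(&IccProfile, sizeof(ICCPROFILEEXT));

/* Load the ICC profile embedded in the image file. */
nRet = L_LoadICCProfile(TEXT("image.tif"), &IccProfile, NULL);
if (nRet == SUCCESS)
{
   /* ... use the loaded profile ... */

   /* Free it when no longer needed. */
   L_FreeICCProfile(&IccProfile);
}
```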
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" ""> <html xmlns=""> <head> <meta http-equiv="content-type" content="text/html; charset=UTF-8" /> <title>Hacker's Guide to Subversion</title> </head> <body> <div class="app"> <h1 style="text-align: center;">Hacker's Guide to Subversion</h1> <p>If you are contributing code to the Subversion project, please read this first.</p> <pre> $LastChangedDate: 2009-08-12 14:07:10 +0000 (Wed, 12 Aug 2009) $ </pre> <!-- Other pages seem to use "h2" for ToC, but I think "h1" works better, because the ToC is fundamentally different from other sections and therefore it's confusing when it looks the same as the others. --> <div class="h1"><!-- no 'id' or 'title' attribute for ToC --> <h1>Table of Contents</h1> <ul> <li><a href="#participating">Participating in the community</a></li> <li><a href="#docs">Theory and documentation</a></li> <li><a href="#code-to-read">Code to read</a></li> <li><a href="#directory-layout">Directory layout</a></li> <li><a href="#interface-visibility">Code modularity and interface visibility</a></li> <li><a href="#coding-style">Coding style</a></li> <li><a href="#secure-coding">Secure coding guidelines</a></li> <li><a href="#destruction-of-stacked-resources">Destruction of stacked resources</a></li> <li><a href="#documenting">Documentation</a></li> <li><a href="#use-page-breaks">Using page breaks</a></li> <li><a href="#error-messages">Error message conventions</a></li> <li><a href="#other-conventions">Other conventions</a></li> <li><a href="#apr-pools">APR pool usage conventions</a></li> <li><a href="#apr-status-codes">APR status codes</a></li> <li><a href="#exception-handling">Exception handling</a></li> <li><a href="#automated-tests">Automated tests</a></li> <li><a href="#write-test-cases-first">Writing test cases before code</a></li> <li><a href="#server-debugging">Debugging the server</a></li> <li><a href="#net-trace">Tracing network traffic</a></li> <li><a href="#tracing-memory-leaks">Tracking down memory leaks</a></li> <li><a href="#log-messages">Writing log
messages</a></li> <li><a href="#crediting">Crediting</a></li> <li><a href="#patches">Patch submission guidelines</a></li> <li><a href="#filing-issues">Filing bugs / issues</a></li> <li><a href="#issue-triage">Issue triage</a></li> <li><a href="#commit-access">Commit access</a></li> <li><a href="#branch-based-development">Branch-based development</a></li> <li><a href="#configury">The configuration/build system under unix</a></li> <li><a href="#releasing">How to release a distribution tarball</a></li> <li><a href="#release-numbering">Release numbering, compatibility, and deprecation</a></li> <li><a href="#release-stabilization">Stabilizing and maintaining releases</a></li> <li><a href="#tarball-signing">Signing source distribution packages</a></li> <li><a href="#custom-releases">Custom releases</a></li> <li><a href="#l10n">Localization (l10n)</a></li> </ul> </div> <div class="h2" id="participating" title="participating"> <h2>Participating in the community</h2> <p>Although Subversion is originally sponsored and hosted by CollabNet (<a href=""></a>), it's a true open-source project under the Apache License, Version 2.0. A number of developers work for CollabNet, some work for other large companies (such as RedHat), and many others are simply excellent volunteers who are interested in building a better version control system.</p> <p>The community exists mainly through mailing lists and a Subversion repository. To participate:</p> <p>Go to <a href="" ></a> and</p> <ul> <li><p>Subscribe to the mailing lists, especially dev@subversion.tigris.org, where most development discussion takes place.</p></li> <li><p>Get a copy of the latest development sources from <a href="" ></a> <br /> New development always takes place on trunk. Bugfixes, enhancements, and new features are backported from there to the various release branches.</p></li> </ul> <p>There are many ways to join the project, either by writing code, or by testing and/or helping to manage the bug database.
If you'd like to contribute, then look at:</p> <ul> <li><p>The bugs/issues database <a href="" ></a></p></li> <li><p>The bite-sized tasks page <a href="" ></a></p></li> </ul> <p>To submit code, simply send your patches to dev@subversion.tigris.org. No, wait, first read the rest of this file, <i>then</i> start sending patches to dev@subversion.tigris.org. :-)</p> <p>And if you have questions along the way, just ask them on dev@subversion.tigris.org. ("Subversion: We're here to help you help us!")</p> <p>Another way to help is to set up automated builds and test suite runs of Subversion on some platform, and have the output sent to the svn-breakage@subversion.tigris.org mailing list. See more details at <a href="" ></a> in the description for the svn-breakage list.</p> </div> <div class="h2" id="docs" title="docs"> <h2>Theory and documentation</h2> <ol> <li><p>Design</p> <p>A <a href="design.html">design spec</a> was written in June 2000, and is a bit out of date. But it still gives a good theoretical introduction to the inner workings of the repository, and to Subversion's various layers.</p> </li> <li><p>API Documentation</p> <p>See the section on the <a href="#doxygen-docs">public API documentation</a> for more information.</p> </li> <li><p>Delta Editors</p> <p>Karl Fogel wrote a chapter for O'Reilly's 2007 book <a href=""> Beautiful Code: Leading Programmers Explain How They Think</a> covering the design and use of <a href=""> Subversion's delta editor interface</a>.</p> </li> <li> <p>Network Protocols</p> <p>The <a href="webdav-usage.html">WebDAV Usage</a> document is an introduction to Subversion's DAV network protocol, which is an extended dialect of HTTP and uses URLs beginning with "http://" or "https://".</p> <p>The <a href="" >SVN Protocol</a> document contains a formal description of Subversion's ra_svn network protocol, which is a custom protocol on port 3690 (by default), whose URLs begin with "svn://" or "svn+ssh://".</p> </li> <li><p>User Manual</p> <p>Version Control with Subversion is a book published by
O'Reilly that shows in detail how to effectively use Subversion. The text of the book is free, and is actively being revised. On-line versions are available at <a href="" ></a>. The XML source and translations to other languages are maintained in their own repository at <a href="" ></a>.</p> </li> <li><p>System notes</p> <p>A lot of the design ideas for particular aspects of the system have been documented in individual files in the <a href="">notes/</a> directory.</p> </li> </ol> </div> <div class="h2" id="code-to-read" title="code-to-read"> <h2>Code to read</h2> <p>Before you can contribute code, you'll need to familiarize yourself with the existing code base and interfaces.</p> <p>Check out a copy of Subversion (anonymously, if you don't yet have an account with commit access) — so you can look at the code.</p> <p>Within 'subversion/include/' are a bunch of header files with huge doc comments. If you read through these, you'll have a pretty good understanding of the implementation details. Here's a suggested perusal order:</p> <ol> <li><p>the basic building blocks: svn_string.h, svn_error.h, svn_types.h</p> </li> <li><p>useful utilities: svn_io.h, svn_path.h, svn_hash.h, svn_xml.h</p> </li> <li><p>the critical interface: svn_delta.h</p> </li> <li><p>client-side interfaces: svn_ra.h, svn_wc.h, svn_client.h</p> </li> <li><p>the repository and versioned filesystem: svn_repos.h, svn_fs.h</p> </li> </ol> <p>Subversion tries to stay portable by using only ANSI/ISO C and by using the Apache Portable Runtime (APR) library. 
APR is the portability layer used by the Apache httpd server, and more information can be found at <a href="" ></a>.</p> <p>Because Subversion depends so heavily on APR, it may be hard to understand Subversion without first glancing over certain header files in APR (look in 'apr/include/'):</p> <ul> <li><p>memory pools: apr_pools.h</p></li> <li><p>filesystem access: apr_file_io.h</p></li> <li><p>hashes and arrays: apr_hash.h, apr_tables.h</p></li> </ul> .</p> </div> <div class="h2" id="directory-layout" title="directory-layout"> <h2>Directory layout</h2> <p>A rough guide to the source tree:</p> <ul> <li><p><tt>doc/</tt><br /> User and Developer documentation.</p> </li> <li><p><tt>www/</tt><br /> Subversion web pages (live content at <a href="" ></a>).</p> </li> <li><p><tt>tools/</tt><br /> Stuff that works with Subversion, but that Subversion doesn't depend on. Code in tools/ is maintained collectively by the Subversion project, and is under the same open source copyright as Subversion itself.</p> </li> <li><p><tt>contrib/</tt>.</p> </li> <li><p><tt>packages/</tt><br /> Stuff to help packaging systems, like rpm and dpkg.</p> </li> <li><p><tt>subversion/</tt><br /> Source code to Subversion itself (as opposed to external libraries).</p> </li> <li><p><tt>subversion/include/</tt><br /> Public header files for users of Subversion libraries.</p> </li> <li><p><tt>subversion/include/private/</tt><br /> Private header files shared internally by Subversion libraries.</p> </li> <li><p><tt>subversion/libsvn_fs/</tt><br /> The versioning "filesystem" API.</p> </li> <li><p><tt>subversion/libsvn_repos/</tt><br /> Repository functionality built around the `libsvn_fs' core.</p> </li> <li><p><tt>subversion/libsvn_delta/</tt><br /> Common code for tree deltas, text deltas, and property deltas.</p> </li> <li><p><tt>subversion/libsvn_wc/</tt><br /> Common code for working copies.</p> </li> <li><p><tt>subversion/libsvn_ra/</tt><br /> Common code for repository access.</p> </li> 
<li><p><tt>subversion/libsvn_client/</tt><br /> Common code for client operations.</p> </li> <li><p><tt>subversion/svn/</tt><br /> The command line client.</p> </li> <li><p><tt>subversion/tests/</tt><br /> Automated test suite.</p> </li> <li><p><tt>apr/</tt><br /> Apache Portable Runtime library. (Note: This is not in the same repository as Subversion. Read INSTALL for instructions on how to get it if you don't already have it.)</p> </li> <li><p><tt>neon/</tt><br /> Neon library from Joe Orton. (Note: This is not in the same repository as Subversion. Read INSTALL for instructions on how to get it if you don't already have it.)</p> </li> </ul> </div> <div class="h2" id="interface-visibility" title="interface-visibility"> <h2>Code modularity and interface visibility</h2> <p>When deciding where new code should live and how visible its interface should be, ask yourself the following questions:</p> <p><em>"Are the consumers of my new code local to a particular source code file in a single library?"</em> If so, you probably want a static function in that same source file.</p> <p><em>"Is my new function of the sort that other source code within this library will need to use, but nothing *outside* the library?"</em> If so, you want to use a non-static, double-underscore-named function (such as <tt>svn_foo__do_something</tt>), with its prototype in the appropriate library-specific header file.</p> <p><em>"Will my code need to be accessed from a different library?"</em> Here you have some additional questions to answer, such as <em>"Should my code live in the original library I was going to put it in, or should it live in a more generic utility library such as libsvn_subr?"</em> Either way, you're now looking at using an inter-library header file. But see the next question before you decide which one...</p> <p><em>"Is my code such that it has a clean, maintainable API that can reasonably be maintained in perpetuity and brings value to the Subversion public API offering?"</em> If so, you'll be adding prototypes to the public API, immediately inside <tt>subversion/include/</tt>.
<tt>subversion/include/private/</tt>.</p> </div> <div class="h2" id="coding-style" title="coding-style"> <h2>Coding style</h2> <p>In general, Subversion uses ANSI C and follows the GNU coding standards, with one notable exception: we don't put a space between a function name and the opening parenthesis of its parameter list (i.e., we write <tt>foo(bar)</tt>, not <tt>foo (bar)</tt>).</p> <p>Read <a href="" ></a> for a full description of the GNU coding standards. Below is a short example demonstrating the most important formatting guidelines, including our no-space-before-param-list-paren exception:</p> <pre>
static svn_error_t *
do_something(apr_file_t *file,
             apr_pool_t *pool)
{
  if (file == NULL)
    {
      return svn_error_create(SVN_ERR_INCORRECT_PARAMS, NULL,
                              "NULL file passed to do_something()");
    }

  return SVN_NO_ERROR;
}
</pre> </div> <div class="h2" id="secure-coding" title="secure-coding"> <h2>Secure coding guidelines</h2> <p>Input validation is the act of defining legal input and rejecting everything else. The code must perform input validation on all untrusted input.</p> <p>Security boundaries:</p> <p>Functions which make calls to a security boundary must include validation checks of the arguments passed. Functions which themselves are security boundaries should audit the input received and alarm when invoked with improper values.</p> <p>[### todo: need some examples from Subversion here...]</p> <p>String operations:</p> <p>Password storage:</p> </div> <div class="h2" id="destruction-of-stacked-resources" title="destruction-of-stacked-resources"> <h2>Destruction of stacked resources</h2> <p>Some resources need destruction to ensure correct functioning of the application. Such resources include files, especially since open files cannot be deleted on Windows.</p> <p>At first in <a href=""></a> and later in <a href=""></a> this was discussed in more general terms for files, streams, editors and window handlers.</p> <p>As Greg Hudson put it:</p> <blockquote> <p>On consideration, here is where I would like us to be:</p> <ul><li>Streams which read from or write to an underlying object own that object, i.e.
closing the stream closes the underlying object, if applicable.</li> <li>The layer (function or data type) which created a stream is responsible for closing it, except when the above rule applies.</li> <li>Window handlers are thought of as an odd kind of stream, and passing the final NULL window is considered closing the stream.</li> </ul> <p.</p> </blockquote> <p>There is one exception to the rules above though. When a stream is passed to a function as an argument (for example: the 'out' parameter of svn_client_cat2()), that routine can't call the streams destructor, since it did not create that resource.</p> <p>If svn_client_cat2() creates a stream, it must also call the destructor for that stream. By the above model, that stream will call the destructor for the 'out' parameter. This is however wrong, because the responsibility to destruct the 'out' parameter lies elsewhere.</p> <p>To solve this problem, at least in the stream case, svn_stream_disown() has been introduced. This function wraps a stream, making sure it's <em>not</em> destroyed, even though any streams stacked upon it may try to do so.</p> </div> <div class="h2" id="documenting" title="documenting"> <h2>Documentation</h2> <div class="h3" id="document-everything" title="document-everything"> <h3>Document Everything</h3> <p>Every function, whether public or internal, must start out with a documentation comment that describes what the function does. 
The documentation should mention every parameter received by the function, every possible return value, and (if not obvious) the conditions under which the function could return an error.</p> <p>For internal documentation put the parameter names in upper case in the doc string, even when they are not upper case in the actual declaration, so that they stand out to human readers.</p> <p>For public or semi-public API functions, the doc string should go above the function in the .h file where it is declared; otherwise, it goes above the function definition in the .c file.</p> <p>For structure types, document each individual member of the structure as well as the structure itself.</p> <p>For actual source code, internally document chunks of each function, so that an someone familiar with Subversion can understand the algorithm being implemented. Do not include obvious or overly verbose documentation; the comments should help understanding of the code, not hinder it.</p> <p>For example:</p> <pre> <span style="color: red;">/*** How not to document. Don't do this. ***/</span> /* Make a foo object. */ static foo_t * make_foo_object(arg1, arg2, apr_pool_t *pool) { /* Create a subpool. */ apr_pool_t *subpool = svn_pool_create(pool); /* Allocate a foo object from the main pool */ foo_t *foo = apr_palloc(pool, sizeof(*foo)); ... } </pre> <p>Instead, document decent sized chunks of code, like this:</p> <pre> /*)); } </pre> <p>Read over the Subversion code to get an overview of how documentation looks in practice; in particular, see <a href=""> subversion/include/*.h</a> for doxygen examples. </p> </div> <div class="h3" id="doxygen-docs" title="doxygen-docs"> <h3>Public API Documentation</h3> <p>We use the <a href="">Doxygen</a> format for public interface documentation. This means anything that goes in a public header file. 
<a href="">Snapshots </a> of the public API documentation are generated nightly from the latest Subversion sources.</p> <p>We use only a small portion of the available <a href="">doxygen commands</a> to markup our source. When writing doxygen documentation, the following conventions apply:</p> <ul> <li>Use complete sentences and prose descriptions of the function, preceding parameter names with <code>@a</code>, and type and macro names with <code>@c</code>.</li> <li>Use <code><tt>...</tt></code> to display multiple words and <code>@p</code> to display only one word in typewriter font.</li> <li>Constant values, such as <code>TRUE</code>, <code>FALSE</code> and <code>NULL</code> should be in all caps.</li> <li>When several functions are related, define a group name, and group them together using <code>@defgroup</code> and <code>@{...@}</code>.</li> </ul> <p>See the <a href="">Doxygen manual</a> for a complete list of commands.</p> </div> </div> <div class="h2" id="use-page-breaks" title="use-page-breaks"> <h2>Using page breaks</h2> <p>We're using page breaks (the Ctrl-L character, ASCII 12) for section boundaries in both code and plaintext prose files. Each section starts with a page break, and immediately after the page break comes the title of the section.</p> <p.</p> </div> <div class="h2" id="error-messages" title="error-messages"> <h2>Error message conventions</h2> <p>For error messages the following conventions apply:</p> <ul> <li><p>Provide specific error messages only when there is information to add to the general error message found in subversion/include/svn_error_codes.h.</p></li> <li><p>Messages start with a capital letter.</p></li> <li><p>Try keeping messages below 70 characters.</p></li> <li><p>Don't end the error message with a period (".").</p></li> <li><p>Don't include newline characters in error messages.</p></li> <li><p>Quoting information is done using single quotes (e.g. 
"'some info'").</p></li> <li><p>Don't include the name of the function where the error occurs in the error message. If Subversion is compiled using the '--enable-maintainer-mode' configure-flag, it will provide this information by itself.</p></li> <li><p>When including path or filenames in the error string, be sure to quote them (e.g. "Can't find '/path/to/repos/userfile'").</p></li> <li><p>When including path or filenames in the error string, be sure to convert them using <a href="" ><tt>svn_path_local_style()</tt></a> before inclusion (since paths passed to and from Subversion APIs are assumed to be in <a href="" >canonical form</a>).</p></li> <li><p>Don't use Subversion-specific abbreviations (e.g. use "repository" instead of "repo", "working copy" instead of "wc").</p></li> <li><p>If you want to add an explanation to the error, report it followed by a colon and the explanation like this:</p> <pre> "Invalid " SVN_PROP_EXTERNALS " property on '%s': " "target involves '.' or '..'". </pre></li> <li><p>Suggestions or other additions can be added after a semi-colon, like this:</p> <pre> "Can't write to '%s': object of same name already exists; remove " "before retrying". </pre></li> <li><p>Try to stay within the boundaries of these conventions, so please avoid separating different parts of error messages by other separators such as '--' and others.</p></li> </ul> <p>Also read about <a href="#l10n">Localization</a>.</p> </div> <div class="h2" id="other-conventions" title="other-conventions"> <h2>Other conventions</h2> <p>In addition to the GNU standards, Subversion uses these conventions:</p> <ul> <li><p>When using a path or file name as input to most <a href="">Subversion APIs</a>, be sure to convert them to Subversion's internal/canonical form using the <a href="" ><tt>svn_path_internal_style()</tt></a> API. 
Alternately, when receiving a path or file name as output from a Subversion API, convert them into the expected form for your platform using the <a href="" ><tt>svn_path_local_style()</tt></a> API.</p></li> <li><p>Use only spaces for indenting code, never tabs. Tab display width is not standardized enough, and anyway it's easier to manually adjust indentation that uses spaces.</p> </li> <li><p.)</p> </li> <li><p <tt>libsvn_wc/update_editor.c</tt>) do not require any additional namespace decorations. Symbols that need to be used outside a library, but still are not public are put in a shared header file in the <tt>include/private/</tt> directory, and use the double underscore notation. Such symbols may be used by Subversion core code only.</p> <p>To recap:</p> <pre> /*> <p>Pre-Subversion 1.5, private symbols which needed to be used outside of a library were put into public header files, using the double underscore notation. This practice has been abandoned, and any such symbols are legacy, maintained for <a href="#release-numbering">backwards compatibility</a>.</p> </li> <li><p>In text strings that might be printed out (or otherwise made available) to users, use only forward quotes around paths and other quotable things. For example:</p> <pre> $ svn revert foo svn: warning: svn_wc_is_wc_root: 'foo' is not a versioned resource $ </pre> <p>There used to be a lot of strings that used a backtick for the first quote (`foo' instead of 'foo'), but that looked bad in some fonts, and also messed up some people's auto-highlighting, so we settled on the convention of always using forward quotes.</p> </li> <li><p>If you use Emacs, put something like this in your .emacs file, so you get svn-dev.el and svnbook.el when needed:</p> <pre> ;;; </pre> <p>You'll need to customize the path for your setup, of course. You can also make the regexp to string-match more selective; for example, one developer says:</p> <pre> >. 
:-) </pre> </li> <li><p.</p> </li> <li><p>Put two spaces between the end of one sentence and the start of the next. This helps readability, and allows people to use their editors' sentence-motion and -manipulation commands.</p> </li> <li><p>There are many other unspoken conventions maintained throughout the code, that are only noticed when someone unintentionally fails to follow them. Just try to have a sensitive eye for the way things are done, and when in doubt, ask.</p> </li> </ul> </div> <div class="h2" id="apr-pools" title="apr-pools"> <h2>APR pool usage conventions</h2> <p>(This assumes you already basically understand how APR pools work; see apr_pools.h for details.)</p> <p>Applications using the Subversion libraries must call apr_initialize() before calling any Subversion functions.</p> <p>Subversion's general pool usage strategy can be summed up in two principles:</p> <ol> <li><p>The call level that created a pool is the only place to clear or destroy that pool.</p> </li> <li><p>When iterating an unbounded number of times, create a subpool before entering the iteration, use it inside the loop and clear it at the start of each iteration, then destroy it after the loop is done, like so:</p> <pre> apr_pool_t *subpool = svn_pool_create(pool); for (i = 0; i < n; ++i) { svn_pool_clear(subpool); do_operation(..., subpool); } svn_pool_destroy(subpool); </pre> </li> </ol> <p>By using a loop sub.</p> <p:</p> <pre>; </pre> <p>Except for some legacy code, which was written before these principles were fully understood, virtually all pool usage in Subversion follows the above guidelines.</p> <p>One such legacy pattern is a tendency to allocate an object inside a pool, store the pool in the object, and then free that pool (either directly or through a close_foo() function) to destroy the object.</p> <p>For example:</p> <pre> <span style="color: red;">/*** Example of how NOT to use pools. Don't be like this. ***/</span>).] 
</pre> <p.</p> <p>See also the <a href="#exception-handling">Exception handling</a> section, for details of how resources associated with a pool are cleaned up when that pool is destroyed.</p> <p>In summary:</p> <ul> <li><p>Objects should not have their own pools. An object is allocated into a pool defined by the constructor's caller. The caller knows the lifetime of the object and will manage it via the pool.</p> </li> <li><p>Functions should not create/destroy pools for their operation; they should use a pool provided by the caller. Again, the caller knows more about how the function will be used, how often, how many times, etc. thus, it should be in charge of the function's memory usage.</p> <p.</p> </li> <li><p>Whenever an unbounded iteration occurs, an iteration subpool should be used.</p> </li> <li><p.</p> </li> </ul> <p>See also <a href="#tracing-memory-leaks">Tracking down memory leaks</a> for tips on diagnosing pool usage problems.</p> </div> <div class="h2" id="apr-status-codes" title="apr-status-codes"> <h2>APR status codes</h2> <p>Always check for APR status codes (except APR_SUCCESS) with the APR_STATUS_IS_...() macros, not by direct comparison. This is required for portability to non-Unix platforms.</p> </div> <div class="h2" id="exception-handling" title="exception-handling"> <h2>Exception handling</h2> <p>OK, here's how to use exceptions in Subversion.</p> <ol> <li><p>Exceptions are stored in svn_error_t structures:</p> <pre>; </pre> </li> <li><p>If you are the <em>original</em> creator of an error, you would do something like this:</p> <pre> return svn_error_create(SVN_ERR_FOO, NULL, "User not permitted to write file"); </pre> <p>NOTICE the NULL field... indicating that this error has no child, i.e. it is the bottom-most error.</p> <p>See also the <a href="#error-messages"> section on writing error messages</a>.</p> <p>Subversion internally uses UTF-8 to store its data. This also applies to the 'message' string. 
APR is assumed to return its data in the current locale, so any text returned by APR needs conversion to UTF-8 before inclusion in the message string.</p> </li> <li><p>If you <em>receive</em> an error, you have three choices:</p> <ol> <li><p>Handle the error yourself. Use either your own code, or just call the primitive svn_handle_error(err). (This routine unwinds the error stack and prints out messages converting them from UTF-8 to the current locale.)</p> <p>When your routine receives an error which it intends to ignore or handle itself, be sure to clean it up using svn_error_clear(). Any time such an error is not cleared constitutes a <em>memory leak</em>.</p> </li> <li><p>Throw the error upwards, unmodified:</p> <pre> error = some_routine(foo); if (error) return svn_error_return(error); </pre> <p>Actually, a better way to do this would be with the SVN_ERR() macro, which does the same thing:</p> <pre> SVN_ERR(some_routine(foo)); </pre> </li> <li><p>Throw the error upwards, wrapping it in a new error structure by including it as the "child" argument:</p> <pre> error = some_routine(foo); if (error) { svn_error_t *wrapper = svn_error_create(SVN_ERR_FOO, error, "Authorization failed"); return wrapper; } </pre> <p>Of course, there's a convenience routine which creates a wrapper error with the same fields as the child, except for your custom message:</p> <pre> error = some_routine(foo); if (error) { return svn_error_quick_wrap(error, "Authorization failed"); } </pre> <p>The same can (and should) be done by using the SVN_ERR_W() macro:</p> <pre> SVN_ERR_W(some_routine(foo), "Authorization failed"); </pre> </li> </ol> <p:</p> <ul> <li><p>Memory</p></li> <li><p>Files</p> <p>All files opened with apr_file_open are closed at pool cleanup. 
Subversion uses this function in its svn_io_file_* api, which means that files opened with svn_io_file_* or apr_file_open will be closed at pool cleanup.</p> <p>Some files (lock files for example) need to be removed when an operation is finished. APR has the APR_DELONCLOSE flag for this purpose. The following functions create files which are removed on pool cleanup:</p> <ul> <li><p>apr_file_open and svn_io_file_open (when passed the APR_DELONCLOSE flag)</p></li> <li><p>svn_io_open_unique_file (when passed TRUE in its delete_on_close)</p></li> </ul> <p>Locked files are unlocked if they were locked using svn_io_file_lock.</p> </li> </ul> </li> <li><p>The <code>SVN_ERR()</code> macro will create a wrapped error when <code>SVN_ERR__TRACING</code> is defined. This helps developers determine what caused the error, and can be enabled with the <code>--enable-maintainer-mode</code> option to <code>configure</code>. </p> </li> <li><p>Sometimes, you just want to return whatever a called function returns, usually at the end of your own function. Avoid the temptation to directly return the result:</p> <pre> /* Don't do this! */ return some_routine(foo);</pre> <p>Instead, use the svn_error_return meta-function to return the value. This ensures that stack tracing happens correctly when enabled.</p> <pre> return svn_error_return(some_routine(foo));</pre> </li> </ol> </div> <div class="h2" id="automated-tests" title="automated-tests"> <h2>Automated tests</h2> <p>For a description of how to use and add tests to Subversion's automated test framework, please read <a href="" >subversion/tests/README</a> and <a href="" >subversion/tests/cmdline/README</a>.</p> <p <a href="" >svntest</a> framework (start at the <a href="" >README</a>).</p> <p>Lieven Govaerts has set up a <a href="" >BuildBot</a> build/test farm at <a href="" ></a>, see his message</p> <pre> <a href=""></a> (Thread URL: <a href=""></a>) </pre> <p>for more details. 
(<a href="" >BuildBot</a> is a system for centrally managing multiple automated testing environments; it's especially useful for portability testing, including of uncommitted changes.)</p> </div> <div class="h2" id="write-test-cases-first" title="write-test-cases-first"> <h2>Writing test cases before code</h2> <pre> From: Karl Fogel <kfogel@collab.net> Subject: writing test cases To: dev@subversion.tigris.org Date: Mon, 5 Mar 2001 15:58:46 -0600 Many of us implementing the filesystem interface have now gotten into the habit of writing the test cases (see fs-test.c) *before* writing the actual code. It's really helping us out a lot -- for one thing, it forces one to define the task precisely in advance, and also it speedily reveals the bugs in one's first try (and second, and third...). I'd like to recommend this practice to everyone. If you're implementing an interface, or adding an entirely new feature, or even just fixing a bug, a test for it is a good idea. And if you're going to write the test anyway, you might as well write it first. :-) Yoshiki Hayashi's been sending test cases with all his patches lately, which is what inspired me to write this mail to encourage everyone to do the same. Having those test cases makes patches easier to examine, because they show the patch's purpose very clearly. It's like having a second log message, one whose accuracy is verified at run-time. That said, I don't think we want a rigid policy about this, at least not yet. If you encounter a bug somewhere in the code, but you only have time to write a patch with no test case, that's okay -- having the patch is still useful; someone else can write the test case. As Subversion gets more complex, though, the automated test suite gets more crucial, so let's all get in the habit of using it early. 
-K </pre> </div> <div class="h2" id="server-debugging" title="server-debugging"> <h2>Debugging the server</h2> <div class="h3" id="debugging-ra-dav" title="debugging-ra-dav"> <h3>Debugging the DAV server</h3> <p>:</p> <pre> % gdb httpd (gdb) run -X ^C (gdb) break some_func_in_mod_dav_svn (gdb) continue </pre> <p.</p> <p>You'll probably want to watch Apache's run-time logs</p> <pre> /usr/local/apache2/logs/error_log /usr/local/apache2/logs/access_log </pre> <p>to help determine what might be going wrong and where to set breakpoints.</p> </div> <div class="h3" id="debugging-ra-svn" title="debugging-ra-svn"> <h3>Debugging the ra_svn client and server, on Unix</h3> <p>Bugs in ra_svn usually manifest themselves with one of the following cryptic error messages:</p> <pre> svn: Malformed network data svn: Connection closed unexpectedly </pre> <p>(The first message can also mean the data stream was corrupted in tunnel mode by user dotfiles or hook scripts; see <a href="" >issue #1145</a>.) The first message generally means you have to debug the client; the second message generally means you have to debug the server.</p> <p.</p> <p>To debug the client, simply pull it up in gdb, set a breakpoint, and run the offending command:</p> <pre> % gdb svn (gdb) break marshal.c:NNN (gdb) run ARGS Breakpoint 1, vparse_tuple (list=___, pool=___, fmt=___, ap=___) at subversion/libsvn_ra_svn/marshal.c:NNN NNN "Malformed network data"); </pre> <p>There are several bits of useful information:</p> <ul> <li><p>A backtrace will tell you exactly what protocol exchange is failing.</p> </li> <li><p>.)</p> </li> <li><p>The format string determines what the marshaller was expecting to see.</p> </li> </ul> <p:</p> <pre> % gdb svnserve (gdb) break marshal.c:NNN (gdb) run -X </pre> <p>Then run the offending client command.
From there, it's just like debugging the client.</p> <p.</p> </div> </div> <div class="h2" id="net-trace" title="net-trace"> <h2>Tracing network traffic</h2> <p>Use <a href="">Wireshark</a> (formerly known as "Ethereal") to eavesdrop on the conversation.</p> <p>First, make sure that between captures within the same wireshark session, you hit <em>Clear</em>, otherwise filters from one capture (say, an HTTP capture) might interfere with others (say, an ra_svn capture). </p> <p>Assuming you're cleared, then:</p> <ol> <li><p>Pull down the <i>Capture</i> menu, and choose <i>Capture Filters</i>.</p></li> <li><p>If debugging the http:// (WebDAV) protocol, then in the window that pops up, choose "<code>HTTP TCP port (80)</code>" (which should result in the filter string "<code><i>tcp port http</i></code>").</p> <p>If debugging the svn:// (ra_svn) protocol, then choose <i>New</i>, give the new filter a name (say, "ra_svn"), and type "<code>tcp port 3690</code>" into the filter string box.</p> <p>When done, click OK.</p></li> <li><p>Again go to the <i>Capture</i> menu, this time choose <i>Interfaces</i>, and click <i>Options</i> next to the appropriate interface (probably you want interface "lo", for "loopback", assuming the server will run on the same machine as the client).</p></li> <li><p>Turn off promiscuous mode by unchecking the appropriate checkbox.</p></li> <li><p>Click the <i>Start</i> button in the lower right to start the capture.</p></li> <li><p>Run your Subversion client.</p></li> <li><p>Click the Stop icon (a red X over an ethernet interface card) when the operation is finished (or <i>Capture->Stop</i> should work). Now you have a capture. It looks like a huge list of lines.</p></li> <li><p>Click on the <i>Protocol</i> column to sort.</p></li> <li><p>Then, click on the first relevant line to select it; usually this is just the first line.</p></li> <li><p>Right click, and choose <i>Follow TCP Stream</i>. 
You'll be presented with the request/response pairs of the Subversion client's HTTP conversation.</p></li> </ol> <p>The above instructions are specific to the graphical version of Wireshark (version 0.99.6), and don't apply to the command-line version known as "tshark" (which corresponds to "tethereal", from back when Wireshark was called Ethereal).</p> <p>Alternatively, you may set the <tt>neon-debug-mask</tt> parameter in your <tt>servers</tt> configuration file to cause neon's debugging output to appear when you run the <tt>svn</tt> client. The numeric value of <tt>neon-debug-mask</tt> is a combination of the <tt>NE_DBG_...</tt> values in the header file <tt>ne_utils.h</tt>. For current versions of neon, setting <tt>neon-debug-mask</tt> to 130 (i.e. <tt>NE_DBG_HTTP+NE_DBG_HTTPBODY</tt>) will cause the HTTP data to be shown.</p> <p>You may well want to disable compression when doing a network trace—see the <tt>http-compression</tt> parameter in the <tt>servers</tt> configuration file.</p> <p>Another alternative is to set up a logging proxy between the Subversion client and server. A simple way to do this is to use the <tt>socat</tt> program. For example, to log communication with an svnserve instance, run the following command:</p> <p><tt>socat -v TCP4-LISTEN:9630,reuseaddr,fork TCP4:localhost:svn</tt></p> <p>Then run your svn commands using a URL base of <tt>svn://127.0.0.1:9630/</tt>; <tt>socat</tt> will forward the traffic from port 9630 to the normal svnserve port (3690), and will print all traffic in both directions to standard error, prefixing it with &lt; and &gt; signs to show the direction of the traffic.</p> </div> <div class="h2" id="tracing-memory-leaks" title="tracing-memory-leaks"> <h2>Tracking down memory leaks</h2> <p.</p> <p>If you have a favorite memory leak tracking tool, you can configure with --enable-pool-debug (which will make every pool allocation use its own malloc()), arrange to exit in the middle of the operation, and go to it.
If not, here's another way:</p> <ul> <li><p>Configure with --enable-pool-debug=verbose-alloc. Make sure to rebuild all of APR and Subversion so that every allocation gets file-and-line information.</p> </li> <li><p>Run the operation, piping stderr to a file. Hopefully you have lots of disk space.</p> </li> <li><p>In the file, you'll see lots of lines which look like:</p> <pre> POOL DEBUG: [5383/1024] PCALLOC ( 2763/ 2763/ 5419) \ 0x08102D48 "subversion/svn/main.c:612" \ <subversion/libsvn_subr/auth.c:122> (118/118/0) </pre> <p.</p> </li> </ul> </div> <div class="h2" id="log-messages" title="log-messages"> <h2>Writing log messages</h2> <p>Every commit needs a log message. </p> <p.</p> <p <a href=""></a>.) However, if the commit is just one simple change to one file, then you can dispense with the general description and simply go straight to the detailed description, in the standard filename-then-symbol format shown below.</p> <p.</p> <p:</p> <pre> *. </pre> <p>Later on, when someone is trying to figure out what happened to `twirling_baton_fast', they may not find it if they just search for "_fast". A better entry would be:</p> <pre> *. </pre> <p>The wildcard is okay in the description for `handle_parser_warning', but only because the two structures were mentioned by full name elsewhere in the log entry.</p> <p>You should also include property changes in your log messages. For example, if you were to modify the "svn:ignore" property on the trunk, you might put something like this in your log:</p> <pre> * trunk/ (svn:ignore): Ignore 'build'. </pre> <p>The above only applies to properties you maintain, not those maintained by subversion like "svn:mergeinfo".</p> <p.</p> <p>As an exception to the above, if you make exactly the same change in several files, list all the changed files in one entry. For example:</p> <pre> * subversion/libsvn_ra_pigeons/twirl.c, subversion/libsvn_ra_pigeons/roost.c: Include svn_private_config.h. 
</pre> <p>If all the changed files are deep inside the source tree, you can shorten the file name entries by noting the common prefix before the change entries:</p> <pre> . </pre> <p:</p> <pre> Fix issue #1729: Don't crash because of a missing file. * subversion/libsvn_ra_ansible/get_editor.c (frobnicate_file): Check that file exists before frobnicating. </pre> <p>Try to put related changes together. For example, if you create svn_ra_get_ansible2(), deprecating svn_ra_get_ansible(), then those two things should be near each other in the log message:</p> <pre> * subversion/include/svn_ra.h (svn_ra_get_ansible2): New prototype, obsoletes svn_ra_get_ansible. (svn_ra_get_ansible): Deprecate. </pre> .</p> <p>See <a href="#crediting">Crediting</a> for how to give credit to someone else if you are committing their patch, or committing a change they suggested.</p> <p:</p> <pre> (consume_count): If `count' is unreasonable, return 0 and don't advance input pointer. </pre> <p>And then, in `consume_count' in `cplus-dem.c':</p> <pre>; } </pre> <p>This is why a new function, for example, needs only a log entry saying "New Function" --- all the details should be in the source.</p> <p20946 for an example.</p> <p>In general, there is a tension between making entries easy to find by searching for identifiers, and wasting time or producing unreadable entries by being exhaustive. Use the above guidelines and your best judgment, and be considerate of your fellow developers. (Also, <a href=""> run "svn log"</a> to see how others have been writing their log entries.)</p> <p.</p> </div> <div class="h2" id="crediting" title="crediting"> <h2>Crediting</h2> <p>It is very important to record code contributions in a consistent and parseable way. This allows us to write scripts to figure out who has been actively contributing — and what they have contributed — so we can <a href="">spot potential new committers quickly</a>. 
The Subversion project uses human-readable but machine-parseable fields in log messages to accomplish this.</p> <p>When committing a patch written by someone else, use "Patch by: " at the beginning of a line to indicate the author:</p> <pre> Fix issue #1729: Don't crash because of a missing file. * subversion/libsvn_ra_ansible/get_editor.c (frobnicate_file): Check that file exists before frobnicating. Patch by: J. Random <jrandom@example.com> </pre> <a href="" >COMMITTERS</a> (the leftmost column in that file). Additionally, "me" is an acceptable shorthand for the person actually committing the change.< </pre> <p>If someone found the bug or pointed out the problem, but didn't write the patch, indicate their contribution with "Found by: ":</p> <pre> Fix issue #1729: Don't crash because of a missing file. * subversion/libsvn_ra_ansible/get_editor.c (frobnicate_file): Check that file exists before frobnicating. Found by: J. Random <jrandom@example.com> </pre> <p>If someone suggested something useful, but didn't write the patch, indicate their contribution with "Suggested by: ":</p> <pre> Extend the Contribulyzer syntax to distinguish finds from ideas. * www/hacking.html (crediting): Adjust accordingly. Suggested by: dlr </pre> <p>If someone reviewed the change, use "Review by: " (or "Reviewed by: " if you prefer):</p> <pre> Fix issue #1729: Don't crash because of a missing file. * subversion/libsvn_ra_ansible/get_editor.c (frobnicate_file): Check that file exists before frobnicating. Review by: Eagle Eyes <eeyes@example.com> </pre> <p>A field may have multiple lines, and a log message may contain any combination of fields:< </pre> <p:</p> <pre>.) </pre> <p>Currently, these fields</p> <pre> Patch by: Suggested by: Found by: Review by: </pre> "Reported by: ". These are okay, but try to use an official field, or a parenthetical aside, in preference to creating your own. 
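</p>

<p>To illustrate just how machine-parseable these fields are, here is a minimal sketch of a crediting-field extractor. (This is an illustrative Python toy, not the project's actual <tt>search-svnlog.pl</tt> or Contribulyzer code, and it ignores multi-line fields.)</p>

```python
import re

# The official crediting fields described above.
FIELDS = ("Patch by", "Suggested by", "Found by", "Review by", "Reviewed by")

def extract_credits(log_message):
    """Return (field, contributor) pairs found in a commit log message."""
    pattern = re.compile(r"^(%s): (.+)$" % "|".join(FIELDS))
    credits = []
    for line in log_message.splitlines():
        m = pattern.match(line)
        if m:
            credits.append((m.group(1), m.group(2).strip()))
    return credits

msg = """Fix issue #1729: Don't crash because of a missing file.

* subversion/libsvn_ra_ansible/get_editor.c
  (frobnicate_file): Check that file exists before frobnicating.

Patch by: J. Random <jrandom@example.com>
Review by: Eagle Eyes <eeyes@example.com>
"""

print(extract_credits(msg))
```

<p>A real harvester must also resolve usernames against the COMMITTERS file, handle fields that span multiple lines, and understand the "me" shorthand; the point here is only that a fixed "Field: " prefix at the start of a line is trivial to scan for.</p>

<p>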
Also, don't use "Reported by: " when the reporter is already recorded in an issue; instead, simply refer to the issue.</p> <p>Look over Subversion's existing log messages to see how to use these fields in practice. This command from the top of your trunk working copy will help:</p> <pre> svn log | contrib/client-side/search-svnlog.pl "(Patch|Review|Suggested) by: " </pre> <p><b>Note:</b>.</p> </div> <div class="h2" id="patches" title="patches"> <h2>Patch submission guidelines</h2> <p>Mail patches to dev@subversion.tigris.org, starting the subject line with <tt>[PATCH]</tt>. This helps our patch manager spot patches right away. For example:</p> <pre> Subject: [PATCH] fix for rev printing bug in svn status </pre> <p>If the patch addresses a particular issue, include the issue number as well: "<tt>[PATCH] issue #1729: ...</tt>". Developers who are interested in that particular issue will know to read the mail.</p> <p>A patch submission should contain one logical change; please don't mix N unrelated changes in one submission — send N separate emails instead.</p> <p>Generate the patch using <tt>svn diff</tt> from the top of a Subversion trunk working copy. If the file you're diffing is not under revision control, you can achieve the same effect by using <tt>diff -u</tt>.</p> <a href="#log-messages">Writing log messages</a>, and be enclosed in triple square brackets, like so:</p> <pre> [[[ Fix issue #1729: Don't crash because of a missing file. * subversion/libsvn_ra_ansible/get_editor.c (frobnicate_file): Check that file exists before frobnicating. 
]]] </pre> <p>(The brackets are not actually part of the log message; they're just a way to clearly mark off the log message from its surrounding context.)</p> <p>If possible, send the patch as an attachment with a mime-type of <code>text/x-diff</code>, <code>text/x-patch</code>, or <code>text/plain</code>.</p> .</p> <p <a href="#log-messages">Writing log messages</a>.</p> <p>It is normal for patches to undergo several rounds of feedback and change before being applied. Don't be discouraged if your patch is not accepted immediately — it doesn't mean you goofed; it just means that there are a <em>lot</em>.</p> <p.</p> <div class="h3" id="patch-manager" title="patch-manager"> <h3>The "Patch Manager" Role</h3> <p>Subversion usually has a Patch Manager, whose job is to watch the dev@ mailing list and make sure that no patches "slip through the cracks".</p> <p.</p> <p>The Patch Manager needs a basic technical understanding of Subversion, and the ability to skim a thread and get a rough understanding of whether consensus has been reached, and if so, of what kind. It does <em>not</em> require actual Subversion development experience or commit access. Expertise in using one's mail reading software is optional, but recommended :-).</p> <p>The current patch manager is Gavin 'Beau' Baumanis <gavin@thespidernet.com>.</p> </div> </div> <div class="h2" id="filing-issues" title="filing-issues"> <h2>Filing bugs / issues</h2> <p>This pretty much says it all:</p> <pre>, ... Thank you, -Karl </pre> </div> <div class="h2" id="issue-triage" title="issue-triage"> <h2>Issue triage</h2> <p>When an issue is filed, it goes into the special milestone "---", meaning <em>unmilestoned</em>.
This is a holding area that issues live in until someone gets a chance to look at them and decide what to do.</p> <p>The unmilestoned issues are listed first when you sort by milestone, and <em>issue triage</em> is the process of trawling through all the <a href="" >open issues</a> .</p> <p>Here's an overview of the process (in this example, 1.5 is the next release, so urgent issues would go there):</p> <pre> for i in issues_marked_as("---"): if issue_is_a_dup_of_some_other_issue(i): close_as_dup(i) elif issue_is_invalid(i): # A frequent reason for invalidity is that the reporter # did not follow the <a href="">"buddy system"</a>") </pre> </div> <div class="h2" id="commit-access" title="commit-access"> <h2>Commit access</h2> <p>There are two types of commit access: full and partial. Full means anywhere in the tree, partial means only in that committer's specific area(s) of expertise. The <a href="" >COMMITTERS</a> file lists all committers, both full and partial, and says the domains for each partial committer.</p> <div class="h3" id="full-commit-access" title="full-commit-access"> <h3>How full commit access is granted</h3> <p.</p> <p><i>The primary criterion for full commit access is good judgment.</i></p> <p>You do not have to be a technical wizard, or demonstrate deep knowledge of the entire codebase, to become a full committer. You just need to know what you don't know. If your patches adhere to the guidelines in this file, adhere to all the usual unquantifiable rules of coding (code should be readable, robust, maintainable, etc.), and respect the Hippocratic Principle of "first, do no harm", then you will probably get commit access pretty quickly. The size, complexity, and quantity of your patches do not matter as much as the degree of care you show in avoiding bugs and minimizing unnecessary impact on the rest of the code. 
:-) .)</p> <p>To assist developers in discovering new committers, we record patches and other contributions in a <a href="#crediting">special crediting format</a>, which is then parsed to produce a browser-friendly <a href="">contribution list</a>, updated nightly. If you're thinking of proposing someone for commit access and want to look over all their changes, that <a href="">contribution list</a> might be the most convenient place to do it.</p> </div> <div class="h3" id="partial-commit-access" title="partial-commit-access"> <h3>How partial commit access is granted</h3> .</p> <p:</p> <pre> Approved by: lundblad </pre> <a href="#lightweight-branches" >section on lightweight branches</a>, and this mail:</p> <pre> <a href="" >></a> </pre> </div> <div class="h3" id="contrib-area" title="contrib-area"> <h3>The contrib/ area</h3> <p>When a tool is accepted into the <i>contrib/</i>).</p> <p>Code under contrib/ must be open source, but need not have the same license or copyright holder as Subversion itself.</p> </div> <div class="h3" id="obvious-fix" title="obvious-fix"> <h3>The "obvious fix" rule</h3> <p>Any committer, whether full or partial, may commit fixes for obvious typos, grammar mistakes, and formatting problems wherever they may be — in the web pages, API documentation, code comments, commit messages, etc. We rely on the committer's judgement to determine what is "obvious"; if you're not sure, just ask.</p> <p>Whenever you invoke the "obvious fix" rule, please say so in the <a href="#log-messages">log message</a> of your commit. For example:</p> <pre> ------------------------------------------------------------------------ r32135 | stylesen | 2008-07-16 10:04:25 +0200 (Wed, 16 Jul 2008) | 8 lines Update "check-license.py" so that it can generate license text applicable to this year. Obvious fix. 
* tools/dev/check-license.py (NEW_LICENSE): s/2005/2008/ ------------------------------------------------------------------------ </pre> </div> </div> <div class="h2" id="branch-based-development" title="branch-based-development"> <h2>Branch-based development</h2> <p>We prefer to have development performed on the trunk. Changes made to trunk have the highest visibility and get the greatest amount of exercise that can be expected from unreleased code. That said, trunk is expected at all times to be stable. It should build. It should work. Those policies, combined with our preference to see large changes broken up and committed in the smallest logical chunks feasible, make for a set of rules that are almost impossible to keep when applied to particularly large changes (new features, sweeping code reorganizations, etc.). It is in those situations that you might consider using a custom branch dedicated to your development task. The following are some guidelines to make your branch-based development work go smoothly.</p> <div class="h3" id="branch-creation-and-management" title="branch-creation-and-management"> <h3>Branch creation and management</h3> <p>There's nothing particularly complex about branch-based development. You make a branch from the trunk (or from whatever branch best serves as both source and destination for your work), and you do your work on it. Subversion's merge tracking feature has greatly helped to reduce the sort of mental overhead required to work in this way, so making good use of that feature (by using Subversion 1.5 or newer clients, and by performing all merges to and from the roots of branches) is highly encouraged.</p> </div> <div class="h3" id="lightweight-branches" title="lightweight-branches"> <h3>Lightweight branches</h3> <p>If you're working on a feature or bugfix in stages involving multiple commits, and some of the intermediate stages aren't stable enough to go on trunk, then create a temporary branch in /branches.
There's no need to ask — just do it. It's fine to try out experimental ideas in a temporary branch, too. And all the preceding applies to partial as well as full committers.</p> <p>If you're just using the branch to "checkpoint" your code, and don't feel it's ready for review, please put some sort of notice at the top of the log message, such as:</p> <pre> *** checkpoint commit -- please don't waste your time reviewing it *** </pre> <p>And if a later commit on that branch <em>should</em> be reviewed, then please supply, in the log message, the appropriate 'svn diff' command, since the diff would likely involve two non-adjacent commits on that branch, and reviewers shouldn't have to spend time figuring out which ones they are.</p> <p>When you're done with the branch — when you've either merged it to trunk or given up on it — please remember to remove it.</p> <p>See also the <a href="#partial-commit-access" >section on partial commit access</a> for our policy on offering commit access to experimental branches.</p> </div> <div class="h3" id="branch-readme-files" title="branch-readme-files"> <h3>BRANCH-README files</h3> <p>For branches you expect to be longer-lived, we recommend the creation and regular updating of a file in the root of your branch named <tt>BRANCH-README</tt>. Such a file provides you with a great, central place to describe the following aspects of your branch:</p> <ul> <li><p>The basic purpose of your branch: what bug it exists to fix, or feature to implement; what issue number(s) it relates to; what list discussion threads surround it; what design docs exists to describe the situation.</p></li> <li><p>What style of branch management you are using: is this a reintegrate-able branch that will regularly be kept in sync with its source branch and ultimately merged back to that source branch using <tt>svn merge --reintegrate</tt>? 
Or is it not reintegrate-able, managed in total disregard to new changes made to the source branch, and expected to be merged back to that source without the <tt>--reintegrate</tt> option to <tt>svn merge</tt>?</p></li> <li><p>What tasks remain for you to accomplish on your branch? Are those tasks claimed by someone? Do they need more design input? How can others help you?</p></li> </ul> <p>Why all the fuss? Because this project idealizes communication and collaboration, understanding that the latter is more likely to happen when the former is a point of emphasis.</p> <p>Just remember when you merge your branch back to its source to delete the <tt>BRANCH-README</tt> file.</p> </div> </div> <div class="h2" id="configury" title="configury"> <h2>The configuration/build system under unix</h2> <p.</p> <p>Here is Greg's original mail describing the system, followed by some advice about hacking it:</p> <pre>, </pre> <p>And here is some advice for those changing or testing the configuration/build system:</p> <pre> </pre> <p>The build system is a vital tool for all developers working on trunk. Sometimes, changes made to the build system work perfectly fine for one developer, but inadvertently break the build system for another.</p> <p> To prevent loss of productivity, any committer (full or partial) can immediately revert any build system change that breaks their ability to effectively do development on their platform of choice, as a matter of ordinary routine, without fear of accusations of an over-reaction. The log message of the commit reverting the change should contain an explanatory note saying why the change is being reverted, containing sufficient detail to be suitable to start off a discussion of the problem on dev@, should someone choose to reply to the commit mail.</p> <p>However, care should be taken not to go into "default revert mode". If you can quickly fix the problem, then please do so. If not, then stop and think about it for a minute.
After you've thought about it, and you still have no solution, go ahead and revert the change, and bring the discussion to the list.</p> <p>Once the change has been reverted, it is up to the original committer of the reverted change to either recommit a fixed version of their original change, if, based on the reverting committer's rationale, they feel very certain that their new version definitely is fixed, or, to submit the revised version for testing to the reverting committer before committing it again.</p> </div> <div class="h2" id="releasing" title="releasing"> <h2>How to release a distribution tarball</h2> <p>See <a href="release-process.html">The Subversion Release Procedure</a> for a description of how to create a release tarball.</p> <p>For an enlightening case study of the bungled Subversion 1.5 release cycle, see <a href="">this paper</a>.</p> </div> <div class="h2" id="release-numbering" title="release-numbering"> <h2>Release numbering, compatibility, and deprecation</h2> <p>Subversion uses "MAJOR.MINOR.PATCH" release numbers, with the same guidelines as APR (see <a href="" ></a>), plus a few extensions, described later. The general idea is:</p> <ol> <li>.)</p> </li> <li><p.</p> </li> <li><p>When the major number changes, all bets are off. This is the only opportunity for a full reset of the APIs, and while we try not to gratuitously remove interfaces, we will use it to clean house a bit.</p> </li> </ol> <p>Subversion extends the APR guidelines to cover client/server compatibility questions:</p> <ol> <li><p:</p> <ol> <li><p>Fields can be added to any tuple; old clients will simply ignore them. 
(Right now, the marshalling implementation does not let you put number or boolean values in the optional part of a tuple, but changing that will not affect the protocol.)</p> <p>We can use this mechanism when information is added to an API call.</p> </li> <li><p>At connection establishment time, clients and servers exchange a list of capability keywords.</p> <p>We can use this mechanism for more complicated changes, like introducing pipelining or removing information from API calls.</p> </li> <li><p>New commands can be added; trying to use an unsupported command will result in an error which can be checked and dealt with.</p> </li> <li><p>The protocol version number can be bumped to allow graceful refusal of old clients or servers, or to allow a client or server to detect when it has to do things the old way.</p> <p>This mechanism is a last resort, to be used when capability keywords would be too hard to manage.</p> </li> </ol> </li> <li><p".</p> </li> </ol> <p>Subversion does not use the "even==stable, odd==unstable" convention; any unqualified triplet indicates a stable release:</p> <pre> </pre> <p>The order of releases is semi-nonlinear — a 1.0.3 <em>might</em> come out after a 1.1.0. But it's only "semi"-nonlinear because eventually we declare a patch line defunct and tell people to upgrade to the next minor release, so over the long run the numbering is basically linear.</p> <p>Non-stable releases are qualified with "alphaN" or "betaN" suffixes, and release candidates with "-rcN". 
For example, the prereleases leading to 1.3.7 might look like this:</p> <pre> </pre> <p>The output of 'svn --version' corresponds in the obvious way:</p> <pre> version 1.3.7 (Alpha 1) version 1.3.7 (Alpha 2) version 1.3.7 (Beta 1) version 1.3.7 (Release Candidate 1) version 1.3.7 (Release Candidate 2) version 1.3.7 (Release Candidate 3) version 1.3.7 </pre> <p>(See <a href="#alphas-betas">this section</a> for more information about when and how we do alpha and beta releases.)</p> <p.</p> <p>For working copy builds, there is no tarball name to worry about, but 'svn --version' still produces special output:</p> <pre> version 1.3.8 (dev build) </pre> <p>The version number is the next version that project is working towards. The important thing is to say "dev build". This indicates that the build came from a working copy, which is useful in bug reports.</p> <p>We have no mechanism for releasing dated snapshots. If we want code to get wider distribution than just those who build from working copies, we put out a prerelease.</p> <div class="h3" id="name-reuse" title="name-reuse"> <h3>Reuse of release names</h3> <p>If a release or candidate release needs to be quickly re-issued due to some non-code problem (say, a packaging glitch), it's okay to reuse the same name, as long as the tarball hasn't been <a href="#tarball-signing">blessed by signing</a> yet. But if it has been uploaded to the standard distribution area with signatures, or if the re-issue was due to a change in code a user might run, then the old name must be tossed and the next name used.</p> </div> <div class="h3" id="deprecation" title="deprecation"> <h3>Deprecation</h3> <p:</p> <pre> /** * @deprecated Provided for backward compatibility with the 1.0.0 API. * * Similar to svn_repos_dump_fs2(), but with the @a use_deltas * parameter always set to @c FALSE. 
*/ SVN_DEPRECATED); </pre> <p.</p> <p.</p> </div> </div> <div class="h2" id="release-stabilization" title="release-stabilization"> <h2>Stabilizing and maintaining releases</h2> <p>Minor and major number releases go through a stabilization period before release, and remain in maintenance (bugfix) mode after release. To start the release process, we create an "A.B.x" branch based on the latest trunk, for example:</p> <pre> $ svn cp \ </pre> <p.</p> <p>At the beginning of the final week of the stabilization period, a new release candidate tarball should be made if there are any showstopper.</p> <p>Under some circumstances, the stabilization period will be extended:</p> <ul> <li><p.</p> </li> <li><p>If a critical bugfix is made during the final week of the stabilization period, the final week is restarted. The final A.B.0 release is always identical to the release candidate made one week before (with the exceptions discussed below).</p> </li> </ul> <p>If there are disagreements over whether a change is potentially destabilizing or over whether a bug is critical, they may be settled with a committer vote.</p> <p>After the A.B.0 release is out, patch releases (A.B.1, A.B.2, etc.) follow when bugfixes warrant them. 
Patch releases do not require a four week soak, because only conservative changes go into the line.</p> <p>Certain kinds of commits can go into A.B.0 without restarting the soak period, or into a later release without affecting the testing schedule or release date:</p> <ul> <li><p>Without voting:</p> <ul> <li><p>Changes to the STATUS file.</p></li> <li><p>Documentation fixes.</p></li> <li><p>Changes that are a normal part of release bookkeeping, for example, the steps listed in notes/releases.txt.</p></li> <li><p>Changes to dist.sh by, or approved by, the release manager.</p></li> <li><p>Changes to message translations in .po files or additions of new .po files.</p></li> </ul> </li> <li><p>With voting:</p> <ul> <li><p>Anything affecting only tools/, packages/, or bindings/.</p></li> <li><p>Changes to printed output, such as error and usage messages, as long as format string "%" codes and their args are not touched.</p></li> </ul> </li> </ul> <p.</p> <p>Core code changes, of course, require voting, and restart the soak or test period, since otherwise the change could be undertested.</p> <p>The voting system works like this:</p> .</p> <p>Here's an example, probably as complex as an entry would ever get:</p> <pre> * r98765 (issue #56789) Make commit editor take a closure object for future mindreading. Justification: API stability, as prep for future enhancement. Notes: There was consensus on the desirability of this feature in the near future; see thread at http://... (Message-Id: blahblah). Concerns: Vetoed by jerenkrantz due to privacy concerns with the implementation; see thread at http://... (Message-Id: blahblah) Votes: +1: ghudson, bliss +0: cmpilato -0: gstein -1: jerenkrantz </pre> <p>A change needs three +1 votes from full committers (or partial committers for the involved areas), and no vetoes, to go into A.B.x. 
If partial committers would like to vote for a different area, which does not fall within their privilege, it must be made clear in the STATUS file that their vote is 'non-binding' as follows:</p> <pre> * r31833 svndumpfilter: Don't match prefixes to partial path components. Fixes #desc4 of issue #1853. Votes: +1: danielsh, hwright +1 (non-binding): stylesen </pre> <p.</p> <p>If you add revisions to a group, note that the previous voters have not voted for those revisions, as follows:</p> <pre> * r30643, r30653, r30785 Update bash completion script. Votes: +1: arfrever (r30785 only), stylesen </pre> <p>If in case votes have been communicated via IRC or other means, note that in the log message. <a href="#obvious-fix">Obvious fixes</a> do not require '(rX only)' to be mentioned.</p> <p.</p> <p.</p> <p>When votes are listed in the STATUS file, they are placed into a section for a given release. Some developers may want to vote the change into a <b>later</b> release, if (for example) they believe it requires further review or testing. Simply add a comment along with your vote:</p> <pre> * r12345 Fiddle with the hoobey-gidget to clean the garblesnarf. Justification: All hell breaks loose if the jobbywonk gets out. Votes: +1: brane +1: gstein (for 1.7.2) </pre> <p>Since votes are tied to specific releases (specified by the section they fall under), be very careful when moving change candidates among the sections. Existing votes were for the <b>original</b> version, not where the candidate has been moved it. 
Annotate existing votes as being cast only for the original version.</p> ."</p> <p.</p> <p>NOTE: Changes to STATUS regarding the temporary branch, including voting, are always kept on the main release branch.</p> <div class="h3" id="alphas-betas" title="alphas-betas"> <h2>Alpha and beta releases</h2> <p>When we want new features to get wide testing before we enter the formal stabilization period described above, we'll sometimes release alpha and beta tarballs (as <a href="#release-numbering" >shown earlier</a>)..</p> <p.</p> <p>When the alpha or beta is publicly announced, distribution packagers should be firmly warned off packaging it. See <a href="" >this mail from Hyrum K. Wright</a> for a good model.</p> </div> </div> <div class="h2" id="tarball-signing"> <h2>Signing source distribution packages (a.k.a tarballs)</h2> <p <tt>.tar.bz2</tt>, <tt>.tar.gz</tt> and <tt>.zip</tt> files, the release (candidate) can go public.</p> <p>Signing a tarball means that you assert certain things about it. When sending your signature (see below), indicate in the mail what steps you've taken to verify that the tarball is correct. Running <tt>make check</tt> over all RA layers and FS backends is a good idea, as well as building and testing the bindings.</p> <p>After having extracted and tested the tarball, you should sign it using <a href="">gpg</a>. To do so, use a command like:</p> <pre> gpg -ba subversion-1.3.0-rc4.tar.bz2 </pre> <p>This will result in a file with the same name as the signed file, but with a <tt>.asc</tt> extension in the appropriate format for inclusion in the release announcement. Include this file in a mail, typically in reply to the announcement of the unofficial tarball.</p> <p>If you've downloaded and tested a <tt>.tar.bz2</tt> file, it is possible to sign a <tt>.tar.gz</tt> file with the same contents without having to download and test it separately. 
The trick is to extract the <tt>.bz2</tt> file, and pack it using <tt>gzip</tt> like this:</p> <pre> bzip2 -cd subversion-1.3.0-rc4.tar.bz2 \ | gzip -9n > subversion-1.3.0-rc4.tar.gz </pre> <p. </p> </div> <div class="h2" id="custom-releases" title="custom-releases"> <h2>Custom releases</h2> <p>It is preferred to use the patch process and have your changes accepted and applied to trunk to be released on the normal Subversion release schedule. However, if you feel that you need to make changes that would not be widely accepted by the Subversion developer community, or need to provide early access to unreleased features, you should follow the guidelines below.</p> <p>First, make sure you follow the <a href="">Subversion trademark policy</a>. You will need to differentiate your release from the standard Subversion releases to reduce any potential confusion caused by your custom release.</p> <p>Second, consider creating a branch in the public Subversion repository to track your changes and to potentially allow your custom changes to be merged into mainline Subversion.</p> <p>Third, if your custom release is likely to generate bug reports that would not be relevant to mainline Subversion, please stay in touch with the users of the custom release so you can intercept and filter those reports. But of course, the best option would be not to be in that situation in the first place — the more your custom release diverges from mainline Subversion, the more confusion it invites. If you must make custom releases, please try to keep them as temporary and as non-divergent as possible.</p> </div> <div class="h2" id="l10n" title="l10n"> <h2>Localization (l10n)</h2> <p>Translation has been divided into two domains. First, there is the translation of server messages sent to connecting clients. This issue has been <a href="">punted for now</a>. Second there is the translation of the client and its libraries.</p> <p>The gettext package provides services for translating messages. 
It uses the xgettext tool to extract strings from the sources for translation. This works by extracting the arguments of the _(), N_() and Q_() macros. _() is used in context where function calls are allowed (typically anything except static initializers). N_() is used whenever _() isn't. Strings marked with N_() need to be passed to gettext translation routines whenever referenced in the code. For an example, look at how the header and footer are handled in subversion/svn/help-cmd.c. Q_() is used for messages which have singular and plural version.</p> <p>Beside _(), N_() and Q_() macros also U_() is used to mark strings which will not be translated because it's in general not useful to translate internal error messages. This should affect only obscure error messages most users should never ever see (caused by bugs in Subversion or very special repository corruptions). The reason for using U_() is to explicitly note that a gettext call was not just forgotten.</p> .</p> <p>All required setup for localization is controlled by the ENABLE_NLS conditional in svn_private_config.h (for *nix) and svn_private_config.hw (for Windows). Be sure to put</p> <pre> #include "svn_private_config.h" </pre> <p>as the last include in any file which requires localization.</p> <p>Also note that return values of _(), Q_() and *gettext() calls are UTF-8 encoded; this means that they should be translated to the current locale being written as any form of program output.</p> <p>The GNU gettext manual (<a href="" ></a>) provides additional information on writing translatable programs in its section "Preparing Program Sources". Its hints mainly apply to string composition.</p> <p>Currently available translations can be found in <a href="" >the po section of the repository</a>. Please contact dev@subversion.tigris.org when you want to start a translation not available yet. 
Translation discussion takes place both on that list and on dedicated native language mailing lists (<a href="" ><em>l10n-??@subversion.tigris.org</em></a>).</p> <p>See the <a href="translating.html">Guide to Translating Subversion</a> for more information about translating.</p> </div> </div> </body> </html>
Ext Direct Java based implementation
We have finished the first public release of DirectJNgine, an Ext Direct implementation for Java.
You can find the project at.
As far as we know, DirectJNgine is fairly feature complete. For example, it runs all examples provided by ExtJs in examples/direct, as well as our own additional examples and a host of automated tests.
But, of course, we have to wait for feedback from the community to make sure that it fits the community needs, and that it implements the mandatory Ext Direct feature set.
Main features
- Supports all types of request: JSON requests, batched JSON requests, form posts, form posts with file uploads and even polling provider based requests.
- Fully tested: more than 75 fully automated tests as of version 0.7.1 (beta).
- Provides a detailed User's Guide (more than 20 pages).
- Runs all of ExtJs examples in examples/direct.
- Provides additional examples.
- A WAR is enclosed showing all examples and running all automated tests.
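For context on what those request types carry: Ext Direct is a plain JSON-over-HTTP protocol, and a batched request is simply a JSON array of individual call objects. The sketch below is illustrative only — the field values are made up, and the authoritative shape is defined by the Ext Direct specification, not by DirectJNgine or this snippet:

```java
public class ExtDirectWireSketch {
    // Rough shape of one Ext Direct JSON-RPC exchange (illustrative values):
    //   request:  {"action":"Action1","method":"sayHello","data":["John"],"type":"rpc","tid":1}
    //   response: {"type":"rpc","tid":1,"action":"Action1","method":"sayHello",
    //              "result":"Hello, John. Nice to meet you!"}
    // A batched request is just a JSON array of such request objects.

    // Helper that builds a single-argument call in that shape.
    public static String request(String action, String method, int tid, String arg) {
        return String.format(
            "{\"action\":\"%s\",\"method\":\"%s\",\"data\":[\"%s\"],\"type\":\"rpc\",\"tid\":%d}",
            action, method, arg, tid);
    }

    public static void main(String[] args) {
        System.out.println(request("Action1", "sayHello", 1, "John"));
    }
}
```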
For a complete feature list, go to.
How it works - in 30 seconds
If you are curious about how it works, here is a short excerpt from the User's Guide:

Now, how is everyday life with DirectJNgine? Let's assume that you want your Javascript code to call a sayHello Java method, which will receive a person name and return a greeting message. That is as easy as writing the following Java code:

Code:
public class Action1 {
    @DirectMethod
    public String sayHello( String name ) {
        return "Hello, " + name + ". Nice to meet you!";
    }
}

Basically, you write your Java code without much concern for whether it will be called by some Javascript code living in a remote server or not. The secret? Using the @DirectMethod annotation, DirectJNgine takes care of the rest. Of course, there are things to configure, best practices to follow, etc. But adding a new method will just feel like this.
For further detail, read the User's Guide, please.
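For readers wondering what the annotation buys you: the general mechanism behind this kind of annotation-driven remoting can be sketched with plain Java reflection. Everything below except the sayHello example is invented for illustration — DirectJNgine's real annotation, router, and JSON marshalling are more involved, and this is not its actual code:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class DirectDispatchSketch {

    // Stand-in for DirectJNgine's @DirectMethod (assumed shape, not the real one).
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface DirectMethod {}

    // The action class from the announcement.
    public static class Action1 {
        @DirectMethod
        public String sayHello(String name) {
            return "Hello, " + name + ". Nice to meet you!";
        }
    }

    // A router does roughly this: find the annotated method by name and invoke it
    // with the arguments decoded from the incoming JSON request.
    public static Object dispatch(Object action, String method, Object... args)
            throws Exception {
        for (Method m : action.getClass().getMethods()) {
            if (m.isAnnotationPresent(DirectMethod.class) && m.getName().equals(method)) {
                return m.invoke(action, args);
            }
        }
        throw new IllegalArgumentException("No @DirectMethod named " + method);
    }

    public static void main(String[] args) throws Exception {
        // prints: Hello, John. Nice to meet you!
        System.out.println(dispatch(new Action1(), "sayHello", "John"));
    }
}
```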
What will be there in version 1.0?
For the 1.0 release we will provide all that's there in the 0.7.1, and then we plan to:
- Add support for custom data conversion from Javascript to JSON to Java and back.
- Enhance the documentation a bit.
- Increment the number of automated tests well beyond one hundred.
- Fix small issues.
We are especially interested in finding all missing pieces there might be with respect to the mandatory feature set.
And, of course, we are open to feedback and criticism, and will consider other features for which there is a strong demand from the community.
Beyond 1.0...
We would love to add better exception reporting...
We might have to optimize logging: we do some heavy duty logging when logging level is set to DEBUG or TRACE, a must when you are dealing with remote communications and data conversion from Javascript to JSON to Java and back.
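The standard way to keep that heavy-duty DEBUG/TRACE logging from costing anything in production is to guard the expensive message construction behind a level check. A minimal java.util.logging sketch of the idiom (the names are invented, and DirectJNgine itself may use a different logging API):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LogGuardSketch {
    private static final Logger LOG = Logger.getLogger(LogGuardSketch.class.getName());

    static int renderCalls = 0; // just to observe whether the expensive path ran

    // Stands in for pricey trace formatting of a request/response payload.
    static String renderPayload() {
        renderCalls++;
        return "...big dump of the converted Javascript/JSON/Java data...";
    }

    public static void logPayload() {
        // Only pay for the string building when DEBUG/TRACE-level output is wanted.
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine("Converted data: " + renderPayload());
        }
    }

    public static void main(String[] args) {
        LOG.setLevel(Level.INFO);   // production-like level
        logPayload();
        System.out.println("expensive renders: " + renderCalls); // prints 0
    }
}
```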
We would like a "safe client call" mode: just try calling MyAction.myMethod( 1, undefined, 2), and you will probably be in for a surprise.
Or try returning Long.MAX_VALUE from a Java method, and check what number Javascript receives. Avoiding dangerous calls or at least emitting warnings would be nice...
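The Long.MAX_VALUE surprise is easy to pin down: JSON numbers become IEEE-754 doubles on the Javascript side, and a double only has 53 bits of mantissa, so longs above 2^53 silently land on the nearest representable double. A self-contained demonstration, no DirectJNgine involved:

```java
import java.math.BigDecimal;

public class LongPrecisionDemo {
    // The largest range Javascript can represent exactly ends at 2^53.
    static final long JS_MAX_EXACT = 1L << 53;

    public static void main(String[] args) {
        long sent = Long.MAX_VALUE;       // 9223372036854775807
        double received = (double) sent;  // what a JS client effectively gets

        // Two different longs collapse onto the same double:
        System.out.println((double) Long.MAX_VALUE == (double) (Long.MAX_VALUE - 1)); // true

        // And the value that arrives is not the value that was sent:
        System.out.println(new BigDecimal(received).toBigInteger()); // 9223372036854775808

        // Below 2^53 the round trip is exact:
        System.out.println((long) (double) (JS_MAX_EXACT - 1) == JS_MAX_EXACT - 1);   // true
    }
}
```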
It would be nice to handle individual requests in batched requests in separate threads, now that we have those nice multicore CPUs...
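Handling a batch's individual calls on separate threads is, in outline, just a fan-out/fan-in over an ExecutorService. This is a generic sketch of the idea with invented names, not DirectJNgine code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BatchDispatchSketch {

    // Run each call of a batched request on its own worker thread,
    // then collect the per-call results in the original batch order.
    public static List<String> dispatchBatch(List<Callable<String>> calls)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(
            Math.max(1, Math.min(calls.size(), Runtime.getRuntime().availableProcessors())));
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (Callable<String> call : calls) {
                futures.add(pool.submit(call));   // fan out
            }
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {
                results.add(f.get());             // fan in, keeping request order
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        List<Callable<String>> batch = new ArrayList<>();
        batch.add(() -> "result-1");
        batch.add(() -> "result-2");
        System.out.println(dispatchBatch(batch)); // [result-1, result-2]
    }
}
```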
And there is the polymorphism issue, when passing objects from Java to JS and then back to Java...
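The polymorphism issue comes down to JSON carrying no class information: a subclass serializes to a plain map of its fields, and on the way back the unmarshaller only knows the declared (base) parameter type. A library-free sketch of why the concrete type is lost — the class names here are invented for illustration:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PolymorphismSketch {

    public static class Animal { public String name; }
    public static class Dog extends Animal { public boolean barks = true; }

    // "Serialize": what ends up in the JSON is just field data, no class name.
    public static Map<String, Object> toWireFormat(Animal a) {
        Map<String, Object> fields = new LinkedHashMap<>();
        fields.put("name", a.name);
        if (a instanceof Dog) fields.put("barks", ((Dog) a).barks);
        return fields;
    }

    // "Deserialize": the server method's parameter type says Animal, so that is
    // all the unmarshaller can reasonably construct — the Dog-ness is gone.
    public static Animal fromWireFormat(Map<String, Object> fields) {
        Animal a = new Animal();
        a.name = (String) fields.get("name");
        return a;
    }

    public static void main(String[] args) {
        Dog dog = new Dog();
        dog.name = "Rex";
        Animal roundTripped = fromWireFormat(toWireFormat(dog));
        System.out.println(roundTripped.getClass().getSimpleName()); // Animal, not Dog
    }
}
```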
That said, this is just food for thought, and we haven't committed yet to any post-1.0 features.
Feedback and criticism
We are releasing this library in the hope that it is useful to the programming community.
We understand that this is the first public beta release of the library, which has been tested in a very restricted environment. Unfortunately, that means we cannot guarantee that it is fully feature complete or bug free.
We are committed to making DirectJNgine a useful library. Therefore, it is only natural that we will be happy to receive feedback and criticism.
Last, but not least, we hope you enjoy using DirectJNgine as much as we did developing it.
Regards,
Ped
Thanks for the encouraging words.
I will fix the IE issues for the next version, due out tomorrow.
For the final version I plan to add a step to the release procedure for running the automated tests against all major browsers.
A little error in the doc:
Page 13 :
Code:
In this case, we have to pass the form’s el element as the first and only parameter to a server method annotated with @DirectFormMethod, which is implemented as follow
Fixed! Thanks.
I tried to modify FormPostDemo.js and the Java a little, to do something like this:
Code:
var remotingProvider = Ext.Direct.addProvider( Ext.app.REMOTING_API);
Djn.RemoteCallSupport.addCallValidation(remotingProvider);

var form = new Ext.form.FormPanel({
    // url: Ext.app.PROVIDER_BASE_URL,
    api: {
        load: FormPostDemo.load,      // Something to load, works perfectly
        submit: FormPostDemo.submit   // here is the problem
    },
    // ..... Config / Items / etc......
    buttons: [{
        text: 'Submit',
        handler: function(){
            form.getForm().submit();  // here is the problem
        }
    }]
});
Code:
@DirectPostMethod
public Result submit( Map<String, String> formParameters,
                      Map<String, FileItem> fileFields ) throws IOException
{
    for( String fieldName : formParameters.keySet()) {
        System.out.println( fieldName + "</b>='" + formParameters.get(fieldName) + "'<br>");
    }
    return null;
}
If I delete the verification on remotingProvider, it works.
Is it a bug?
It would be nice to do things like that to have generic formpanels and just configure them.
Regards,
GeorgioA
Basically, I am trying to send a single frequency tone for a specified
duration (currently 5ms) whenever a user wants to send one. My current
solution uses messages by basically sending a single byte of ones no
matter what a person enters on the cmd line. I convert that byte to
symbols (representing one due to my above hardcode) and add to it the
frequency of transmission. I then use a fixed number of repeaters to
extend this symbol and use that to ourput from a vco that particular
frequency. I have the code at the end of my email.
What I am observing is that there seems to be some fundamental limit
to how short that tranmission can be. In my code below, I am trying to
send an 18Khz tone for 5ms. However, no matter what I do:
a) the transmitted signal lasts for ~10-11ms (from scope and wav file)
b) not every key press generates the tone. I generally miss one or two
keypress. This happens even if the keypress are well spaced in time.
So two questions to the community. What could be causing such a
behavior? and is there an alternative way to get a similar result than
what I am currently doing?
Thank you.
Affan
Code
TONE_FREQ=18000 #hz
AUDIO_RATE=48000 #samples/sec
TONE_DUR=5/1000#in sec
BITS_IN_BYTES=8
REPEAT_TIME= TONE_DUR*AUDIO_RATE/BITS_IN_BYTES
Main Graph
class tone_graph(gr.top_block):
def __init__(self): gr.top_block.__init__(self) audio_rate = AUDIO_RATE self.src = gr.message_source(gr.sizeof_char, msgq_limit) src = self.src b_to_syms = gr.bytes_to_syms() #we know that there should always be just one byte of code add = gr.add_const_ff(TONE_FREQ) repeater = gr.repeat(gr.sizeof_float,REPEAT_TIME) fsk_f = gr.vco_f(audio_rate, 2*pi,1.0) speaker = audio.sink(audio_rate); dst = gr.wavfile_sink("tone_tx.wav", 1, audio_rate, 16) self.connect(src,b_to_syms,add,repeater,fsk_f,speaker) self.connect(fsk_f,dst)
def main():
fg = tone_graph() fg.start() # start flow graph try: while True: message = raw_input(" transmit a 5ms tone (Ctrl-D to
exit): ")
#make a one byte packet to indicate the tone transmission. pkt = struct.pack('B', 0xff) msg = gr.message_from_string(pkt) fg.src.msgq().insert_tail(msg) except EOFError: print "\nExiting." fg.src.msgq().insert_tail(gr.message(1)) fg.wait()
Main python entry
if name == ‘main’:
try:
main()
except KeyboardInterrupt:
pass | https://www.ruby-forum.com/t/variable-transmission-time-of-a-single-frequency/185719 | CC-MAIN-2022-33 | refinedweb | 383 | 60.31 |
In this tip, you learn how to create and use templates in the MVC framework that you can use to display database data. I show you how to create a new MVC Helper method named the RenderTemplate() method.
In this tip, you learn how to create and use templates in the MVC framework that you can use to display database data. I show you how to create a new MVC Helper method named the RenderTemplate() method.
While I was back home in California during the 4th of July weekend, I was talking to my smarter, older brother about the differences among building web applications with ASP.NET Web Forms, ASP.NET MVC, and Ruby on Rails. I was bemoaning the fact that I really missed using controls when building an ASP.NET MVC application. In particular, I was complaining that I missed the clean separation between HTML and UI logic provided by the templates that you get with an ASP.NET Web Forms control. A Repeater control is really not the same as a for…next loop.
My brother told me something surprising. He said “Templates, Ruby on Rails has templates, they are called partials.” At first, I didn’t understand. I thought partials in the Ruby on Rails world were more or less the same as user controls in the ASP.NET MVC world. However, my brother explained that when you render a partial in a Ruby on Rails application, you can pass a collection of items. Each item in the collection is rendered with the partial.
Cool. You can use the same approach to create templates in an ASP.NET MVC application. Create a new helper method that accepts an IEnumerable and a path to a user control. The helper method can use the user control as a template for each item from the IEnumerable. Listing 1 contains the code for a new helper method named the RenderTemplate() method.
Listing 1 – TemplateExtensions.vb (VB.NET)
Imports System
Imports System.Text
Imports System.Collections
Imports System.Web.Mvc
Public Module TemplateExtensions
<System.Runtime.CompilerServices.Extension()> _
Public Function RenderTemplate(ByVal helper As HtmlHelper, ByVal items As IEnumerable, ByVal virtualPath As String) As String
Dim sb = New StringBuilder()
For Each item As Object In items
sb.Append(helper.RenderUserControl(virtualPath, item))
Next item
Return sb.ToString()
End Function
End Module
Listing 1 – TemplateExtensions.cs (C#)
using System;
using System.Text;
using System.Collections;
using System.Web.Mvc;
namespace Helpers
{
public static class TemplateExtensions
{
public static string RenderTemplate(this HtmlHelper helper, IEnumerable items, string virtualPath)
{
var sb = new StringBuilder();
foreach (object item in items)
{
sb.Append( helper.RenderUserControl(virtualPath, item));
}
return sb.ToString();
}
}
}
Imagine, for example, that you want to display a list of movies. You can use the HomeController in Listing 2 to return a collection of Movie entities. The Index() action executes a LINQ to SQL query and passes the query results to the Index view.
Listing 2 – HomeController.vb (VB.NET)
Public Class HomeController
Inherits System.Web.Mvc.Controller
Private _dataContext As New MovieDataContext()
Public Function Index() As ActionResult
Dim movies = _dataContext.Movies
Return View(movies)
End Class
Listing 2 – HomeController.cs (C#)
using System.Collections.Generic;
using System.Linq;
using System.Web;
using Tip14.Models;
namespace Tip14.Controllers
public class HomeController : Controller
private MovieDataContext _dataContext = new MovieDataContext();
public ActionResult Index()
var movies = _dataContext.Movies;
return View(movies);
The view in Listing 3 simply calls the RenderTemplate() method passing the method the ViewData.Model and the path to an MVC user control that contains the template for each movie.
Listing 3 -- Index.aspx (VB.NET)
<%@ Page Language="VB" AutoEventWireup="false" CodeBehind="Index.aspx.vb" Inherits="Tip14.Index" %>
<%@ Import Namespace="Tip14" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<html xmlns="" >
<head runat="server">
<title></title>
</head>
<body>
<div>
<%= Html.RenderTemplate(ViewData.Model, "~/Views/Home/MovieTemplate.ascx") %>
</div>
</body>
</html>
Listing 3 -- Index.aspx (C#)
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Index.aspx.cs" Inherits="Tip14.Views.Home.Index" %>
<%@ Import Namespace="Helpers" %>
The MovieTemplate.ascx MVC User Control is strongly typed. The code-behind file for this user control is contained in Listing 4. Notice that the User Control is strongly typed to represent a Movie entity.
Listing 4 – MovieTemplate.ascx.vb (VB.NET)
Public Partial Class MovieTemplate
Inherits System.Web.Mvc.ViewUserControl(Of Movie)
Listing 4 – MovieTemplate.ascx.cs (C#)
namespace Tip14.Views.Home
public partial class MovieTemplate : System.Web.Mvc.ViewUserControl<Movie>
Finally, the view part of the MVC user control is contained in Listing 5. Notice that you can use expressions like ViewData.Model.Title and ViewData.Model.Director to display the movie title and movie director. These expressions work because you are using a strongly typed MVC User Control that represents a movie entity.
Listing 5 – MovieTemplate.ascx (VB.NET)
<%@ Control Language="VB" AutoEventWireup="false" CodeBehind="MovieTemplate.ascx.vb" Inherits="Tip14.MovieTemplate" %>
<b><%= ViewData.Model.Title %></b>
<br />
Director: <%= ViewData.Model.Director %>
<hr />
Listing 5 – MovieTemplate.ascx (C#)
<%@ Control Language="C#" AutoEventWireup="true" CodeBehind="MovieTemplate.ascx.cs" Inherits="Tip14.Views.Home.MovieTemplate" %>
When you request the Index view, you get the page shown in Figure 1. Notice that the MVC User Control has been rendered for each movie.
Figure 1 – The Movies records rendered with a template
In this tip, I explained how you can create and use templates in an ASP.NET MVC application. I’ve demonstrated how you can create a template by creating an MVC User Control and use the template to control how a set of database records is rendered. Now, there is no reason to ever use a Repeater control in an ASP.NET MVC application.
You can download the TemplateExtensions (which includes the RenderTemplate() method) by clicking the following link:
Download the Code
This already exists as Html.RenderComponent and Html.RenderUserControl I believe.
Pingback from ASP.NET MVC Tip #14 ??? Create a Template Helper Method - Stephen Walther on ASP.NET MVC
@Malkir -- The RenderComponent and RenderUserControl methods don't take a collection of items. The RenderTemplate() method described in this tip accepts a collection of items (such as database records) and renders each item using the template.
@Stephen, I think malkir was referring to the code that already exists in the "RenderUserControl" and accompanying methods (extensions on HtmlHelper). It means you're repeating a lot of stuff that's already there (the methods are even named the same "InstantiateControl", "DoRendering", "RenderPage" etc.). Why not just make a single extension method:
public static string RenderTemplate(this HtmlHelper helper, IEnumerable items, string virtualPath)
{ var sb = new StringBuilder();
foreach (object item in items)
{ sb.Append( helper.RenderUserControl(virtualPath, item);
}
return sb.ToString();
and forget about the rest (no need for that).
Cheers,
Mio
This is a nice one, althou I had a deja vu feeling.
I digged up Dynamic Data sources to see how they are instantiating Templates since DD is based on templates.
The instantiation is similar to yours, but maybe more ASP.Net "friendly":
(IFieldTemplate) BuildManager.CreateInstanceFromVirtualPath(virtualPath, typeof(IFieldTemplate));
They use BuildManager straight instead of Activator.CreateInstance.
Upon digging the sources I thought it further (beware I'm new to MVC bits, but not the concept :-)). Why can't we have standard "field" templates in a designated directory within an MVC app, which ones can be used by various views as needed and can be created on demand by a ITemplateFactory class.
Thanks,
Attila
@Mio - Really good point. I removed everything but the loop and a call to RenderUserControl(). Thanks!
Stephen, it would be nice if you could add the call that needs to be done from the ViewPage...
@Simone -- I updated the tip so that it includes the ViewPage - thanks for pointing that out!
在这个Tip中,你将学到在MVC框架中显示数据库数据时,如何创建和使用模板。Stephen Walther介绍了如何创建一个名为RenderTemplate()的辅助方法。
[翻译] ASP.NET MVC Tip #14 – 创建模板辅助方法 原文地址: weblogs.asp.net/.../07
Hello Stephen,
I'm using this helper method in my MVC application. However, in preview 5 it doesn't work anymore, because a text writer is used to render partial views aka usercontrols. Any idea how to fix this?
Thanx,
Henk
i use this:
public static void RenderTemplate(this HtmlHelper helper, IEnumerable items, string templateName)
{
foreach (object item in items)
{
helper.RenderPartial(templateName, item);
}
}
Template should be placed in shared directory for use with all controllers.
How can we render UserControl using ASP.NET MVC Framework Preview 5?
Would you please post the above workaround using MVC Preview 5? We are in trouble while rendering UserControl by Html.RenderPartial(). | http://weblogs.asp.net/stephenwalther/archive/2008/07/07/asp-net-mvc-tip-14-create-a-template-helper-method.aspx | crawl-002 | refinedweb | 1,406 | 52.56 |
I have a test case listener that, in the method under the @BeforeTestCase annotation, reads through an external file to obtain information. I am wondering if it’s possible to stop execution of the test case and mark it as an error from a catch block on the event of an error, e.g. file not found. I’ve tried KeywordUtil.markErrorAndStop in the catch block of the listener, but the test case execution will still begin regardless.
Also want to note that I do have an available workaround of sorts with KeywordUtil.markError followed by System.exit, but this isn’t very elegant and it won’t actually mark the test as an error.
Hi Michael
That sounds a little buggy to me - you might want to post a bug once you’ve “tried everything”.
However…
Recently I bumped into an issue in a suite listener. At the time I was pressed for time and didn’t get to resolve the actual cause. The problem I had was setting a Map key for the suiteId. The Map was static, imported statically but it seemed the listener couldn’t see it. Like I said, I was pressed for time so I ended up calling a method (on the same damn static class - go figure) to set the Map directly in the class. (I’m saying, if Kat couldn’t see the Map, it shouldn’t have been able to see the method, either, but it worked).
Perhaps you could try the same approach.
Alternatively, did you try
throw after the markError? That would definitely stop the test (not sure about the markError surviving that though). Try it. Might work.
Hi Russ,
I’ve tried a throw, but it’ll still proceed with the test case afterwards.
One observation I’ve noticed is that markErrorAndStop will return control to the test case and not execute any more code in the listener:
throw new StepErrorException("An error was encountered while reading the data file, etc...") //Keeps running... KeywordUtil.markErrorAndStop("An error was encountered...") //Anything below here will not execute
markError, however, will allow additional lines of code to execute. I guess markErrorAndStop assumes it’s being called directly from the test case rather than a listener or keyword.
I could theoretically put a check in the TC itself for a variable to be set from the listener, but this is clunky and I’d need to add it as a step to every single case that depends on the data file. Still, I might not have a choice.
Of course, I should have known that…
Your code in the Test Case is a method (in a Groovy Script class). Your “final” throw has to come from there to stop it in its tracks. Sorry, I really missed the point back there.
For reference, this is how my TCs look:
import ...

public class Test extends com.my_company.some_page_object {
    Test() {
        // Test steps
        ...
    }
}

try {
    new Test()
    passThisTest()
} catch(Exception e) {
    failThisTest(e.message)
}
So anywhere I perform a throw (or any throw issued by kat code), I get to catch it, deal with the message my way (I write my own reports for example) and then my script method ends.
So it does look like you need to set the stage in the listener and deal with it somewhere in the TC runtime scope (if that makes sense). This might also have been the issue I didn't investigate properly.
Anyway, hope that gives you food for thought.
No worries.
I’ve been able to make something that does work, though it uses a static method inside a keyword groovy script (or is it Java?):
public static String write(int inputNum) {
    try {
        String input = values.get(inputNum.toString())
        return input != null ? input : "### Invalid Input ###"
    } catch(NullPointerException e) {
        KeywordUtil.markError("An error was encountered while reading the data file. Please ensure the file values.csv exists and is readable.")
    }
}
“values” is a static Dictionary that is populated by my listener if nothing goes wrong while accessing values.csv. If something blows up, then values remains null, and as such, the write() method will throw a null pointer exception once called. This ends up marking the test as failed (close enough) and does not execute any more steps in the TC. While I’d like to avoid starting the TC at all, this is the best result I’ve seen yet. | https://forum.katalon.com/t/stop-and-mark-test-case-as-error-from-listener/20101 | CC-MAIN-2022-33 | refinedweb | 733 | 73.47 |
Hi everyone,
I’m new here. First of all thank you really much for your great work in this forum. It helped me out multiple times. But now I encountered an issue where I couldn’t find a thread on, yet.
With the latest version (0.84.0) I’m experiencing a problem with the file_uploader widget.
In the prior version (0.82.0) it was very handy, for my case, to select one file after another and drag & drop it onto the widget. With the latest version this doesn't seem possible anymore. When one or more files have been uploaded, the next file(s) dragged and dropped onto the widget are not accessible, even though these files appear in the interactive list below the widget.
I want to use st.session_state in the app I'm working on, and for that reason version 0.84.0 is necessary, to my understanding.
In order for you to reproduce the situation I made up this example:
import streamlit as st

uploaded_files = st.file_uploader('Select files', type=['txt'], accept_multiple_files=True)
file_lst = [uploaded_file.name for uploaded_file in uploaded_files]
st.write(file_lst)
Assume I want to upload the two files test1.txt and test2.txt one after another. For the first file (test1.txt) the behavior is as expected and identical in both versions:
Then later I want to upload another file in this case test2.txt.
The expected behavior can be seen with version 0.82.0. Both files are shown in the interactive list below the widget as well as in the written file_lst.
With version 0.84.0 only the interactive list below the widget shows both files. The written file_lst shows only test1.txt.
Has anyone had a similar issue? I apologize if the solution is obvious, but I got stuck with it and can't figure out how to solve the issue.
| https://discuss.streamlit.io/t/issue-with-file-uploader-and-streamlit-version-0-84-0/14812 | CC-MAIN-2021-31 | refinedweb | 313 | 69.79 |
Hello Codeforces! Did you enjoy AtCoder Beginner Contest 130? As usual, only a Japanese editorial was published, so I translated it into English. Um, actually it's already three days after the contest, so it might be a bit late, but well, whatever.
Disclaimer. Note that this is an unofficial editorial and AtCoder has no responsibility for it. Also, I didn't do any proofreading, so it might contain many typos. Moreover, this is only the third time I have written this kind of editorial, so the English may be unclear, confusing, or even contain mistakes. Any minor corrections (including grammatical ones) or improvement suggestions are welcome. Please do not hesitate to post a comment about it.
A: Rounding
You can implement it straightforwardly: print $$$0$$$ if $$$X < A$$$, and print $$$10$$$ if $$$X \geq A$$$.
An example code is shown in List 1:
Listing 1. Example Code of Rounding
#include <bits/stdc++.h>
using namespace std;

int main() {
    int X, A;
    cin >> X >> A;
    puts(X < A ? "0" : "10");
    return 0;
}
B: Bounding
You can calculate all of $$$D_1, D_2, \cdots, D_{N+1}$$$ according to the recurrence formula, and check whether each element is less than or equal to $$$X$$$. Both the time complexity and the space complexity are $$$\mathcal{O}(N)$$$.
An example code is shown in List 2:
Listing 2. Example Code of Bounding
#include <bits/stdc++.h>
using namespace std;

int main() {
    int N, X;
    cin >> N >> X;
    vector<int> D(N + 1);
    D[0] = 0;
    for (int i = 0; i < N; ++i) {
        int x;
        cin >> x;
        D[i + 1] = D[i] + x;
    }
    int ans = 0;
    for (int i = 0; i <= N; ++i) {
        if (D[i] <= X) {
            ans++;
        }
    }
    cout << ans << endl;
    return 0;
}
C: Rectangle Cutting
When you cut the rectangle into two parts by a straight line passing through both the given point $$$(x, y)$$$ and the center of the rectangle, the area of the part that is not larger than the other is exactly half the area of the entire rectangle, and this is the maximum. On the other hand, when the line does not pass through the center, the part which contains the center is larger.
Taking this into consideration, the way of cutting that satisfies the problem's condition is uniquely determined if the given point is not the center of the rectangle. Otherwise, the areas of the two parts are equal regardless of the line, so you can get the answer with the following code:
#include <stdio.h>
#include <vector>
#include <algorithm>
using namespace std;
typedef long long ll;

int main() {
    int a, b, x, y;
    scanf("%d%d%d%d", &a, &b, &x, &y);
    printf("%lf %d\n", double(a) * double(b) / 2, x + x == a && y + y == b);
}
D: Enough Arrays (writer: yuma000)
If you iterate through all possible left ends and right ends and check whether each partial sum is greater than or equal to $$$K$$$, it takes $$$O(N^2)$$$ time (you can calculate each partial sum in $$$O(1)$$$ using cumulative sums). So you have to look for a more efficient way.
Let $$$S(l, r) = \sum_{k=l}^{r} A_k$$$, then it holds that
- $$$S(a, b) < S(a, b+1)$$$
- $$$S(a, b) > S(a+1, b)$$$.
Consequently, the following fact holds:
- If $$$S(l, r) \geq K$$$ for some $$$l, r$$$, then for all $$$x$$$ $$$(x \geq r)$$$, $$$S(l, x) \geq K$$$.
This means that if you fix a left end $$$l$$$ of some subsequence and find the minimum possible $$$r$$$ such that $$$S(l, r) \geq K$$$, you can calculate the number of subsequences whose left end is $$$l$$$. (Concretely, it's $$$N-r+1$$$.)
Specifically, you can find $$$r$$$ by using
- "two pointers" technique (O(N)), or
- binary search (O(NlogN)),
so you can implement it either way. (Personally speaking, two pointers is preferable because it's faster and easier to implement.)
The following is the code of two pointers:
#include<bits/stdc++.h>
using namespace std;

int main(){
    int N;
    long long int K;
    cin >> N >> K;
    vector<long long int> A(N);
    for(int i = 0; i < N; ++i){
        cin >> A[i];
    }
    long long int answer = 0;
    long long int sum = 0;
    int r = 0;
    for(int l = 0; l < N; ++l){
        while(sum < K){
            if(r == N) break;
            else{
                sum += A[r];
                r++;
            }
        }
        if(sum < K) break;
        answer += N - r + 1;
        sum -= A[l];
    }
    cout << answer << endl;
    return 0;
}
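For reference, the binary-search variant mentioned above can be sketched like this (a Python illustration of my own, not contest code; it relies on the prefix-sum array being non-decreasing, which holds assuming the $$$A_i$$$ are non-negative as in this problem):

```python
from bisect import bisect_left

def count_subarrays(a, k):
    """Count subarrays whose sum is >= k, assuming all a[i] >= 0."""
    n = len(a)
    # prefix[i] = a[0] + ... + a[i-1]; non-decreasing since a[i] >= 0
    prefix = [0] * (n + 1)
    for i, x in enumerate(a):
        prefix[i + 1] = prefix[i] + x
    ans = 0
    for l in range(n):
        # smallest r in [l+1, n] with prefix[r] - prefix[l] >= k
        r = bisect_left(prefix, prefix[l] + k, l + 1)
        if r <= n:
            ans += n - r + 1   # every right end from r to n works
    return ans

print(count_subarrays([6, 1, 2, 7], 10))  # sample from the problem: 2
```

Each `bisect_left` is $$$O(\log N)$$$, giving the $$$O(N \log N)$$$ total mentioned above.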
E: Common Subsequence
In this problem you have to count the number of index sets $$$1 \leq i_1 < i_2 < \cdots < i_k \leq N$$$ and $$$1 \leq j_1 < j_2 < \cdots < j_k \leq M$$$ such that $$$S_{i_1} = T_{j_1}, \cdots, S_{i_k}=T_{j_k}$$$.

First, one possible naive dp is the following: let $$$dp\mathrm{[i][j]}$$$ be the number of ways of choosing letters from the first $$$i$$$ letters of $$$S$$$ and from the first $$$j$$$ letters of $$$T$$$ such that the $$$i$$$-th letter of $$$S$$$ and the $$$j$$$-th letter of $$$T$$$ are paired. Then $$$dp\mathrm{[i][j]} = (\sum_{k=1}^{i-1} \sum_{l=1}^{j-1} dp\mathrm{[k][l]}) + 1$$$ if $$$S_i = T_j$$$ and otherwise $$$0$$$, but the time complexity is $$$\mathcal{O}(N^2 M^2)$$$.

In fact, you can calculate the double sum by means of a two-dimensional cumulative sum, so that the time complexity becomes $$$O(NM)$$$. Specifically, let $$$sum\mathrm{[i][j]} = \sum_{k=1}^{i} \sum_{l=1}^{j} dp\mathrm{[k][l]}$$$; then $$$sum\mathrm{[i][j]} = sum\mathrm{[i-1][j]} + sum\mathrm{[i][j-1]} - sum\mathrm{[i-1][j-1]} + dp\mathrm{[i][j]}$$$ holds.
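Here is a short Python sketch of the $$$O(NM)$$$ DP with the cumulative sum folded into a single table (my own illustration, not the official code; it counts the empty pair as well, and I'm assuming the answer is reported modulo $$$10^9+7$$$ as in the statement):

```python
MOD = 10**9 + 7

def count_common_pairs(S, T):
    """Number of pairs of equal (possibly empty) subsequences of S and T."""
    n, m = len(S), len(T)
    # f[i][j] = number of pairs using the first i letters of S, first j of T
    f = [[0] * (m + 1) for _ in range(n + 1)]
    for j in range(m + 1):
        f[0][j] = 1          # only the empty pair is possible
    for i in range(n + 1):
        f[i][0] = 1
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # inclusion-exclusion over pairs not using both S[i-1] and T[j-1]
            f[i][j] = (f[i - 1][j] + f[i][j - 1] - f[i - 1][j - 1]) % MOD
            if S[i - 1] == T[j - 1]:
                # additionally pair S[i-1] with T[j-1]
                f[i][j] = (f[i][j] + f[i - 1][j - 1]) % MOD
    return f[n][m] % MOD

print(count_common_pairs([1, 3], [3, 1]))  # sample: 3 (empty, the 1s, the 3s)
```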
F: Minimum Bounding Box
Henceforward, $$$(x_{max} - x_{min}) \times (y_{max} - y_{min})$$$ will be referred to as the "bounding box".
After each point starts moving, $$$x_{max}$$$ decreases, then stays constant, and then increases (some of these phases may be absent). You can find the boundaries between these phases by binary search, making use of the monotonicity. Similar properties hold for $$$x_{min}$$$, $$$y_{max}$$$ and $$$y_{min}$$$.
Therefore, you can split the timeline of the points' movement into segments so that the rate of change of each maximum and minimum value is constant within each segment. In fact, the bounding box can be minimal only at some boundary time of these segments.
Here is a proof.
Let $$$dx = x_{max} - x_{min}, dy = y_{max} - y_{min}$$$. In a segment where $$$dx$$$ and $$$dy$$$ are both weakly monotonically increasing, the bounding box also increases, so within such a segment the bounding box is minimal at the starting point of the segment. Similarly, in a segment where $$$dx$$$ and $$$dy$$$ are both weakly monotonically decreasing, the bounding box is minimal at the last point of the segment.

Next, let's consider a segment where $$$dx$$$ is strictly increasing while $$$dy$$$ is strictly decreasing (the opposite case is symmetric). Here, the behavior of the bounding box depends on the relative rates of $$$dx$$$ and $$$dy$$$. While $$$dx$$$ is small enough relative to $$$dy$$$, the bounding box increases. After $$$dx$$$ grows to some extent (or possibly from the beginning), the decreasing effect becomes dominant and the bounding box starts to decrease. In other words, within such a segment the bounding box is convex upward, so it is minimal at one of the two end points.
Hence the statement has been proved. | http://codeforces.com/blog/WaterColor2037 | CC-MAIN-2019-47 | refinedweb | 1,221 | 67.08 |
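As a small illustration of the quantity involved, here is a Python helper (my own sketch) that evaluates the bounding box $$$(x_{max} - x_{min}) \times (y_{max} - y_{min})$$$ at a given time $$$t$$$. The direction letters U/D/L/R and unit speed are assumptions matching the statement; a full solution would evaluate this only at the candidate boundary times found above:

```python
def bbox_area(points, t):
    """Bounding box (x_max - x_min) * (y_max - y_min) at time t.

    points: list of (x, y, d) with d in 'UDLR'; each point moves at speed 1.
    """
    DIR = {'U': (0, 1), 'D': (0, -1), 'L': (-1, 0), 'R': (1, 0)}
    xs = [x + DIR[d][0] * t for x, y, d in points]
    ys = [y + DIR[d][1] * t for x, y, d in points]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

print(bbox_area([(0, 0, 'R'), (0, 2, 'U')], 1))  # 1 * 3 = 3
```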
As promised, here's my first entry with more details about Customer Debug Probes and CLR SPY.
The Marshaling probe is the easiest one to understand and great for experimentation since it's the only one that doesn't report on error/warning conditions. It also fits best into the spy theme since it non-intrusively reports on work that is regularly done by just about any managed application. In today's world, you'll be hard-pressed to find a managed application that doesn't marshal parameters to unmanaged code - especially if you count the .NET Framework APIs used by the application.
The Marshaling probe logs how parameters and return types get marshaled from managed to unmanaged code. (The probe doesn't tell you about marshaling in the opposite direction, nor does it tell you about fields as they are marshaled.) For a "Hello, World" C# application:
public class HelloWorld
{
public static void Main()
{
System.Console.WriteLine("Hello, World!");
}
}
I get the following output from CLR SPY (without the date, time, and probe prefix so it's easier to see):
Marshaling from IntPtr to DWORD in method GetStdHandle.
Marshaling from Int32 to DWORD in method GetStdHandle.
Marshaling from Int32 to DWORD in method GetFileType.
Marshaling from IntPtr to DWORD in method GetFileType.
Marshaling from Int32 to DWORD in method WriteFile.
Marshaling from IntPtr to DWORD in method WriteFile.
Marshaling from Byte* to DWORD in method WriteFile.
This happens because Console.WriteLine internally makes PInvoke calls to the Win32 GetStdHandle, GetFileType, and WriteFile APIs.
The steps to do this yourself are:
Look what happens if I use CLR SPY on an equivalent managed C++ "Hello, World" application:
Marshaling from UInt32 to DWORD in method _mainCRTStartup.
This highlights a difference between the C++-generated code, which has special entry point logic, and the C#-generated code.
There are two things to be aware of when analyzing Marshaling probe messages. For any method:
When parameters are user-defined types, they are qualified with their namespace. However, the methods whose parameters are being marshaled are never displayed with their namespace or even class name, which can make examining the output a little tedious.
If you're using the Marshaling probe to debug problems in your own code, you probably want to suppress messages for marshaling done by the .NET Framework. This is possible using the "Marshaling Filter" pane in CLR SPY. When you click the Edit button, you can type a semicolon-delimited expression list (with no spaces in between) in the text box, stating the items for which you want to see messages. Here are the rules for each expression:
For the C++ "Hello, World" example, a filter of GetStdHandle;GetFileType;WriteFile would show all messages except the _mainCRTStartup one. A filter of System.IO would only show the WriteFile messages (since that's the namespace containing the private PInvoke signature), and a filter of Microsoft.Win32.Win32Native would only show the GetStdHandle and GetFileType messages. | http://blogs.msdn.com/b/adam_nathan/archive/2003/05/15/56687.aspx | CC-MAIN-2014-52 | refinedweb | 508 | 55.24 |
Hi All,
please help me with this issue: we deployed a custom UI5 application in the Fiori launchpad, and after clicking the tile we are facing the error below.
I am having some doubts.
In "LPD_CUST" under "Additional Information" tab, what I need to specify if the application is custom UI5 application.
SAPUI5.Component=zmm2 (I mentioned like this facing the above error)
Other wise I gave the additional information as blank getting the error is "Index.html not found ".
Anyone of you please help me on this issue.
@Thanks in advance
Regards
Sravya
Hi Sravya,
Take a look at "Adding Custom Apps to Fiori launchpad" section in SAP Fiori - Developing Custom Apps - Additional Topics - SCN Wiki.
Regards,
Masa / SAP Technology RIG
Hello sai,
in your screenshot, what is 'z_mm_kpi_dshbrd'? Maybe that is causing the issue. Try deactivating that service in SICF and executing again, or just check what has been done to it. It might be trying to load that web service and BSP application; maybe some LPD_CUST settings, like the namespace, are not configured properly. Check it out. If you are still getting the error, then please check all the SICF services and OData services, and call each of the services in the browser. It should work; it's a pretty straightforward thing. Let me know if anything still persists. Thanks.
Regards,
Siddarth Kabde
Serving up Mercurial using mod_python
The following information resulted from several hours of battling with SELinux and Apache, attempting to find some way of serving up my Mercurial repository (now at) over HTTP. In short, I found that cgi-bin would not work at all with SELinux, for reasons I couldn’t figure out (it didn’t generate any AVC messages once I extended Apache’s permissions — it just wouldn’t serve any data).
But getting mod_python to work turned out to be no picnic either. The data is out there, on the Web, but nothing from any one place worked for me. So after much effort, here’s what I discovered, boiled down into nice easy chunks for the tired reader:
- Install Mercurial. I'm using CentOS 5, and Mercurial was not available through yum. I installed it from the Dag Wieers repository.
- Install mod_python. This you can do through yum with yum install mod_python.
- Create a directory for your Mercurial projects in /srv/hg. My ledger project ended up having all the Mercurial files in /srv/hg/ledger/.hg. Note that unlike systems like git, you do need the files in the .hg subdirectory. The rest of the parent directory will remain empty. Make sure that the security context on all of these directories is user_u:object_r:public_content_rw_t, if you intend to allow others to post changes. Otherwise, make it user_u:object_r:public_content_t. The files will have to be readable and writable by Apache if you want to allow commits. To change the security context, you can use this command: chcon -R user_u:object_r:public_content_t /srv/hg.
- Create a directory called /etc/hgweb. In it you will have two files, hgwebdir.conf and users.htdigest. I'll give you some example files in the appendices to this article. The security context for this directory and all the files in it should be system_u:object_r:httpd_config_t. The users.htdigest file should be owned by apache, and have permissions set to 400.
- Create a directory called /var/hg. It will have two files in it, hgwebdir.py and modpython_gateway.py. Both of those are found below. The security context for this directory and its contents should be system_u:object_r:httpd_sys_content_t. It can be owned by root, just make sure it's world readable.
- Configure your Apache server. I'm using a virtual host to serve pages on hg.newartisans.com. I leave it up to you to modify this for running on your main server. See below.
- I still can’t get pushes to work over HTTP (I’m using ssh). If you figure this out or have an answer, please send me a note!
/etc/hgweb/hgwebdir.conf
The only part of this you'll need to change is the name and e-mail address of the administrator. And change push_ssl if you need to deliver encrypted data (such as work-related sources). Note: if you change push_ssl, you're going to have to configure Apache to use SSL for your host, which is beyond the scope of this article.
[collections]
/srv/hg = /srv/hg
[web]
style = gitweb
allow_archive = bz2 gz zip
contact = John Wiegley, johnw@newartisans.com
push_ssl = false
/etc/hgweb/users.htdigest
You can make this file very easily. Here’s what I typed to create an entry for myself:
htdigest -c users.htdigest "New Artisans LLC" johnw
It will ask you for the password to use. You'll need an entry for everyone who will have commit access to your repository. If you are using the htdigest command to make these entries, remove the -c flag! That causes the file to get created the first time.
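If you'd rather not shell out to htdigest, the file format is simple enough to generate yourself: each line is user:realm:MD5("user:realm:password") in hex. Here's a small Python helper (my own sketch, not part of the original setup):

```python
import hashlib

def htdigest_line(user, realm, password):
    # htdigest files store: user:realm:MD5("user:realm:password") as hex
    digest = hashlib.md5(f"{user}:{realm}:{password}".encode()).hexdigest()
    return f"{user}:{realm}:{digest}"

# Append a line for yourself; the realm must match Apache's AuthName exactly:
print(htdigest_line("johnw", "New Artisans LLC", "secret"))
```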
/var/hg/hgwebdir.py
Strangely enough, there is no working version of this on the Web that I could find. But here’s what worked for me:
#!/usr/bin/env python
#
# An example CGI script to export multiple hgweb repos,
# edit as necessary
import cgitb
cgitb.enable()
from mercurial.hgweb.hgwebdir_mod import hgwebdir
from mercurial.hgweb.request import wsgiapplication
import mercurial.hgweb.wsgicgi as wsgicgi
def make_web_app():
    return hgwebdir("/etc/hgweb/hgwebdir.conf")

def start(environ, start_response):
    toto = wsgiapplication(make_web_app)
    return toto(environ, start_response)
/var/hg/modpython_gateway.py
This is quite a bit longer, so I’m just going to link to the copy of modpython_gateway.py in my Downloads section.
changes to /etc/httpd/conf/httpd.conf
There are a couple of oddities in here: First, the rewrite rule. You'll want this. Ignore the fact that it contains the string hgwebdir.cgi in it. Apparently hgwebdir depends on this and then ignores it, but since we're running with mod_python it doesn't actually do anything. If you get rid of it, the errors you get will be rather hard to figure out!

Also, be careful with the AuthName. This is the same exact string that you have to pass to the htdigest command (see above), or else user authentications will fail.
<VirtualHost *:80>
ServerAdmin webmaster@newartisans.com
ServerName hg.newartisans.com
ErrorLog /var/log/httpd/error_log
CustomLog /var/log/httpd/access_log combined
RewriteEngine On
RewriteRule ^/(.*) /hgwebdir.cgi/$1
<Location />
PythonPath "sys.path + ['/var/hg']"
SetHandler mod_python
PythonHandler modpython_gateway::handler
PythonOption wsgi.application hgwebdir::start
Order allow,deny
Allow from all
# authentication
AuthType Digest
AuthName "New Artisans LLC"
AuthDigestDomain hg.newartisans.com
AuthDigestProvider file
AuthUserFile "/etc/hgweb/users.htdigest"
<Limit POST PUT>
Require valid-user
</Limit>
</Location>
</VirtualHost>
Syndicated 2007-10-19 09:09:46 from johnw@newartisans.com | http://www.advogato.org/person/johnw/diary/10.html | CC-MAIN-2014-10 | refinedweb | 906 | 59.8 |
Introspection and Actions; UIA_PaneControlTypeId and A Grid Within
Started by Sn3akyP3t3, 3 posts in this topic
I devised my program for two main reasons, really: to give me greater control when using the free version of TeraCopy, which has some limitations (perhaps even in the paid full version). My program utilizes the features of the TeraCopy command line.
1. I was fed up with Thumbs.db files regularly halting the process in Win XP or preventing a folder being deleted after a move. Only happens with Win XP.
2. I wanted to automate delays between jobs, allowing all HDDs to rest periodically when doing large and lengthy jobs. Letting HDDs heat up too much can have tragic results or considerably shorten their lifespan. Prevention is better than a cure, or than just relying on monitoring software etc.
--------------------------------------------------------------------------------------
You can browse to set the Source and Destination paths, or like me, just use Drag & Drop to those inputs.
Once the Destination path has been set, the MIN (minimalist GUI) button becomes available. In Minimal mode, you get a further level of automation, once initial options are set, and thus less prompts ... none in fact, as jobs are created automatically based on either COPY or MOVE.
Minimalist Window
In the Minimalist mode, you can only use Drag & Drop to add a source file or folder, and COPY or MOVE is permanently set when the window first opens, via a choice prompt.
Assigning a WAIT is simple and easy, and is the latest feature added to the program ... I used a much more complex variant before that (see Advanced Delay).
The program now displays three file size reports.
Individual Size of the current (last added) job (Blue label).
Total Size of all jobs combined (Black label).
Subtotal Size for each grouping of jobs, defined by a WAIT selection (Red label).
You toggle between Black and Red, by just clicking that label. If no WAIT has been set, then the values will be the same.
The wait of 5 minutes in the screenshot above is set for the source shown. It means wait 5 minutes before copying (or moving) that source.
More sources added from that point, add to a new subtotal. To see the previous subtotal, you need to be at the main (MAX) window, and select the prior job.
So every time a WAIT is clicked for a source a new subtotal count is started.
In the screenshot above, you can see the Job name, and that it is Job number 5 order wise. MOVE has been set for that job.
Selecting Job 4, you will see the previous (complete) Subtotal. Selecting job 3 (in this instance), you would see the subtotal up to that job.
Click the red Total label and it will change to Black, and show you the total size over all, as shown in the second screenshot above.
This new WAIT feature is the simplest and best way to use the program generally (in my view) ... but check out the following, as it is not always the case.
Advanced Delay Window
The Advanced Delay Options, are a further level of automation, added during the early stages of development, before I thought to create the Minimalist window and show Sizes. It was before I decided to put a lot more effort into the program. As with all my programs though, it is continual use that eventually dictates what I ultimately want to happen, to make my life easier ... simpler, better, smarter, quicker.
The main difference between this older method, and the new WAIT one, where you specify delays precisely, is that the program attempts to determine the best moments to pause, based on various factors, which you setup and can vary between different types of Job sessions.
The chief purpose for all the advanced options, is an attempt to cater for the difference between moving a small number of big files and a lot of small files. Moving or Copying lots of small files (hundreds or thousands), as many would know, can heat up a HDD far quicker and to a much higher temperature than a small number of (even very) large files. I guess that is primarily due to the sheer number of indexes that need to be created, and with small files are done at a staggering rate.
Which method you use is up to you, and should be governed by the type of job. Both methods can be used together, but that is not advised if you don't want the possibility of unnecessary extra long delays. EDIT - That said, you could use the WAIT option just for a delayed start of the first job ... perhaps your PC is busy doing something else until then, but you want to go and do something else for a bit, and have it all done by the time you return (i.e. watch a movie).
Enjoy!
P.S. I am not affiliated in any way with those who created and provide the excellent third party program - TeraCopy.
David Kellum wrote:
>
> I'm writing a performance minded server in Java that needs to repeatedly
> parse relatively small (5k) XML documents obtained from a remote
> server. I don't need or want to have the overhead of any validation in
> this parse. However, the returned document includes a doctype
> declaration like so:
>
> <?xml version="1.0" encoding="ISO-8859-1" standalone="no"?>
> <!DOCTYPE FOO SYSTEM "foo.dtd">
> <FOO>
> ...
> </FOO>
>
> I don't control this format so I can't get rid of the DOCTYPE
> declaration without performing some error-prone hack like stripping the
> line out before passing it to the parser.
>
> With Xerces-J I can work around this by using the SAX-2 interface's
> XMLReader.setEntityResolver() to an instance of the following:
>
> public class NullEntityResolver
> implements EntityResolver
> {
> public InputSource resolveEntity( String publicId, String systemId )
> {
> return new InputSource( new ByteArrayInputStream( new byte[0] )
> );
> }
> }
>
> However, I can't seem to do the same with the Crimson 1.1 parser. Here
> I get the following:
>
> Exception in thread "main" org.xml.sax.SAXParseException: Relative URI
> "foo.dtd"; can not be resolved without a document URI.
> at org.apache.crimson.parser.Parser2.fatal(Parser2.java:3035)
> at org.apache.crimson.parser.Parser2.fatal(Parser2.java:3029)
> at org.apache.crimson.parser.Parser2.parseSystemId(Parser2.java:2627)
>
> Any suggestions on how I might work around this with Crimson? Any
> comments on the validity of my approach for dealing with this in
> Xerces-J?
Sounds like a good approach. Looking at the crimson code, it looks like
the parser tries to resolve the SystemID in the doctype decl, which is a
relative URI, so it tries to get the base URI of the document. That is
null, hence the exception.

Try this: in the code that starts the parse, there is a SAX InputSource
object representing the main document. Use InputSource.setSystemId() to
set some URI on the main document. I believe an empty string ("")
should work as well. HTH,

-Edwin
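For what it's worth, the same two-part pattern (a null entity resolver plus an explicit system id on the main InputSource) can be sketched in Python's xml.sax bindings. This is only an analogue, not Crimson itself, and the default expat parser doesn't fetch the external DTD anyway, but it shows both calls in context:

```python
import io
import xml.sax
from xml.sax.handler import ContentHandler
from xml.sax.xmlreader import InputSource

class NullEntityResolver:
    def resolveEntity(self, publicId, systemId):
        # Hand back an empty source instead of resolving the DTD
        empty = InputSource()
        empty.setByteStream(io.BytesIO(b""))
        empty.setSystemId("null.dtd")
        return empty

class TagCollector(ContentHandler):
    def __init__(self):
        super().__init__()
        self.tags = []
    def startElement(self, name, attrs):
        self.tags.append(name)

doc = b'<?xml version="1.0" standalone="no"?>\n<!DOCTYPE FOO SYSTEM "foo.dtd">\n<FOO></FOO>'

parser = xml.sax.make_parser()
parser.setEntityResolver(NullEntityResolver())
collector = TagCollector()
parser.setContentHandler(collector)

source = InputSource()
source.setByteStream(io.BytesIO(doc))
source.setSystemId("document.xml")   # base URI so relative "foo.dtd" can be resolved
parser.parse(source)
print(collector.tags)                # ['FOO']
```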
ABOUT SASM ---------- MOTIVATION for SASM: hiding the details of Pentium asm. lang. (on purpose!) SASM code will look more like HLL code -- to make student's transition easier. Introducing one more level of abstraction in order to postpone discussion of several topics. HLL SASM assembly machine code each HLL statement maps into 1 or MORE SASM instructions each SASM instruction maps into 1 or MORE Pentium instructions SASM -- the language -------------------- A subset of the functionality of most high level languages -- no records/structures no formal arrays no procedures/functions What is required by a programming language? declarations arithmetic operations conditional execution (if then else) looping control structures communication w/user. . .(write statement) About SASM: -- one instruction, declaration per line -- comments are anything on a line following `;' (comments may not span lines) -- given the Intel architecture and its history, there are an enormous number of RESERVED WORDS. Consult APPENDIX A always! DECLARATIONS ------------ - they give information about how much memory space is needed - they assign a name to the memory space SASM has 3 basic types: integer, float (real), character can build other types out of these, for example, boolean is really an integer with only 2 defined values. Pascal: var variablename: type; C or C++: type variablename; SASM: variablename type value type is dd if integer db if character dd if floating point value is required -- it gives the variable an initial value -- to explicitly leave value undefined, use the '?' character examples: bool_flag dd 0 counter dd 0 variable3 dd ? constant_e dd 2.71828 uservalue db ? letter_a db 'a' string1 db 'This is a string.', 0 ; null terminated string example, VERY USEFUL! string2 db 'Another string', 0ah, 0 ; that 0ah is the newline character, AND this string is ; null terminated. remember: -- one declaration per line. DIRECTIVES ---------- a way to give information to the assembler. 
- some directives start with `.' (period) examples: dd # tells the assember to allocate 32 bits db # tells the assember to allocate 8 bits .data # identifies the start of the declaration section # there can be more than 1 .data section in # a program .code # identifies where instructions are # there can be more than 1 .code section in # a program .stack # You get this set of memory, called a stack. # Don't worry about it for now, just use it. .model # Gives the assembler information about how to # place stuff in memory, and how to call stuff # outside the program (like library calls) ARITHMETIC instructions ---------------------- SASM Pascal C or C++ NOTES move x, y x := y; x = y; x and y are ints or floats moveb x, y x := y; x = y; x and y are chars movezx x, y NO EQUIV NO EQUIV x is int, y is char (SIZE) movesx x, y x := y; x = y; x is int, y is char (SIZE) ineg x x := -x; x = -x; iadd x, y x := x + y; x = x + y; integer addition isub x, y x := x - y; x = x - y; integer subtraction imult x, y x := x * y; x = x * y; integer multiplication idivi x, y x := x div y; x = x / y; integer division (quotient) irem x, y x := x mod y; x = x % y; integer division (remainder) fpadd x, y x := x + y; x = x + y; floating point addition fpsub x, y x := x - y; x = x - y; floating point subtraction fpmul x, y x := x * y; x = x * y; floating point multiplication fpdiv x, y x := x / y; x = x / y; floating point division NOTES: 2. cannot increase the number of operands. 3. y can be an IMMEDIATE for all except the floating point instructions examples: move count, 0 imult product, multiplier iadd sum, 1 NOTE: there are other instructions that implement boolean functions, but we don't cover them yet. The move instructions must be carefully chosen to match the type of the data being moved. The operation and difference between movezx and movesx will be covered after we talk about representations. 
CONDITIONAL EXECUTION
---------------------
sometimes an instruction (or a set of instructions) should be executed,
and sometimes it (they) shouldn't.

HLL -- simplest form is a go-to.  (Always discouraged.)

Pascal if-then-else (a conditional go-to!)

    if (condition) then
        statement
    else
        statement;

C if-then-else

    if (condition)
        statement;
    else
        statement;

SASM 'ifs' and 'gotos' (a better name is CONTROL INSTRUCTIONS)
--------------------------------------------------------------

SASM            effect of instruction
br label        goto label;
blz label       if SF=1 then goto label;
bgz label       if SF=0 and ZF=0 then goto label;
blez label      if SF=1 or ZF=1 then goto label;
bgez label      if SF=0 or ZF=1 then goto label;
bez label       if ZF=1 then goto label;
bnz label       if ZF=0 then goto label;
compare x, y    result of x-y sets condition codes
compareb x, y   result of x-y sets condition codes

This is different than many other modern machines.  There are two
CONDITION CODES that we must think about.

condition code     contents
zero flag (ZF)     ZF=1 if result is 0
sign flag (SF)     SF=1 if result is negative
                   SF=0 if result is zero or positive

These condition codes get changed (set) according to the result of certain
instructions (iadd, isub, ineg, and some logical instructions).  The
condition codes are used by the control instructions.  To explicitly set
the condition codes, use a compare instruction.

EXAMPLE:
--------

Pascal if-then-else:

    if (count < 0) then begin
        count := count + 1;
    end;

C equivalent:

    if (count < 0)
        count = count + 1;

SASM equiv to if-then-else:

        compare count, 0
        blz     ifstuff
        br      end_if
    ifstuff:
        iadd    count, 1
    end_if:
        # next program instruction goes here

-- OR --

        compare count, 0
        bgez    end_if
        iadd    count, 1
    end_if:
        # next program instruction goes here

WHICH ONE OF THESE IS BETTER?

NOTE:  Be careful not to use RESERVED WORDS for your variable names or
label names.
(some reserved words:  end endif if else elseif for while repeat)

Structured loops can be built out of IF's and GOTO's (test and branch)

EXAMPLES:
---------

while loop example

    Pascal:
        while ( count > 0 ) do begin
            a := a mod count;
            count := count - 1;
        end;

    BAD STYLE Pascal:
        while:  if (count <= 0) then goto endwhile;
                a := a mod count;
                count := count - 1;
                goto while;
        endwhile:

    C or C++:
        while (count > 0) {
            a = a % count;
            count --;
        }

    SASM:
        while_loop:
            compare count, 0
            blez    end_while
            irem    a, count
            isub    count, 1
            br      while_loop
        end_while:
            # next program instruction goes here

while loop example (compound conditional)
-----------------------------------------

    Pascal:
        while (count < limit) and (c = d) do begin
            /* loop's code goes here */
        end;

    C or C++:
        while ( (count < limit) && (c==d) ) {
            /* loop's code goes here */
        }

    SASM:
        while_loop:
            compare count, limit
            bgez    end_while
            compare c, d
            bnz     end_while
            # loop's code goes here
            br      while_loop
        end_while:

for loop example
----------------

    Pascal:
        for i:= 3 to 8 do begin
            a := a + i;
        end;

    C:
        for ( i = 3; i <= 8; i++) {
            a = a + i;
        }

    SASM:
            move    i, 3
        for_loop:
            compare i, 8
            bgz     end_for
            iadd    a, i
            iadd    i, 1
            br      for_loop
        end_for:

COMMUNICATION WITH THE USER (I/O operations)
--------------------------------------------

SASM         effect of instruction
get_ch x     read character from input, place into x
put_ch x     send character in x to output
put_i x      send integer in x to output
put_fp x     send floating point value in x to output
put_str x    send (NULL TERMINATED) string at x to output

SASM doesn't have any oddities about testing for eoln or eof.  The newline
character (0ah, or '\n' in C) is just another character to be read or
written.

NOTE:  There are times when you will want to 'get' something that isn't a
character (like an integer or floating point value input by user).  In
SASM, you can't, since the instruction doesn't exist.
At the end of Chapter 5, you will know enough about data representation
to be able to read an integer (or floating point value) character by
character and translate it to an integer.

It is done this way because input from a keyboard is only characters.
Output to a simple display (which we are assuming) is only characters.
The C library (that we utilize) gives easy implementation of output for
other types, so you get that benefit in this language.

EXAMPLES:

    ; this is a code FRAGMENT, not a whole program
    .data
    msg1     db 'The integer is ', 0
    int1     dd 285
    newline  db 0ah
    msg2     db 'The second string.', 0ah, 0

    .code
    put_str msg1
    put_i   int1
    put_ch  newline
    put_str msg2

prints:

    The integer is 285
    The second string.
Hello. I would like to have (in Xamarin Forms) ListView, where ViewCell contains label + image. Image source is from internet url.
So I followed this tutorial
So in my case its should be something like this:
public partial class FruitPage : ContentPage
{
    public FruitPage()
    {
        InitializeComponent();

        var listView = new ListView
        {
            ItemsSource = new List<Recipe>
            {
                new Recipe { Heading = "Apple",  Description = "Awesome!",   Img_Url = "" },
                new Recipe { Heading = "Banana", Description = "Beautiful!", Img_Url = "" },
                new Recipe { Heading = "Cherry", Description = "Cheap!",     Img_Url = "" },
            },
            ItemTemplate = new DataTemplate(typeof(FruitCell)),
            RowHeight = 150,
        };

        listView.ItemTapped += (sender, e) =>
        {
            listView.SelectedItem = null;
            Navigation.PushAsync(new FruitDetailPage(e.Item as Recipe));
        };

        Title = "Fruits";
        Content = listView;
    }
}

public class FruitCell : ViewCell
{
    public const int RowHeight = 55;

    public FruitCell()
    {
        var nameLabel = new Label { FontAttributes = FontAttributes.Bold };
        nameLabel.SetBinding(Label.TextProperty, "Name");

        var imagge = new Image();
        imagge.SetBinding(Image.SourceProperty, "Img_Url");

        View = new StackLayout
        {
            Spacing = 2,
            Padding = 5,
            Children = { nameLabel, imagge },
        };
    }
}

public partial class FruitDetailPage : ContentPage
{
    public FruitDetailPage(Recipe recipe)
    {
        InitializeComponent();

        var embeddedImage = new Image { Aspect = Aspect.AspectFit };
        embeddedImage.Source = ImageSource.FromUri(new Uri(recipe.Img_Url));

        Title = recipe.Heading;
        Content = embeddedImage;
    }
}
BUT when Application starts, screen is empty (because images are downloading ???) its clickable, everything works fine, but no images.
and when i refresh page, images are displayed correctly.
So my question is. How to solve this problem ? What is the good practise with ListView with online images
Answers
Yes, I think it is loading the images. I had about a 5 second delay the first time I ran your test program. Also, each of the URL's are https so maybe the encryption is slowing things down too?
My comments on your code:
I have attached a tweaked version of your sample program that uses 3 different URL's that point to images that are only 64x64 in size.
Hi Shawn,
Thank you for your suggestions and hits, I ll definitely implement it, but using smaller (thumbnail) images on web-service side not solving problem of ListView. There must be some way how to "re-write" ListView after images are downloaded or something else how to handle larger images.
I think I had the same problem.
Try to put a
await Task.Delay(100)
before you set the ItemsSource. This was a quick workaround for me.
the larger images worked in my sample app I posted just fine. I jus needed to wait a few seconds for them to load. I didn't have to do anything to cause a refresh.
You can replace Image with CachedImage (an API-compatible Image replacement). It should help a lot. Also, try setting DownsampleToViewSize to true. It will resize your images automatically.
Constants refer to fixed values that the program may not alter and they are called literals.
Constants can be of any of the basic data types and can be divided into Integer Numerals, Floating-Point Numerals, Characters, Strings and Boolean Values.
Again, constants are treated just like regular variables except that their values cannot be modified after their definition.
There are two Boolean literals and they are part of standard C++ keywords −
A value of true representing true.
A value of false representing false.
You should not consider the value of true equal to 1 and the value of false equal to 0.
Following is the example to show a few escape sequence characters −
#include <iostream> using namespace std; int main() { cout << "Hello\tWorld\n\n"; return 0; }
When the above code is compiled and executed, it produces the following result −
Hello	World

You can break a long line into multiple lines using string literals and separate them using whitespace; all the three forms below are identical strings.
"hello, dear" "hello, \ dear" "hello, " "d" "ear"
There are two simple ways in C++ to define constants −
Using #define preprocessor.
Using const keyword.
Following is the form to use #define preprocessor to define a constant −
#define identifier value
Following example explains it in detail −
#include <iostream> using namespace std; #define LENGTH 10 #define WIDTH 5 #define NEWLINE '\n' int main() { int area; area = LENGTH * WIDTH; cout << area; cout << NEWLINE; return 0; }
When the above code is compiled and executed, it produces the following result −
50
You can use const prefix to declare constants with a specific type as follows −
const type variable = value;
Following example explains it in detail −
#include <iostream> using namespace std; int main() { const int LENGTH = 10; const int WIDTH = 5; const char NEWLINE = '\n'; int area; area = LENGTH * WIDTH; cout << area; cout << NEWLINE; return 0; }
When the above code is compiled and executed, it produces the following result −
50
Note that it is a good programming practice to define constants in CAPITALS.
On Mon, Dec 29, 2008 at 04:18:55PM +0530, Kedar Sovani wrote:
> > > #include_next is not understood by the compiler, and hence it gives up.
> > >
> > > (I don't understand why this is not seen in x86 builds, comments?)
> >
> > Probably because the binary package in the archive was built with
> > some older gcc version, or something like that.  Try rebuilding the
> > package on x86 on a fully updated root fs and see if it still happens?
> >
> > >  %prep
> > >  %setup -q
> > >  %patch0 -p1 -b .aconf262
> > > +%ifarch %{arm}
> > > +%patch1 -p1 -b .include-next-fix
> > > +%endif
> >
> > Don't make patches dependent on the build environment (I think there
> > is a rule forbidding doing what you're doing here).
>
> The patch has appeared upstream at the libidn project (31 Aug 2007):
>
> ;a=commitdiff;h=12f2443a37b9e6c95022e503f7853bd02110865e;hp=d39d8bd1e12c81a0795ad6f55ceb930a8f7662c2
>
> It seems the Fedora libidn has been using version 0.6.14, while upstream
> has moved to version 1.0, about 16 months ago.

What I mean is, just include it unconditionally (i.e. not dependent on %{arm})?
#pragma comment(linker,"/delayload:xxx.dll") -- not supported in VS .NET
#1 Members - Reputation: 301
Posted 14 October 2005 - 06:10 AM
#2 Members - Reputation: 331
Posted 14 October 2005 - 09:21 AM
(If you don't already know this, LoadLibrary takes a path or filename of a DLL and returns a handle. GetProcAddress takes that handle and the function name and provides you with a pointer to the function. You assign that to a function pointer variable with the correct definition for the function in the DLL and you're good to go. Get it wrong and you'll almost certainly go down in flames)
One way to do this is to wrap those function pointers in another function in your dll (or in a macro) and check the pointer for null. If it is, then you GetProcAddress and hook up etc. At this point you could raise an exception or whatever if LoadLibrary or GetProcAddress return null or there was some other problem you detected. You could also just link everything all at once in whatever context makes the best sense.
If you're talking about dlls with just a handful of exports you want to link, this is no big deal. But for DirectX itself, you've got some fairly heavy lifting to do.
The basic problem is that if you let the OS do this runtime linking for you, it will succeed or fail before execution ever gets to code you have any control over (there may be some way to get your own code in even earlier but I sure don't know what it is)
I've never heard of that pragma before and I don't know what it does. It might set up something like what I described above with those macros.
#3 Members - Reputation: 301
Posted 14 October 2005 - 11:07 AM
#4 Members - Reputation: 331
Posted 14 October 2005 - 12:00 PM
#5 Members - Reputation: 235
Posted 14 October 2005 - 02:38 PM
#6 Members - Reputation: 999
Posted 15 October 2005 - 01:54 AM
Quote:
Here's how:
// glext_funcs.h
FUNC(int, wglSwapIntervalEXT, (int))
// declaring function pointers (this goes in a header)
#define FUNC(ret, name, params) extern ret (CALL_CONV *name) params;
#include "glext_funcs.h"
#undef FUNC
// importing them
// note: can also use GetProcAddress(hDLL, #name)
#define FUNC(ret, name, params) *(void**)&name = SDL_GL_GetProcAddress(#name);
#include "glext_funcs.h"
#undef FUNC
HTH+HAND
#7 Members - Reputation: 301
Posted 15 October 2005 - 11:20 AM
Quote:
I assume you're building with the VC7 libs and runtimes? Are you using the pragma as I have in my subject line? I'm wondering how it works for you because I read online that it was specifically taken out of VS .NET.
#8 Members - Reputation: 968
Posted 16 October 2005 - 10:03 AM
#9 Members - Reputation: 301
Posted 17 October 2005 - 07:13 PM
Quote:
So you're not using the pragma but actually putting it in your project settings? Where would I put that? The command line is read only.
Maybe they put the functionality back in for 2005. Does it work for you in 2003?
#10 Members - Reputation: 968
Posted 18 October 2005 - 12:55 AM
Quote:
Yes.
Quote:
Project Properties->Configuration Properties->Linker->Input->Delay Loaded DLLs
Quote:
You don't have a white "Additional options" box there?
Quote:
Yes. And in 2002.
#11 Members - Reputation: 301
Posted 01 November 2005 - 08:12 AM
I did find the additional options edit box...I don't know how I missed it before. :
Thanks | http://www.gamedev.net/topic/351577-pragma-commentlinkerdelayloadxxxdll----not-supported-in-vs-net/ | CC-MAIN-2016-44 | refinedweb | 591 | 60.14 |
On Tue, 2011-01-04 at 10:33 -0800, Kevin D. Kissell wrote:
> On 01/04/11 09:54, Anoop P A wrote:
> > On Tue, 2011-01-04 at 09:21 -0800, Kevin D. Kissell wrote:
> >> I'm trying to figure out a reason why your change below should help, and
> >> offhand, modulo tool bugs, I don't see it. I'm assuming that your diff
> >> below is a diff relative to the pre-patch stackframe.h. I wouldn't
> > Yes patch created against stock code .
> >
> >> bless it as an alternative because it moves code and comments
> >> unnecessarily - all you should really have to do is to move the
> >>
> >>
> >> 190 mfc0 v1, CP0_STATUS
> >> 191 LONG_S $2, PT_R2(sp)
> >>
> >> to be just after the #endif /* CONFIG_MIPS_MT_SMTC */ at around line 201.
> > Actually I just moved code under CONFIG_MIPS_MT_SMTC to previous block
> > of code ( which store $0 ) . git diff did the rest on behalf of me :)
> >
> >> If moving the save of zero to PT_R0(sp) actually makes a difference,
> >> it's evidence that you've got problems in your toolchain (or, heaven
> >> forbid, your pipeline)!
> > In previous version of patch usage of V0 was creating issue. I have
> > verified this with previous version of code ( working code before
> > David's instruction rearrangement patch.) .
>
> Argh. It's not very clearly commented, but it looks as if the system
> call trap handler has an implicit assumption that v0 has never been
> changed by SAVE_SOME, TRACE_IRQS_ON_RELOAD, or STI. So yeah, moving the
> code around to fix the v1 conflict ends up being better than using v0 -
> otherwise, we'd need to add a LONG_L v0, PT_R2(sp) somewhere after the
> LONG_S v0, PT_TCSTATUS(sp) of the original patch.
Well, Here is the patch.
diff --git a/arch/mips/include/asm/stackframe.h b/arch/mips/include/asm/stackframe.h
index 58730c5..19418c4 100644
--- a/arch/mips/include/asm/stackframe.h
+++ b/arch/mips/include/asm/stackframe.h
@@ -187,8 +187,6 @@
* need it to operate correctly
*/
LONG_S $0, PT_R0(sp)
- mfc0 v1, CP0_STATUS
- LONG_S $2, PT_R2(sp)
#ifdef CONFIG_MIPS_MT_SMTC
/*
* Ideally, these instructions would be shuffled in
@@ -199,6 +197,8 @@
.set mips0
LONG_S v1, PT_TCSTATUS(sp)
#endif /* CONFIG_MIPS_MT_SMTC */
+ mfc0 v1, CP0_STATUS
+ LONG_S $2, PT_R2(sp)
LONG_S $4, PT_R4(sp)
LONG_S $5, PT_R5(sp)
LONG_S v1, PT_STATUS(sp)
>
> Regards,
>
> Kevin K. | https://www.linux-mips.org/archives/linux-mips/2011-01/msg00061.html | CC-MAIN-2016-40 | refinedweb | 384 | 70.13 |
#include <librets/RetsXmlEndElementEvent.h>
Classify the type of the Xml event.
Construct the object with a default line and column number.
These numbers should reflect the line/column from the XML stream where this element can be found and is used for debugging.
Checks to see if the attribute names are identical between two RetsXmlEndElementEvent objects.
Reimplemented from RetsObject.
Get the column number for this element.
Get the line number for this element.
Returns the name of the attribute.
Always returns END_ELEMENT.
Implements RetsXmlEvent.
Prints the attribute in a standard form for debugging and error reporting.
Reimplemented from RetsObject.
Sets the name of this event. | http://lpod.org/librets/classlibrets_1_1_rets_xml_end_element_event.html | CC-MAIN-2020-10 | refinedweb | 105 | 62.85 |
This is the mail archive of the cygwin mailing list for the Cygwin project.
Hi Robert.  Thanks for your reply.

> On Sun, 11 Nov 2007 03:50:28 -0500 (EST), Robert Kiesling wrote
> > and not,
>   typedef-name:
>     identifier1 identifier2... identifiern
>
> The compiler is looking for a semicolon after, "fred," and your
> example is lacking a semicolon after, "here," anyway.

The compiler should never have been exposed to that typedef.  It is
not part of the C89 standard.

> This is not the compiler's job - to demangle a possibly inconistent
> type statement.

It is the compiler header's job (Cygwin in this case) to not expose
programs to non-C89 types when in ANSI mode.

If you go and compile my code on some other C89 compiler, you should
find that it works, unless it has the same bug, anyway.

> It's the job of the parser and is potentially expensive
> and invalid.

By the way, this mailing list is exposing email addresses over at:
I'll ask them to stop doing that.  I just got spammed.  :-(

Still, at least now I know where to find replies.  :-)

BFN.  Paul.

----- Original Message -----
From: "Paul Edwards" <mutazilah@gmail.com>
To: <cygwin@cygwin.com>
Sent: Sunday, November 11, 2007 1:40 PM
Subject: cygwin 1.5.24-2 gcc 3.4.4 stdio.h

> I just downloaded cygwin 1.5.24-2 (just a couple of hours ago) and
> compiled the following program with "gcc -ansi fred.c"
> (NOTE the "-ansi" keyword):
>
> #define pid_t fred was here
>
> #include <stdio.h>
>
> int main(void)
> {
>     printf("hello, world\n");
>     return (0);
> }
>
> And got the following result:
>
> In file included from /usr/include/stdio.h:46,
>                  from fred.c:3:
> /usr/include/sys/types.h:180: error: parse error before "was"
> In file included from /usr/include/sys/types.h:373,
>                  from /usr/include/stdio.h:46,
>                  from fred.c:3:
> /usr/include/cygwin/types.h:146: error: parse error before "fred"
>
> ie it is hitting a typedef for pid_t
>
> This is the compiler.
>
> Is "-ansi" not the correct thing to do to get pure ANSI C89 headers?
>
> BFN.  Paul.
From the L.A. Times:
A generating tower at the world’s largest solar energy plant was shut down Thursday after a mirror misalignment caused sunlight to burn through electrical wiring and start a small fire, according to officials.
The blaze at the Ivanpah Solar Electric Generating System in the Mojave Desert broke out around 9:30 a.m., according to the San Bernardino County Fire Department. In a Facebook post, officials said that flames could be seen near the ninth floor of the Unit 3 tower, but that they had apparently died out by the time firefighters arrived.
…
Some misaligned mirrors instead focused sunlight on a different spot, which caused the electrical cables to catch fire, San Bernardino County Fire Capt. Mike McClintock told the Associated Press.
From Computerworld:
The fire at the Ivanpah Solar Electric Generating System in California forced firefighters to climb 300 feet up the tower.
h/t to Stand Stendera
The plant had been plagued by production problems, and state utility regulators had threatened to shut it down if it didn’t get back on track. They gave it a temporary reprieve.
It has been on track according to recent reports, but this latest setback may derail that effort.
250 thoughts on “Fire breaks out at world's largest solar power plant – Ivanpah”
I’m sure when they were children, their mothers told them to be careful where they aimed those death rays. NOW, I’ll bet they wish they’d paid more attention to mom.
Does this mean we are going to have political fights over Solar Death Ray Control (like Gun Control only, ya know, Solar Death Rays), now? I wonder where the NRA will be on this issue. 🙂
The NRA’s official policy on gun control is to hit what you’re aiming at. Had these people done that, there would have been no fire.
I sure she said “You’ll death ray your eye out!”
I’m sure when they were children, their mothers told them to be careful where they aimed those death rays. NOW, I’ll bet they wish they’d paid more attention to mom.
You’ll shoot your eye out. — A Christmas Story
Yup, Remember what she said about playing with matches and bed wetting?
” Save your pee for the fire”?
If I were a fireman, I would climb their tower just as soon as I knew that the last of the 700,000 mirrors was lying on the ground face down, and that the incoming electric cables to the controlling computer bank, were disconnected from the breaker box.
Otherwise I would let it burn itself out.
g
PS And I would close down all of California air space, from any flying machines, including Go Pro Quad copters.
When that thing is fully ” fired up ” as we say in the trade, does anybody know just how much near black body thermal radiation that thing is emitting to space.
That has to be the brightest thermionic emitter on the planet.
Somebody designed this thing, right ? or did it just fall off the back of a turnip truck.
Turnips have to be there somewhere in the design team.
g
“does anybody know just how much near black body thermal radiation that thing is emitting to space.”
Are you suggesting that making objects very very hot is a way to reduce AGW?
Makes sense – well, as much as any other geoengineering proposition.
Well I don’t know how much you know about fancy “lighting”, but one of the commonest yuppie lighting gizmos is a thing called an MR-16 lamp.
It is a high Temperature quartz envelope 50 mm diameter, by about 35 mm long, available to run on either 120 V AC or 12 V DC. Well it’s a quartz halogen light bulb, and they hang them on string wires that are supposed to be sexy, but you can move them around by sliding them along the bare wires.
Well these things are available with up to 50 watt power rating, in either spot or wide area beam patterns.
The string wires, and the terminals on the back of the MR-16 bulb are quite incapable of conducting any of that 50 watt “heat” away from the bulb, and only a few percent of it comes out the front in the form of visible illumination (aka light).
So the ONLY way that an MR-16 light bulb is prevented from melting down, and dripping molten quartz all over the place, is that it is coated with a IR black coating that makes it a very efficient IR radiator at some very high quartz temperature.
The entire 50 watt power is dissipated as black body thermal radiation in a Lambertian pattern right off the 50 mm diameter face of the bulb.
So what happens when the solid state LED people set out to make a drop in plug and play LED replacement for a 50 watt quartz halogen MR-16 bulb.
Well for starters, their 50 watt equivalent LED source is actually only going to take about 8 or 9 watt of DC electricity to put out the same amount of illumination as the 50 W quartz halogen, so you only need to get rid of 9 watt instead of 50.
Well you don’t even need to do that, because the LED might actually be more than 40 % efficient in converting electricity to “light” or visible EM radiation, so now you only have five watt of heat to get rid of and you want to get rid of essentially all of it, because the LED does not want to run at a Temperature of 600 deg. C or anything like it. Even 80 to 100 deg C junction temperature for the LED is highly undesirable.
So the problem is, that you cannot properly cool an LED MR-16 replacement lamp that is a 50 watt equivalent.
Well Soraa has done a respectable job of meeting that criterion.
The whole problem would disappear if they ditched the MR-16 concept, and designed a replacement 50 mm lamp fixture that properly provides for conducting the 5 watt of heat out of the base to the external package.
The idea of a plug in LED replacement for an incandescent lamp is quite absurd. The LED isn’t going to need to be replaced, so whay make it plug in in the first place.
Anyhow, the whole point of this shaggy dog story is that a macro scale SOLAR furnace, that converts the energy receiver into a nice black body radiator, that is a sizeable fraction of the brightness of the sun, is just stupid to begin with.
That tower needs to be coated with an efficient dichroic mirror coating, that lets solar spectrum (0.25-4.0) micron EM radiation in and does not let longer wave thermal radiation out.
If the boiler were to run at 600 kelvin, it would be generating a 2.5 to 40 micron IR spectrum, so you likely would put the crossover wavelength at around 2.5 microns, which would lose a bit of solar IR but keep the generated heat inside the target.
I think the whole thing is not only a piece of optical crap, but also a piece of thermal crap, and evaporated bird aromas thrown in for good measure.
G
A Green “Climate” industry which can whitewash and rationalize dead birds making smoke streamers through the sky will have no trouble with this minor fire issue.
Can I have fries with my barbequed bird, please.
They aren’t barbecued. They are sacrificed to Gaia.
That’s not sacrificed to Gaia; that’s sacrificed for Giai. Giai being Gore Is An Idiot.
Well you have to inhale them; they smell just like fried chicken.
g
I’ve drove past this plant many a time on the 15. It’s very impressive to see yet I have to shake my head every time. This technology is best described in one word: fail.
This should be the final nail in the coffin of this pointlessly expensive fiasco.
Yeah like coal mine accidents, oil spills, explosions at oil refineries, fires at power plants, pipeline ruptures, and fuel tanker truck accidents were reason to stop fossil fuel energy. Not to mention the effects on air quality (that we all breath).
You forgot to mention things like beer floods, molasses floods, dams bursting, cats and dogs living together, etc.
SMC:
David,
Cats and Dogs living together:
Sam,
Air is cleaner now than it has been for half a century or more in almost all places (except China which ironically continues to be held up as a shining example of how great the socialist model is for the environment).
All of the examples you cited were productive, profit-making commercial enterprises along the way, unlike Ivanpah which is not and will not ever be able to carry its own water, so to speak.
The accidents you mentioned did not disrupt the flow of electricity. The world’s largest solar energy plant just lost a third of its ability to produce energy. See the difference?
In the end though, nobody cares. The percentage solar plants contribute to the worlds energy needs is close to a rounding error. Which as this incident shows is a good thing.
Any greenhouse gases released?
Is vaporized aluminum a green house gas?
Vaporized aluminum is a deadly gas to welders !
Prolly at least irritating to greenhouse workers . .
Are any greenhouse gases released?
No more than would be released by burning any other greenhouse.
Lots of them all the time. It’s got three huge honking gas-fired pre-heaters, one at the foot of each tower. It burns so much gas that it’s on the state list of major emitters. So much for solar being supposedly an emissions-free technology.
Geesh you’d think solar would be grateful to fossil fuels for making it’s existence possible.
Now who would have predicted that could happen.
any reasonably competent mechanical engineer during the design process.
No need for a mechanical engineer. It was a blinding glimpse of the obvious to (almost) anyone.
Mechanical engineer?, check your energy well for a mechanical engineer We have no mechanical engineer. In fact, we don’t need a mechanical engineer. I don’t have to show you any stinking mechanical engineer.
Everybody knows the mirrors are controlled by software! We have our best programmer on it right now. We know he is working on it because we found empty RedBull cans and fresh potato chip bags in the trash when we came in this morning.
Murphy.
RWturner May 20, 2016 at 7:57 am
posted a picture of what appears to be some of the damage. You can find it in the comments of the article:
Ivanpah suffering an engineering failure on top of failure to meet cost projections. Typical green project. I do suppose the government (we poor fools) are picking up the cost of repair.
Hmmm .. misaligned mirrors .. is that PR talk for hacked computers? .. nah, can’t be .. I’m sure they have everything under control .. under control .. under control …
Most likely a software bug…
Houses have bugs. Software has mistakes. Completely different.
Micro$oft Patch !
g
Ivanpah becomes self-aware at 9:30 a.m. Western time, May 19th. Some of the mirrors decide to take out the Master Control Program. In a panic, the firemen try to pull the plug.
Ivanpah becomes self-aware, and being a computer, realizes how insane its existence is. Suicide is the logical response.
LOL! +100 to you both (Janice the Elder and Joelobryan).
” Sterilize ! ” You are in error ! In this case you may ” Cauterize ”
g
And what happens when there is a power failure/outage during the day and the Sun moves to the correct position to melt the whole building or just enough to weaken the structure supporting that reservoir of high temperature salt? Next disaster will happen, guaranteed.
I’m sure there is some fail-safe mechanism, that safes all of these mirrors instantly. A shaped charge behind each mirror would work nicely.
Nothing like a molten salt bath to start your day off. They would all become pillars of salt.
Hey, it’s new technology and a pilot plant. That they were able to get production up to 67,300 megawatt-hours electricity in February, up from about 30,300 a year earlier is great! That’s a whole bunch of oil we didn’t need to import from our buddies in the mideast.
It also suggests that the technology works. Because this plant was new technology, costs are going to be high. Whether they now know how to duplicate the plant and make it cost effective is an interesting question. It’s all pretty basic technology though.
Hey, that’s funny..thanks : )
67MWH is pretty much nothing. I doubt the Saudi’s, or the Texan’s, or the North Dakotan’s, or the Alaskan’s, or the Canadian’s, or the Venezuelan’s noticed the difference in terms of oil production.
This kind of technology has worked for years. It’s never been cost effective. With current technology, it’s not likely, if ever, to be cost effective…when compared to other sources of energy, e.g. nuclear or fossil fuel.
That was 67,300 megawatt-hours, not 67MWH. It’s about 116 thousand barrels of oil, if I did the math right. Per month.
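A quick sanity check on that conversion, using my own ballpark assumptions (roughly 1,700 kWh of thermal energy per barrel and ~35% efficiency at an oil-fired generator; neither figure comes from the thread):

```python
# Rough check of the MWh-to-barrels comparison above.
# Assumed (ballpark) figures: ~1,700 kWh thermal per barrel of oil,
# ~35% heat-to-electricity efficiency at an oil-fired plant.
MWH_GENERATED = 67_300           # Ivanpah's reported February output
KWH_PER_BARREL_THERMAL = 1_700   # thermal energy in one barrel
PLANT_EFFICIENCY = 0.35          # oil-fired generation efficiency

thermal_kwh_needed = MWH_GENERATED * 1_000 / PLANT_EFFICIENCY
barrels = thermal_kwh_needed / KWH_PER_BARREL_THERMAL
print(f"{barrels:,.0f} barrels")  # on the order of 110,000 per month
```

With those inputs it lands near 113,000 barrels a month, the same ballpark as the 116K figure.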
My bad, you are correct: 67,300 MWh… still almost nothing. 116K barrels of oil per month is nothing when compared to the 35K barrels of oil per day that is processed in the smallest oil refinery I have been in. In most of the refineries I’ve been in, that wouldn’t even be one day’s production.
the beer and late night are starting to affect my typing. 🙂
Not to mention they use NG to power electrical generation in California instead of oil. No need to import oil for electrical generation when we have plenty of NG here.
67,000 MWh is a pittance. There are about 720 hours in a month, so it had an average power of 93 MW. And of course you would have to subtract the natural gas balancing and the enormous construction costs to get net. An output of 93 MW is less than 5% of the output of a 2-tower nuclear plant, or 10% of a medium coal or natural gas plant. That amount of power is meaningless to the overall grid.
So Ivanpah put out 93 MW on 4,000 acres while the nuclear plant close to my house puts out 2,400 MW on 1,000 acres, and most of that is buffer zone. The power output per acre for the nuclear plant is 100 times that of the solar plant. And of course Ivanpah is way more expensive. But somehow we are saving something with this JUNK!
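Running the numbers as stated in that comment (note it compares Ivanpah’s observed average against the nuclear plant’s nameplate rating, so it is not strictly apples-to-apples):

```python
# Power-per-acre comparison using the figures from the comment above.
ivanpah_mw, ivanpah_acres = 93, 4_000     # observed average output
nuclear_mw, nuclear_acres = 2_400, 1_000  # nameplate rating

ivanpah_density = ivanpah_mw / ivanpah_acres  # ~0.023 MW per acre
nuclear_density = nuclear_mw / nuclear_acres  # 2.4 MW per acre
ratio = nuclear_density / ivanpah_density
print(f"nuclear produces ~{ratio:.0f}x more power per acre")
```

With those inputs the ratio comes out right around 100x, as claimed.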
It seems more like newer uses for older technology.
Yeah Greg, and what’s a few fried birds and a couple of fires, gee we must be horrible to worry about environmental damage when it comes to new technology. /sarc.
And as I said below.. those bird kills will probably only happen for the first year or two. !
Dropping down to near zero after that..
… As they run out of birds to burn? I’ll have to remember that excuse the next time an oil spill happens.
“We fully expect the number of dead birds washing up on shore to drastically reduce now that they’re all dead.”
They counted 220 bird kills from the oil spill in California last year and the Santa Barbara County District Attorney has filed 46 criminal charges against them. How many criminal charges should be filed for 2200 bird kills at a thermal solar plant?
“Hey, it’s new technology and a pilot plant.”
Um, no and no.
@Greg:
You say: “That’s a whole bunch of oil we didn’t need to import from our buddies in the mideast.”
I don’t know if your comment at 7:18 pm is sarcastic or sincere. For the umpteenth time, however, allow me to repeat what I keep having to remind people here at WUWT: we in the U.S. DO NOT use crude oil to generate electricity, except for a meager 1% of it. Solar and wind energy, no matter how much you try to scale them up, are not going to make any meaningful difference in our demand for crude oil, domestic or imported. We use crude oil for transportation fuels, petrochemicals and artificial materials like plastics and polyester.
See here:
CD, you are correct to point that out. A heck of a lot of your imports are coming from Canada because US-based ‘agents’ have managed to stall every pipeline that leads out of central-western Canada. 350.org is here in Waterloo right now trying to get the entire industry ‘disinvested’. As a result, oil from Alberta has to be moved south out of the country by rail, mostly dumped on the central northern US market. This keeps the price of gas down in the Dakotas, but costs Canada 50% of the value.
In other words, the rent-a-gangs (as the BC Premier called them) showing up to block all the pipelines are being funded by the likes of the Sierra Club through local NGO’s who of course love the income. One local NGO head admitted on CBC radio they received $17,000 a week of US funding for multiple years to campaign against any western pipeline to the coast.
A new favourite tactic is to hire First Nations residents to create loud noises about the ‘environment’ and how ‘dangerous’ everything is. Check the news reports. Selling ‘noble savage’ environmental concepts plays well in the urban press.
As for the solar fiasco and ‘saving oil’, how much oil did it take to create this tower of financial power? How much oil will be needed to generate the income necessary to raise the taxes to pay back the loans? If we are going to run a renewables economy, they can start by charging a fair rate of return on the investment, and use the electricity to make solar panels. Let’s see if the energy put into the panel production is less than those panels produce.
If so, what is the break-even cost of a panel?
How much will the electricity from that PV panel have to sell for to pay for the cost of the electricity that went into making it, when that electricity comes from Ivanpah?
Subsidies don’t work. They are lazy.
….and, if anyone is interested, the EIA has updated their website that I linked to above. Solar’s annual contribution to our electricity generation total is now at 0.6% as of April 1st of this year. If I recall correctly, that is up from 0.4% in 2015.
Sure, one can claim that solar generation has risen 50% in a year’s time. It is still a laughably meager total, especially when one considers that the solar PV panel was invented 62 years ago, in 1954. Any technology that takes in excess of 62 years to scale up to meaningful commercial levels probably never will.
Crispin in Waterloo has described the situation in Canada perfectly. The sleaziest tactic of all is bribing FN tribes. It is eco-imperialism.
Crispin in Waterloo:
Are PV panels a major cost in a grid-connected PV system?
Slywolfe, asking Crispin in Waterloo:
Yes. But they are only a part. Land. Installation. Infrastructure to DO the installation (roads, power, water, concrete, housing, transportation to a site in the desert a loooooooong way from civilization; if it takes 1-1/2 hours to drive to the construction site, you lose 3 hours per day per worker just getting to work! More for every ton of materials that has to get there, and be removed when finished). Long-distance connecting lines to the high-voltage grid. The routine maintenance and servicing for hundreds of thousands of solar panels.
And then replacing the panels at their 7-9 year life. If not, their output drops to less than half of first-installed rating.
Now, if the solar panel is on a rooftop, some of these go away. Grid power is already to the house, land is bought; only the expensive rooftop mounts are needed. Daily and monthly maintenance is still needed, particularly of the batteries. The 7-9 year replacement of all panels and supports is still needed. But you only get a little power from a rooftop array: part-of-the-day power for only a part of a single house’s daily load.
How significant is the cost of grid upgrade to handle variable and intermittent power?
“And then replacing the panels at their 7-9 year life. If not, their output drops to less than half of first-installed rating. ”
RACookPE1978, that’s not even close to the truth. Most panels carry a 20 year warranty for output. They can drop 20% in 20 years, not 50% in 9 years.
Paul and the rest:
The PV crystalline panels drop about 20% in 20 years, assuming the epoxy coating is not damaged. It has been like that for a long time. So good quality panels are far better than the flexible kind. So a 22% efficiency system drops to 18% over 20 years.
A friend put them on his rooftop recently and is getting paid CDN$0.345 per kWh by the power company. That is 10 times the cost from Darlington nuclear power or Pickering.
It is obviously someone’s fantasy to have ‘solar power’ on their development agenda. One is not allowed to go off-grid, BTW, if you sign the contract. Ontario has a $7 bn subsidy plan to force home heating to be moved from cheap co-produced natural gas (which is otherwise burned to get rid of it) to electric heating, such as geothermal. I investigated geothermal. It appears not to be worth it, though the systems do (mostly) work. There is nothing really attractive about it. When the electricity is off in a winter storm you freeze to death. A propane or gas connection is still needed.
Perhaps we need a natural gas generator to power the geothermal compressors when the power goes down. There is no end of complexity we can add if it is ‘free’.
How much natural gas did they burn to produce that 67,300 MWh?
Please include references to backup your claims.
If my back of the napkin calculations are correct. 67,300MWH in a month equates to roughly 93.5MW. The plant is supposed to be capable of 392MW. 93.5 MW isn’t even the capacity of one of the three units, according to Wikipedia (ya, ya, I know, Wikipedia isn’t a scholarly source). This is supposed to be efficient and cost effective?
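The same back-of-the-napkin arithmetic, written out using the commenter’s ~720-hour month and the Wikipedia nameplate figure cited above:

```python
# Capacity factor from the reported monthly output, as estimated above.
MWH_IN_MONTH = 67_300   # reported February output
HOURS_IN_MONTH = 720    # ~30-day month, as the commenter assumed
NAMEPLATE_MW = 392      # rated capacity, per Wikipedia as cited

avg_mw = MWH_IN_MONTH / HOURS_IN_MONTH      # ~93.5 MW average
capacity_factor = avg_mw / NAMEPLATE_MW     # ~24%
print(f"average {avg_mw:.1f} MW, capacity factor {capacity_factor:.0%}")
```

That is roughly a 24% capacity factor for the month in question, consistent with the comment.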
“That’s a whole bunch of oil we didn’t need to import from our buddies in the mideast.”
Wonder how much oil it took the make those thousands of mirrors?
It’s a rounding error. It’s less than 5% of California’s annual coal consumption, which is the smallest of all the fuel sources California uses, including renewables. It’s an undetectable fraction compared with the emissions-free electricity lost when San Onofre closed and California emissions went up. Secondly, California doesn’t get its oil from the Middle East. It gets most of it from Texas refineries, which in turn are largely supplied from the US, Canada and Venezuela.
It also suggests that rare migratory birds were not incinerated mid-flight during the outage
and the natural gas from fracking that is used to produce 1/4 of the output was not spent.
The only way to duplicate the plant is to have another Senate Majority Leader and his son ensure the Chinese get Stimulus funds from taxpayers through a money laundering scheme. The likelihood of that happening again is nil, unless they are friends of Klinton Inc.
That’s 67 GWh at the generator. After you subtract the energy needed to run the plant, the net output was negative.
Less than 1% of electricity is generated from oil in the US. Very little (if any) oil was saved.
Fire is good. Our first way to generate heat from trapped carbon. It’s worked for 10,000 years and is still working. I had a barbecued burger tonight. Wish I had a small solar furnace in my backyard and I wouldn’t even need those naughty carbon atoms.
Ronald, buy a magnifying glass, see if you can barbecue your burgers with that.
Give me a big enough magnifying glass and I’ll barbeque the world. 🙂
Are you related to Marvin the Martian?
Hey SMC,
You forgot to ask for a place to stand.
g
Greg, well no. They burn gas and somewhere a real power station has to cover their output. Some “pilot” plant
Cost 2.2 billion, income 25 million……………………………………………When will it close?
When the the Climate Lysenkoism ends.
In the USSR, Lysenkoism ended when Stalin’s hand-picked successor fell from power.
Silly rabbit. Climate Math is different from ordinary math. 1 + 1 = 3. That’s how they roll.
If only climate math were that dependable. In climate math, 1 + 1 = whatever number you need it to be today. And tomorrow, if you need it to add up to a different number, you can just adjust it.
Whenever misaligned mirrors burn through one of the gas-fired pre-heaters.
Just remember those mirrors are flat, and they make a beam that is about the area of the mirror, which I think is about six meters square, with a beam divergence angle of about 0.5 degrees, the same as the angular size of the sun. So that’s about 36 kW of solar power for one mirror out of whack.
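If the mirror really is about 6 m on a side (36 m², one reading of “six meters square”) and direct sunlight delivers roughly 1 kW/m², the per-mirror beam power works out as the commenter states. Both inputs here are assumptions, not plant specifications:

```python
# Power in one stray mirror's reflected beam, under the assumptions above.
MIRROR_SIDE_M = 6           # assumed: mirror ~6 m on a side
INSOLATION_KW_PER_M2 = 1.0  # typical direct sunlight at the surface

beam_kw = MIRROR_SIDE_M ** 2 * INSOLATION_KW_PER_M2
print(f"~{beam_kw:.0f} kW in one misaligned mirror's beam")
```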
And that beam can easily propagate to 30,000 feet and fry an aluminum passenger bird too. There’s no reason a M$ glitch couldn’t focus a few thousand of them at 30,000 feet altitude.
So far as I know, there is no practical way to monitor EVERY mirror to detect when it is not hitting the tower, and flip it into the face down sleep mode.
You could in principle, have a small element of each mirror, probably with a deflecting prism, or just a mirror, that puts a small offset spot for each and every mirror, on to a sensor. You would transmit a laser directed at each mirror sequentially, and its reflection would be received back, only when the solar reflection was centered on the boiler.
Tricky to set up, but any mirror that did not report in correctly would be put to sleep in the mirror down mode.
But hey, why worry about the lethality of this death ray. The whole damn idea is stupid to begin with, and a total waste of valuable real estate.
G
Someone ought to get hold of a whole heap of nuts and bolts, ball bearings etc., and drop them on the mirrors from a great height !!
I’m going to guess there’s enough sand, rocks, and tumbleweeds that nuts and bolts aren’t even necessary…The desert can be a windy place.
Yeah, a really good sandstorm should take care of those mirrors. Here’s hoping. !!!
I did have an idea how they could save all those birds from getting cooked….
….. put rows of wind turbines around the solar farm.
Chopped up and then flash fried… now I’m hungry for some KFC. <¿<
I guess the thought of placing a thermal sensor or bolometer safety network on the tower at various points vertically would have been too expensive?
Would you like to be the fire person going up the tower to put the fire out? I sure wouldn’t; unless of course it was totally dark with the sun safely below the horizon.
That was my thought too when I read that.
This Ivanpah thing is like one of those ancient Roman or Greek temples to one of their gods.
From Wikipedia on ancient Greek temples:
“Each ancient Greek temple was dedicated to a specific god within the pantheon and was used in part as a storehouse for votive offerings.”
These temples were of course hoped to bring some favor or benefit to humans from an unseen deity. It required nothing but faith to believe that the gods could both be merciful and vengeful in their dealings with humans.
Ivanpah is dedicated to the Climate changeists’ deity Gaia. If we can show Gaia that we are trying to find “renewable” power, no matter how expensive, then maybe she won’t subject us to thermageddon for our CO2 emission sins.
If private entities want to continue their Climate Change worship and belief in carbon sin, then that is their concern, their money.
But the US and state governments, with their conscription of taxpayers and utility ratepayers to support a religion, need to get out of the Climate Change religion business.
Ah, but the Roman sacrifices actually DID benefit the general public: if you sat through the big wig’s prayer, you got to eat the bull at the end!
Now all we get is the BS from start to finish. At least the Romans gave out actual beef.
In those applications using high power lasers there is an OSHA requirement for “beam blocks”. These must be controlled by a keyed (like a padlock) switch so only authorized users can enable the light flux.
So we have a 1 milliwatt laser instrument that needs (by OSHA decree) a keyed switch so only “trained personnel” can “turn on the light” ?? But these dummies in the desert can just willy nilly point the rays from the SUN wherever they might end up… What could possibly go wrong (asked from an ant’s perspective)….
Maybe all those heliostats need an individual beam block ? Oh wait, that would require another trillion dollars of taxpayer money to make this gigantic magnifying glass safe to operate in a desert…
Cheers, KevinK
I was refreshing myself on Wikipedia concerning Ivanpah. Total power is supposed to be 392 MW and it covers about 4,000 acres. Now, my local coal-fired plant has three operating units. The smallest is 600 MW. The other two are 1,000 MW each. The plant covers about 100 acres. Ivanpah doesn’t even have the capacity to replace the smallest unit at my local plant, and it covers 40 times the land area.
It’s called energy density.
Yeah, but coal doesn’t work at night, does it? Oh sorry, the mirrors won’t work at night. Go the CAGW-producing power plant.
Sarc
One open pit coal mine alone in Wyoming has disturbed 43,000 acres; before it is finished it will be over 70,000. Your local coal power plant is the small tail end of a giant coal extraction and transport system. Electricity requires a system. Solar is the 21st century; PV cost has dropped from 70 dollars per watt in the 1970s to about 47 cents currently. Think about exponential change: we are sitting at the bottom of the S Curve of adoption. Sites like WUWT are linear in thinking, like incumbent basic thinkers always are.
What mine would that be Sam?
That would be the goldmine of green energy subsidies.
North Antelope Rochelle mine in WY. There are 12 such mines in WY.
The mine is filled in and reclaimed when the coal is gone.
Solar is 3rd century, trying to be updated and failing.
If you think that they are ever going to get solar much below 47cents, despite the trillions in subsidies, then you are delusional.
Like all trolls, you are leaving out the major costs, the batteries and or back up fossil fuel plants that have to be kept running in the background in case a cloud passes over your magical tower.
Ah, North Antelope Rochelle…thought so. The acreages you tout are for the leases, not the actual mine itself. The total amount under lease is questionable. Regardless, the actual surface mine, while very large for a coal mine, pales in comparison to the size of some of the gold, silver, copper and other surface mines. As usual, just CAGW disinformation.
Well a kW of solar PV electric would be 5 square meters (50 square feet) of single crystal 20% efficient Sunpower systems panels, and I don’t think you can buy that much panel for 47 cents, and that’s just for the panels, not for the 100 year storm proof installation hardware, and the required real estate. And that is with the panels properly pointing at the sun.
So I’ll pass on your solution thank you.
G
I have not been able to find any data about how much electricity it takes to operate the mirrors. It would require two servo motors on each mirror, in order to position them continually during the day, to track the sun across the sky. Has anyone seen anything about the power consumption of those motors? They would all have to be serviced regularly, also, to avoid misalignment problems.
Wouldn’t a single servo do the job if the mirrors’ axes were parallel to the earth’s?
Would that work for the seasons?
I assume, in reference to … “ servo motors on each mirror, in order to position them continually during the day, to track the sun across the sky.”
lee – May 21, 2016 at 9:27 pm …… was asking:
HA, maybe the resident climate scientists, engineers and/or programmers miscalculated the “spring to summer” SEASONAL shift of the Sun’s position as it approaches its Summer solstice, ….. and if so, …… there was no “mirror misalignments” as reported, to wit:
Quoting article:
Robotic “driving” software ….. does not, … either randomly or infrequently, initiate a programmed “misalignment” of a repeated real-time process.
NO NO NO !!!
Each and every one of those mirrors has to have its OWN PERSONAL guidance program.
No two of the mirrors follow the same steerage algorithm. You need a separate micro-controller for each mirror. They are totally independent optical systems.
G
It sounds like a Series of Unfortunate Events 😉
Let me guess:
1) Solar, including thermal solar, is a mature technology and is replacing coal right NOW and will provide baseload soon.
2) But because the plant isn’t working, it will have been a prototype and not a mature technology (yet) and it’s allowed to fail and we are learning a lot and solar technology will be make progress thanks to its failure.
Right?
Right.
And don’t forget,
3) Being on the right side of history or something.
And the bird kills…. they probably only happen for the first year or two. !
… like the Snail Darter?
well that will be enough to run you out of birds.
g
The Sun God is angry, prepare a sacrifice!
Wouldn’t it be ironic if an exploded bird in its final death dive landed on a combustible component?
If not, then outright hilarious.
I was going to make a learned and constructive comment, but instead I’ll just say:
Ha ha! You just got burned green solar warriors!
They use natural gas to preheat the water – seems to me that’s cheating…
“They use natural gas to preheat the water – seems to me that’s cheating…”
Kosher “renewables” (those renewables that are the least useful, the most costly, the most subsidized), like wind turbines, solar, and very small hydro (but even that isn’t always kosher), are heavily promoted by the NYT, the graun, etc. They are both:
– a money making scheme for some industries in bed with the government;
– an energy laundering scheme, as the “backup” (that actually produce most power) is “dirty”, “evil” fossil but gets a pass.
In this case, it’s even better than the usual solar PV+wind+backups: the “solar” plant is allowed to burn nat gas and still produces kosher energy!
If you look back, this plant supposedly proved that solar was cheap, effective and competitive with fossil fuel. This claim was pushed globally in every media outlet, blog and Facebook. However, when confronted with the facts (simple costs, admitted maintenance and gas usage) those claims were easily shot down. The plant’s operation and actual performance show that it does not work, as in provide cheap reliable 24/7 power with zero backup.
The uninitiated may be oblivious to the fact that images of Ivanpah are invariably taken so that the tower obscures the view of the plant’s essential gas boilers’ chimney.
See the image on page 5 here:
The gas-fired boilers used 254,000,000 kWh in 2014
Ivanpah may be large by solar standards but in reality, 392 MW is hardly anything at all. The capacity factor will probably be less than 30%, so you will need three Ivanpahs to get that 392 MW.
Total US energy usage is more like 4TW so you will need about 30,000 Ivanpahs. At six square miles per installation that is about 180,000 square miles and over 100,000 years of construction time if you build one at a time.
After thirty years at the most you will probably need to replace them. If my mental arithmetic is correct, that means that you will need to build about 3 per day for ever. What could go wrong?
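Making that mental arithmetic explicit (all inputs are the commenters’ own round numbers, not official figures):

```python
# Scale-up arithmetic from the two comments above.
US_DEMAND_MW = 4_000_000   # ~4 TW total US energy use (very rough)
NAMEPLATE_MW = 392         # per Ivanpah
PLANTS_PER_NAMEPLATE = 3   # "three Ivanpahs to get that 392MW" (~33% CF)
SQ_MILES_EACH = 6
LIFETIME_YEARS = 30

plants = US_DEMAND_MW / NAMEPLATE_MW * PLANTS_PER_NAMEPLATE
area_sq_mi = plants * SQ_MILES_EACH
builds_per_day = plants / (LIFETIME_YEARS * 365)  # steady-state replacement
print(f"{plants:,.0f} plants, {area_sq_mi:,.0f} sq mi, "
      f"{builds_per_day:.1f} new builds per day forever")
```

With those inputs it comes to roughly 30,000 plants, about 184,000 square miles, and close to 3 new builds per day just to keep up with replacement, consistent with both comments.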
If you build them next to each other you could play a giant version of Snake on the US landscape. Obviously you get extra points for hitting a town.
By my calculation that will generate a snake roughly 50,000 miles long, moving at 1,800 miles a year.
Murphy rules !!!
… forced firefighters to climb 300 feet up the tower to a water boiler that’s superheated by tens of thousands of mirrors to create steam to run a turbine.
Wow! Kudos to the firefighters in dealing with a dangerous situation. I guess you can’t just turn off tens of thousands of mirrors when things go wrong.
You don’t think any of those firefighters climbed up that tower while even one mirror was not looking face down at the dirt, do you??
And I would want the main breaker to the control electronics in the off position and the wires removed so no idiot could turn it back on while I was up the tower.
G
I have driven by that thing on I-15 many times in years past. I always kind of wondered if it malfunctioned could it focus a beam onto a spot of the interstate and cause problems. The idea of being the ant under the magnifying glass on a sunny day is not appealing.
There is a tower in England that cooks the opposite side of the street.
Designed by “An award winning architect”
It literally melted the side mirror off a parked car.
Meanwhile, China’s Shenhua Group committed to building 1,000 MWe of solar thermal power with US-based SolarReserve. That’s roughly 2.5 times the capacity of the Ivanpah plant. The Shenhua plant will have energy storage, which Ivanpah does not have.
Solar is also a growing power source in India, as First Solar already shipped 1,000 MW of thin-film PV capacity to India.
It is very interesting to read the comments above, and the gloating tone from WUWT commenters on the Ivanpah minor incident. No one was injured, and no radiation was released. Most interesting is to see the complete absence of any such coverage, and commentary, on the numerous and far more dangerous incidents in the US and worldwide nuclear power reactors. There were 89 reactor incidents in a six-year period (2010-2015 inclusive) in the US, a rate of one incident every 3.5 weeks. That statistic should make everyone sleep well at night.
Kudos, though, to Anthony for including a link and a brief mention of the Ivanpah plant reaching the design output, after diagnosing and correcting a mechanical defect.
Now is the time to be designing, building, and testing various forms of renewable power. Coal plants cannot compete economically, now that they are required to shoulder their environmental burden and stop polluting the air. Nuclear plants also cannot compete economically and are aging fast, and shutting down with regularity. Coal and nuclear together provided a bit more than 50 percent of US electricity in 2015, which must be replaced by other forms of generation within 10 to 15 years. Only natural gas, wind, and solar have the resources and proven technology to do that.
For details on the US grid transition, see my post “US Power Grid Transitioning: Wind Energy Climbing Fast.”
“and no radiation was released.” Tell that to the flying Smokers who used to be birds; what do you think sunlight is? The surface of the Sun is about 10,000 degrees F, and concentrated sunlight can produce a similar temperature right here on Earth. Ouch…
An incident is NOT the same as an accident. Green doublespeak ?
*A light bulb burns out ahead of schedule in a nuclear power plants men’s room*
WHOA, make that 90 incidents.
This is from the linked article:
I would draw your attention to the “MOU”. It actually means nothing except (probably) getting some funds from the US federal government……
“SANTA MONICA, Calif., May 3, 2016 /PRNewswire/ —…”
‘SolarReserve’s solar storage technology solves the intermittency issues experienced with other renewable energy sources, enabling the delivery of 100% renewable baseload and dispatchable power with operational capabilities comparable to traditional fossil-fired and nuclear electricity generation methods…”’
Journalistic license. Even SolarReserve’s site doesn’t make such an outlandish claim.
‘Molten salt storage creates a buffer between when the sun is shining and when electricity is generated, smoothing out the intermittency that limits other renewable technologies.’
“Smoothing out” is not eliminating.
Molten salt storage is about 70% efficient. So, if you are going to provide baseload electricity from a solar plant, you need 4 to 5 X the capacity, to provide ONE DAY’S electricity. Day two depends on the weather.
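A minimal sketch of where a “4 to 5X” multiplier comes from, assuming (my illustrative numbers, not plant data) about 8 good sun hours a day and the ~70% round-trip storage efficiency mentioned above:

```python
# Collector capacity needed to deliver a constant baseload around the
# clock from solar plus storage. Illustrative inputs, not plant data.
SUN_HOURS = 8              # assumed usable collection hours per day
STORAGE_EFFICIENCY = 0.70  # round-trip efficiency of molten salt storage
BASELOAD = 1.0             # baseload demand, arbitrary units

direct = SUN_HOURS * BASELOAD                    # delivered while sun shines
via_storage = (24 - SUN_HOURS) * BASELOAD / STORAGE_EFFICIENCY
capacity_needed = (direct + via_storage) / SUN_HOURS
print(f"need ~{capacity_needed:.1f}x the baseload in collection capacity")
```

Fewer winter sun hours or extra margin for cloudy days pushes the multiplier toward 5X and beyond; and day two, as the commenter notes, depends on the weather.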
“Only natural gas, wind, and solar have the resources and proven technology to do that. ”
Well, you got one out of three correct. Wind and solar have yet to prove they can generate electricity reliably and cost effectively.
Incident, as defined by IAEA: Any unintended event, including operating errors, equipment failures, initiating events, accident precursors, near misses or other mishaps, or unauthorized act, malicious or non-malicious, the consequences or potential consequences of which are not negligible from the point of view of protection or safety.
The USNRC does not define an incident per se. They define Emergency Classifications. There are 4 levels.
Notification of Unusual Event—Events that indicate potential degradation in the level of safety of the plant are in progress or have occurred. No release of radioactive material requiring offsite response or monitoring is expected unless further degradation occurs.
Alert—Events that involve an actual or potential substantial degradation in the level of plant safety are in progress or have occurred. Any releases of radioactive material are expected to be limited to a small fraction of the limits set forth by the EPA.
Site Area Emergency—Events that may result in actual or likely major failures of plant functions needed to protect the public are in progress or have occurred. Any releases of radioactive material are not expected to exceed the limits set forth by the EPA except near the site boundary.
General Emergency—Events that involve actual or imminent substantial core damage or melting of reactor fuel with the potential for loss of containment integrity are in progress or have occurred. Radioactive releases can be expected to exceed the limits set forth by the EPA for more than the immediate site area.
For general usage, a Notification of Unusual event (or just Unusual Event) and an Alert could be classified as a minor and major incident, respectively.
A Site Area Emergency and General Emergency could be classified as a minor and major accident, respectively.
When it comes to coal plants being able to compete economically: well, if the government wasn’t actively trying to shut them down, on an economic basis coal plants would win, hands down. Natural gas and nuclear would be 2 and 3. Wind and solar wouldn’t even place, if judged on the ability to compete economically.
When it comes to polluting the air, do you even know what is coming out of the chimneys of a modern coal-fired plant? It’s primarily nitrogen, carbon dioxide and water vapor. There are trace amounts of sulfur oxides and nitrogen oxides and completely negligible amounts of fly ash and mercury. Modern SCRs and FGDs are 90%+ efficient at removing the SOx and NOx. ESPs are darn close to 100% efficient at removing particulate. Mercury is a thoroughly overhyped component of the flue gases that has been used to try and scare people.
Mr. Sowell, I imagine you will find a much more receptive audience in the echo chambers of one of the CAGW faithful websites. Or, with the socialist members of our government.
@SMC, The 89 nuclear incidents I referenced above are only the most serious ones, not including minor infractions of the safety regulations. These 89 incidents were all either SIT or AIT events, as defined by NRC below.
“When an event or condition increases the chance of reactor core damage by a factor of 10, however, the NRC is likely to send out a special inspection team (SIT). When the risk rises by a factor of 100, the agency dispatches an augmented inspection team (AIT).”
You could take up your problem with the NRC directly, if you believe they are doing it wrong. I’m sure they will be happy to listen to your arguments.
Mr. Sowell
You’re joking, right? Why don’t you go read the “NRC Incident Investigation Program”, because, by your comment, either you haven’t read it or you don’t understand it.
There was another incident at a nuclear plant when an operator spewed his morning coffee on the control equipment while reading Rodger’s comment. It was also noted there was a meltdown but fortunately it was just another operator reading the same comment and not the core.
Increases the chance of core damage by a factor of 10.
So it went from one in a million to one in 100K?
@Roger Sowell:
CSPs and wind turbines killing birds, and the greenies all but ignore it. But WHOA, when an oil spill starts affecting and killing birds, then it’s eternal damnation for the party responsible.
Hey Roger, can I have one of those licenses you guys awarded yourselves to invoke double standards? I think it would be really cool to be able to do that and not have to justify or explain myself…just like you greenies do.
The other point here involves the trouble solar (and wind) energy pushers like you have with understanding the physics, math and engineering constraints that preclude solar and wind energy from scaling up to a level needed to provide a meaningful amount of support to the grid:
What is needed to go 100% solar? The piece above says that it….
“……requires 29,333,333,333 (29.33 billion) solar panels and 4.4 million battery modules contained in a number 40 shipping container (40 feet X 6 feet 8 feet,) covering a surface area of 130.8 km2 or a square with sides of 11.4 km with zero space between modules. This data is presented in a straightforward fashion for nonscientists in the publication “Going Solar.”
2. Manufacturing considerations. Twenty nine and 1/3 billion is a very large number of panels to manufacture. As pointed out in “Going Solar” it would take 929 years to produce this number of panels if they could be built at the rate of 1 per second. For comprehension, today’s commercially available PV panels are standardized at 1.46 square meters and weigh about 40 pounds. Fabrication is a multistep process involving silicon crystal fabrication, cell construction, interconnection, back plane and frame. Each panel needs to be inspected, tested, and certified to meet specification……”
Even a fraction of the totals above is still laughable (say 25 or 50%).
As for our ageing nuclear plants Roger, try checking out 4th generation nuclear power technology before you foolishly wave off nuclear power. One example is the molten salt reactor.
Roger, where did you get the idea that we humans are incapable of improving on existing nuclear technology or replacing existing nuclear technologies with better tech and resolving the issues with them?
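For what it’s worth, the headline figures in the “Going Solar” quote above can be checked with a few lines of arithmetic. A quick sanity check in Python follows; the 40 ft × 8 ft container footprint is my assumption, since the dimensions in the quote look garbled:

```python
# Sanity-check of the "Going Solar" figures quoted in the comment above.
# Assumptions not in the original comment: a 365.25-day year, and a
# 40 ft x 8 ft container footprint (the quoted dimensions appear garbled).

PANELS = 29_333_333_333                 # panels claimed for 100% solar
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# At one panel per second, how many years of manufacturing?
years = PANELS / SECONDS_PER_YEAR
print(f"{years:.0f} years")             # ~929 years, matching the quote

# Battery containers: 4.4 million units, each with a ~40 ft x 8 ft footprint.
FT_TO_M = 0.3048
footprint_m2 = (40 * FT_TO_M) * (8 * FT_TO_M)   # ~29.7 m^2 each
total_km2 = 4_400_000 * footprint_m2 / 1e6
print(f"{total_km2:.1f} km^2")          # ~130.8 km^2, matching the quote
```

The 130.8 km² figure, notably, is the footprint of the battery containers alone, not the panels; 29.33 billion panels at 1.46 m² each would cover far more ground.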
@ CD in Wisconsin, my nuclear facts stem from undergraduate courses in nuclear chemistry and nuclear reactor design, plus 40 years or more of following closely the developments in nuclear power plants. In addition, my engineering career included a stint evaluating all types of nuclear power for providing chemical plants and oil refineries with electricity, steam, and process heat at required temperatures. I also had a personal tour of the Perry Nuclear Generating plant on the south shore of Lake Erie in Ohio, one week before the initial load of fuel rods was delivered to the plant.
I only observe the facts before me. If anyone wants to produce valid facts on low-cost and safe nuclear plants, you are welcome to go to it. Thus far, as I have documented on my blog, there are zero contenders for safe, efficient, and economical nuclear power. They all require massive government subsidies, as Article 13 of my series The Truth About Nuclear Power shows. The plants in the US are shutting down due to an inability to cover their cash operating costs. Fort Calhoun in Nebraska is losing $30 for every kWh it generates. Plants across the Midwest and East coast are screaming for more government subsidy in order to stay open and “protect jobs.” Those are indisputable facts.
In that same environment, a new nuclear plant has zero hope of producing power and making any money, let alone ever paying off its capital costs. All this is documented extensively on my blog. Or you can try to find anything to refute it. Be my guest.
As to double standards, why don’t the nuclear power cheerleaders admit they are almost fully subsidized? Why don’t they admit the environmental damage created by the entire nuclear process, from mining the ore to refining the uranium to polluting the planet with plutonium and other toxic radioactive substances? Or, is that simply not worth mentioning to a nuclear cheerleader?
Wind power is barely subsidized, at the rate of approximately $1 per month for the average US residential user. Those subsidies ratchet down and disappear in 5 years. Can nuclear ever make the same claim?
Can coal-fired power plants deny the amount of toxic air pollution they have spewed forth for decades? I think not. They cannot afford to install the stack gas scrubbers to meet the emissions standards, so the plant owners elect to shut them down.
As to understanding the physics and math of renewable energy, I believe I can hold my own with anyone. A degree and 40 years of experience worldwide as a consulting chemical engineer will do that.
Mr. Sowell,
You want to talk about subsidies? Fine, let’s use the data from the chart: Table ES2. Quantified energy-specific subsidies and support by type, FY 2010 and FY 2013 (million 2013 dollars).
According to this information, Wind and Solar received $11 billion dollars and change in subsidies from the government. Nuclear $1.6 billion and change. Nuclear power produces about 20% of our electricity, reliably while wind and solar produce less than 2%, unreliably.
Nuclear power plants are shutting down because they are reaching the end of their designed life. Regulations, litigation, and NIMBY are preventing the refurbishment and upgrading of existing plants and the building of new ones. The last nuke plant built in the US dates to the 1970s.
As for the coal plant scrubbers, I assume you are talking about CO2 scrubbers. To date, there is no economically viable technology capable of scrubbing CO2 from the flue gas of a typically sized coal plant. So of course the utilities can’t buy one; for all practical purposes, they don’t exist.
Also, Plutonium is not a naturally occurring element on Earth, as you seem to imply.
While you may be able to hold up your end in a technical engineering discussion, debatable from what I’ve seen, you really stink at CAGW agenda promotion stuff. Also, take your self promotion someplace else. I’ve been to your blog, you have little of substance there and less to offer.
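One rough way to put the subsidy figures in the comment above side by side is dollars per MWh of generation. The sketch below uses the numbers stated in the comment plus one assumption of mine (total US generation of roughly 4,000 TWh/yr); treat it as illustrative arithmetic, not official figures:

```python
# Rough subsidy-per-MWh comparison using the figures quoted above.
# Assumption NOT in the comment: total US generation of ~4,000 TWh/yr.
TOTAL_TWH = 4000

wind_solar_subsidy = 11.0e9        # dollars (per the quoted Table ES2)
nuclear_subsidy = 1.6e9

wind_solar_twh = 0.02 * TOTAL_TWH  # "less than 2%" of generation
nuclear_twh = 0.20 * TOTAL_TWH     # "about 20%" of generation

ws_per_mwh = wind_solar_subsidy / (wind_solar_twh * 1e6)
nuc_per_mwh = nuclear_subsidy / (nuclear_twh * 1e6)

# Roughly ~$140/MWh for wind/solar vs ~$2/MWh for nuclear on these inputs.
print(f"wind/solar: ${ws_per_mwh:.0f}/MWh, nuclear: ${nuc_per_mwh:.2f}/MWh")
```

On these assumed inputs the per-MWh subsidy intensity differs by well over an order of magnitude, which is the point the commenter is making.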
CD,
Good summary of the amount of solar to replace current electricity generation, thanks.
And that barely saves any oil: solar produces no liquid fuel to run our transportation system, and we use very little oil to generate electricity anyway. Think of all the fossil fuel spent to produce, ship, and install all that solar hardware. Solar is not sustainable without massive amounts of fossil fuels, including the natural gas that fires the boilers when the sun sets or the panels are covered with snow.
Sure, we could require everyone to use an electric car, but a usable battery range has been elusive despite all the taxpayer subsidies already thrown at an objective that may not be achievable given the physics, chemistry, and thermodynamics. That is to say nothing of the additional taxpayer subsidies required to replace the enormous infrastructure that lets us drive almost anywhere in the US.
@Roger Sowell, if you know so much about the nuclear industry why are you so frightened of nuclear power?
I’m assuming you don’t fly on commercial airplanes; they’re orders of magnitude more dangerous. They actually kill people on a regular basis, and there are hundreds of high-risk incidents per day. Then there are the unstudied effects of the higher levels of ionising radiation to which anyone who slips the surly bonds of earth is exposed.
Better to stay at home wearing your tinfoil hat and underpants.
@Roger Sowell
First of all Roger, I congratulate you on your education and years of experience in your field. I have no doubt that it is all quite impressive.
If you would then, sir, please provide a peer-reviewed paper or Internet piece that refutes the conclusions drawn by the post I provided in my previous comment. There seem to be quite a few people at this website who don’t appear to have problems with its general conclusions (if not the exact numbers it provides). I would hazard a guess and say many of them have credentials equal to yours. In particular, please show your math and explain the physics that would enable wind and solar energy to scale up as viable alternatives to our fossil fuel and nuclear plants. Thank you.
Second, if I may say so, you are doing the same thing that many anti-nukes seem to do these days: you condemn nuclear power based on our experience with PRESENT DAY nuclear technology (2nd and 3rd generation) and never seem to give any attention or credence to the potential of 4th generation technology.
The molten salt reactor (as I told you previously) is one of those potential technologies. If you are up-to-date on nuclear power development, I assume you’ve heard of it.
I assume you know who Alvin Weinberg was (may he R.I.P.). Back in the late 1960s, he and his fellow scientists conducted a four-year MSR experiment at Oak Ridge.
Their general conclusion was that the technology was viable. Thus Weinberg asked for more funding to continue its research and development. MSRs are regarded as safer and cheaper than today’s reactor technology, and they can use plutonium as their startup fuel. We could thus hopefully draw down our plutonium stockpiles with MSRs, IF the technology pans out as hoped.
Roger, if you could, please explain to me what you know about MSRs that Weinberg and company did not. Thanks again.
“why don’t the nuclear power cheerleaders admit they are almost fully subsidized?”
Because they aren’t, they are severely harmed by the regulations.
“Why don’t they admit the environmental damage created by the entire nuclear process, from mining the ore to refining the uranium to polluting the planet with plutonium and other toxic radioactive substances?”
Because it’s a lie.
CD in Wisc, your thorium wet dream is nonsense.
The U.S. was interested in breeding thorium in the 50s and 60s because we thought uranium was scarce. We lost interest in thorium when we found uranium isn’t scarce.
The viability of MSR has nothing to do with thorium. Double ought zero.
@Gamecock:
Okay Gamecock, I’m tired of arguing with you about thorium. If you want to run MSRs on uranium instead of thorium, then we’ll run them on uranium. As a matter of fact, the MSRE at Oak Ridge back in the 1960s actually did run on uranium rather than thorium, IIRC.
My main reason for supporting MSR research and development is as a safer and cheaper alternative to today’s gen of nuclear reactors. THAT is why I believe they deserve continued research and development (as is being done now in China). The fuel they run on is secondary to me.
Do you at least agree that the basic design of MSRs makes them safer and that they provide a means to draw down our stockpiles of plutonium? Do you at least agree that they deserve continued research and development?
‘Do you at least agree that the basic design of MSRs makes them safer and that they provide a means to draw down our stockpiles of plutonium? Do you at least agree that they deserve continued research and development?’
Not necessarily. MSRs require real time separations, rather than the batch separations of traditional reactors. This is a complication. The liquid mass in the MSR is critical. Moderating it could be tricky. Should an emergency arise, dumping the liquid won’t make it non-critical. Dumping into a pit with moderators might work, but when it hardens with cooling, you’ve got a big problem. Others have said that the salts of MSRs are necessarily corrosive.
I don’t know. It might work; it might not. But it is not easy peasy. And, I know that people who know a lot more than me got it to work for a while 50 years ago, then abandoned it. I believe they had good reasons. I’ve heard all sorts of conspiracy theories, but I don’t believe them. Weinberg was a flake; he was fired for cause.
Regardless, I think it silly for laymen to push a particular technology.
And . . . I don’t care about “our stockpiles of plutonium.”
“moderators”
what?
Whatever became of PBRs, or Pebble Bed Reactors? When I heard of them about 10 years ago, they were supposed to be modular, scalable, and safer than the reactors current at that time.
@Gamecock: I never made this clear, and I probably should have. My support for 4th generation nuclear power is not limited to MSRs. I also support continued research into the IFR (GE’s PRISM), the PBR, and Bill Gates’ Traveling Wave Reactor. I have simply been using MSRs as one example. I apologize if that was never clear.
RE your lack of concern for our stockpiles of plutonium: anti-nuke activists use the plutonium “waste” issue as a selling point for getting rid of nuclear power altogether. If we are to have a nuclear-powered future in this country, it needs to be addressed for that reason if for no other. I believe the activists won’t let the issue die until they either succeed at killing nuclear or the plutonium issue is somehow successfully dealt with. And I would imagine there are a considerable number of politicians in Washington who support the anti-nukes.
As for lay people being “silly” for supporting any nuclear technology, I somehow find it difficult to believe that we could continue to have a nuclear-powered future in this country without the support of the majority of what are called “lay people” – the American people. Bill Gates is a lay person. Is he silly for pouring a ton of his money into his Traveling Wave Reactor company?
I am hoping that you want nuclear power to advance in this country as I do, but I can’t tell for sure. If you do, should we be sitting on our butts, settling for the nuclear status quo… and possibly letting it die out? I hope your answer to that question is ‘no’, as it is with me.
I don’t want the anti-nuke activists to win here, Gamecock. If nuclear is to advance, we should all support and promote the potential of 4th gen nuclear and show the anti-nukes and renewables people that they are wrong. That is all I’m really trying to say. There is no room for apathy.
@Simple-Touriste and Gamecock…
Gamecock is confusing his terminology. Specifically, he is confusing a moderator with a poison. A moderator is material that slows fast neutrons and makes them thermal neutrons. Thermal neutrons are what makes a reactor controllable. Examples of moderators are water and graphite. A poison is material that absorbs neutrons and thus stops the fission process. Examples of poisons are Boron and Hafnium.
This is a very simplified explanation. If somebody wants to get in to the fine frog hair splitting technical details, go ahead but, it’s beyond the scope of this discussion.
Roger Sowell: If I were to tell you that Ivanpah (Brightsource) has included thermal storage in its (useless) power station, I bet you would still be pushing the claim that the Chinese are the only ones doing it. You see, as soon as you are outed on this economy with the truth, anything further you have to say is worthless.
See Joe Public’s comment above, which includes Brightsource’s own website showing thermal storage as part of the design. The fact that Ivanpah is a complete waste of space (literally) is neither here nor there: I’m just impressed that fire-fighters are willing to go up the boiler tower in direct line to all those mirrors – which have been shown not to be reliable in their aim.
One has to buy in to the idea of ‘Carbon Footprint’ to accept that a well designed, well run nuclear power plant carries much of an environmental burden. Since I do not accept that premise, your argument for solar power has no merit. It isn’t that you can’t build more ‘renewable energy’ plants; it’s that you will never be able to supply more than a fraction of the power needed to drive the economic engine.
I do accept that coal burning has a large environmental impact. Taking CO2 out of the argument, since I do not accept that it is harmful to the environment in the amounts released, burning coal releases a lot of dangerous heavy metals – not immediately dangerous, but over time they can pollute an area enough to be of concern. This is and has been addressed over time to reduce the release of (actually) dangerous substances. It is only the argument that CO2 must be ‘captured’ that is killing a coal plant’s ability to compete. Obviously mining coal has a much more visible environmental impact in the area where it is mined, especially if it’s an open pit mine, but have you seen the Texas landscape being ruined by dozens of wind turbines on every hilltop? It’s awful, and needs to stop.
Natural gas for the next 30 or so years will provide for us just fine. Big centralized solar power plants are simply not practical.
I had the pleasure of driving by one of those Texas wind farms the other day – all but 3 of the turbines were standing as still as could be – not an unusual occurrence. I think wind now provides 15% of the Texas grid’s generating capacity. I don’t know the actual numbers, but I’m sure the percentage it actually delivers is a lot less.
JVC
jvc, worse, they weren’t standing still. Had you stopped to watch them for a few minutes you would have seen them turning slowly.
They use grid power to keep them turning so that they don’t develop flat spots on the bearings.
thanks for that Mark. Learn something new every day.
JVC
@Sowell – Before bragging about the output from Ivanpah, please calculate the power needed to move 173,500 heliostats (two motors each, one for azimuth and one for elevation) – garage-door-sized mirrors. Keep these garage doors properly aimed during wind and other weather conditions, moving them second by second. You also need to add the power consumed by all the auxiliary loads needed to “make power”: pumps, HVAC equipment, controllers, heat exchangers condensing the steam back into water to be made into steam again, computers, and on and on.
My back-of-the-envelope figuring shows that, the way Ivanpah makes power, about 25% of its rated nameplate power (about 100 MW) is needed 24/7/365 to ensure there is not another fire because heliostats are out of position. Yes, they need to be moved even when NOT making power, and the auxiliary systems need to be “at the ready” too. Run continuously, that comes to about 75% of the power the plant actually produces. However, this power is not accounted for in the declared “output power.” Output power is measured at the output of the generator – all power plants measure it there, with a separate meter. Power used for all the equipment I listed comes from a separate source, is measured with a separate meter, and is NOT subtracted from the number they give you as “output power.”
Now, subtract the amount of power actually generated by burning NG. Looks to me like a “Net” loss.
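The back-of-the-envelope claim above can be reproduced with one extra assumption of mine: a solar-field capacity factor of roughly one third (Ivanpah’s nameplate rating is about 392 MW). A 25%-of-nameplate parasitic load running around the clock then works out to about 75% of average generated power, as the commenter says:

```python
# Sketch of the parasitic-load estimate in the comment above.
# Assumptions NOT in the comment: ~392 MW nameplate (Ivanpah's rating)
# and a capacity factor of roughly one third for the solar plant.
NAMEPLATE_MW = 392
CAPACITY_FACTOR = 1 / 3                 # assumed average output fraction

parasitic_mw = 0.25 * NAMEPLATE_MW      # ~100 MW, running 24/7 per the claim
avg_output_mw = CAPACITY_FACTOR * NAMEPLATE_MW

fraction = parasitic_mw / avg_output_mw
print(f"parasitic load = {fraction:.0%} of average generated power")  # 75%
```

The 75% figure falls straight out of the ratio 0.25 / (1/3); a different assumed capacity factor would shift it proportionally.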
And never mind the fact that the places where large solar arrays like this can work are very limited and generally not located near the places where the electricity will be used. There is a reason that monstrosity is out in the Mojave. Let’s see how good it would be in central Indiana! On second thought, let’s not. I like the birds and bats around here, and the wind farms are already taking their toll.
The computers are left on at night, plus lights and environmental controls for the workers who are at the plant 24/7. Also the mirrors have to be returned to their morning position sometime during the night.
“Nuclear plants also cannot compete economically”
due to the crazy “regulations” and uncertainties of permit renewals, all indirectly caused by the absurd fear of trivial levels of radiation that can’t do any harm (and may even do good).
The nuclear fission industry has an excellent track record. (Not perfect, but better than pretty much any other industry.) Yes, this is even including “Chernobyl”.
“No one was injured, and no radiation was released.”
Seriously?
This is how you measure the gravity of events? Persons injured, and then “radiation” released?
By that (silly) metric, most incidents in nuclear plants are unimportant. Yet they often get massive media coverage. How do you explain that?
Blind anti-nuke hysteria? Seriously, if no one was hurt and no radiation was released, then all you have is a repair problem, the same as at any other power plant. So what are the majority of these ‘incidents’? And if any of them had happened in a non-nuclear power plant, would anyone have done more than ordered a repair and made a note in this month’s reports?
Now me personally, I’m glad that the people operating our nuclear power plants are so careful. I fully expect them to take more precautions than someone working at a coal or gas power plant. But the simple truth is that nuclear’s safety track record is far better than that of any other form of power generation. A basic GOOGLE search for ‘nuclear power plant fire’, compared to fires at any other generators (even renewables – hell, even hydro), will show you just how safe nuclear really is.
The real problem is too many people get their knowledge of nuclear energy from The Simpsons. >¿<
“The real problem is too many people get their knowledge of nuclear energy from The Simpsons. >¿<"
In The Simpsons, the cooling towers of the nuclear plant cause acid rain. I have always believed that Burns’ plant is an allegory for all the polluting power plants, since the small town of Springfield couldn’t justify having many different power plants.
Let us know when one of those “incidents” cuts back production by a third as happened with Ivanpah.
It would be great if you and like minded could choose to depend and pay through the nose for your energy needs via solar plants and the rest of us be left alone. Unfortunately we do not live in so perfect a world.
It would be great if you and like minded could choose to depend and pay through the nose for your energy needs via solar plants and the rest of us be left alone. Unfortunately we do not live in so perfect a world.
*********************************************************************************************
Except it can be done.
All utilities could be required to provide a “green” or “renewables” tariff with no cross-subsidy from other tariffs (unlike what happens now).
Their customers can be connected via smart meters. They receive power when, (and only when) said power is available, up to the figure available. When there is insufficient power they are disconnected.
Everyone is happy!
I somehow can’t envisage many takers.
SteveT
Roger, you have to be grasping at straws to continue to promote this project as a viable energy option.
Yes, we are all happy that nobody was hurt. But why wasn’t the possibility of a steering fault designed for in the system in the first place?
Certainly many of us have pointed to defects in the concept, but I can’t say that I had previously thought about the thing setting fire to itself.
And as for failures, and accidents for other systems of energy production; each of those schemes has to deal with its own vulnerabilities.
But the vulnerability of other systems, is a problem of those systems; it is NOT a strength of solar boiler systems.
Solar energy, whether PV or thermal, does have its place in any comprehensive energy supply plan. Neither one has any value whatsoever for transportation, which is a huge energy-consumption market.
This desert boondoggle is not the way to utilize renewable energy; it is a totally harebrained scheme at best.
At least solar PV does not take a side trip down to the waste heat garbage dump of the energy spectrum.
But Ivanpah starts off right in the sewer treatment plant of the energy spectrum.
G
Notwithstanding the fact that you claim to be a chemical engineer and a lawyer, your distortion of the facts is truly amazing. Contrary to your claim that wind and solar have the technology to fill a perceived gap created by this administration, there are no recourses other than ill-gotten CRAPITALISM gifts doled out by the Eco-Radicals in the EPA and US Energy Department. Coal has a 100-150 year supply and natural gas at least a 300 year supply, and that doesn’t count the methane clathrates we haven’t begun to harvest.
Oil is the waste stream of daughter reactions of fissioning uranium and thorium in the earth’s core, and an infinite supply for millennia.
Nuclear would be competitive if it were not for the disproportionate subsidies (wasted) on renewables. Oh, and the “nuclear scientists,” i.e., Fonda, Lemmon and Sheen and the Hollywood Libtards.
Modular Thorium Reactors are available, affordable and can be implemented in the US if we can eliminate the NRC and its stranglehold on the industry.
IMHO you are years behind on the knowledge tree and suggest the following primers as starters:
The 10th Annual International Climate Conference. Mark Mills’ presentation “Energy Reality”. Google Mark’s white paper about the Cloud needing coal for reliable facts.
By the way, coal burning in the United States is not pollution. Volcanoes, on the other hand…
You are not ready to opine on this Big Boy website.
I could never figure out how heating the atmosphere to almost 1,000ºF will help cool the atmosphere.
It was probably a flaming California Condor plummeting to earth that knocked the mirror out of position.
Has anybody done a study to see if heating the atmosphere to almost 1,000ºF in several places around California and Nevada could actually be changing the weather? I think it would be hard for clouds to survive around these things. Could be the reason for the drought.
Can the overheated air produce toxic gases?
It’s very simple. The total radiant emittance of a thermal energy source goes as T^4.
So the very hottest dry deserts at the hottest time of the day (mid afternoon?) are where global cooling is operating at its finest.
Cold places don’t do any useful global cooling.
G
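The T^4 point above is the Stefan-Boltzmann law: radiant exitance scales as the fourth power of absolute temperature, so hot surfaces shed heat to space disproportionately fast. The temperatures below are assumed examples of mine, not figures from the comment:

```python
# Illustration of the T^4 scaling (Stefan-Boltzmann: j* = sigma * T^4).
# The two surface temperatures are ASSUMED examples for illustration.
SIGMA = 5.670374419e-8    # Stefan-Boltzmann constant, W m^-2 K^-4

hot_desert_K = 320        # ~47 C afternoon desert surface (assumed)
cold_region_K = 260       # ~-13 C cold surface (assumed)

j_hot = SIGMA * hot_desert_K**4
j_cold = SIGMA * cold_region_K**4

# A surface only ~23% hotter in absolute terms emits ~2.3x the power.
print(f"hot/cold emission ratio: {j_hot / j_cold:.2f}")
```

The constant cancels in the ratio, so the comparison depends only on the two absolute temperatures chosen.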
Despite the inane Greenie cheerleading from Roger Sowell above, the fact remains that without huge subsidies, and without the punishing of coal because of “carbon”, the use of renewables aka “clean energy” would be extremely minimal. Electric and hybrid cars either wouldn’t exist, or would be merely expensive toys for the wealthy. Same with biofuels.
Perhaps with a President Trump we can get back to a sane energy policy, one that includes nuclear.
I doubt a President Trump will be given a chance to prove his leadership skills (whether good or bad) by Congress. I imagine Mr. Trump’s greatest achievement, if he is elected, will be to unite Congress in an effort to impeach him. But, at least he will have brought bipartisanship by uniting Congress on a common goal. 🙂
Donnie is a master of divide-and-conquer strategy. He will instigate frequent infighting and enjoy the ride. I can see him getting a second term.
A question and a suggestion. Question: What happens to a mirror when a smoker’s death dive strikes the mirror? Suggestion: Locate an Ivanpah-like power plant near all major airports. Doing so will reduce the number of birds that flame out airplane engines thus reducing the number of airplane crashes; and the proximity to temperature monitoring stations will increase measured surface temperatures giving the “keepers of the faith” more proof of man-made global warming.
I know, I know. You ask: Won’t sunlight reflected from the mirrors blind pilots on final approach? Yes, but the fix for that is simple. Give each pilot a Darth Vader-like arc-welder visor.
Does that mean we will have to redesign our aircraft to look like TIE fighters?
If so, I am all for the scheme. 🙂 As every American non-Trekkie will be. 😀
You clearly are not paying attention.
There are NO mirrors anywhere near the tower or boiler, and the birds are evaporated, not singed. There is no bird that can fly fast enough to make its final swan dive onto even the closest mirror if it gets close enough to the boiler to get fried.
At most even the largest bird could not be simultaneously in the beam of more than about ten mirrors, and be over even the closest mirror.
G
>Not sure if SARC or serious.
There is no reason that this technology – using sunlight to heat a liquid into gas that drives a turbine – can’t work. Obviously one needs to think through the likely problems better; misaligned mirrors, when you have tens of thousands of them, seem like an obvious case. The tower can be encased in white ceramic protecting any pipes or wires within it. Good engineering can fix so many things.
That said, making this technology cost effective and able to scale is likely beyond our abilities. It’s an interesting but very expensive toy, and, we discover, one harmful to wildlife. This plant, in my opinion, will be quietly shut down after a few years of discovering how expensive it really is to operate. I have to wonder if this facility would survive a decent earthquake… or would the mirrors be so out of alignment that they could not easily be corrected?
Building better nuclear power plants is not beyond our abilities – more efficient, safer ones that can burn a lot more of their fuel, even burn what today we call nuclear waste. We can recycle waste to extract more fuel. We can reduce the radioactive life of any resulting nuclear waste. This is the clear path towards burning less fossil fuels.
Meanwhile, because of a new technology called ‘Fracking’ we have all the natural gas we need for producing electricity for the foreseeable future – plenty of time to build our nuclear plants.
I have to believe that in the next 30 years electric cars will have evolved enough to be reasonable alternatives to combustion-engine cars – not completely replacing them, but with enough of them in use to reduce the need for crude oil. I think the ‘oil problem’, that is, the need for lots of fuel for combustion engines, will solve itself over that time.
So I am not sure what this solar plant, other than being an interesting toy, is supposed to be doing. It simply seems impractical to me.
Scientists have been searching for a better battery for five or more decades; isn’t it time to look elsewhere for alternative transportation fuels? Doubling down on failures, and subsidizing and mandating electric cars without the elusive battery, is just dumb. Also, how much are you willing to pay for the infrastructure that lets you drive a car anywhere in the USA where there are roads?
As someone who uses a lot of rechargeable power tools, I can assure you that the batteries we have today are significantly better than the ones from just 10 years ago. Hell, even the basic lead-acid battery that sits in your car is far superior to the one that sat in your grandfather’s. And most of the real innovation over the last few decades has focused on the switch from single-discharge cells like the old Eveready and Duracell to rechargeable batteries.
What has really amazed me is some people don’t think batteries will ever store as much energy as gasoline. Both store energy in chemical bonds. It’s just a matter of releasing that energy in a usable form. We’ve only been producing electricity for a couple of centuries. We’ve been burning things for millennia.
You really want to understand how far our batteries have advanced? Try recharging a tank of unleaded after you’ve run out. All you can do is refuel it, just like I had to replace the D’s in my RC car when I was a kid. But today my nephews just plug theirs in for an hour and off they go again. And I doubt that’s as far as the technology will ever get.
You clearly don’t understand the concept of this machine.
EACH & EVERY one of those tens of thousands of mirrors has its own computerized full time tracking system.
If ANY one mirror that was perfectly aligned to dead center of the boiler, suddenly lost power to its control motor, it would take no more than sixty seconds for the moving sun image to scan completely off the boiler.
The sun moves one degree of angle in four minutes. A stationary mirror reflection will move one degree in two minutes. The sun’s angular diameter is 30 arc minutes, so it takes one minute to move the reflection by the full diameter of the sun off a stationary mirror.
If the boiler is 200 feet high, say measured from the mean height of a mirror, then the effective focal length of the nearest mirror must be at least 200 feet. This will give a minimum size solar image of about 21 inches diameter. But the mirror size is about six meters square, so the spot has to be at least that big; about 28 feet diameter. Toss in the 21 inches for the sun angle and you have a minimum spot of about 30 feet diameter.
So it is not a question of proper alignment of a system and it drifting. You have to actively steer each mirror accurately for the whole of the sun up day time.
So those mirror mounting bearings are going to wear out eventually.
G
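The geometry in the comment above is easy to verify. The sketch below reproduces the two headline numbers – about one minute for a stationary mirror’s reflection to drift one solar diameter, and a roughly 21-inch minimum image at a 200 ft focal distance – from the stated inputs:

```python
import math

# Inputs follow the comment above: the sun moves 1 degree in 4 minutes,
# a reflection off a fixed mirror moves twice as fast, the sun's angular
# diameter is ~30 arc-minutes, and the boiler is ~200 ft from the mirror.
SUN_RATE_DEG_PER_MIN = 1 / 4              # apparent solar motion
REFLECTION_RATE = 2 * SUN_RATE_DEG_PER_MIN
SUN_DIAMETER_DEG = 0.5                    # 30 arc-minutes

# Time for a stationary mirror's reflection to drift one solar diameter:
drift_min = SUN_DIAMETER_DEG / REFLECTION_RATE
print(f"{drift_min:.0f} minute(s)")       # ~1 minute, as stated

# Minimum solar image size at a 200 ft effective focal distance:
focal_ft = 200
image_ft = focal_ft * math.radians(SUN_DIAMETER_DEG)
print(f"{image_ft * 12:.0f} inches")      # ~21 inches, as stated
```

The image size is just focal distance times the sun’s angular diameter in radians, which is why a mirror can never focus sunlight to a point: the sun is an extended source, not a point source.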
What happens when there is a power failure or outage and the sun moves to a position where the support tower receives the full impact of the sun’s energy? Are they going to be able to move 173,500 heliostats by hand before complete destruction?
usurbrain wrote:
“Are they going to bae able to move 173,500 heliostats by hand before complete destruction?”
No. Plan A is to have all males drink 2 liters of water and report on deck for fire duty. Plan B is to shoot out the sun.
It takes at most a few minutes for the sun to scan off from a stationary mirror array. That is not as serious a failure mode as a tracked mirror that is off target but still tracking.
G
“…So I am not sure what this solar plant, other than being an interesting toy, is supposed to be doing….”
It is a so-called “subsidy mine”.
If each tower is surrounded by 45,000 mirrors, I wonder just how many mirrors were misaligned to cause such a poorly placed hot spot.
Also, why was the cable not protected from this kind of incident, did the designers not perform a safety case on all aspects of the system?
I understand that motors could sometimes malfunction, but to have such a hot spot requires a “conspiracy” of motors. IOW, the motors aren't to blame; the control system is!
Misalignment was probably caused by incinerated bird carcasses landing on the mirrors.
Shoot-self-in-foot syndrome at work.
This, if true, sounds like a viable alternative.
Produce it in the tropics, store it, ship it on demand.
Electricity from seawater: New method efficiently produces hydrogen peroxide for fuel cells
(Phys.org)—Scientists have.
The biggest advantage of using liquid H2O2 instead of gaseous hydrogen (H2), as most fuel cells today use, is that the liquid form is much easier to store at high densities. Typically, H2 gas must be either highly compressed, or in certain cases, cooled to its liquid state at cryogenic temperatures. In contrast, liquid H2O2 can be stored and transported at high densities much more easily and safely.
This does not make much sense. Instead of hydrogen, they produce oxygen (in the form of a peroxide, very dangerous at high concentrations). Are you a chemist?
vuk, I cannot see how this can possibly work. Current fuel cells combine a reducing agent (H2) with an oxidising agent (O2). Hydrogen peroxide is an oxidising agent. What do they react it with? You cannot replace a reductant with an oxidant in any chemical reaction and expect similar results. All the hydrogen peroxide could replace would be the oxygen currently taken from the air and still need something like hydrogen to complete the reaction.
Doesn’t hydrogen peroxide react with copper to produce O2 and H2O? It’s a dangerous reaction in a sealed container, mind you.
No, I’m not a chemist, never done anything to do with the hydrogen peroxide.
Here in London there is a fleet of hydrogen fuelled buses. I thought this might be an advance, but judging by the above comments, apparently NOT.
Vuk,
The article is misleading; you cannot turn hydrogen into a liquid by compression alone. It must be cooled as well as compressed to turn it into a liquid. Most hydrogen for transportation is stored as a gas at very high pressures in a thick-walled shell.
It is only useful in a university professor's mind and is expensive, but hey, London is rich.
Re: the earlier comment: “That’s a whole bunch of oil we didn’t need to import from our buddies in the mideast.” Let’s find out what “our buddies in the middle east” think about this project.
If they are promoting it – then we can assume that they do not perceive it as a threat to their interests.
This entire global warming alarmism and anti-coal, anti-fracking, anti-energy-independence debacle is working out brilliantly for the middle east.
The more we waste money on stupid crap that doesn’t work, the happier they will be.
They are happy, as long as we don’t dig up and burn our cheap coal, frack for cheap gas and oil or build our own pipelines and refineries.
They are happy with any dance that we do. Provided that it involves stepping on our own toes.
Luckily for them, our societies contain a vast number of anti-capitalist useful idiots who are ready to serve their interests, by protesting against progress and profit.
Primarily thanks to decades of preliminary ground work by the KGB.
So what do they say on the middle east propaganda channel – for or against? You guessed it!!!!
As an engineer I have never seen Ivanapah but travel by a large solar installation on I-8 between Tucson and Yuma. Naturally I was curious, so I paused to inspect. Everything is there except one obvious thing missing.
There are no high tension lines from the solar “power” plant to anywhere! That instantly tells me that the massive “Solar Plant” doesn’t produce enough power to need them. And that it doesn’t generate enough power to notice. But the Plant is a “triumph” of the know-nothing fools on the Arizona Public Utility Commission who brag how they forced the Utilities to build it.
“Most panels carry a 20 year warranty for output.”
Warranties do not produce electricity! I suggest reading the fine print. Let's say Paul gets duped into buying $50k in solar panels for the roof of his house, but after a few years he finds out they only produce $500 per year in electricity. Paul then reads the warranty. He must pay to remove the panels and return them in the original packaging (which he does not have). The manufacturer, if still in business, will test them in the lab, showing that they are not the problem. Paul figures out he can still brag about having gone solar even if they do not work.
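The payback arithmetic behind that scenario is easy to check (the $50k price and $500/year output are the commenter's hypothetical figures, not real market data):

```python
install_cost = 50_000       # hypothetical rooftop system price, in dollars
annual_output_value = 500   # hypothetical yearly value of the electricity produced

# Simple payback ignores maintenance, inverter replacement, and financing,
# all of which only make the picture worse.
simple_payback_years = install_cost / annual_output_value
print(simple_payback_years)  # 100.0 years - far beyond any 20-year output warranty
```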
RA Cook is wrong! The solar panels do not work from the get-go and never get replaced. If Paul wants to tell me how far from the truth I am, he will need to provide comprehensive data. He cannot. We in the power industry love to brag about performance. The lack of bragging is very telling.
Speaking of fires. This is not an unusual way for rooftop PV systems to stop working which is why I call them smoke emitting diodes.
Solar panels on suburban homes is increasing in my area. I don’t get it. I just replaced a gas furnace in my current home that was the original when the home was built 35 years ago. That’s quite a return on investment. The new more efficient one cost about $4000 including installation.
Why in the world do people want to put $30,000 monstrosities on the homes, it is an extreme expense, limited return (depends on government subsidies and forced give-backs), and aesthetically are miserable eyesores. Solar panels on the roof of a beautiful classic Victorian home looks ridiculous, not to mention when they start cutting down beautiful, ancient trees to increase access to sunlight. Way to ruin a neighborhood and increase the tax burden and energy expenses of your neighbors.
A couple of things about your post, Alx. First, since you have a furnace instead of a heat pump, you live in a northern climate where solar is not as effective. Second, existing homes often do not have roofs that are good for solar. Third, shade trees reduce heat load on houses in summer, reducing the need for AC.
Reading through the comments, it is nice to see that the anti-solar crowd can be just as stupid as the anti-nuke crowd. Since fire is an inherent hazard of producing power, all power plants will have a fire at one time or other. It is not a criterion for being against any source of electricity.
The criterion for judging a power plant is the amount of electricity produced relative to design expectation. I am still waiting to read about a successful solar thermal plant. Massive failure makes for interesting reading. I would not fault the designers, because it is a difficult task. Solar thermal is a bad engineering idea conceived by well-meaning folks.
Safety is a different issue. In the US all power plants are safe. It is a legal requirement. Safety is measured by lost time accidents and fatalities. The power industry has a very good safety record.
Killing a bird is not a safety issue, it is an environmental issue. I am proud to say that no one has ever been hurt by nuclear radiation from reactors designed to the criteria that I was trained to use as a nuclear safety design engineer. Have not killed any birds either.
For those who think a safer reactor can be designed, how can you beat perfect? Safety is not some model for worst case hypothetical accidents.
Fires are a significant safety issue for power plants. I will relate some personal experience. I was the engineering duty officer on a nuclear ship in port when lightning hit nearby knocking out shorepower. The duty electrician was checking the circuits following procedures for such an event when a damaged breaker caused a fireball that injured him. This was a lost time accident but a fatality was prevented by the use of personal protective equipment.
A second example was at my first commercial nuke plant. A fire was reported in the emergency service water pumphouse. Following procedures, the shift supervisor declared an unusual event and notified the NRC. Turned out to be an arcing welding cable.
Roger Sowell would correctly label these as an ‘incidence’ at a nuclear power plant. What Roger does not say is that incidences occur all the time in all kinds of industries. They get investigated by plant staff and regulators such as the NRC, OSHA, or the chemical safety board.
Roger claims expertise on nuclear power based on touring Perry Nuclear Power Plant. On my resume after being radiation safety on my last navy ship, is Senior Reactor Operator certification on the Perry simulator. That is the difference between doing and talking.
The bottom line is that nuclear safely produces 20% of US power.
“Solar thermal is a bad engineering idea conceived by well meaning folks.”
If solar thermal is obviously a bad engineering idea, then I don't see any well-meaning folks; I see folks blinded by ideology, which is always bad news.
It’s not that we are anti-solar, it’s that we are anti-wasting our money on other people’s stupid ideas.
Yes, and as an investor in best-of-breed utility-scale solar PV with world-beater cost reduction, it's frustrating to watch the also-rans and wheeler-dealers with unusual government assistance and promotion. It's one thing to make policy mistakes at DOE and elsewhere in other federal agencies with a low-rent version of due diligence, or in a spirit of diversified tech development on a grand industrial scale; it's another to ignore the winners, or at best bury them in industry averages while protecting the losers and the policies that produced them. Where is GAO when you need them, or is their absence on the scene by design? A level-playing-field bid process in renewables would take lowest bids in combination with least impact on grid and transmission costs. We are still living with the uncompetitive relics of policy, much like the industrial-scale energy flaws of the Carter era at Beulah and Rifle. Economic illiteracy is no excuse for organized policy cheats.
You can thank me as a taxpayer for helping to fund your investments.
I don’t really know why a lot of people here on WUWT are against solar and wind energy. While it is true that the cost effectiveness of a large solar or wind farm is not competitive commercially with NG, nuclear, and coal power plants, it is still an alternative source. For an advanced country such as the USA, it might not be economically sound to invest in renewables. However, in some areas of developing countries, they are finding small scale renewables as, sometimes, the only alternative.
Although fossil fuels have been a game changer from the 18th to the 21st century, technology will eventually relegate them to the role of wood and charcoal. Admittedly, charcoal was the king for hundreds of years, so oil might be king for a few hundred more. But in the end, climate changes, even for sources of energy.
We are against them because they are not cost competitive alternatives to fossil and nuclear power systems.
In fact when you add in all the factors that go into construction, maintenance and replacement, most don’t produce net power.
There’s also the fact that for every MW of “alternative” power, you need 1 MW of fossil/nuclear power running in the background ready to take over when the sun doesn’t shine or the wind doesn’t blow.
“There’s also the fact that for every MW of “alternative” power, you need 1 MW of fossil/nuclear power running in the background ready to take over when the sun doesn’t shine or the wind doesn’t blow.”
MarkW gets the carefully crafted lie of the day award.
It is not a fact. It is a big fat lie.
Sorry dude, just because you don’t want to believe it does not make it a lie.
When solar/wind stops producing one of two things happens, either load gets shed or new power comes online.
Since blackouts are not pleasant things, the utility replaces the power with fossil/nuclear power. Since those sources can’t ramp up as fast as solar/wind can ramp down, they have to be kept running with the power they generate being dumped, until it is needed.
Just in case RKP decides to try his previously refuted lie.
No, fossil/nuclear plants do not need to have a 1:1 backup ratio.
It is very, very rare for such a plant to drop out unexpectedly. Maintenance is scheduled at a time when there is sufficient slack in the system to handle the reduced power output.
How is energy dumped?
Giant hair dryers.
… no, really. The energy is run through large high-resistance coils and the waste heat produced is blown away with huge fans. I’ve personally seen them placed in everything from individual units to groups of over 2 dozen. Blew me away (pun intended) the first time I saw it in operation. I was shocked to learn it was apparently cheaper to just waste the energy than to try to store or recover it.
Do you have a reference on these “hair dryers”?
Do they allow a gentle stop of the reactor (no SCRAM), the turbine (no free turbine) and the condenser (no turbine bypass) if the grid connection is lost?
Not against solar or wind at all. It's just that small-scale has its uses, but the stuff doesn't economically scale up to grid needs. I have a solar-powered pump on one of the wells on the ranch. It works out just fine for my needs with that well, and running grid power to it would have cost me about 3x what the panel and pump did. I will probably add a little more solar as time goes by and I can afford it, but I sure don't want to pay the cost of grid-sized installations.
JVC
Did you consider an ICE driven pump or maybe an old fashioned wind driven pump? I think you will find PV is not cost effective.
Pulsar @ 2.22am
Solar and wind can work for off-grid usage. Wind power has been used for centuries.
However, once the use of coal, gas, and nuclear was developed for grid use, they became dominant. Why? Because they work 24/7/365, and with a small extra reserve, maintenance and other outages can be planned or dealt with.
Advanced society requires power 24/7, not just when the wind is blowing or the sun shining, so wind and solar don't work without some means of storage or backup. If the storage or backup costs the same again (or more), especially with grid costs and losses, then why not just use coal, gas or nuclear and save the horrendous costs of having a part-time system of providing power?
SteveT
Everyone did notice that a Chinese coal firm is going to build 1GW of the same technology (10 plants, I believe)?
China plans 10GW of these…
/// oh, they solved the bird frying issue too.
Which will help to give China 3% solar generation (184 TWh) by 2020 if they stick to the 5 year plan.
That seems to underestimate China’s electricity demand in 2020 – but also underestimate China’s renewables plans for that date
However, an increasing part of new demand is now going to be renewables, not coal.
Well at least we know where you get your talking points from. Try this which is the original source.
100 GW planned by 2020 according to your source too. 100 GW x 21% utilisation = 184 TWh by 2020, or 3% of China's projected 2020 total (5650 TWh in 2014 x 1.23, in line with the overall energy increase). Maybe less than 3% if you think that total should be higher. Hence a smaller proportion than the UK's current solar. This all matches the figures in the document you linked to pretty well, as I'm sure you know.
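Those figures reproduce directly (the 21% utilisation and the 1.23 demand-growth factor are the commenter's own assumptions):

```python
capacity_gw = 100          # planned Chinese solar capacity by 2020
capacity_factor = 0.21     # assumed utilisation
hours_per_year = 8760

# Annual generation in TWh: GW * hours * capacity factor / 1000
solar_twh = capacity_gw * hours_per_year * capacity_factor / 1000
print(round(solar_twh))        # 184 TWh

demand_2014_twh = 5650
growth_factor = 1.23           # assumed overall increase from 2014 to 2020
share = solar_twh / (demand_2014_twh * growth_factor)
print(round(share * 100, 1))   # 2.6 - i.e. "maybe less than 3%"
```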
Sorry about ‘ni’s’ ‘contribution’, some strange computer glitch sent it by mistake before I moderated my language in deference to our host, maybe a gust of wind at Kentish Flats.
Griff, I've got a bridge I want to sell you! You seem to believe self-serving press releases and Chinese communist propaganda.
Get back to me when it works.
Speaking of people who believe in self-serving press releases, I give you Retired Kit Pee.
I merely mention that in passing – the major fact is that a US company is selling 1GW of this tech to the Chinese.
This tech works in the right place.
In China as in the U.S., data on the number of fried birds is a state secret.
From BrightSource ‘Limitless'(!) ‘The Top Five Things Some Media Can’t Seem to Remember About Ivanpah’ #4 ‘We Don’t Control the Weather’. Isn’t that what the whole plant is about, making sure the weather never changes?
But sunlight is free!
/sarc
If I were a firefighter, I would be worrying whether a misaligned mirror was going to fry me while I was putting out the fire!
This has become an overpriced natural gas-fired power plant with a solar bird-flaring system on the side.
“I don’t really know why a lot of people here on WUWT are against solar and wind energy.”
So pulsar why are you for it? Your reasons seem very feeble.
Do we need alternatives? No.
Does wind and solar have less environmental impact? No.
The reason is that wind and solar does not work as claimed.
I am in favor of wind and solar when it works as claimed. We just traveled in our motor home from Las Vegas to Wallula Gap in Washington State and saw no utility-scale wind or solar along the way.
I am still waiting for any solar project to demonstrate it is not a scam. Scam artists do not issue press releases about actual performance. If you dig you can find the actual performance. Big failure!
There are some places where wind works. For example:
I have been sailing Wallula Gap for twenty years long before the wind farms were built in the dry land wheat fields. While I am not against the wind farms, they are not needed and the environmental impact is huge compared to the nuke plant up river and the coal plant down river.
My point is that the issue is more complex and project specific.
The reason, pulsar, lot of people at WUWT are against wind and solar is ignorance. This is the same reason people are for wind and solar. Yes, both sides of debate can be wrong.
There are clearly many cases where solar makes sense
Here’s an example from Australia, where electricity prices are high, sun shines often, peak demand in day when aircon runs, demand locally increasing
Here the power company reduces local (peak) demand by providing solar plus storage for its customers… saving expansion costs and reducing what it needs to generate.
If electricity prices are high in a major coastal city like Melbourne, then it means someone has screwed up in a major way. If electricity prices are high in a major coal-producing nation like Australia, then it means that screw-up was probably Green politics.
The distribution cost in Australia looks ridiculously high.
“It is very, very rare such a plant to drop out unexpectedly.”
I am thinking that MarkW does not have years of experience working in the control room of large nukes like I have. While it is not as frequent as many years ago when I worked in the control room, it is an expected occurrence and not rare.
“one of two things happens”
Neither of those things happened. It is no more complicated than cruise control. While the most economical steam plants run base-loaded, other steam plants with spare capacity run in load-following mode. Steam control valves maintain constant frequency. If load increases or a power plant drops off the grid, frequency will drop a small amount. Steam valves in the load-following plants will then open, increasing power production and returning the grid to 60 Hz. | https://wattsupwiththat.com/2016/05/21/fire-breaks-out-at-worlds-largest-solar-power-plant-ivanpah/?shared=email&msg=fail | CC-MAIN-2020-05 | refinedweb | 17,237 | 63.8 |
Gitweb:;a=commitdiff;h=36c5bb40a27c7faac2a167df8c5cbd7c98c5168d
Commit: 36c5bb40a27c7faac2a167df8c5cbd7c98c5168d
Parent: 97ba18f4cbedbdf7933d30d719cf997bb06e7214
Author: Alasdair G Kergon <agk redhat com>
AuthorDate: Mon Sep 16 19:57:14 2013 +0100
Committer: Alasdair G Kergon <agk redhat com>
CommitterDate: Mon Sep 16 19:57:14 2013 +0100

Makefiles: Fix CC variable override.

The CC override in commit f42b2d4bbf16345e5b5457f4298e751d5c134776
caused the built-in value to be used instead of the configured value
when it wasn't being overridden. The behaviour is explained here:
---
 WHATS_NEW    |    1 +
 make.tmpl.in |   11 +++++++++++
 2 files changed, 12 insertions(+), 0 deletions(-)

diff --git a/WHATS_NEW b/WHATS_NEW
index d51f08c..45e0050 100644
--- a/WHATS_NEW
+++ b/WHATS_NEW
@@ -1,5 +1,6 @@
 Version 2.02.101 -
 ===================================
+  Fix CC Makefile override which had reverted to using built-in value. (2.02.75)
   Recognise bcache block devices in filter (experimental).
   Run lvm2-activation-net after lvm2-activation service to prevent parallel run.
   Add man page entries for lvmdump's -u and -l options.

diff --git a/make.tmpl.in b/make.tmpl.in
index 6992255..3218758 100644
--- a/make.tmpl.in
+++ b/make.tmpl.in
@@ -17,7 +17,18 @@ SHELL = /bin/sh

 @SET_MAKE@

+# Allow environment to override any built-in default value for CC.
+# If there is a built-in default, CC is NOT set to @CC@ here.
 CC ?= @CC@
+
+# If $(CC) holds the usual built-in default value of 'cc' then replace it with
+# the configured value.
+# (To avoid this and force the use of 'cc' from the environment, supply its
+# full path.)
+ifeq ($(CC), cc)
+  CC = @CC@
+endif
+
 RANLIB = @RANLIB@
 INSTALL = @INSTALL@
 MKDIR_P = @MKDIR_P@
| https://www.redhat.com/archives/lvm-devel/2013-September/msg00061.html | CC-MAIN-2015-18 | refinedweb | 259 | 58.79 |
Migrations¶
Migrations are Django's way of propagating changes you make to your models (adding a field, deleting a model, etc.) into your database schema. They're designed to be mostly automatic, but you'll need to know when to make migrations, when to run them, and the common problems you might run into.
A Brief History¶
Prior to version 1.7, Django only supported adding new models to the
database; it was not possible to alter or remove existing models via the
syncdb command (the predecessor to
migrate).
Third-party tools, most notably South, provided support for these additional types of change, but it was considered important enough that support was brought into core Django.
The Commands¶
There are several commands which you will use to interact with migrations and Django's handling of database schema: migrate, which is responsible for applying migrations, as well as unapplying and listing their status, and makemigrations, which is responsible for creating new migrations based on the changes you have made to your models.
It’s worth noting that migrations are created and run on a per-app basis. In particular, it’s possible to have apps that do not use migrations (these are referred to as “unmigrated” apps) - these apps will instead mimic the legacy behavior of just adding new models.
You should think of migrations as a version control system for your database
schema.
makemigrations is responsible for packaging up your model changes
into individual migration files - analogous to commits - and
migrate is
responsible for applying those to your database.
The migration files for each app live in a “migrations” directory inside of that app, and are designed to be committed to, and distributed as part of, its codebase. You should be making them once on your development machine and then running the same migrations on your colleagues’ machines, your staging machines, and eventually your production machines.
Working with migrations is simple. Make changes to your models - say, add
a field and remove a model - and then run
makemigrations:
$ python manage.py makemigrations
Migrations for 'books':
  ...

$ python manage.py migrate
Operations to perform:
  Synchronize unmigrated apps: sessions, admin, messages, auth, staticfiles, contenttypes
  Apply all migrations: books
Synchronizing apps without migrations:
  Creating tables...
  Installing custom SQL...
  Installing indexes...
Installed 0 object(s) from 0 fixture(s)
Running migrations:
  Applying books.0003_auto... OK
The command runs in two stages; first, it synchronizes unmigrated apps
(performing the same functionality that
syncdb used to provide), and
then it runs any migrations that have not yet been applied.
Dependencies¶
While migrations are per-app, the tables and relationships implied by
your models are too complex to be created for just one app at a time. When you make a migration that requires something else to run - for example, you add a ForeignKey in your books app to your authors app - the resulting migration will contain a dependency on a migration in authors.
Be aware, however, that unmigrated apps cannot depend on migrated apps, by the
very nature of not having migrations. This means that it is not generally
possible to have an unmigrated app have a
ForeignKey or
ManyToManyField
to a migrated app; some cases may work, but it will eventually fail.
Warning
Even if things appear to work with unmigrated apps depending on migrated apps, Django may not generate all the necessary foreign key constraints!
This is particularly apparent if you use swappable models (e.g.
AUTH_USER_MODEL), as every app that uses swappable models will need
to have migrations if you’re unlucky. As time goes on, more and more
third-party apps will get migrations, but in the meantime you can either
give them migrations yourself (using
MIGRATION_MODULES to
store those modules outside of the app’s own module if you wish), or
keep the app with your user model unmigrated.
In addition, any models that are used in
RunPython operations must have
migrations so that their relations to other models are properly created.
Migration files¶
Migrations are stored as an on-disk format, referred to here as “migration files”. These files are actually just normal Python files with an agreed-upon object layout, written in a declarative style.
Adding migrations to apps¶
Adding migrations to new apps is straightforward - they come preconfigured to
accept migrations, and so just run
makemigrations once you’ve made
some changes.
This will make a new initial migration for your app. Now, run
python
manage.py migrate --fake-initial, and Django will detect that you have an
initial migration and that the tables it wants to create already exist, and
will mark the migration as already applied. (Without the
--fake-initial flag, the
migrate command would error because the tables it wants to create already exist.)
Historical models¶
When you run migrations, Django is working from historical versions of your
models stored in the migration files. If you write Python code using the
RunPython operation, or if you have
allow_migrate methods on your database routers, you will be exposed to
these versions of your models. In addition, the base classes of the model are just stored as pointers, so you must always keep base classes around for as long as there is a migration that contains a reference to them.
Considerations when removing model fields¶
Data Migrations¶
As well as changing the database schema, you can also use migrations to change the data in the database itself, in conjunction with the schema if you want. To start, make an empty migration file you can work from (Django will put the file in the right place, suggest a name, and add dependencies for you) with python manage.py makemigrations --empty yourappname. Then, open up the file; it should look something like this:
# -*- coding: utf-8 -*-
from django.db import models, migrations

class Migration(migrations.Migration):

    dependencies = [
        ('yourappname', '0001_initial'),
    ]

    operations = [
    ]
Now, all you need to do is create a new function and have
RunPython use it.
RunPython expects a callable as its argument which takes two arguments - the first is an app registry that has the historical versions of all your models loaded into it to match where in your history the migration sits, and the second is a SchemaEditor, which you can use to manually effect database schema changes (but beware, doing this can confuse the migration autodetector!)
Let’s write a simple migration that populates our new name field with the combined values of first_name and last_name, using the historical model:

# -*- coding: utf-8 -*-
from django.db import models, migrations

def combine_names(apps, schema_editor):
    # We can't import the Person model directly as it may be a newer
    # version than this migration expects. We use the historical version.
    Person = apps.get_model("yourappname", "Person")
    for person in Person.objects.all():
        person.name = "%s %s" % (person.first_name, person.last_name)
        person.save()

class Migration(migrations.Migration):

    dependencies = [
        ('yourappname', '0001_initial'),
    ]

    operations = [
        migrations.RunPython(combine_names),
    ]

Squashing migrations¶
Squashing is the act of reducing an existing set of many migrations down to one (or sometimes a few) migrations which still represent the same changes. When you squash, Django will then write it back out into a new set of initial migration files. You can ship both the squashed and the old migrations for one release (just ensure your users upgrade releases in order without skipping any), and then remove the old files, commit and do a second release.
The command that backs all this is
squashmigrations - just pass it the app label and migration name you want to squash up to, and it'll get to work.
After this has been done, you must then transition the squashed migration to a normal initial migration, by:
- Deleting all the migration files it replaces
- Removing the
replaces argument in the Migration class of the squashed migration (this is how Django tells that it is a squashed migration)
Serializing values¶
Migrations are just Python files containing the old definitions of your models, so Django must be able to take the current state of your models and serialize them out into a file. Django can serialize the following:
int,
long,
float,
bool,
str,
unicode,
bytes,
None
list,
set,
tuple,
dict
datetime.date,
datetime.time, and
datetime.datetime instances (including those that are timezone-aware)
decimal.Decimal instances
- Any Django field
- Any function or method reference (e.g.
datetime.datetime.today) (must be in module’s top-level scope)
- Any class reference (must be in module’s top-level scope)
- Anything with a custom
deconstruct() method (see below)
Support for serializing timezone-aware datetimes was added.
Django can serialize the following on Python 3 only:
- Unbound methods used from within the class body (see below)
Django cannot serialize:
- Nested classes
- Arbitrary class instances (e.g.
MyClass(4.3, 5.7))
- Lambdas
Due to the fact
__qualname__ was only introduced in Python 3, Django can only
serialize the following pattern (an unbound method used within the class body)
on Python 3, and will fail to serialize a reference to it on Python 2:
class MyModel(models.Model):
    def upload_to(self):
        return "something dynamic"
    my_file = models.FileField(upload_to=upload_to)
If you are using Python 2, we recommend you move your methods for upload_to
and similar arguments that accept callables (e.g.
default) to live in
the main module body, rather than the class body.

Adding a deconstruct() method¶
You can let Django serialize your own custom class instances by giving the class a deconstruct() method, most easily by applying the @deconstructible class decorator:

from django.utils.deconstruct import deconstructible

@deconstructible
class MyCustomClass(object):
    def __init__(self, foo=1):
        self.foo = foo
        ...

    def __eq__(self, other):
        return self.foo == other.foo
The decorator adds logic to capture and preserve the arguments on their way into your constructor, and then returns those arguments exactly when deconstruct() is called.
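As a rough standalone sketch of that capture-and-return mechanism (an illustration of the idea only, not Django's actual implementation; the decorator and class names here are hypothetical):

```python
import functools

def deconstructible_sketch(cls):
    # Wrap __init__ so every instance remembers the args it was built with.
    original_init = cls.__init__

    @functools.wraps(original_init)
    def __init__(self, *args, **kwargs):
        self._constructor_args = (args, kwargs)
        original_init(self, *args, **kwargs)

    def deconstruct(self):
        # (import path, positional args, keyword args) - enough for a
        # migration writer to re-create this instance in a migration file.
        path = "%s.%s" % (cls.__module__, cls.__name__)
        args, kwargs = self._constructor_args
        return (path, list(args), kwargs)

    cls.__init__ = __init__
    cls.deconstruct = deconstruct
    return cls

@deconstructible_sketch
class MyCustomClass(object):
    def __init__(self, foo=1):
        self.foo = foo

path, args, kwargs = MyCustomClass(foo=7).deconstruct()
print(args, kwargs)   # [] {'foo': 7}
```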
Supporting Python 2 and 3¶
In order to generate migrations that support both Python 2 and 3, all string
literals used in your models and fields (e.g.
verbose_name,
related_name, etc.), must be consistently either bytestrings or text
(unicode) strings in both Python 2 and 3 (rather than bytes in Python 2 and
text in Python 3, the default situation for unmarked string literals.)
Otherwise running
makemigrations under Python 3 will generate
spurious new migrations to convert all these string attributes to text.
The easiest way to achieve this is to follow the advice in Django’s
Python 3 porting guide and make sure that all your
modules begin with
from __future__ import unicode_literals, so that all
unmarked string literals are always unicode, regardless of Python version. When
you add this to an app with existing migrations generated on Python 2, your
next run of
makemigrations on Python 3 will likely generate many
changes as it converts all the bytestring attributes to text strings; this is
normal and should only happen once.
Upgrading from South¶
If you already have pre-existing migrations created with
South, then the upgrade process to use
django.db.migrations is quite simple:
- Ensure all installs are fully up-to-date with their migrations.
- Remove
'south' from
INSTALLED_APPS.
- Delete all your (numbered) migration files, but not the directory or
__init__.py - make sure you remove the
.pyc files too.
- Run
python manage.py makemigrations. Django should see the empty migration directories and make new initial migrations in the new format.
- Run
python manage.py migrate --fake-initial. Django will see that the tables for the initial migrations already exist and mark them as applied without running them. (Django won't check that the table schemas match your models, just that the right table names exist.)
That’s it! The only complication is if you have a circular dependency loop
of foreign keys; in this case,
makemigrations might make more than one
initial migration, and you’ll need to mark them all as applied using:
python manage.py migrate --fake yourappnamehere
The
--fake-initial flag was added to
migrate;
previously, initial migrations were always automatically fake-applied if
existing tables were detected.
Libraries/Third-party Apps¶
If you are a library or app maintainer, and wish to support both South migrations (for Django 1.6 and below) and Django migrations (for 1.7 and above) you should keep two parallel migration sets in your app, one in each format.
To aid in this, South 1.0 will automatically look for South-format migrations
in a
south_migrations directory first, before looking in
migrations,
meaning that users’ projects will transparently use the correct set as long
as you put your South migrations in the
south_migrations directory and
your Django migrations in the
migrations directory.
More information is available in the South 1.0 release notes.
See also
- The Migrations Operations Reference
- Covers the schema operations API, special operations, and writing your own operations.
- The Writing Migrations “how-to”
- Explains how to structure and write database migrations for different scenarios you might encounter. | https://docs.djangoproject.com/en/1.8/topics/migrations/ | CC-MAIN-2017-22 | refinedweb | 1,540 | 53 |
// This file was generated by the JavaTM Architecture for XML Binding(JAXB) Reference Implementation, v2.2.4
import java.util.ArrayList;
import java.util.List;
Java class for items complex type.
The following schema fragment specifies the expected content contained within this class.
<complexType name="items">
  <complexContent>
    <restriction base="{}anyType">
      <sequence>
        <element name="item" type="{}string" maxOccurs="unbounded" minOccurs="0"/>
      </sequence>
    </restriction>
  </complexContent>
</complexType>
This accessor method returns a reference to the live list, not a snapshot. Therefore any modification you make to the returned list will be present inside the JAXB object. This is why there is not a set method for the item property.
For example, to add a new item, do as follows:
getItem().add(newItem);
Objects of the following type(s) are allowed in the list: String
Re: Shit that makes me LOL
Posted: Fri Nov 04, 2011 1:33 am
What the fuck? I guess they didn't mention the families relation to a certain brentsg? Haha. Geez louise. Not going to sleep right.
import games
Gaijin Punch wrote:I like the headline but it doesn't load.
A Queensland man faces criminal charges after allegedly tattooing a 40cm-long penis onto his mate's back, AAP reported
Police said the pair had a disagreement before the tattooing.
"The victim ... said he wanted a Yin and Yang symbol with some dragons," Ipswich Detective Constable Paul Malcolm told the Queensland Times, AFP reported.
'don't think it's the tattoo you were after'."
Police have charged a 21-year-old man from Bundamba, near Ipswich, with two counts of assault occasioning bodily harm and one offence relating to the public safety act.
Later reports said the man allegedly punched the victim.
Detective Constable Paul Malcolm said a 25-year-old man had gone to the alleged offender's house and "somehow in the course of the conversation the subject of tattoos came up".
It will cost the 25-year-old alleged victim about $2000 to remove the lewd tattoo, which depicts a 40cm-long image of a penis and a misspelled slogan implying the man is gay.
The man who allegedly etched the tattoo will also face a public safety charge because he was not a professional tattoo artist and there could be hygiene issues, AAP reported police as saying.
Gaijin Punch wrote:Guide for getting drugged & robbed in Tokyo | http://forums.gamengai.com/viewtopic.php?f=5&t=1482&start=450&view=print | CC-MAIN-2020-29 | refinedweb | 274 | 61.36 |
note elusion: I hadn't seen [cpan:/.

<rant>

Some people will naturally like templates better. That's fine, but let's ignore that at the moment, because I personally don't care much for them. They have their place, but most of the time it's not with me.

Where is the best place to put a module like this? I see them in all different namespaces: DBIx::XHTML_Table, HTML::Table, Table, Text::Table, CGI. Where would it actually belong? What if it's made more generic with different output options? What if you decide you want to expand it with other HTML functions? This is a general problem with CPAN, I think. Reusable code doesn't fit well in a hierarchy.

Now what if different people have different ideas of what a good interface would be? The first module claims a good name, and the second really has no place.

Suppose, for instance, that I decide I don't like the way CGI.pm works. I decide that I want to redesign it. Sure, reinventing the wheel can be bad, but there's always the possibility of improvement. What would I name my module?

</rant>

It's really hard to find the right module sometimes. Too hard. I suppose it's even harder to write one.

elusion
The QWMatrix class specifies 2D transformations of a coordinate system. More...
#include <qwmatrix.h>
List of all member functions.
The standard coordinate system of a paint device has the origin located at the top left position. X values increase to the right, and Y values increase downwards.
This coordinate system, and the actual transformation is performed by the drawing routines in QPainter and by QPixmap::xForm().
The QWMatrix class contains a 3*3 matrix of the form:
m11  m12  0
m21  m22  0
dx   dy   1

The elements dx and dy specify horizontal and vertical translation. The elements m11 and m22 specify horizontal and vertical scaling. The elements m12 and m21 specify horizontal and vertical shearing.
The identity matrix has m11 and m22 set to 1 and all other elements set to 0. QWMatrix also has a function that sets rotation directly.
QWMatrix lets you combine transformations like this:
QWMatrix m;           // identity matrix
m.translate(10, -20); // first translate (10,-20)
m.rotate(25);         // then rotate 25 degrees
m.scale(1.2, 0.7);    // finally scale it
The same example, but using basic matrix operations:
double a    = pi/180 * 25;                  // convert 25 to radians
double sina = sin(a);
double cosa = cos(a);
QWMatrix m1(1, 0, 0, 1, 10, -20);           // translation matrix
QWMatrix m2(cosa, sina, -sina, cosa, 0, 0); // rotation matrix
QWMatrix m3(1.2, 0, 0, 0.7, 0, 0);          // scaling matrix
QWMatrix m = m3 * m2 * m1;                  // combine all transformations

QPainter has functions that translate, scale, shear and rotate the coordinate system without using a QWMatrix. These functions are very convenient, however, if you want to perform more than a single transform operation, it is more efficient to build a QWMatrix and call QPainter::setWorldMatrix().
See also QPainter::setWorldMatrix() and QPixmap::xForm().
Examples: qtimage/qtimage.cpp movies/main.cpp xform/xform.cpp qmag/qmag.cpp desktop/desktop.cpp drawdemo/drawdemo.cpp
Constructs an identity matrix. All elements are set to zero, except m11 and m22 (scaling) which are set to 1.
Constructs a matrix with the specified elements.
Returns the horizontal translation.
Returns the vertical translation.
Returns the inverted matrix.
If the matrix is singular (not invertible), then the identity matrix is returned.
If *invertible is not null, then the value of *invertible will be set to TRUE or FALSE to tell if the matrix is invertible or not.
Returns the X scaling factor.
Returns the vertical shearing factor.
Returns the horizontal shearing factor.
Returns the Y scaling factor.
Returns the transformed p.
Returns the point array a transformed by calling map for each point.
Returns the transformed rectangle r.
If rotation or shearing has been specified, then the bounding rectangle will be returned.
Transforms (x,y) to (*tx,*ty), using the formulae:
*tx = m11*x + m21*y + dx
*ty = m22*y + m12*x + dy
Examples: xform/xform.cpp
Transforms (x,y) to (*tx,*ty), using the formulae:
*tx = m11*x + m21*y + dx   -- (rounded to the nearest integer)
*ty = m22*y + m12*x + dy   -- (rounded to the nearest integer)
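As a quick sanity check, the transformation formulae above can be written out in plain Python (an illustrative sketch, not Qt code; the parameter names simply mirror the QWMatrix element names, and the rounding variant would just wrap the results in round()):

```python
# Illustrative re-statement of the QWMatrix::map() formulae in Python.
def qwmatrix_map(m11, m12, m21, m22, dx, dy, x, y):
    tx = m11 * x + m21 * y + dx
    ty = m22 * y + m12 * x + dy
    return tx, ty

# The identity matrix maps every point to itself:
assert qwmatrix_map(1, 0, 0, 1, 0, 0, 7, -3) == (7, -3)
# A pure translation by (10, -20):
assert qwmatrix_map(1, 0, 0, 1, 10, -20, 3, 4) == (13, -16)
```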
Returns TRUE if this matrix is not equal to m.
Returns the result of multiplying this matrix with m.
Returns TRUE if this matrix is equal to m.
Resets the matrix to an identity matrix.
All elements are set to zero, except m11 and m22 (scaling) that are set to 1.
Rotates the coordinate system a degrees counterclockwise.
Returns a reference to the matrix.
See also translate(), scale() and shear().
Examples: xform/xform.cpp desktop/desktop.cpp drawdemo/drawdemo.cpp
Scales the coordinate system unit by sx horizontally and sy vertically.
Returns a reference to the matrix.
See also translate(), shear() and rotate().
Examples: qtimage/qtimage.cpp movies/main.cpp xform/xform.cpp
Sets the matrix elements to the specified values.
Shears the coordinate system by sh horizontally and sv vertically.
Returns a reference to the matrix.
See also translate(), scale() and rotate().
Examples: xform/xform.cpp drawdemo/drawdemo.cpp
Moves the coordinate system dx along the X-axis and dy along the Y-axis.
Returns a reference to the matrix.
See also scale(), shear() and rotate().
Examples: xform/xform.cpp drawdemo/drawdemo.cpp
Reads a matrix from the stream and returns a reference to the stream.
See also Format of the QDataStream operators
Writes a matrix to the stream and returns a reference to the stream.
See also Format of the QDataStream operators
Returns the product m1 * m2.
Remember that matrix multiplication is not commutative, thus a*b != b*a.
This file is part of the Qt toolkit, copyright © 1995-2005 Trolltech, all rights reserved. | https://doc.qt.io/archives/2.3/qwmatrix.html | CC-MAIN-2021-49 | refinedweb | 718 | 51.04 |
The ReportLab library is available on PyPI. A user guide (not coincidentally, a PDF file) is also available for download.
You can install ReportLab with pip:

$ python -m pip install reportlab
FileResponse objects accept file-like objects.
Here’s a “Hello World” example:
import io from django.http import FileResponse from reportlab.pdfgen import canvas def some_view(request): # Create a file-like buffer to receive PDF data. buffer = io.BytesIO() # Create the PDF object, using the buffer as its "file." p = canvas.Canvas(buffer) # Draw things on the PDF. Here's where the PDF generation happens. # See the ReportLab documentation for the full list of functionality. p.drawString(100, 100, "Hello world.") # Close the PDF object cleanly, and we're done. p.showPage() p.save() # FileResponse sets the Content-Disposition header so that browsers # present the option to save the file. buffer.seek(0) return FileResponse(buffer, as_attachment=True, filename='hello.pdf')
The code and comments should be self-explanatory, but a few things deserve a mention:
The response will automatically set the MIME type application/pdf based on the filename extension. This tells browsers that the document is a PDF file, rather than an HTML file or a generic application/octet-stream binary content.
When as_attachment=True is passed to FileResponse, it sets the appropriate Content-Disposition header and that tells web browsers to pop-up a dialog box prompting/confirming how to handle the document even if a default is set on the machine. If the as_attachment parameter is omitted, browsers will handle the PDF using whatever program/plugin they’ve been configured to use for PDFs.
You can provide an arbitrary filename parameter. It’ll be used by browsers in the “Save as…” dialog.
You can hook into the ReportLab API: the same buffer passed as the first argument to canvas.Canvas can be fed to the FileResponse class. Note that all subsequent PDF-generation methods are called on the PDF object (in this case, p) – not on buffer.
Finally, it’s important to call showPage() and save() on the PDF file.
Note
ReportLab is not thread-safe. Some of our users have reported odd issues with building PDF-generating Django views that are accessed by many people at the same time.
Notice that there isn’t a lot in these examples that’s PDF-specific – just the bits using reportlab. You can use a similar technique to generate any arbitrary format that you can find a Python library for. Also see How to create CSV output for another example and some techniques you can use when generating text-based formats.
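For instance, here is a minimal, Django-free sketch of the same buffer-then-respond pattern applied to CSV (the column names are made up for illustration; in a view you would hand the buffer to FileResponse just as in the PDF example):

```python
import csv
import io

# Build the CSV in a text buffer, exactly as the PDF example builds its
# binary buffer with ReportLab.
text = io.StringIO()
writer = csv.writer(text)
writer.writerow(["id", "name"])
writer.writerow([1, "alpha"])

# Wrap the bytes in a BytesIO; FileResponse would accept this object.
buffer = io.BytesIO(text.getvalue().encode("utf-8"))
assert buffer.read().decode("utf-8").splitlines() == ["id,name", "1,alpha"]
```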
See also
Django Packages provides a comparison of packages that help generate PDF files from Django. | https://django.readthedocs.io/en/latest/howto/outputting-pdf.html | CC-MAIN-2021-43 | refinedweb | 447 | 57.77 |
I told the Microsoft Visual C++ compiler not to generate AVX instructions, but it did it anyway!
Raymond
A customer passed the /arch:SSE2 flag to the Microsoft Visual C++ compiler, which means “Enable use of instructions available with SSE2-enabled CPUs.” In particular, the customer did not pass the /arch:SSE4 flag,¹ so they did not enable the use of SSE4 instructions.
And then they did this:
#include <mmintrin.h>

void something()
{
    __m128i v = _mm_load_si128(&mem);
    ... more SSE2 stuff ...
    v = _mm_insert_epi32(v, alpha, 3);
    ... more SSE2 stuff ...
}
The _mm_insert_epi32() intrinsic maps to the PINSRD instruction, which is an SSE4 instruction, not SSE2.
To the customer’s surprise, this code not only compiled, it even ran! The customer wanted to know what was happening. Did the compiler convert the _mm_insert_epi32() into an equivalent series of SSE2 instructions?
No, the compiler didn’t do that. You explicitly requested an SSE4 instruction, so the compiler honored your request. The /arch:SSE2 flag tells the compiler not to use any instructions beyond SSE2 in its own code generation, say during autovectorization or an optimized memcpy. But if you invoke it explicitly, then you get what you wrote.
I guess the option could be more accurately (and verbosely) named “Enable automatic use of instructions available with SSE2-enabled CPUs.” Because what this controls is whether the compiler will use those instructions of its own volition.
The customer happened to test their program on a CPU that supported SSE4, so the instruction worked. If they had run it on a CPU that supported SSE2 but not SSE4, it would have crashed.
The reason SSE4 intrinsics are still allowed even in SSE2 mode is that you might have identified some performance-sensitive operations and written two versions of the code, one that uses SSE2 intrinsics, and another that uses SSE4 intrinsics, choosing between the two at runtime based on a processor capability check.
The compiler won’t generate any SSE4 instructions on its own, so your code is safe on SSE2 systems. When you detect an SSE4 system, you can explicitly call the SSE4 code paths.
¹ As commenter Danielix Klimax noted, there is no actual /arch:SSE4 option. Please interpret the remark in the spirit it was intended. (“The customer did not pass any flags that would enable SSE4 instructions.”)
It surprises me that people need this explained
People need this explained because it’s surprising. Consider sane compiler, e.g. clang:
[[gnu::target("default")]] void something(int alpha)
{
    __m128i v = _mm_load_si128(&mem);
    // … SSE2 version …
}

[[gnu::target("sse4.1")]] void something(int alpha)
{
    __m128i v = _mm_load_si128(&mem);
    // using SSE4.1 is OK here.
    mem = _mm_insert_epi32(v, alpha, 3);
}

void test_something() {
    // The right function would be picked automatically.
    something(42);
}
There you can easily write different versions of functions and compiler will do all the right things: stop compilation if you use improper intrinsic, pick the right version if you support more than one CPU, etc.
As you can see behavior MSVC exhibits is not only not needed, it’s obviously harmful: if you are planning to write two versions of the code then you would like to be sure that the one which is not supposed to use SSE4… doesn’t use it.
MSVC fails even at simple detection phase… this is not surprising, though: MSVC was never all that good as pure C++ compiler, it’s strength lies with tight integration with other Visual Studio tools… some of which are superb and leave anything you may find on other platforms in the dust.
A thing: There is no /arch:SSE4. The compiler only supports IA32, SSE, SSE2, AVX, AVX2 and AVX512. (The first three are valid only in x86 compilation.) Meaning that the customer wouldn’t be able to use SSE3, SSSE3 and SSE4.x at all if the arch flag worked the way they thought…
GCC and Clang behave as customers expect.
Well, I prefer the behavior of MSVC.
Why is it preferable? I think in this particular case “explicit is better than implicit”. And it’s not hard to mark the functions where you really need something not supported in the mode selected by command-line switch with [[gnu::target(“sse4.1”)]]
Hm, how does it handle non-SSE instructions (3DNow and XOP/FMA4, (V)AES, SHA and other special cases).
Is there some macro to define before including to only declare prototypes of intrinsics available for specific technology?
Customer desire to have this compile-time checked is very understandable, and it should be simple to define such macro, and sprinkle with conditional preprocessor to only enable appropriate intrinsics. | https://devblogs.microsoft.com/oldnewthing/20201026-00/?p=104397 | CC-MAIN-2021-43 | refinedweb | 764 | 64 |
Ev2 Representation of Moves
Contents
- Summary
- Why is This Necessary?
- Move-Away and Move-Here
- Conclusion
- Further Work
Summary
Ev2 was designed with an intention to support “move” semantics, but the current design of its move() method does not satisfy the requirements. Here I explain why, and present a solution.
The following principles constrain or guide Ev2 design:
- An edit must be able to express any change that can be constructed in a WC and committed.
- An edit must be able to express any change that can be constructed in a repo across multiple revisions.
- Ev2 uses sequential description: a succession of incremental edits applied to an initial state gives a final state.
- Each node should be changed only once.
Given these constraints, not all combinations of moves can be expressed using a “move source to destination” operation, with or without a “rotate” operation, without using temporary paths.
Moves can be expressed, in general, using separate move-away and move-here operations. The shape of this design is:
- The “move away” end of a move is (or can be) a separate operation from the corresponding “move here”.
- While a subtree is “in limbo” after being “moved away” from its source location and before being “moved here” at its destination, the behaviour is as if it has been moved to a temporary path. The temporary path is in a different name-space so that it cannot conflict with any real path, even a real path outside the scope of the edit.
Why is This Necessary?
Express Any Change
Since the purpose of the editor is to communicate a change originating in a WC when it is committed, and a change originating in a repository when the WC is updated, then it must be able to express any such change. This includes updates across multiple revisions, and from a mixed-revision state to a single revision.
Through a series of simple steps of the form “move X to Y”, some quite funky overall changes can be created. For example, starting from this state*:
|
+-- A
    |
    +-- B
the following sequence:
move /A/B /X
move /A /X/B
move /X /A
results in swapping the nodes at paths A and B:
|                         |
+-- A      mv--\   /-->   +-- A
    |           \ /           |
    +-- B  mv----X----->      +-- B
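The emergent effect of such simple steps can be checked mechanically. The following sketch (plain Python, not Subversion code) models a tree as nested dicts, tags each node with an identity, and applies the three moves in sequence; the assertions confirm that the node that started at /A ends up at /A/B, and vice versa:

```python
# Toy model: a tree as nested dicts, each node tagged with an identity
# so we can see where it ends up after the sequence of moves.

def lookup(tree, path):
    """Return (parent_children_dict, basename) for a path like '/A/B'."""
    parts = [p for p in path.split("/") if p]
    node = tree
    for p in parts[:-1]:
        node = node[p]["children"]
    return node, parts[-1]

def move(tree, src, dst):
    """Move the node at src to dst; both paths refer to the current state."""
    sparent, sname = lookup(tree, src)
    subtree = sparent.pop(sname)
    dparent, dname = lookup(tree, dst)
    dparent[dname] = subtree

# Initial state: /A with child /A/B.
root = {"A": {"id": "node-A",
              "children": {"B": {"id": "node-B", "children": {}}}}}

move(root, "/A/B", "/X")
move(root, "/A", "/X/B")
move(root, "/X", "/A")

# The node that started at /A is now at /A/B, and vice versa.
assert root["A"]["id"] == "node-B"
assert root["A"]["children"]["B"]["id"] == "node-A"
```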
If those steps are performed in a WC and the result committed all at once, the editor needs to be able to handle it. If those steps are committed separately, and then a working copy is updated, the editor needs to be able to handle it.
More examples are given later, some of which involve other operations such as “make directory” as well as moves. All of this emergent complexity results from the introduction of a simple “move” primitive, and there does not seem to be any acceptable way to further constrain the basic move that would simplify the derived complexity.
* Notation: In the diagrams, a capital letter represents the name of a node; the node's identity (its node-copy-id) is not shown. In sequential instructions of the form “move /A/B /X”, the “/A/B” represents the source path of a node in the tree state that is being incrementally modified, and “/X” represents the destination path that it will have in the new state after this instruction.
Temporary Paths
Paths in the Ev2 editor operations refer to the current state of the tree being edited, at that moment in the sequence of edit operations. (The sole exception is the copy-from path, which is relative to a given committed revision.)
A natural and compact way of expressing moves would be as a mapping from original-state paths to final-state paths. However, that paradigm does not fit well with the desire to express the edit as a sequence of incremental steps. If we are going to include move functionality as steps in a sequence of edit operations, it makes sense to use paths that are relative to the current state.
Ev2 should not require the driver to express a change as a sequence of operations that can include moving a node to a temporary path and then later moving it again to the final path. The end result of the edit is the important part, and there are unlimited potential sequences that lead to that result, and it does not make sense to require an edit driver to construct such a sequence arbitrarily, if there is an alternative method that specifies the result uniquely. The receiver may in fact need to employ temporary paths in its implementation, but then it knows this and it is in a better position to construct such paths when needed, and it will know that they are temporary which may be important.
There are advantages to placing a node in its final position exactly once. It was claimed that Google's BigTable implementation of Svn's back-end would have benefited from knowing that once a node has been edited then it is in its final state. Ev2 aims to provide that guarantee, where Ev1 could not.
Sequential Description
Ev2 (svn_editor_t) intends to express a new state in terms of an old state and a description of the parts of the new state that differ from the old state, or, in other words, a description of the changes against the old state. It uses a sequential, incremental description: for example, “add directory X, then copy file X/Y from somewhere, then edit the contents and properties of file X/Y”.
Only certain parts of the description are incremental. Ev2 aims to allow nodes to be visited in arbitrary order, subject to a small number of restrictions. The parts where ordering matters are:
- tree changes before content changes
- alter/add/copy a directory (if required) before touching its (immediate) children
A Move Event is Not Adequate
Moves, in general, cannot be expressed as “move from path X to path Y” events in such a sequential description without introducing temporary paths. This is because some cases require the source path of the move to be moved away, then some other steps, and then the destination path of the move can be populated. Some classic examples are:
Example 1: Insert a directory level
|                     |
+--A  mv--\   (add)   +--A
           \          |
            \-->      +--B
The add cannot happen before the move-away, but must happen before the move-here.
Example 2: Swap two siblings
|                    |
+--A  mv--\  /-->    +--A
|          X         |
+--B  mv--/  \-->    +--B
Neither of the moves can be completed before doing the move-away part of the other one.
Example 3: Swap two directory levels
|                      |
+--A     mv--\  /-->   +--A
   |          X           |
   +--B  mv--/  \-->      +--B
Neither of the moves can be completed before doing the move-away part of the other one.
A Rotate Event is Not Adequate
At one time there was an idea that the addition of a “rotate PATH1 PATH2 … PATHn” event would complete the semantics and allow arbitrary moves to be supported.
While this does enable Example 2 (swap two siblings) and Example 3 (swap two directory levels) and many other cases, it does not help with inserting a directory level (Example 1), and it has been shown [1] to be incapable of resolving other more involved cases involving swapping or rotation. One specific example is swapping A with A/B/C [2]:
|                            |
+-- A        mv--\    /-->   +-- A
    |             \  /           |
    +-- B    mv--- X --->        +-- B
        |         /  \               |
        +-- C mv--/    \-->          +-- C
[1] Email thread on dev@, “Ev2 as a move-aware editor”, started on 2013-06-24, e.g. <> or <>.
[2] An example problem in thread [1], of swapping A with A/B/C: <> or <>.
Move-Away and Move-Here
One solution is to describe the two halves of each move separately:
move-away SOURCE-PATH
…
move-here DESTINATION-PATH
We can then solve Example 1 in the following way: issue the “move-away A”, then create a new directory at path A which replaces that source of the move, and then finally issue the “move-here A/B” which relies on that replacement directory A having been created.
The consumer must be able to put the node aside for an indeterminate amount of time until the “move-here” is received.
Of course there needs to be a way to link each move-away with the corresponding move-here. Remembering that each edit step refers to the current state in a sequence of states, we cannot simply specify the path corresponding to the other end of the move like this:
move-away SOURCE-PATH to DESTINATION-PATH
…
move-here DESTINATION-PATH from SOURCE-PATH
because the problem cases are when the destination path does not yet exist at the time of a move-away, or the source path no longer exists at the time of a move-here. What we can do is use some other unique reference that is unique within the edit, like this:
move-away SOURCE-PATH as identifier ID1
…
move-here DESTINATION-PATH from identifier ID1
The reference could perhaps be the destination path as it will finally exist at the end of the edit, or just an arbitrary number or string. We will just specify the identifier as an “id” and not specify how it is generated.
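To make the shape of this concrete, here is a toy receiver (illustrative Python, not the Ev2 C API; the names Receiver, move_away and move_here are invented for the sketch) that parks moved-away subtrees in a limbo table keyed by the edit-unique id, driving Example 1 from above:

```python
# Hypothetical receiver sketch: a moved-away subtree waits in "limbo",
# keyed by an edit-unique id, until the matching move-here arrives.

class Receiver:
    def __init__(self, root):
        self.root = root   # tree: {name: {"id": ..., "children": {...}}}
        self.limbo = {}    # id -> subtree held between the two halves

    def _parent(self, path):
        parts = [p for p in path.split("/") if p]
        node = self.root
        for p in parts[:-1]:
            node = node[p]["children"]
        return node, parts[-1]

    def move_away(self, src, move_id):
        parent, name = self._parent(src)
        assert move_id not in self.limbo, "move id must be unique in the edit"
        self.limbo[move_id] = parent.pop(name)

    def add_directory(self, path):
        parent, name = self._parent(path)
        parent[name] = {"id": "new-" + name, "children": {}}

    def move_here(self, dst, move_id):
        parent, name = self._parent(dst)
        parent[name] = self.limbo.pop(move_id)

    def finish(self):
        # A move-away with no matching move-here makes the edit invalid.
        assert not self.limbo, "dangling move-away ids: %r" % self.limbo

# Example 1 (insert a directory level) driven against the receiver:
r = Receiver({"A": {"id": "node-A", "children": {}}})
r.move_away("/A", "original A")
r.add_directory("/A")             # replacement directory at the source path
r.move_here("/A/B", "original A")
r.finish()
assert r.root["A"]["children"]["B"]["id"] == "node-A"
```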
Explicit Direct Moves
We could have distinct operations for direct and indirect moves:
move SOURCE-PATH DESTINATION-PATH
move-away SOURCE-PATH as identifier ID1
move-here DESTINATION-PATH from identifier ID1
In cases where the driver can issue a move(a,b) instead of a (functionally equivalent) pair of move-away immediately followed by move-here, then the receiver is likely to be able to process that single move more efficiently. So having the three methods available is better than just having the latter two.
- Brane wrote: [The inclusion of an explicit direct move] also makes validating the drive easier. A move-away without a matching move-here (or the converse) is clearly invalid. It must be trivial for the receiver to detect that. Making the temporary locations explicit makes that so much easier. Regarding a direct move without intermediate state, IMO the driver should be required to use that whenever it can. The driver always has enough info to know that the receiver can process such a move. If it cannot, that indicates a bug in the driver.
We should probably just leave it as a "quality of implementation" issue for the editor driver to prefer single move(a,b) instructions over pairs of move-away, move-here. We could try to come up with a requirement that it must do so in certain cases (starting with collapsing adjacent pairs of move-away, move-here), but I think this may be rather difficult to define fully, and of limited benefit.
Ordering Restrictions
The ordering rules regarding move-away and move-here should include:
- mv-away must come before the matching mv-here
- The edit should provide a sequential procedure that the consumer must be able to follow without having to buffer an arbitrary amount of state.
mv-here & cp & add must be in nesting order: create (or put in place) the parent before its children
- mv-away must come before deleting a parent
- Receiver needs to know that it must preserve this path when we delete its parent.
- mv-away must come before mv-away of a parent
- If we allowed “mv-away A; …; mv-away A/B” then the child path “A/B” would have to be specified not relative to the current state of the edit, as all other operative paths are, but in some other way, because the parent has gone into temporary namespace, and has perhaps been replaced so that “A/B” now refers to some other node.
- There is a general rule that all edits within a moved directory “A” must come after A is moved to its destination, but a mv-away of a subtree of A is not considered an edit for this purpose.
- ### Reconsider this rule. s/must/may/ ?
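A few of these rules lend themselves to mechanical checking. The sketch below (hypothetical, not Subversion code) validates just the id-related ones: each id is moved away exactly once, every move-here follows its move-away, and no id is left dangling at the end of the edit:

```python
# Toy validator for the id-related ordering rules.

def validate(ops):
    """ops: list of ("move-away" | "move-here", path, move_id) tuples."""
    away = set()
    for kind, _path, move_id in ops:
        if kind == "move-away":
            assert move_id not in away, "duplicate move-away id"
            away.add(move_id)
        elif kind == "move-here":
            assert move_id in away, "move-here before its move-away"
            away.remove(move_id)
    assert not away, "move-away without a matching move-here"

# The sibling-swap drive from Example 2 passes:
validate([("move-away", "/A", "id1"),
          ("move-away", "/B", "id2"),
          ("move-here", "/A", "id2"),
          ("move-here", "/B", "id1")])
```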
Examples Solved
Example 1: Insert a directory level
|                      |
+--A  mv--\   add-->   +--A
           \           |
            \-->       +--B
- alter-dir / (children={A})
- mv-away A (id=“original A”)
- add-directory A (children={B})
- mv-here A/B (id=“original A”)
Example 2: Swapping two siblings
|                  |
+--A  --\  /-->    +--A
|         X        |
+--B  --/  \-->    +--B
- alter-dir / (children={A,B})
- mv-away A (id=“original A”)
- mv-away B (id=“original B”)
- mv-here A (id=“original B”)
- mv-here B (id=“original A”)
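Traced against a toy tree (plain Python, for illustration only), the two move-away operations park both siblings in limbo, and the two move-here operations then place each node under the other's former path:

```python
# Self-contained trace of the Example 2 drive.
tree = {"A": {"id": "node-A"}, "B": {"id": "node-B"}}
limbo = {}

# mv-away A (id="original A"); mv-away B (id="original B")
limbo["original A"] = tree.pop("A")
limbo["original B"] = tree.pop("B")

# mv-here A (id="original B"); mv-here B (id="original A")
tree["A"] = limbo.pop("original B")
tree["B"] = limbo.pop("original A")

assert tree["A"]["id"] == "node-B" and tree["B"]["id"] == "node-A"
assert not limbo   # every move-away was matched by a move-here
```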
Example 3 can also be solved in this way, except for some ordering restriction issues that are discussed below.
Some further examples related to example 1.
Example 1b: Remove a directory level (by deletion)
|                    |
+--A  (del)    /-->  +--A
    |         /
    +--B  mv--/
The move-away must (in principle) happen before the delete, while the move-here cannot (in principle) happen before the delete. However, the Ev2 “add” and “copy” operations are defined to perform a simultaneous “delete” when replacing an existing node, and so the “move” operation could do the same and thus make this case trivial. Compare with example 1c.
Example 1c: Remove a directory level (by move-away)
|                     |
+--A  mv-->?   /-->   +--A
    |         /
    +--B  mv--/
Here there are two approaches. Either we move-away B, then move-away A, then move-here B (to path A); or we move-away A, then move-away B (from its temporary path inside A, which might currently be in limbo), then move-here B (to path A).
The Need for Alter-directory before Tree Changes
Why do we have this ordering rule?
If any path is added (with add_*) or deleted/moved/rotated, then an svn_editor_alter_directory() call must be made for its parent directory with the target/eventual set of children.
The original reason was to simplify the handling of “incomplete” directories in the WC. The root of the problem is the WC DB requirement that, before updating the DB entry for directory "DIR" to revision REV, there must be child entries in the DB corresponding to (at least) all the children of DIR@REV in the repository. Quoting Philip Martin [1], with this requirement in place, “The Ev2 receiver only ever has to handle incomplete directories that are empty and never has to handle incomplete directories with children. This may simplify the implementation. For example it may make it simpler and/or more efficient to update following an interrupted update.”
From that requirement we extended and derived a requirement for Ev2 that an alter-directory(children=X) must be issued before any calls that change the list of children. At first sight, that rule seems to fit well with the principle of the Once Rule, if we assume that a typical receiver (WC or repository) will treat a directory node as an object that contains a list of children, and will want to update it in one shot. However, it may not actually be as good a fit as it seems. A repository will probably need more information than simply a list of child names: it will want to know the corresponding node ids, and it won't know them until any copies, adds come in. So we should be careful about assuming this alter-dir is necessarily useful.
First let's address some simple problems in the wording above. Inside an add-directory, all the children have to be added, and that would imply we need an alter-directory as well as the add-directory, violating the Once Rule. It omits “copied” – just an oversight? It does not specify when the call must occur – presumably it must be before any such modifications to the children. To remedy those three initial problems, I suppose something like this is intended:
Either alter_directory() or add_directory() must be called on a directory, declaring its final set of children, before calling delete(), move_away(), move_here(), copy(), or add_*() on any child. (For delete() or move_away(), it must be alter_directory(), as the children of add_directory() must not be deleted or moved away.)
But there is still a problem. If we require alter_directory() before a move_away(), it leads to a violation of the Once Rule as shown in the following example.
Example 3: Swapping two directory levels
|                  |
+--A  --\  /-->    +--A
|         X        |
+--B  --/  \-->    +--B
- alter-dir A (children={B})
- mv-away A/B (id=”original A/B”)
- alter-dir / (children={A})
- mv-away A (id=”original A”)
- mv-here A (id=”original A/B”)
- mv-here A/B (id=”original A”)
There is a potential problem here:
- We make an edit within subtree A (the “move-away A/B”) before moving A.
This violates the assumed constraint that we should disallow edits within a subtree before moving the subtree.
[1] Email “Re: Ev2 alter_directory deltas” to dev@ from Philip Martin on 2013-08-14, e.g. <>.
What If We Allow Edit Before Move?
We were assuming that we should disallow edits within a subtree before moving the subtree. [Why?]
One solution might be to drop that requirement and let the subtree be moved (move-away) part way through editing it, allowing all editing operations. To accommodate such a change, some of the other rules that currently refer to “a path” probably need to be reformulated to refer to “a path relative to such a subtree” instead.
If we allow edits before and after moving, should we also allow edits after the move-away and before the move-here? Not sure. It seems like that may be more problematic for certain consumer architectures and so probably should not be allowed. But is there a better way to decide?
What if the Once Rule is Per Node?
The path-based Once Rule was written something like this:
A path should never be referenced more than once by the add_*, alter_*, and delete operations (the "Once Rule"). The source path of a copy (and its children, if a directory) may be copied many times, and are otherwise subject to the Once Rule. The destination path of a copy [or move_here] may have alter_* operations applied, but not add_* or delete. If the destination path of a copy or move is a directory, then its children are subject to the Once Rule. The source path of a move_away() (and its child paths) may be replaced using add_*() or copy() or move_here() (where these new or copied nodes are subject to the Once Rule).
More comprehensively, in tabular form, the sequences of operations allowed on a path are:
Perhaps the Once Rule should not apply per path as it was stated, but rather per node. If a directory is altered and then moved away, we should be able to create a replacement directory, being a different node at the same path, and then alter that.
The Once Rule, Per Node
Let's try to define the Once Rule as applying per “node”, and see how far we can get.
- The Once Rule applies per node, rather than per path. The definition of a “node” is such that “move” moves an existing node (with any children) to a new path and does not create a new node, while “add” creates a new node and “copy” creates a new node (with any children), each new node being different from all other nodes that are or were in the tree even if it replaces an existing node at the same path.
- One of the following actions can be applied, just Once, directly to each node:
- create (only if it does not exist in the initial tree)
- remove (only if it exists in the initial tree or is brought in as a child of a copy)
- modify (only if it exists in the initial tree or is brought in as a child of a copy)
- A node may be created by one of:
- add_*()
- copy(), optionally followed by alter_*()
- Its children (recursively) come with it, and are then subject to the Once Rule as “child of a copy” nodes.
- A node (with any children) may be removed by one of:
- delete()
- add_*() replacing this node
- copy() replacing this node
- move_here() replacing this node
When removing a directory, each child (recursively) is considered to be removed at this time as well, and, as such, must not have been touched by any other operation. However, a previous child node may have been moved away [or deleted?] before this deletion.
[? The driver should not delete a node and then delete an ancestor: instead, it should just delete the ancestor.]
- A node may be modified by either or both of the following, in either order:
- move_away() followed by move_here()
- If the node is a directory, its children (recursively) move with it.
- Only if it exists in the initial tree: not allowed for a child of a copy.
- No operation may touch this node or any children (recursively) between move-away and move-here.
- alter_*()
- alter_directory() is required before “editing” any children
- ### Not sure exactly what we mean here.
- ### The rules would be simpler without this requirement.
- The source of a copy operation may be a node in the initial tree being edited. Such a node (and its children, if a directory) may be copied many times, in addition to being subject to the Once Rule as existing nodes.
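To make the per-node reading concrete, here is a toy sketch (Python, not Subversion code; the class and node-id names are mine) of a receiver-side check that allows exactly one action per node, so a path may legally be touched again as long as the second touch hits a different node:

```python
class OnceRuleViolation(Exception):
    pass

class NodeTracker:
    """Track the single create/remove/modify action allowed per node id."""
    def __init__(self):
        self.actions = {}  # node id -> action already applied

    def apply(self, node_id, action):
        if node_id in self.actions:
            raise OnceRuleViolation(
                "node %r already had %r" % (node_id, self.actions[node_id]))
        self.actions[node_id] = action

# The same path, two different nodes: modify (move away) the original A,
# then create a brand-new node at that path. Both are allowed.
tracker = NodeTracker()
tracker.apply("original A", "modify")   # move_away() + move_here() elsewhere
tracker.apply("new A", "create")        # add_directory() replacing it

# Touching the same *node* a second time is what the rule rejects.
try:
    tracker.apply("original A", "remove")
except OnceRuleViolation as err:
    print("rejected:", err)
```

The point of the sketch is only that the key in the table is a node identity, not a path; a real receiver would also have to track children recursively and the move-away/move-here pairing.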
In tabular form, the sequences of operations that would be allowed on a node in this scheme are:
But there is a problem. While this per-node definition works for the commit editor, the update editor works on WC paths, not on nodes.
Pathwise Edit Drive in an Update
During an update, Subversion runs the editor over the WC paths. Not every WC path is guaranteed to have a distinct node-copy-id, because of (a) switched WC paths and (b) mixed-revision WC paths. Therefore we cannot require distinct node-copy-ids in the edit.
Conclusion
Further work is needed to clarify the meaning of the Once Rule, and the ordering restrictions, especially the requirement about calling alter-dir. Some other aspects of Ev2 rules are still unclear.
Further Work
Alternative Approaches
The following approaches are just theoretical possibilities that could be explored, noted here for completeness and for contrast. There is no suggestion that they are good or even possible alternatives.
1.
An alternative design could perhaps be based on the following: Have a single “move” instruction, and allow it to be issued before there is a parent path ready to accept the destination. The destination's parent directory might need to be created (or replaced), and the destination path itself might need to be freed by deleting or moving away a node that exists with the same name. The destination path might be expressed as a final path, or as a node id and a child name, and the receiver would have to figure out when the destination's parent directory was ready. In other words, rather than have the driver specify the timing of the move-away and move-here, the receiver would choose when to perform the move-here side of the move. How would the receiver know when the destination parent directory is “ready”?
2.
We could issue a complete mapping of initial paths to final paths (or equivalent) at the beginning of the edit. This is quite a natural way of expressing moves, but does not seem a good fit for an editor that is basically sequential.
Half-Moves at the Edge of the Subtree
If an edit is limited to a subtree of the repository, and if we assume this limit is pathwise, then it is possible for a node to move into or out of this subtree during the edit. There are two ways to handle this:
- Driver describes it as an add or delete.
- Driver describes it as a move (or rather as half of a move).
Advantages to describing it as a move include:
- The consumer can then choose whether to warn the user, raise a tree conflict, or even perhaps commit or update another subtree in order to complete the move.
The consumer can convert a move-out to a delete trivially.
The consumer can convert a move-in to an add (of sorts) by requesting the node's content from the server, which is trivial during a commit (when the consumer is the server).
Hi it's me again.
Well I've done this program where it converts roman numerals to numbers but I've used it with if else.
But I'm supposed to use switch.

Code:
/* wo0dy at home doing assignment.
   March 24th 2006 Friday 9am
   D-48-A Cyberia

   (5) Write a program that converts a C++ string representing a number in
   Roman numeral form to decimal form. The symbols used in the Roman numeral
   system and their equivalents are given below:
       I    1
       V    5
       X    10
       L    50
       C    100
       D    500
       M    1000
   For example, the following are Roman numbers: XII (12), CII (102), XL (40).
   The rules for converting a Roman number to a decimal number are as follows:
   a. Set the value of the decimal number to zero.
   b. Scan the string containing the Roman character from left to right. If
      the character is not one of the symbols in the numeral symbol set, the
      program must print an error message and terminate. Otherwise, continue
      with the following steps. (Note that there is no equivalent to zero in
      Roman numerals.) If the next character is the last character, add the
      value of the current character to the decimal value. If the value of the
      current character is greater than or equal to the value of the next
      character, add the value of the current character to the decimal value.
      If the value of the current character is less than the next character,
      subtract the value of the current character from the decimal value.
   Solve this problem using a switch statement. Some sample outputs are given
   below. */

//It works only for letters that correspond to Roman numerals. I think thats what Sir said it should work.

#include <iostream>
#include <cstring>
#include <cstdlib>

using namespace std;

int main()
{
    char roman_num[10];
    cout << "Enter a Roman Numeral: ";
    cin >> roman_num;

    int len = (strlen(roman_num) - 1); //I put -1 cos the lecturer said something about
                                       //the program using an extra space for memory or something
    int sum = 0;
    int b4sum = 0;

    for (int counter = len; counter >= 0; counter--)
    {
        int number = 0;

        if (roman_num[counter] == 'I')
            number = 1;
        else if (roman_num[counter] == 'V')
            number = 5;
        else if (roman_num[counter] == 'X')
            number = 10;
        else if (roman_num[counter] == 'L')
            number = 50;
        else if (roman_num[counter] == 'C')
            number = 100;
        else if (roman_num[counter] == 'D')
            number = 500;
        else if (roman_num[counter] == 'M')
            number = 1000;
        else
        {
            cout << "\nOne or more of the inputs entered is invalid.\a" << endl;
            system("PAUSE");
            return 1;
        }

        if (b4sum > number)
            sum = sum - number;
        else
            sum = sum + number;
        b4sum = number;
    }

    cout << "\nThe Roman Numeral is: " << sum << endl;
    system("PAUSE");
    return 0;
}
I've tried it and read my book but it kept saying that switch quantity is not an integer. The examples from the book were also given using integers.
Can anyone explain the concept to me so I can understand how to apply this?
Note: the generator has evolved since this post. Although the post is still worth reading, please go to for the most up to date doc.
The first blog post I ever wrote was titled “Turning an ascx user control into a redistributable custom control”. It was almost exactly five years ago, and it still gets a lot of hits today. And interestingly, this new blog post is about solving essentially the same problem, but with a much nicer Razor based solution than was available at the time.
The general issue we’re trying to solve is to encapsulate reusable pieces of UI. Unfortunately, this has typically meant choosing between two approaches, each having their pros and cons (this mirrors the intro from my old post):
- Custom code in a library project: this makes it easy to produce a binary that can be used in multiple projects without having to keep source files around. But on the downside, it’s painful to author rendering logic without a view engine language
- Using a non-precompiled markup file: in the WebForms world, that meant an ascx file, while in the Razor world, it means a .cshtml file. This makes it easy to write rendering logic using ascx or Razor syntax. But the drawback is that it’s hard to turn into a reusable library.
Here, I will show you how you can have the best of both worlds in the Razor world: write your helpers using the powerful Razor declarative helpers syntax, while still being able to build them into a reusable library.
The quick ‘getting started’ guide
If you don’t care about the details of how this works and just want to use it, here is what you need to know to run it.
- Go to the VS Extension Gallery and install the Razor Single File Generator.
- Here you may need to restart VS
- In a Library project, create a Razor file (.cshtml extension)
- Under its properties, set the Build Action to None, and set the custom tool to RazorClassGenerator
- Define various @helper methods in the cshtml file. When you save it, it’ll produce a nested .cs file.
- Reference the library from your Razor view (in an MVC3 or Web Pages app) and use the helpers!
And if you’re looking for the source code, it’s all on CodePlex:
What are Razor declarative helpers?
Scottgu introduced the concept in his Razor post, under the “Declarative HTML Helpers” section. Here is an example:
@helper WriteList(string[] items) {
    <ul>
        @foreach (var s in items) {
            <li>@s</li>
        }
    </ul>
}
Here, we are using the powerful Razor syntax to define what our helper will output. The code in the method is basically the same thing as you write in regular Razor rendering logic, but the fact that it’s inside an @helper method turns it into a declarative helper.
Normally, those @helper methods must live either in your view itself, or in a .cshtml file in App_Code. When it’s in the view itself, it’s only usable within that one view, while when it’s in App_Code, it’s usable from anywhere in your app. But in either case, it really isn’t very reusable in the sense that you can’t easily turn it into a library of helpers that you can use in any app without having to carry the .cshtml file.
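Once the helpers are compiled into a library, consuming them from a view is just a namespace import away — something along these lines, where MyHelperLib is a made-up library name:

```
@using MyHelperLib
...
@MyHelpers.WriteList(new[] { "apples", "oranges" })
```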
A VS Single File Generator to the rescue
If you’ve ever used T4 (which I’ve blogged quite a bit about), then you pretty much know what a Single File Generator is. It’s something that you can attach to a file in your project, such that it generates another file underneath it.
In this case, the file we attach a SingleFileGenerator to is the .cshtml file, and what it generates is the source code that the Razor engine produces from it.
Writing a VS Single File Generator may seem scary, but luckily there is a good sample in the SDK:. In fact, most of the code in my Razor generator is directly copied from this. The only place that has interesting code that’s specific to Razor is the RazorClassGenerator.GenerateCode() method. Here is the key code (simplified for brevity):
// Determine the project-relative path
string projectRelativePath = InputFilePath.Substring(appRoot.Length);

// Turn it into a virtual path by prepending ~ and fixing it up
string virtualPath = VirtualPathUtility.ToAppRelative("~" + projectRelativePath);

// Create the same type of Razor host that's used to process Razor files in App_Code
var host = new WebCodeRazorHost(virtualPath, InputFilePath);

// Set the namespace to be the same as what's used by default for regular .cs files
host.DefaultNamespace = FileNameSpace;

// Create a Razor engine and pass it our host
var engine = new RazorTemplateEngine(host);

// Generate code
GeneratorResults results = null;
using (TextReader reader = new StringReader(inputFileContent))
{
    results = engine.GenerateCode(reader);
}

// Then results.GeneratedCode has the CodeDom CodeCompileUnit that the generator needs
So it’s all pretty simple. In essence, all it does is give the content of the Razor file to the Razor engine and asks it to generate the right code for it.
Possible areas of improvement
The most obvious pain point when using this is that you need to manually set the custom tool to RazorClassGenerator. It should be relatively easy to add behavior to the VSIX that would add a right click option on .cshtml files that would set this up.
Another potentially very cool thing is to not only use this to precompile @helpers, but also real Views. This is trickier because it requires some logic that will allow the view engine to find the precompiled Views, but it can certainly be done.
Great post, thanks for sharing!
Two ideas:
– About precompilation of views: This could easily be done when using the T4MVC template since that takes care of finding all the views for you.
– Wouldn't it be great if we had an online library where developers can exchange and rate reusable templates? AJAX components take a lot of fine tuning, and it's a pitty having every developer re-invent the wheel.
@adrian: not sure I understand your T4MVC idea. If you contact me offline we can discuss!
@David: Sorry, my mistake. I thought the problem was to find the views that should be precompiled, whereas you actually meant how to make the view engine use the precompiled views instead of their sources.
Sounds very similar to the view-management concept in Portable Areas within MvcContrib. I love the idea, and I hope it makes it into the product.
Great job implementing this generator!
A small improvement I'd like to suggest is to add the GeneratedCode attribute to the compiled helper class, so that code coverage and code quality tools ignore the generated code.
@Dorin: thanks for the suggestion, I just made that change and pushed a new version to the VS gallery.
This is little bit confusing with the already available helpers. Can you please give one scenario where we can create a helper and use them. I mean some practical application.
Thanks,
Thani
@Thanigainathan: the project on bitbucket (mentioned above) has a small sample you can try.
Scott Gu wrote that it's possible to place helpers into Views/Helpers directory. But it doesn't seem to work for me…
David, very interesting! I was also looking for some way to precompile razor views. I'm sure MS has such a tool already: just take a look at system.web.webpages.administration.dll It's full of compiled cshtml views. The answer to your problem is in there as well though: they use PageVirtualPathAttribute on the generated class.
Apparently the RazorViewEngine takes this attribute into account when searching for views if the virtualpath providers don't return anything. I tried asking Haacked and Scott Guthrie how the WebPages team did this, but did not get an answer as of yet. Perhaps you being closer to the fire (hence, I even thought you were on the same team ;)) could help finding out what tool the WebPages team uses for this precompilation.
If there is no such tool, I will take your class and extend it to compile real views somewhere this week (if noone beats me to it :))
@Chris: yes, I actually wrote the code that you're referring to, and in fact the code in this post started from the same base (though I took out the PageVirtualPathAttribute logic). But note that the PageVirtualPathAttribute logic as it currently exists only works for ASP.NET Web Pages (i.e. WebMatrix), and not for MVC views. I actually prototyped doing it for MVC views earlier, but haven't had time to play with it since. But it is definitely something worth looking into 🙂
Hi David,
This looks interesting…
Is there an issue with making this Extension available in VS2010 Express editions (C# and WebDev)?
(I see that the VSIX Manifest doesn't include Express Editions.)
Thanks,
Martin
@Martin: I just didn't try that. If simply adding it to the manifest allows it to work, we should change it indeed. Please let me know.
Hi David,
I think Express support can be added as follows to the manifest:
<SupportedProducts>
<VisualStudio Version="10.0">
<Edition>Ultimate</Edition>
<Edition>Premium</Edition>
<Edition>Pro</Edition>
<Edition>Express_All</Edition>
</VisualStudio>
</SupportedProducts>
This is my reference:
msdn.microsoft.com/…/ee822857.aspx
It would be great if this could be added.
Thanks,
Martin
@Martin: I tried it and unfortunately it doesn't work. When I add that, I can no longer upload the extension to the VS gallery. It fails with "You need to obtain an exception to upload a tool or control that supports the Visual Studio Express SKUs". I think this is a deliberate limitation that they added to the Express versions.
Ok, got it working for mvc views now. Encountered a lot of Internals again 🙁
I hacked together a t4 template, will now use your code to create a IVsSingleFileGenerator out of that in the next days….
@Chris: it might be worth blogging your results when you are ready!
OK – that's unfortunate.
I'm wondering whether there is a way to get this working without the Extension Manager.
Looks like there is a requirement to place the DLL in a particular folder and make some Registry settings.
How does this work with an automated build? Looking through the project I saw the compiled item referencing the generator, but didn't see the generator itself included anywhere. This is something I could see causing problems for an automated build process. Do you have any tips for getting this included into a project so that it could easily be shared between developers on for a command-line build process?
Hmm, now I've got it working, I'm starting to wonder if it wouldn't be a better option to just embed the source of the views, and use a VirtualPathProvider to serve those to the normal ViewEngines and the BuildManager to be compiled at runtime.
There could be stuff in the web.config for example that changes the behavior of the views for example, that would not work for compiled views.
What's your opinon about that? What advantages would pre-compilation have aside from a little performance win at the application startup?
@Jason: it works just like T4 templates. The idea is that the .cs files are generated at design time while you change and save the cshtml files. The generated .cs files are then part of the project, so there is no need for the generator at build time. So this should not be an issue at all.
@Chris: one big potential benefit of pre-compilation is that it could allow you to unit test your views in a much cleaner way than is possible today, since they're just standard classes in your project rather than things that require runtime magic.
Ah, yeah,good one: just needed an extra push to finish it up 🙂
Ok, finished it:
Of course. Thanks, that makes perfect sense!
The Mapjects engine is implemented on cshtml, and they're using Razor with it. I am having trouble with the Razor syntax highlighter, can someone help…
Hi,
I've just tried using this, but it fails for me two ways – maybe I've done something wrong…
Firstly, if I include a @{} block in the @helper method then the C# generated is invalid. There is a WriteTo(@__razor_helper_writer, ); line that has a missing argument. This occurs even if the code block is empty.
Secondly, the generated class inherits from System.Web.WebPages.HelperPage which means that no Html helper methods work, as the Html member is of the wrong type.
Please let me know if I've done something obviously wrong.
Chris
@Chris: you don't need any @{ } blocks top level in the method, since it starts out being code. You only need this inside tags.
For the Html helper, simplest workaround is to pass it explicitly, e.g.
@helper RenderAboutLink(System.Web.Mvc.HtmlHelper Html) {
@Html.ActionLink("About link from helper", "About", "Home");
}
Thanks David.
It's working well now.
I know I'm a bit late in saying this, but why can't you just create a whole bunch of extension methods and place that class within the System.Web.Mvc.Html namespace? Then you wouldn't have to include a custom namespace in each view, the extensions could be put into a library and you wouldn't have to use the @helper syntax any more.
Thoughts?
@Vince: that's a valid point. Could you open a bug on razorgenerator.codeplex.com to track this. Thanks!
David, I have converted the same example that you have mentioned in What are Razor declarative helpers?
to a re-usable library .. you can refer my article in codeproject…/MVC3CustomControl.aspx
@Ranjan: good article. But note that it's mentioning the old version of the generator, and things have changed some, so you may want to update.
Great tool, please update the screen shot and this post content to current version 1.4.3 for RazorGenerator CustomTool=RazorGenerator
@Vairam: I'm too lazy to update the screen shot (it's harder than text!), but I do have a note at the top pointing to the codeplex site for the latest info 🙂
I am trying to load external CSS file into the main layout and use DLL but it is not working.
Do I need to do anything for loading external css or scripts in the precompiled views
@Deepika please post all RazorGenerator questions to razorgenerator.codeplex.com. Thanks!
Hi David, can't thank you enough for this tip, although I have been quite dumb not to have discovered it for all these years I have been learning MVC.
I am trying something similar: build the markup generation in helper methods and move them off to a separate class library. I am sure this post will be of tremendous help.
It would be VERY nice if there were some benchmarks comparing DLL-packed vs "simple views" on these metrics:
1) HTML rendering time. (I guess it should be the same, but better safe than sorry)
2) Project build time. (This should be longer than usual, I guess)
One extra advantage of this approach is that deploying a simple DLL is WAY simpler than deploying hundreds of views with their directory structure. If the same could be done for all resources (JS, images, etc) it would be even better for lazy people like me.
Perhaps it even makes defacing a site even more difficult! 😀
@Kostas: sorry, I don't have benchmarks available. I wouldn't think the build time should be measurably slower. The C# compiler is pretty fast, and having an extra 20-ish small .cs files in there won't affect it.
I wouldn't bet to hard on the defacing benefit! 🙂
BTW, note that the better place for all RazorGenerator discussions is razorgenerator.codeplex.com
Hi David!
Thank you for this! But I was just wondering if it's possible to read the contents of the embedded views from another project? 😀
Building a Custom Twig Filter the TDD Way

Twig filters transform a value right inside the template before it is displayed. For example, the number_format filter can convert a number into a more readable one:
{{price|number_format(2, '.', ',')}}
Assuming price is a variable with value 1234567.12345, after the filter operation the output in the page will be 1,234,567.12: 2 decimal places, “.” as the decimal point and “,” as the thousands separator. This makes it much more readable.
As another example, capitalize will make every first letter of a word in a sentence uppercase and others lowercase:
{{title|capitalize}}
Assuming title is a variable with the value this tutorial is nice, after the filter operation, the output will be This Tutorial Is Nice.
Twig has a number of built-in filters. The full list can be found in its official documentation.
Why Choose a Filter?
Some may argue that the above functionality is also doable in PHP; that is true. We also often find the functionality provided by built-in filters quite limited and insufficient. So, why use a filter?
In an MVC environment, the model layer is responsible for providing data (a book price or an article title, for example). The view layer is responsible for displaying the data. Doing the data conversion, filter-style, in a controller is not advisable because it is against the design role of a controller, and doing it in a model effectively changes the data, which is not good. In my opinion, the view is the only viable option.
Besides, as a particular transformation of data may be requested in many places in a template (as well as in various templates) on the same data from various sources, it is better to call that filter in the template every time such a conversion is required, than to call a function in the controller. The code will be much tidier.
Let’s consider the following code segments comparing using a filter and a PHP function call (using Symfony 2 + Doctrine). We can easily see the differences in elegance and usability.
Filter version:
... <em>{{ book.title|capitalize }}</em> has {{book.pages|number_format(2, ".", ",")}} and costs ${{ book.price|number_format(2, ".", ",")}}. ...
And for this approach, what we do in the controller will be:
$book = $repo->findById('00666');
...
return $this->render('A_Template', ['book' => $book]);
Find the book (the data) and pass it to the view to display.
But if we use PHP function calls, the code may look like this:
//Using PHP function within a Symfony framework and Doctrine $book=$repo->findById('00666'); $book['tmpTitle'] = ucwords($book->getTitle); $book['tmpPage'] = number_format($book->getPages(), 2, ".", ","); $book['tmpPrice'] = number_format($book->getPrice(), 2, ".", ","); ... return $this->render('A_Template', ['book'=>$book]);
.. and then in the template
<em>{{ book.tmpTitle }}</em> has {{book.tmpPages}} and costs ${{ book.tmpPrice}}.
You can see that the filter approach is much cleaner and easier to manage, with no clumsy temp variables in between.
Let’s build a filter
We’ll build a filter to display the publication date/time of a post in a fancier way. That is, instead of saying something like “Posted at 2015-03-14 13:34“, this timestamp will be transformed into something like “Just now“, “A few hours earlier“, “A few days back“, “Quite some time ago“, “Long, long ago“, etc., depending on how far the date/time is from the current moment.
We’ll build it in a TDD way. To get introduced to TDD, see this post and the links within it, but the approaches we take in this tutorial should be easy enough to understand even without looking into TDD beforehand.
First, install PHPUnit by executing the following Composer command:
composer global require phpunit/phpunit
This will install the most recent version of PHPUnit globally, making it accessible on your entire machine from any folder. We will use PHPUnit to run the tests and assert that all the expectations are met.
Set expectations
The use case is clear: we want to convert a date/time object (2014-03-19 12:34) into something like “Just now“, “A few hours ago“, “A few days back“, “Quite some time ago“, “Long, long ago“, etc., depending on how far the date/time is from the current moment.
There is no set rule to determine how far away a date/time should be so that it can be displayed as “Quite some time ago“. This is a subjective matter, so we will define a customized rule set for our app, and these rules will be reflected in our expectations:

- up to 1 minute ago: “Just now”
- 1 to 10 minutes ago: “Minutes ago”
- 10 minutes to 1 hour ago: “Within an hour”
- 1 to 16 hours ago: “A few hours ago”
- 16 to 24 hours ago: “Within one day”
- 1 to 3 days ago: “Some time back”
- 3 to 10 days ago: “Ages ago”
- more than 10 days ago: “From Mars”
Let’s translate these expectations into a test script so that we can test it with PHPUnit. This script is saved in src/AppBundle/Tests/Twig/timeUtilTest.php:
<?php

namespace AppBundle\Tests\Twig;

use AppBundle\Twig\AppExtension;

class timeUtilTest extends \PHPUnit_Framework_TestCase
{
    /**
     * @dataProvider tsProvider
     */
    public function testtssFilter($testTS, $expect)
    {
        $tu = new AppExtension();
        $output = $tu->tssFilter($testTS);
        $this->assertEquals($output, $expect);
    }

    public static function tsProvider()
    {
        return [
            [date_sub(new \DateTime(), new \DateInterval("PT50S")), "Just now"],
            [date_sub(new \DateTime(), new \DateInterval("PT2M")), "Minutes ago"],
            [date_sub(new \DateTime(), new \DateInterval("PT57M")), "Within an hour"],
            [date_sub(new \DateTime(), new \DateInterval("PT13H1M")), "A few hours ago"],
            [date_sub(new \DateTime(), new \DateInterval("PT21H2M")), "Within one day"],
            [date_sub(new \DateTime(), new \DateInterval("P2DT2H2M")), "Some time back"],
            [date_sub(new \DateTime(), new \DateInterval("P6DT2H2M")), "Ages ago"],
            [date_sub(new \DateTime(), new \DateInterval("P13DT2H2M")), "From Mars"],
        ];
    }
}
If we run this test now:
phpunit -c app/
The test won’t run because we have not defined the AppBundle\Twig\AppExtension yet. We can quickly create a skeleton file: src/AppBundle/Twig/AppExtension.php. It can be as simple as this:
namespace AppBundle\Twig;

class AppExtension extends \Twig_Extension
{
    public function getFilters()
    {
        return [
            new \Twig_SimpleFilter('tss', [$this, 'tssFilter']),
        ];
    }

    public function getName()
    {
        return 'app_extension';
    }

    public function tssFilter(\DateTime $timestamp)
    {
        // to be implemented
    }
}
Now we can run the test script. All tests (expectations) will fail because we have not done anything to implement the tssFilter function.
NOTE: Symfony2 works very well with PHPUnit. With the default Symfony2 setup, there is a phpunit.xml.dist file in the project’s app folder. The above command will automatically use that file as the configuration file for PHPUnit. Normally, no further adjustment is needed.
The full code of the tssFilter function is listed below:
public function tssFilter(\DateTime $timestamp)
{
    $TSS = ['Just now', 'Minutes ago', 'Within an hour', 'A few hours ago',
            'Within one day', 'Some time back', 'Ages ago', 'From Mars'];
    $i = -1;
    $compared = new \DateTime();
    $ts1 = $timestamp->getTimestamp();
    $co1 = $compared->getTimestamp();
    $diff = $ts1 - $co1;

    if ($diff < 0) // Diff is always < 0, so always start from index 0
    {
        $i++;
    }
    if ($diff < -1 * 60) // within one minute
    {
        $i++;
    }
    if ($diff < -10 * 60) // within ten minutes
    {
        $i++;
    }
    if ($diff < -60 * 60)
    {
        $i++;
    }
    if ($diff < -16 * 60 * 60)
    {
        $i++;
    }
    if ($diff < -24 * 60 * 60)
    {
        $i++;
    }
    if ($diff < -3 * 24 * 60 * 60)
    {
        $i++;
    }
    if ($diff < -10 * 24 * 60 * 60)
    {
        $i++;
    }

    return $TSS[$i];
}
The code will reside in tssFilter. It accepts a DateTime object so that the program can determine which string in $TSS should be returned, based on how far timestamp is from now.
That’s it! Run the test, and everything should pass!
Integrate it into Symfony
The tssFilter is still isolated from the Symfony framework. To use it in our template, we need to register it in the services.yml file:
services:
    app.twig_extension:
        class: AppBundle\Twig\AppExtension
        tags:
            - { name: twig.extension }
We must provide the fully qualified name of the filter class: AppBundle\Twig\AppExtension.
Finally, we can use it like this in our Twig template:
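Assuming a post variable with a createdAt DateTime property (the names here are illustrative), the call reads just like any built-in filter:

```twig
Posted {{ post.createdAt|tss }}
```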
The filter name (
tss) is derived from
src/AppBundle/Twig/AppExtension.php file’s
tssFilter() function name and like with other Symfony components, “Filter” is stripped.
Wrapping up
In this quick tutorial, we covered a few things:
- Twig filters and why it is better to use them than pure PHP calls.
- How to build a custom filter the TDD way with PHPUnit.
- How to integrate a filter into the Symfony framework.
Leave your comments and thoughts below and share your achievements! | https://www.sitepoint.com/building-custom-twig-filter-tdd-way/?utm_source=rss | CC-MAIN-2020-05 | refinedweb | 1,307 | 53.81 |
Introduction to Internationalization Programming
If you are going to write a real i18n program, it would be wise to think that you know nothing about a specific language and take charsets into account. Ideographic languages have many more than 26 letters: Japanese has about 2,000, and Chinese has about 5,000. To deal with such characters, the POSIX locale has multibyte and Wide Class (wchar_t). The latter is done for Unicode. To convert one into another, functions like mblen(), mbstowcs(), wctomb(), mbtowc() and wcstombs() are used. However, using Unicode is beyond the scope of this article.
Producing real multilingual software is a complex task. Hopefully, the GNU gettext system that now conforms with SUN XView, will help you write i18n programs.
Figure 1 represents all necessary steps for producing an i18n program:
To create an i18n version, you have to edit a non-i18n program. If you use a special editor mode you will create an additional file at the same time, called a POT file, where PO stands for portable object, and the letter T is for template.
If you merely make a revision of an existing i18n program, or if a POT file does not exist, you have to use the xgettext program to produce it.
Copy the template file into ll.po, where ll refers to a certain language.
Translate messages into the language ll.
Create a ll.mo file with the msgfmt program (mo stands for machine object). Sometimes you can see gmo files (g stands for GNU).
Compile your source; put the binary program and ll.mo files into the right place. This and the previous steps are better accomplished with a Makefile.
Before looking briefly at all the steps of a simple program, please read these golden rules of internationalization:
1) Put the following lines into the non-executable part of your program, and mark messages in the source file as _("message") instead of "message" in the executable part of the program and N_("message") in the non-executable part. Pay attention to the output to guarantee passing the strings declared as constants through gettext, i.e., in the non-executable part:
#include <libintl.h> #include <locale.h> #define _(str) gettext (str) #define gettext_noop(str) (str) #define N_(str) gettext_noop (str)
2) Start your program by setting the locale:
setlocale (LC_ALL, "");3) Indicate the message catalog name, and if necessary, its location:
bindtextdomain(PACKAGE,LOCALEDIR); textdomain (PACKAGE);PACKAGE and LOCALEDIR usually are provided either by config.h or by the Makefile file.
4) To check a symbol's properties and conversion, use calls like isalpha(), isupper(), ..., tolower() and toupper().
5) To compare strings, use the strcoll() and strxfrm() functions instead of strcmp().
6) To guarantee portability with old versions of locale, use a variable of type unsigned char for symbols, or compile your program with the -funsigned-char key.
Let's make a simple internationalized program in which these rules are ignored (Listing 2). The program outputs an invitation to type, reads a string from the terminal and counts the digits in it. The results of this counting are output in the terminal, then the program exits.
Listing 2. A Non-I18n Program
Because the program is small, we can change it easily according to the rules with your favorite editor; if the program is large, it is better to use special tools. Editors like (X)Emacs or vi with po mode can create a counter.pot file at the same time that you are changing the program source!
The changed file is shown in Listing 3. Lines 4-8 are added according to rule 1. Definitions from the locale.h file may not be necessary; they may be included within the libintl.h definitions. Writing gettext and gettext_noop many times is annoying, so we will use macros, as defined in lines 6-8. Using gettext_noop is an example of pre-initialized strings at the compile stage. A possible solution is shown in our program where using gettext_noop allows the strings to be recognized by gettext at the time of executing.
Listing 3. I18n Version of the Program Shown in Listing 2
Without line 15 (rule 2), the program will not understand your locale and will use the C locale. Note that sometimes it is necessary to set special categories of locale, such as LC_CTYPE and LC_MESSAGES. See man setlocale and Table 1 in this article for more information.
Table 1. Categories of Locle and Shell Variables
Lines 16 and 17 were inserted according to rule 3. Usually the parameters of these calls are provided in either a Makefile or a special file (like config.h) that holds configuration information, but in this program we put in the names directly. According to line 16, searching will be started in the current directory. If the line with the call is omitted, Linux will use the default location, /usr/share/locale.
The call textdomain() must be presented in any i18n program. It points the gettext system to the filename with i10n messages.
Lines 19, 25 and 26 where changed according to rule 1. Lines 19 and 25 are simple: instead of using strings directly, we call them through gettext to use a message catalog. Line 26 demonstrates the exception. We cannot transform strings defined in the non-executable part through gettext because there the values are initialized before running the program by the compiler. The problem is solved according to rule 1. We marked the strings with N_ in line 12 to make them recognizable by xgettext; we used _(mess) instead of mess in line 26, as with normal strings. We do not need to do more, because of the function isdigit (see rule 4).
Now the program is internationalized. Compiling and running it, however, produces exactly the same result as the previous non-i18n one. Messages from the counter.pot file have to be translated into a specific language.
There is another way to create an initial .pot file. Once you have an i18n program, you can use xgettext. This scans the source files and creates corresponding strings for translation. In our case, we can invoke it like this:
xgettext -o counter.pot -k_ -kN_ counter.c
where -o is for output file name and -k_ -kN_ is to extract strings that start with corresponding symbols. Consult info xgettext to get i18n support sucks
Wow, linux internationalization support sucks.
There's so little documentation, and no standard system support.
Windows is much better in this aspect. | http://www.linuxjournal.com/article/6176?page=0,1 | CC-MAIN-2014-52 | refinedweb | 1,085 | 65.42 |
Step-by-step
Download the zip file
You can get the zip file containing the TI Debugger here.
Configure TM1 server
Edit the tm1s.cfg file
- Enable ODATA on the TM1 Server (i.e. HTTPPortNumber=8000)
- Enable TI Debugging (EnableTIDebugging=True)
Unzip file and run the debugger
Run the TurboDebugger.bat file to use the default SSL certificates. If you are using your own certificates, you will need to edit the .bat file to use the appropriate ones. After running the TurboDebugger.bat file, the TI Debugger will automatically start.
Could you explain in more detail where we can put breakpoints?! I started both Architect and Performace Modeler but i’m not able to use this capability. Thanks in advance. Regards.
You can only use the breakpoint capabilities within the TI Debugger application. You will not be able to see the breakpoints in Architect or Performance Modeler.
Thanks a lot for your answer.
I can’t log in successfully. I’ve added the parameters to the tm1s.cfg file, and try to log in with the credentials. I click on Login but nothing happens. Could you give me a hint how to get this work? Thanks and best regards.
I’m not sure what’s wrong. If UseSSL = True, then you should use https instead of http in the URL. Could that be the issue?
perfect! https does the job. Thank you very much for the fast reply. Best regards
I have configured the tm1s.cfg file with the parameter. However when I open the tm1 debugger.bat file . Once I provide the url the CAM namespace is greyed out. We are using Integrated security mode = 5. I also restarted the tm1 instance but CAM namespace is still grayed out.
Can you please help with this . thanks in advance
There’s an updated version uploaded that fixes this issue. In the meantime you can use a workaround of pressing the “login” button with no username and password and the CAM detection should refresh then.
I have configured the tm1s.cfg with the necessary parameters, however i am no able to login. I had given admin userid and password along with the TM1 server HTTP URL. After i click on Login, nothing happening. There is no error and its not logging in. Can someone help.
Same thing is happening to me, did you fix it?
Hello,
In the past on a different laptop (Win 7) the version 3 of the TI debugger worked fine.
Now on a new laptop with Win 10 it does not work anymore.
Error message is:
error: unable to access jarfile TurboDebugger.jar
What can I try to solve it please ?
Update: I solved it, the last version (6) works fine.
But only if I open the jar file. By executing the bat file, I get the error message above.
Does anyone know of a manual for the TI debugger, or something that simply explains what each function does?
Could you please implement conditional breakpoints? It would be great to use a simple condition on a variable value, e.g. break only if (pValue = 2). Thank you!
They are implemented. If you right click on an existing breakpoint and select the edit option you will see a condition field. The syntax is not well defined but it should allow variable name lookups and all the same condition operators as TI.
I get the error “Unable to initiate process debugging” do you know what could be the problem? I’m running TM1 10.2.2 fp6. Windows 7 Professional.
Thanks.
Only versions 10.2.2 FP7 and PA 2 (11.x) have the necessary APIs for the tool to run.
Can this be used (or ever be available) for cloud version of PA?
I notice that the current ingredient is Local.
You should be able to use the tool as-is with the cloud version of PA. You may need to create and use automation account credentials though.
Is the Turbo Integrator Debugger working when the TM1 server uses Integrated Security Mode = 2?
When IntegratedSecurityMode = 2, the debugger will be attempting to use SSO sign on. Because Windows and Java don’t nicely share Kerberos tokens by default, you’ll need to update the Windows registry to enable the JVM to retrieve the necessary token. )
Hi,
Firstly this is a fantastic tool. I am having a problem last few lines of code are not being displayed, on tabs. This has been raised on the TM1 Forum by a number of users with a screenshot to explain . For me the problem is not consistent, ie different TI scripts of different lengths are not being displayed. Is there a fix.
Cheers,
Peter | https://developer.ibm.com/recipes/tutorials/ibm-tm1-turbointegrator-debugger/ | CC-MAIN-2017-51 | refinedweb | 782 | 78.04 |
OAuth 2 All the Things with oPRO: Users and API
In my previous article we met oPRO – a Rails engine to build full-fledged OAuth 2 providers. We’ve already created a server (the actual provider) and a client app. For now, the server has a basic authentication system powered by Devise. Users are able to create applications to receive client and secret keys, authenticate via OAuth 2 and perform sample API requests. However, currently there is no storage mechanism for the user’s data on the client side. Moreover, we don’t get any information about a user apart from their tokens. This article will address these issues, as well as introducing some more API actions and refactoring the code.
The source code for the server and client applications can be found on GitHub.
Introducing Users
Storing tokens inside the user’s session isn’t really convenient, therefore I’d like to introduce a new
users table for the client app and save all the relevant data there. Having a model in place will also allow us to extract some methods to tidy up the controllers.
Create a new migration:
$ rails g model User email:string uid:string access_token:string refresh_token:string expires_at:string
Apart from tokens and expiry information, we will also store the user’s uid and email – as you probably do when using authentication providers like Twitter or Facebook.
Modify migration to include some indexes:
migrations/xxx_create_users.rb
class CreateUsers < ActiveRecord::Migration def change create_table :users do |t| t.string :email, index: true t.string :uid, index: true, unique: true t.string :access_token t.string :refresh_token t.string :expires_at t.timestamps null: false end end end
and apply it:
$ rake db:migrate
Also, add basic validation rules:
models/user.rb
[...] validates :access_token, presence: true validates :refresh_token, presence: true validates :expires_at, presence: true validates :uid, presence: true, uniqueness: true [...]
Having this in place, we can tweak the
SessionsController (for now, I’ll just code its skeleton):
sessions_controller.rb
[...] def create user = authenticate_and_save_user if user login_user flash[:success] = "Welcome, #{user.email}!" else flash[:warning] = "Can't authenticate you..." end redirect_to root_path end [...]
There are two things that we have to take care of: perform user authentication and actually log them in. We might leave the code from the previous iteration
JSON.parse RestClient.post( [...] )
right inside the controller’s action, but that’s not a great idea. In the production environment, you’d probably create a separate authentication adapter for your app and serve it as a gem. However, for our purposes, it will be enough to extract the code somewhere to a separate file. I’ll stick with a model, but you may also place it inside the lib directory (just don’t forget to require that file).
models/opro_api.rb
class OproApi TOKEN_URL = "#{ENV['opro_base_url']}/oauth/token.json" API_URL = "#{ENV['opro_base_url']}/api" attr_reader :access_token, :refresh_token def initialize(access_token: nil, refresh_token: nil) @access_token = access_token @refresh_token = refresh_token end def authenticate!(code) return if access_token JSON.parse(RestClient.post(TOKEN_URL, { client_id: ENV['opro_client_id'], client_secret: ENV['opro_client_secret'], code: code }, accept: :json)) end end
TOKEN_URL and
API_URL are just convenience constants.
access_token and
refresh_token are instance variables that will be used in various methods of this class.
When defining the
initialize method, I list arguments in a hash style – this cool feature is supported in newer versions of Ruby and allows to pass arguments easily (you don’t have to remember their order).
authenticate! contains the code from the previous iteration (I only added a
return statement). It sends
a request and returns parsed JSON.
Now, return to the controller:
sessions_controller.rb
[...] def create user = User.from_opro(OproApi.new.authenticate!(params[:code])) if user login_user flash[:success] = "Welcome, #{user.email}!" else flash[:warning] = "Can't authenticate you..." end redirect_to root_path end [...]
Here we are taking advantage of newly created
authenticate! method. The next step is to code the
from_opro class method that takes the user’s auth hash and stores it into the database. If you read my article about OAuth 2, I introduced a very similar method there called
from_omniauth.
models/user.rb
[...] class << self def from_opro(auth = nil) return false unless auth user = User.find_or_initialize_by(uid: auth['uid']) user.access_token = auth['access_token'] user.refresh_token = auth['refresh_token'] user.email = auth['email'] user.save user end end [...]
If, for some reason, the auth hash has no value, just return
false – this will make our controller render an error. Otherwise, find or initialize a user by the uid, update all the attributes, and return the record as a result.
Great! The last step is to perform logging in. For this, I’ll introduce a couple of helper methods:
application_controller.rb
[...] private [...] def login(user) session[:user_id] = user.id current_user = user end def current_user @current_user ||= User.find_by(id: session[:user_id]) end def current_user=(user) @current_user = user end helper_method :new_opro_token_path, :current_user [...]
These are very basic methods that you’ve probably seen countless times.
current_user will be employed in the views, so I mark it as helper.
Let’s also modify the view:
views/pages/index.html.erb
<h1>Welcome!</h1> <% if current_user %> <ul> <li><%= link_to 'Show some money', api_tests_path %></li> </ul> <% else %> <%= link_to 'Authenticate via oPRO', new_opro_token_path %> <% end %>
I guess users might want to logout, so present a corresponding action as well:
config/routes.rb
[...] delete '/logout', to: 'sessions#destroy', as: :logout [...]
sessions_controller.rb
[...] def destroy logout flash[:success] = "See you!" redirect_to root_path end [...]
logout is yet another method:
application_controller.rb
[...] def logout session.delete(:user_id) current_user = nil end [...]
Present a new link:
views/pages/index.html.erb
<h1>Welcome!</h1> <% if current_user %> <ul> <li><%= link_to 'Show some money', api_tests_path %></li> </ul> <%= link_to 'Logout', logout_path, method: :delete %> <% else %> <%= link_to 'Authenticate via oPRO', new_opro_token_path %> <% end %>
Tweaking the Authentication Hash
At this point you might be thinking, “How in the world can we fetch the user’s email and uid if, by default, oPRO only lists tokens and an expiry information?” Well, you are right, an something has to be done about it. The problem, however, is the current stable release has no way to redefine
TokenController. This controller actually takes care or generating all the tokens and creating an authentication hash. However, I did some tweaking) to the oPRO’s source code that were merged into the
master branch. For now you’ll have to specify it directly:
Gemfile
[...] gem 'opro', github: 'opro/opro', branch: 'master' [...]
If something changes in the future, I will update this article as necessary.
First of all, introduce a new route:
config/routes.rb
[...] mount_opro_oauth controllers: { oauth_new: 'oauth/auth', oauth_token: 'oauth/token' }, except: :docs [...]
Create a custom controller inheriting from the original one:
controllers/oauth/token_controller.rb
class Oauth::TokenController < Opro::Oauth::TokenController end
We don’t need to monkey-patch any action, as the only thing that has to be done is changing the view:
views/oauth/token/create.json.jbuilder
json.access_token @auth_grant.access_token json.token_type Opro.token_type || 'bearer' json.refresh_token @auth_grant.refresh_token json.expires_in @auth_grant.expires_in json.uid @auth_grant.user.id json.email @auth_grant.user.email
jbuilder takes care of creating the proper JSON for us.
Everything, apart from the
uid and
You are good to go. Boot the server and try to authenticate. All of the user’s information should be stored properly.
Working with the API
A Bit of Refactoring
We’ve done a good job, but I still don’t like the way our code performs API requests. Let’s extract it to the opro_api.rb file:
models/opro_api.rb
[...] def test_api JSON.parse(RestClient.get("#{ENV['opro_base_url']}/oauth_tests/show_me_the_money.json", params: { access_token: access_token }, accept: :json)) end [...]
Now it can be called it from the controller’s action:
api_tests_controller.rb
class ApiTestsController < ApplicationController before_action :prepare_client def index @response = @client.test_api end private def prepare_client @client = OproApi.new(access_token: current_user.access_token) end end
We will require
@client to perform any API request, therefore I’ve placed it inside the
before_action.
That’s nice, but what if a current user does not have a token for some reason? Let’s check that prior to doing anything else:
api_tests_controller.rb
class ApiTestsController < ApplicationController before_action :check_token before_action :prepare_client [...] private [...] def check_token redirect_to new_opro_token_path and return if !current_user || current_user.token_missing? end end
models/user.rb
[...] def token_missing? !self.access_token.present? end [...]
Now if a token does not exist, the user will be asked to authenticate once again.
Introducing More Actions
Let’s add two more custom API actions: fetching information about a user and updating a user. The first step is adding a new
Api::UsersController on the server app:
config/routes.rb
[...] namespace :api do resources :users, only: [:show, :update] end [...]
controllers/api/users_controller.rb
class Api::UsersController < ApplicationController def show @user = User.find(params[:id]) end def update @user = User.find(params[:id]) @user.last_sign_in_ip = params[:ip] render json: {result: @user.save} end end
It does not really matter what the
update does, so for demonstration purposes let’s simply modify the
last_sign_in_ip column introduced by Devise.
Don’t forget a view:
views/api/users/show.json.jbuilder
json.user do json.email @user.email end
Once again, I am returning some sample data here.
There are two thing to note, however. Before actually performing those actions, we haven’t checked whether an access token is in the request and if it is valid. oPRO provides an
allow_oauth! method that takes
only and
except options just like
before_action. By default, no actions are allowed to be performed based on the access token, so add this line to the controller:
controllers/api/users_controller.rb
class Api::UsersController < ApplicationController allow_oauth! [...] end
Another thing to take into consideration is that we won’t send a CSRF token when performing the
update action, so Rails will raise an exception. To avoid that, modify this line:
application_controller.rb
[...] protect_from_forgery [...]
Now request forgery protection will use
null_session, meaning that a session will be nullified if the CSRF token is not provided, but not reset completely. As long as we are relying on the access token to perform authentication, everything should be fine.
Return to the client’s app and tweak our poor man’s API adapter:
models/opro_api.rb
[...] def get_user(id) JSON.parse(RestClient.get("#{API_URL}/users/#{id}.json", params: { access_token: access_token }, accept: :json)) end def update_user(id) JSON.parse(RestClient.patch("#{API_URL}/users/#{id}.json", { access_token: access_token, ip: "#{rand(100)}.1.1.1" }, accept: :json)) end [...]
Now the controller:
api_tests_controller.rb
[...] def show @response = @client.get_user(params[:id]) end def update @response = @client.update_user(params[:id]) end [...]
Views will simply render a response:
views/api_tests/show.html.erb
<pre><%= @response.to_yaml %></pre>
views/api_tests/update.html.erb
<pre><%= @response.to_yaml %></pre>
Add new routes:
config/routes.rb
[...] resources :api_tests, only: [:index, :show, :update] [...]
Provide links to these new actions:
views/pages/index.html.erb
[...] <ul> <li><%= link_to 'Show some money', api_tests_path %></li> <li><%= link_to 'Get a user', api_test_path(1) %></li> <li><%= link_to 'Update a user', api_test_path(1), method: :patch %></li> </ul> [...]
I just hard-coded the user’s id here, but that does not really matter for this demo. Go ahead and play with the API a bit! Note, however, that for the
update action to work correctly, you must permit “write” access to the app when authenticating. I’ll speak more about various permissions in a later section.
Conclusion
Our application is starting to look pretty nice, but once again that’s not enough. We still have a handful of things to take care of:
- Access tokens should have a limited lifespan and some rate limitation should be introduced.
- We haven’t discussed other advanced topics, like working with scope, introducing your own authentication solution and exchanging user’s credentials for a token.
So, the last, but not the least, part of this article will cover all these topics. Hold on tight! | https://www.sitepoint.com/oauth-2-all-the-things-with-opro-users-and-api/ | CC-MAIN-2020-10 | refinedweb | 1,962 | 51.34 |
Grant Griffin <g2 at seebelow.org> wrote: > Of course, I don't expect that eveyone will share this sensibility; if > Perl makes sense to _you_, more power to 'ya. But I did it off-and-on > for three years, and my fingers did a lot of walking through the > camel--they probably walked a mile for it. I found I had to do a lot of extra research to become fluent in Perl as well. However, nothing truly worthwhile comes too easily, right? ;) > But my own theory is that Perlers actually revel in the nonsensicalness > of their language, much as a tight-rope walker proudly calls attention > to his lack of a net. <<karl walenda died>> It's an ego thing. That > explains the one-liners and the "just another perl hacker" thing. We do, to a point, revel in this. And ego is definitely rampant in clpm. > By the way, here's my own personal Python JAPH implementation: > > print "just another perl hacker" Coincidentally, and I don't want to hurt your pride, but this is also Larry Wall's implementation. Apparently he has also become tired of all these cutesy little tricks. Of course, Larry would remember to put a semi-colon after that statement if there were more than one statement in the codeblock... would you? ;) > But in terms of your point about "the best tool for the job", one _does_ > have to admit in all fairness that there's at least _one_ thing that > Perl is an order-of-magnitude better at than Python: execute Perl > scripts. How 'bout regexps? Perl didn't create regexps, so why bag on them as some in this thread have done? Unix operations demanded the existence of regex pattern matching and Perl, which was so obviously derived from several Unix tools, accomplishes them very well. I've noticed Pythoners like to stand on soapboxes and look down on regexps. ("A true Python solution wouldn't use regex's.") Why look down on something so powerful? 
Regexps are optimized for matching text against patterns as quickly and efficiently as possible, whereas you can only have so many: if 'x' in word: statements before you're bloating your code unnecessarily. I agree that: if( $char =~ /^a$/ ) {...} would be much more efficiently implemented as: if( $char == 'a' ) {...} but more complicated patterns are just not worth replacing with a system of if statements. The fact that Perl makes regexps so readily available and easy-to-use probably contributed to their over-use, but Perl isn't the only language that can be misused, is it? > But I guess if the mettle of a scripting language is to be judged by its > one-linerability, I should confess that Python is a dismal failure: if > you put together its use of indents for blocks, with the fact that none > but its most elemental functions are built-in...well...it's hardly What elemental functions aren't built-in? I was under the impression that core modules were built-in, they simply exist in a separate namespace? > show-me-a-language-that's-great-for-one-liners-and-i'll-show-you > -a-language-that-doesn't-scale-well-<wink>-ly y'rs, Perl 5 took great strides toward scaling. I would personally prefer to use Python for a larger scale app, but Perl is more than capable of doing it quite well. -- -Tim Hammerquist <timmy at cpan.org> Universities are places of knowledge. The freshman each bring a little in with them, and the seniors take none away, so knowledge accumulates. -- Unknown | https://mail.python.org/pipermail/python-list/2000-September/021499.html | CC-MAIN-2016-50 | refinedweb | 596 | 63.29 |
The system builds the updater binary from
bootable/recovery/updater
and uses it in an OTA package.
The OTA package itself is a .zip file (e.g.
ota_update.zip,
incremental_ota_update.zip) that contains the executable binary
META-INF/com/google/android/update-binary.
Updater contains several builtin functions and an interpreter for an
extensible scripting language (edify) that supports commands for typical
update-related tasks. Updater looks in the package .zip file for a script in the
file
META-INF/com/google/android/updater-script.
Note: Using the edify script and/or builtin functions is not a common activity, but can be helpful if you need to debug the update file.
Edify syntax
An edify script is a single expression in which all values are strings. Empty strings are false in a Boolean context and all other strings are true. Edify supports the following operators (with the usual meanings):
( expr )
expr + expr   # string concatenation, not integer addition
expr == expr
expr != expr
expr && expr
expr || expr
! expr
if expr then expr endif
if expr then expr else expr endif
function_name(expr, expr,...)
expr; expr
Any string of the characters a-z, A-Z, 0-9, _, :, /, . that isn't a reserved word is considered a string literal. (Reserved words are if else then endif.) String literals may also appear in double-quotes; this is how to create values with whitespace and other characters not in the above set. \n, \t, \", and \\ serve as escapes within quoted strings, as does \x##.
The && and || operators are short-circuiting; the right side is not evaluated if the logical result is determined by the left side. The following are equivalent:
e1 && e2
if e1 then e2 endif
The ; operator is a sequence point; it means to evaluate first the left side and then the right side. Its value is the value of the right-side expression. A semicolon can also appear after an expression, so the effect simulates C-style statements:
prepare();
do_other_thing("argument");
finish_up();
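Putting the operators together, a complete update script is just one expression. The following fragment is a sketch for illustration only: check_device, prepare, needs_cleanup, cleanup, and finish_up are hypothetical placeholder functions, not builtins documented below.

```
check_device() || abort("installation aborted: wrong device");
prepare();
if needs_cleanup() then
    cleanup()
else
    finish_up()
endif
```

Note that the whole script still evaluates to a single value: the semicolons sequence the calls, and the if/else supplies the final expression.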
Built-in functions
Most update functionality is contained in the functions available for
execution by scripts. (Strictly speaking these are macros rather than
functions in the Lisp sense, since they need not evaluate all of their
arguments.) Unless otherwise noted, functions return true on success
and false on error. If you want errors to abort execution of the
script, use the
abort() and/or
assert() functions.
The set of functions available in updater can also be extended to provide
device-specific
functionality.
abort([msg])
- Aborts execution of the script immediately, with the optional msg. If the user has turned on text display, msg appears in the recovery log and on-screen.
assert(expr[, expr, ...])
- Evaluates each expr in turn. If any is false, immediately aborts execution with the message "assert failed" and the source text of the failed expression.
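For example, a package can refuse to install on the wrong device by asserting on a build property (the property file path and device name here are assumptions for illustration):

```
assert(file_getprop("/system/build.prop", "ro.product.device") == "hammerhead");
```

If the comparison yields false, the script aborts with "assert failed" and the source text of the failed expression.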
apply_patch(src_file, tgt_file, tgt_sha1, tgt_size, patch1_sha1, patch1_blob, [...])
- Applies a binary patch to the src_file to produce the tgt_file . If the desired target is the same as the source, pass "-" for tgt_file . tgt_sha1 and tgt_size are the expected final SHA1 hash and size of the target file. The remaining arguments must come in pairs: a SHA1 hash (a 40-character hex string) and a blob. The blob is the patch to be applied when the source file's current contents have the given SHA1.
The patching is done in a safe manner that guarantees the target file either has the desired SHA1 hash and size, or it is untouched—it will not be left in an unrecoverable intermediate state. If the process is interrupted during patching, the target file may be in an intermediate state; a copy exists in the cache partition so restarting the update can successfully update the file.
Special syntax is supported to treat the contents of Memory Technology Device (MTD) partitions as files, allowing patching of raw partitions such as boot. To read an MTD partition, you must know how much data you want to read since the partition does not have an end-of-file notion. You can use the string "MTD:partition:size_1:sha1_1:size_2:sha1_2" as a filename to read the given partition. You must specify at least one (size, sha-1) pair; you can specify more than one if there are multiple possibilities for what you expect to read.
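As a sketch, patching a raw boot partition in place might look like the following. SHA1_OLD and SHA1_NEW stand in for 40-digit hex strings, the sizes and patch filename are placeholders, and package_extract_file() is a builtin not covered in this excerpt that returns the patch blob from the package:

```
apply_patch("MTD:boot:2048:SHA1_OLD",
            "-",
            "SHA1_NEW", 2176,
            "SHA1_OLD",
            package_extract_file("patch/boot.img.p"));
```

Passing "-" as the target writes the patched image back over the source partition.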
apply_patch_check(filename, sha1[, sha1, ...])
- Returns true if the contents of filename or the temporary copy in the cache partition (if present) have a SHA1 checksum equal to one of the given sha1 values. sha1 values are specified as 40 hex digits. This function differs from
sha1_check(read_file(filename), sha1 [, ...]) in that it knows to check the cache partition copy, so
apply_patch_check() will succeed even if the file was corrupted by an interrupted
apply_patch() update.
apply_patch_space(bytes)
- Returns true if at least bytes of scratch space is available for applying binary patches.
concat(expr[, expr, ...])
- Evaluates each expression and concatenates them. The + operator is syntactic sugar for this function in the special case of two arguments (but the function form can take any number of expressions). The expressions must be strings; it can't concatenate blobs.
file_getprop(filename, key)
- Reads the given filename, interprets it as a properties file (e.g.
/system/build.prop), and returns the value of the given key, or the empty string if key is not present.
format(fs_type, partition_type, location, fs_size, mount_point)
- Reformats a given partition. Supported partition types:
- fs_type="yaffs2" and partition_type="MTD". Location must be the name of the MTD partition; an empty yaffs2 filesystem is constructed there. Remaining arguments are unused.
- fs_type="ext4" and partition_type="EMMC". Location must be the device file for the partition. An empty ext4 filesystem is constructed there. If fs_size is zero, the filesystem takes up the entire partition. If fs_size is a positive number, the filesystem takes the first fs_size bytes of the partition. If fs_size is a negative number, the filesystem takes all except the last |fs_size| bytes of the partition.
- fs_type="f2fs" and partition_type="EMMC". Location must be the device file for the partition. fs_size must be a non-negative number. If fs_size is zero, the filesystem takes up the entire partition. If fs_size is a positive number, the filesystem takes the first fs_size bytes of the partition.
- mount_point should be the future mount point for the filesystem.
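As a sketch, reformatting a cache partition as ext4 might look like the following; the device path is a placeholder that varies per device, and "0" makes the filesystem fill the whole partition:

```
format("ext4", "EMMC", "/dev/block/platform/sdhci.1/by-name/cache", "0", "/cache");
```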
getprop(key)
- Returns the value of system property key (or the empty string, if it's not defined). The system property values defined by the recovery partition are not necessarily the same as those of the main system. This function returns the value in recovery.
greater_than_int(a, b)
- Returns true if and only if (iff) a (interpreted as an integer) is greater than b (interpreted as an integer).
ifelse(cond, e1[, e2])
- Evaluates cond, and if it is true evaluates and returns the value of e1, otherwise it evaluates and returns e2 (if present). The "if ... else ... then ... endif" construct is just syntactic sugar for this function.
is_mounted(mount_point)
- Returns true iff there is a filesystem mounted at mount_point.
is_substring(needle, haystack)
- Returns true iff needle is a substring of haystack.
less_than_int(a, b)
- Returns true iff a (interpreted as an integer) is less than b (interpreted as an integer).
mount(fs_type, partition_type, name, mount_point)
- Mounts a filesystem of fs_type at mount_point. partition_type must be one of:
- MTD. Name is the name of an MTD partition (e.g., system, userdata; see
/proc/mtd on the device for a complete list).
- EMMC.
Recovery does not mount any filesystems by default (except the SD card if the user is doing a manual install of a package from the SD card); your script must mount any partitions it needs to modify.
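A hypothetical fragment following that rule, mounting /system before writing to it and unmounting afterwards (the device path is a placeholder):

```
mount("ext4", "EMMC", "/dev/block/platform/sdhci.1/by-name/system", "/system");
package_extract_dir("system", "/system");
unmount("/system");
```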
package_extract_dir(package_dir, dest_dir)
- Extracts all files from the package underneath package_dir and writes them to the corresponding tree beneath dest_dir. Any existing files are overwritten.
package_extract_file(package_file[, dest_file])
- Extracts a single package_file from the update package and writes it to dest_file, overwriting existing files if necessary. Without the dest_file argument, returns the contents of the package file as a binary blob.
read_file(filename)
- Reads filename and returns its contents as a binary blob.
run_program(path[, arg, ...])
- Executes the binary at path, passing args. Returns the program's exit status.
set_progress(frac)
- Sets the position of the progress meter within the chunk defined by the most recent
show_progress() call. frac must be in the range [0.0, 1.0]. The progress meter never moves backwards; attempts to make it do so are ignored.
sha1_check(blob[, sha1])
- The blob argument is a blob of the type returned by
read_file() or the one-argument form of
package_extract_file(). With no sha1 arguments, this function returns the SHA1 hash of the blob (as a 40-digit hex string). With one or more sha1 arguments, this function returns the SHA1 hash if it equals one of the arguments, or the empty string if it does not equal any of them.
show_progress(frac, secs)
- Advances the progress meter over the next frac of its length over the secs seconds (must be an integer). secs may be 0, in which case the meter is not advanced automatically but by use of the
set_progress() function defined above.
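As a sketch of how the two functions cooperate, an updater script might reserve 60% of the meter for a long-running step, then advance it manually between units of work (the set_progress() calls would be interleaved with the actual work):

```
show_progress(0.6, 0);
set_progress(0.25);
set_progress(1.0);
```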
sleep(secs)
- Sleeps for secs seconds (must be an integer).
stdout(expr[, expr, ...])
- Evaluates each expression and dumps its value to stdout. Useful for debugging.
tune2fs(device[, arg, ...])
- Adjusts tunable parameters args on device.
ui_print([text, ...])
- Concatenates all text arguments and prints the result to the UI (where it will be visible if the user has turned on the text display).
unmount(mount_point)
- Unmounts the filesystem mounted at mount_point.
wipe_block_device(block_dev, len)
- Wipes the len bytes of the given block device block_dev.
wipe_cache()
- Causes the cache partition to be wiped at the end of a successful installation.
write_raw_image(filename_or_blob, partition)
- Writes the image in filename_or_blob to the MTD partition. filename_or_blob can be a string naming a local file or a blob-valued argument containing the data to write. To copy a file from the OTA package to a partition, use:
write_raw_image(package_extract_file("zip_filename"), "partition_name");
Note: Prior to Android 4.1, only filenames were accepted, so to accomplish this the data first had to be unzipped into a temporary local file.
https://source.android.com/devices/tech/ota/inside_packages
On Thu, Apr 29, 2021 at 08:48:26AM +0800, Kefeng Wang wrote:
> On 2021/4/28 13:59, Mike Rapoport wrote:
> > On Tue, Apr 27, 2021 at 07:08:59PM +0800, Kefeng Wang wrote:
> > > On 2021/4/27 14:23, Mike Rapoport wrote:
> > > > On Mon, Apr 26, 2021 at 11:26:38PM +0800, Kefeng Wang wrote:
> > > > > On 2021/4/26 13:20, Mike Rapoport wrote:
> > > > > > On Sun, Apr 25, 2021 at 03:51:56PM +0800, Kefeng Wang wrote:
> > > > > > > On 2021/4/25 15:19, Mike Rapoport wrote:
> > > > > > > > On Fri, Apr 23, 2021 at 04:11:16PM +0800, Kefeng Wang wrote:
> > > > > > > I tested this patchset (plus arm32 change, like arm64 does)
> > > > > > > based on lts 5.10, add some debug log, the useful info shows
> > > > > > > below, if we enable HOLES_IN_ZONE, no panic, any idea, thanks.
> > > > > > Are there any changes on top of 5.10 except for pfn_valid() patch?
> > > > > > Do you see this panic on 5.10 without the changes?
> > > > > Yes, there are some BSP support for arm board based on 5.10,
> > > > Is it possible to test 5.12?
> > Do you use SPARSEMEM? If yes, what is your section size?
> > What is the value of CONFIG_FORCE_MAX_ZONEORDER in your configuration?
> Yes,
> CONFIG_SPARSEMEM=y
> CONFIG_SPARSEMEM_STATIC=y
> CONFIG_FORCE_MAX_ZONEORDER = 11
> CONFIG_PAGE_OFFSET=0xC0000000
> CONFIG_HAVE_ARCH_PFN_VALID=y
> CONFIG_HIGHMEM=y
> #define SECTION_SIZE_BITS 26
> #define MAX_PHYSADDR_BITS 32
> #define MAX_PHYSMEM_BITS 32

It seems that with SPARSEMEM we don't align the freed parts on pageblock
boundaries.

Can you try the patch below:

diff --git a/mm/memblock.c b/mm/memblock.c
index afaefa8fc6ab..1926369b52ec 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1941,14 +1941,13 @@ static void __init free_unused_memmap(void)
 	 * due to SPARSEMEM sections which aren't present.
 	 */
 	start = min(start, ALIGN(prev_end, PAGES_PER_SECTION));
-#else
+#endif
 	/*
 	 * Align down here since the VM subsystem insists that the
 	 * memmap entries are valid from the bank start aligned to
 	 * MAX_ORDER_NR_PAGES.
 	 */
 	start = round_down(start, MAX_ORDER_NR_PAGES);
-#endif
 
 	/*
 	 * If we had a previous bank, and there is a space

--
Sincerely yours,
Mike.

https://lkml.org/lkml/2021/4/29/69
Converting DOCX files to HTML can be achieved via the Mammoth package. It's an easy, efficient, and fast library for converting DOCX files to HTML. In this article, we'll learn how to use Mammoth in Python to convert DOCX to HTML.
Installing Mammoth
As a good practice, remember to have your virtual environment ready and activated before the installation:
$ python3 -m venv myenv
$ . myenv/bin/activate
Let's then install Mammoth with
pip:
$ pip3 install mammoth
This tutorial uses Mammoth version
1.4.15. Here's a sample document you can use throughout this tutorial. If you have a document to convert, make sure that it's a
.docx file!
Now that you're ready to go, let's get started with extracting the text and writing that as HTML.
Extract the Raw Text of a DOCX File
Preserving the formatting while converting to HTML is one of the best features of Mammoth. However, if you just need the text of the DOCX file, you'll be pleasantly surprised at how few lines of code are needed.
You can use the
extract_raw_text() method to retrieve it:
import mammoth

with open(input_filename, "rb") as docx_file:
    result = mammoth.extract_raw_text(docx_file)
    text = result.value  # The raw text

with open('output.txt', 'w') as text_file:
    text_file.write(text)
Note that this method does not return a valid HTML document. It only returns the text on the page, hence why we save it with the
.txt extension. If you do need to keep the layout and/or formatting, you'll want to extract the HTML contents.
Convert Docx to HTML with Custom Style Mapping
By default, Mammoth converts your document into HTML but it does not give you a valid HTML page. While web browsers can display the content, it is missing an
<html> tag to encapsulate the document, and a
<body> tag to contain the document. How you choose to integrate its output is up to you. Let's say you're using a web framework that has templates. You'd likely define a template to display a Word Document and load Mammoth's output inside the template's body.
Mammoth is not only flexible with how you can use its output but how you can create it as well. Particularly, we have a lot of options when we want to style the HTML we produce. We map styles by matching each DOCX formatting rule to the equivalent (or as close as we can get) CSS rule.
To see what styles your DOCX file has, you have two options:
- You can open your docx file with MS Word and check the Styles toolbar.
- You can dig into the XML files by opening your DOCX file with an archive manager, and then navigate to the
/word/styles.xmland locate your styles.
The second option can be used by those who don't have access to MS Word or an alternative word processor that can interpret and display the styles.
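As an illustrative sketch of that second option (my own addition, not code from Mammoth), the snippet below treats the .docx as what it really is, a ZIP archive, and lists the style names declared in word/styles.xml without opening Word:

```python
import re
import zipfile

def list_docx_styles(path):
    """Return the style names declared in a .docx file's word/styles.xml."""
    with zipfile.ZipFile(path) as docx:
        xml = docx.read("word/styles.xml").decode("utf-8")
    # Style names appear as <w:name w:val="..."/> entries; a simple regex
    # is enough for quick inspection (a robust tool would parse the XML).
    return re.findall(r'w:name w:val="([^"]+)"', xml)
```

Running it against your document prints whatever style names the document declares, which are exactly the names you can target in a Mammoth style map.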
Mammoth already has some of the most common style maps covered by default. For instance, the
Heading1 docx style is mapped to the
<h1> HTML element, bold is mapped to the
<strong> HTML element, etc.
We can also use Mammoth to customize the document's styles while mapping them. For example, if you wanted to change all bold occurrences in the DOCX file to italic in the HTML, you can do this:
import mammoth

custom_styles = "b => i"

with open(input_filename, "rb") as docx_file:
    result = mammoth.convert_to_html(docx_file, style_map=custom_styles)
    text = result.value

with open('output.html', 'w') as html_file:
    html_file.write(text)
With the
custom_styles variable, the style on the left is from the DOCX file, while the one on the right is the corresponding CSS.
Let's say we wanted to omit the bold occurrences altogether, we can leave the mapping target blank:
custom_styles = "b => "
Sometimes the document we're porting has many styles to retain. It quickly becomes impractical to create a variable for every style we want to map. Luckily we can use
docstrings to map as many styles as we want in one go:
custom_styles = """
b => del
u => em
p[style-name='Heading 1'] => i
"""
You may have noticed that the last mapping was a bit different from the others. When mapping styles, we can use square brackets
[] with a condition inside them so that only a subset of elements are styled that way.
In our example,
p[style-name='Heading 1'] selects paragraphs that has a style name
Heading 1. We can also use
p[style-name^='Heading'] to select each paragraph that has a style name starting with
Heading.
Style mapping also allows us to map styles to custom CSS classes. By doing so, we can shape the style of HTML as we like. Let's do an example where we define our basic custom CSS in a docstring like this:
custom_css = """
<style>
.red {
  color: red;
}
.underline {
  text-decoration: underline;
}
.ul.li {
  list-style-type: circle;
}
table, th, td {
  border: 1px solid black;
}
</style>
"""
Now we can update our mapping to reference the CSS classes we've defined in the
<style> block:
custom_styles = """
b => b.red
u => em.red
p[style-name='Heading 1'] => h1.red.underline
"""
Now all we need to do is merge the CSS and the HTML together:
edited_html = custom_css + html
If your DOCX file has any of those elements, you will be able to see the results.
Now that we know how to map styles, let's use a more well-known CSS framework (along with the JS) to give our HTML a better look and practice a more likely real-life scenario.
Mapping Styles With Bootstrap (or Any Other UI Framework)
Just like we did with the
custom_css, we need to ensure that the CSS is loaded with the HTML. We need to add the Bootstrap file URI or CDN to our HTML:>'
We'll also slightly tweak our
custom_styles to match our new CSS classes:
custom_styles = """
b => b.mark
u => u.initialism
p[style-name='Heading 1'] => h1.card
table => table.table.table-hover
"""
In the first line, we're mapping bold DOCX style to the
b HTML element with a class
mark, which is a Bootstrap class equivalent of the HTML
<mark> tag, used for highlighting part of the text.
In the second line, we're adding the
initialism class to the
u HTML element, slightly decreasing the font size and transforming the text to uppercase.
In the third line, we're selecting all paragraphs that have the style name
Heading 1 and converting them to
h1 HTML elements with the Bootstrap class of
card, which sets multiple style properties such as background color, position, and border for the element.
In the last line, we're converting all tables in our docx file to the
table HTML element, with Bootstrap's
table class to give it a new look, also we're making it highlight when hovered, by adding the Bootstrap class of
table-hover.
Like before, we use dot-notation to map multiple classes to the same HTML element, even though the styles come from another source.
Finally, add the Bootstrap CDNs to our HTML:
edited_html = bootstrap_css + html + bootstrap_js
Our HTML is now ready to be shared, with a polished look and feel! Here's the full code for reference:
import mammoth

# custom_styles is the Bootstrap-oriented style map defined above;
# bootstrap_css and bootstrap_js hold the Bootstrap CDN <link> and
# <script> tags (their exact URLs are omitted here).

with open(input_filename, "rb") as docx_file:
    result = mammoth.convert_to_html(docx_file, style_map=custom_styles)
    html = result.value

edited_html = bootstrap_css + html + bootstrap_js

output_filename = "output.html"
with open(output_filename, "w") as f:
    f.writelines(edited_html)
Also, another point to note here that in a real-life scenario, you probably will not add Bootstrap CSS directly to the HTML content as we did here. Instead, you would load/inject the HTML content to a prepacked HTML page, which already would have the necessary CSS and JS bundles.
So far you've seen how much flexibility we have to style our output. Mammoth also allows us to modify the content we're converting. Let's take a look at that now.
Dealing With Images We Don't Want Shared
Let's say we'd like to omit images from our DOCX file from being converted. The
convert_to_html() accepts a
convert_image argument, which is an image handler function. It returns a list of images, that should be converted and added to the HTML document.
Naturally, if we override it and return an empty list, they'll be omitted from the converted page:
def ignore_image(image): return []
Now, let's pass that function as a parameter into the
convert_to_html() method:
with open(input_filename, "rb") as docx_file:
    result = mammoth.convert_to_html(docx_file, style_map=custom_styles, convert_image=ignore_image)
    html = result.value

with open('output.html', 'w') as html_file:
    html_file.write(html)
That's it! Mammoth will ignore all the images when generating an HTML file.
We've been programmatically using Mammoth with Python so far. Mammoth is also a CLI tool, therefore we have another interface to do DOCX to HTML conversations. Let's see how that works in the next section.
Convert DOCX to HTML Using Command Line Tool
File conversion with Mammoth, using the CLI, typically looks like this:
$ mammoth path/to/input_filename.docx path/to/output.html
If you wanted to separate the images from the HTML, you can specify an output folder:
$ mammoth file-sample_100kB.docx --output-dir=imgs
We can also add custom styles as we did in Python. You need to first create a custom style file:
$ touch my-custom-styles
Then we'll add our custom styles in it, the syntax is same as before:
b => b.red u => em.red p[style-name='Heading 1'] => h1.red.underline
Now we can generate our HTML file with custom style:
$ mammoth file-sample_100kB.docx output.html --style-map=my-custom-styles
And you're done! Your document would have been converted with the defined custom styles.
Conclusion
File typecasting is a common situation when working on web technologies. Converting DOCX files into well-known and easy to manipulate HTML allows us to reconstruct the data as much as we need. With Mammoth, we've learned how to extract the text from a docx and how to convert it to HTML.
When converting to HTML we can style the output with CSS rules we create or ones that come with common UI frameworks. We can also omit data we don't want to be available in the HTML. Lastly, we've seen how to use the Mammoth CLI as an alternative option for file conversion.
You can find a sample docx file along with the full code of the tutorial on this GitHub repository.
https://stackabuse.com/how-to-convert-docx-to-html-with-python-mammoth/
React JSX: How to Do It the Right Way, Part I
In this, the first part of the React JSX series, we will take a look at multiple ways you can correctly loop through arrays using React.js.
Usually, when developing a website, you'll need some dynamic rendering, like a listing of items, showing some element under a certain condition and so on.
You're all aware of the standard JS syntax - a for loop, or an if/else - but when you try to write those under a render method in React, you'll most likely get some weird looking errors.
In this, the first part of React JSX series, we will take a look at how to correctly loop through arrays the reactive way. In second part of the series you can find out more about conditional rendering.
Please note that all of the examples below apply to React Native as well!
Using Loops Inside React Render Methods
Let's say you have an array of movies and that you need to display the Movie component for each of them.
Most of us have tried this at some point:
render() {
  return (
    <div>
      {
        // this won't work!
        for (var i = 0; i < movies.length; i++) {
          <Movie movie={movies[i]} />
        }
      }
    </div>
  )
}
This, however, won't work. Why? Think of it like you're just calling JavaScript functions. You can't put a for loop as a parameter when calling a function!
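A framework-free sketch (no React needed, and not from the original article) of the underlying rule: a for loop is a statement, so it can't appear where a value is expected, while map(...) is an expression that evaluates to a brand new array.

```javascript
const movies = [{ title: "Alien" }, { title: "Heat" }];

// Statement form: needs a separate array you push into.
const viaFor = [];
for (let i = 0; i < movies.length; i++) {
  viaFor.push(movies[i].title.toUpperCase());
}

// Expression form: usable anywhere a value is expected,
// e.g. inside JSX curly braces.
const viaMap = movies.map((movie) => movie.title.toUpperCase());

console.log(viaMap); // same contents as viaFor
```

JSX braces accept expressions, which is why the map-based forms below work while the raw for loop does not.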
Well, how to do it then? There are a few ways.
Keep in mind that all of the code bellow are just examples, pseudo code. There's no text decoration good enough to highlight how important this is: Always send the key prop to the items you're rendering and keep it unique, avoiding array indexes.
You can go through a for loop above the
return statement in the
render method and fill in a list you'll pass into
return:
render() {
  const movieItems = [];
  for (var i = 0; i < movies.length; i++) {
    movieItems.push(<Movie movie={movies[i]} />);
  }
  return (
    <div>
      { movieItems }
    </div>
  )
}
This, however, is not a neat way as it's polluting the
render method. A good programmer wants to keep his code readable, so to make
render more readable, better move the for loop outside of it and then call it as a function:
renderMovies(movies) {
  const movieItems = [];
  for (var i = 0; i < movies.length; i++) {
    movieItems.push(<Movie movie={movies[i]} />);
  }
  return movieItems;
}

render() {
  return (
    <div>
      { this.renderMovies(movies) }
    </div>
  )
}
This looks a bit better now. Still, you are using the for loop which doesn't really look so nice. The for should be used when you need to render something a certain number of times. When you have an object or an array, there are neater ways to go about this.
So, let's switch to using the map from JS Arrays:
renderMovies(movies) {
  // This is ES6 syntax! You'll need babel configured to use it!
  // You can still use the standard function syntax,
  // but ES6 is definitely something that'll make your life easier.
  return movies.map((movie) => {
    return (
      <Movie movie={movie} />
    );
  });
}

render() {
  return (
    <div>
      { this.renderMovies(movies) }
    </div>
  )
}
Now, this looks good! Yet, it might look a bit bulky for being just a simple listing you can do in one single place. However, you can actually use the map syntax directly in a return statement. Why? Because the map function basically passes a freshly created array, compared to the for loop which is just a bulk of code.
render() {
  return (
    <div>
      {
        // This is shortened syntax, for where we don't need to
        // manipulate the actual items before rendering them
        movies.map((movie) => <Movie movie={movie} />)
      }
    </div>
  )
}
When you need to manipulate the actual item in the list before rendering it, you can do it this way:
render() {
  return (
    <div>
      {
        movies.map((movie) => {
          // do something with a movie here
          return (
            <Movie movie={movie} />
          );
        })
      }
    </div>
  )
}
Now, again, the same as with proper positioning of elements with css, we want our code to be "positioned" properly. If there's a lot of manipulating to be done for a single item, doing it inside a return statement might unnecessarily pollute the render method. In that case, better move this code out of the render method. Here's an example:
renderMovie(movie) {
  // do something with a movie here
  return (
    <Movie movie={movie} />
  );
}

render() {
  return (
    <div>
      {
        // map will automatically pass the list item to our function
        movies.map(this.renderMovie)
      }
    </div>
  )
}
All of the previous examples can also be used with JavaScript objects, with slight adaptation - you won't be mapping through the object, but through the list of the keys of the object:
render() {
  return (
    <div>
      {
        // You can use the lodash keys function (_.keys) as well
        // instead of Object.keys, but it functions the same way
        Object.keys(moviesObject).map((movieKey) => {
          const movie = moviesObject[movieKey];
          return (
            <Movie key={movieKey} movie={movie} />
          );
        })
      }
    </div>
  )
}
Now you know multiple ways to loop through arrays in React! Which way you'll use is up to you and the occasion; sometimes one will be more suitable than the other.
Thank you for your time and good luck with coding!
Continue to part II of this text and read about conditional rendering in React now!
https://dzone.com/articles/react-jsx-how-to-do-it-the-right-way-part-i
No matter what kind of component you’re building, every component needs styles. In this tutorial, we’re going to take a deep dive into styling components using Stencil. We’ll learn how to implement global styles in Stencil, which helps us keep our components visually consistent when building a design system. We’ll also cover a lot of exciting CSS topics like gradients, animations, pseudo-elements, and more — so if that sounds interesting to you, let’s jump on in!
You can find all of the code for this tutorial at the Stencil CSS Github repository here.
NOTE: This tutorial assumes that you have a fundamental understanding of building components in Stencil. This tutorial will emphasize the CSS related aspects of building a component in Stencil.
Creating Our Stencil Component
To illustrate all of these CSS topics, we’re going to be building a credit card component. This kind of component could be used to display a user’s stored credit cards, or can even be modified to serve as a fun way to input credit card information. Here’s what the final component will look like.
Speaking of the final component, let’s take a look at what the final component will look like when used in HTML:
<credit-card
  card-number="..."
  card-holder="..."
  expiration-date="..."
  cvv="..."
  gradient="..."
></credit-card>
As you can see, our credit card component is going to have five properties. The first four props,
card-number,
card-holder,
expiration-date, and
cvv, are all aspects of a credit card that vary from card to card. The final prop,
gradient, will be used to specify which gradient background we want to use for the credit card. We’ll discuss in more detail how this will work later in the tutorial. Keeping this in mind, let’s create a new Stencil component and take in these values as props.
@Component({
  tag: 'credit-card',
  styleUrl: 'credit-card.css',
  shadow: true,
})
export class CreditCard {
  @Prop() cardNumber: string;
  @Prop() cardHolder: string;
  @Prop() expirationDate: string;
  @Prop() cvv: string;
  @Prop() gradient: 'purple' | 'green' | 'orange';
Because our component is largely presentational, we’ll take in all of these values as strings so we can display them on our credit card. Next, let’s write our JSX to create the structure of our credit card.
render() {
  return (
    <Host>
      <a href="/" class="card-wrapper">
        <div class="front">
          <div class="row">
            <p>Credit</p>
            <img src="" alt="logo" />
          </div>
          <div class="row">
            {this.cardNumber.split(' ').map(number => (
              <p class="card-number">{number}</p>
            ))}
          </div>
          <div class="row">
            <p class="cardholder">{this.cardHolder}</p>
            <p class="exp-date">{this.expirationDate}</p>
          </div>
        </div>
        <div class="back">
          <p>Security Code</p>
          <p class="cvv">{this.cvv}</p>
        </div>
      </a>
    </Host>
  );
}
One thing to note here is that we are using the
split method on the
cardNumber to display the
cardNumber in groups of four digits. Because we use a space (‘ ‘) as our delimiter, the
cardNumber has to be set with a space after every four digits. The rest of the JSX just displays the values we took as props in an organized way. With our props and jsx set, we’re ready to get into the fun part…styling!
Global Styles
When building out a design system, we want most of our styles to be directly tied to our components. This ensures that our components are modular, which makes them easier to manage, debug, and scale. However, there are some styles that we want to share between our components in order to have a consistent look and feel across our design system. The styles you decide to share across components are entirely up to you, but usually they include things like colors, typography, spacing, etc. In order for us to share these styles across our design system, we need global styles. These global styles will be made available to all our components for consistency. Fortunately for us, Stencil has built in support for a global stylesheet. Here’s how we can create a global stylesheet:
- Create a new folder called
globalunder the
srcdirectory of your Stencil project
- Create a new file called
global.cssin the
globalfolder you just created
- In your
stencil.config.tsfile, specify the global style option
globalStyle: 'src/global/global.css'
- In the head of your
index.htmlfile, add a link to your global stylesheet
<link rel="stylesheet" href="/build/{YOUR_PROJECT_NAME}.css" />. Be sure to replace
{YOUR_PROJECT_NAME}with the name of your Stencil project.
Now our global styles are available for us to use! So now let’s open up
global.css and add some styles.
@import url('');

html,
body {
  font-family: 'Roboto', sans-serif;
}

:root {
  --font-color: #fff;
  --purple-gradient: linear-gradient(to right bottom, #a460f8, #4c48ff);
  --green-gradient: linear-gradient(to right bottom, #20e3b2, #0bc2c2);
  --orange-gradient: linear-gradient(to right bottom, #f9b523, #ff4e50);
}
The first thing we are doing here is importing the “Roboto” font from Google fonts. With this font imported, we can use it across our entire design system by setting the
font-family of our
html and
body. In addition, we are declaring a few CSS variables on the
:root of our project. These variables are really useful for ensuring that all our components use the same values for their styling. In our case, we are creating some variables for our font color as well as some gradients that will serve as the background of our credit cards. This setup provides us a lot of flexibility for the future. In the event we want to tweak any of these gradients, we can change them in our global styles and the change will propagate to any component that references that variable.
Styling the Component
Alright, we’ve got our global styles and the structure of our credit card component. Let’s start styling the component itself. Within the
Host element, our credit card is composed of three main parts: a front, a back, and a wrapper for these two sides. Our wrapper is an anchor tag with a class of
card-wrapper. Let’s open
credit-card.css and style this first.
.card-wrapper {
  display: block;
  width: fit-content;
}
Here, we are making the card a block-level element and fitting the width to its content (the front and back). Next, let’s style the front and back of the card. Naturally, the front and back will share a lot of styles, so let’s select both of them and specify the common styles.
.front,
.back {
  width: 400px;
  height: 200px;
  padding: 20px;
  border-radius: 8px;
  font-size: 1.125rem;
  display: flex;
  flex-direction: column;
  box-shadow: 4px 8px 24px rgba(0, 0, 0, 0.25);
}
Next, we can specify the styles unique to the front and back of the card. These styles are used to organize the layout of the content on each side of the card. Both sides already use flexbox and have set the direction to be columnwise, so now we can use
justify-content and
align-items to organize the children on each side. For the front, we’ll add space between the children, and for the back we’ll put the children in the bottom right corner.
.front {
  justify-content: space-between;
}

.back {
  justify-content: flex-end;
  align-items: flex-end;
}
Finally, we can add some small changes to our
row class to put space between the elements in each row of the credit card. We will also increase the font size of the card number, remove margin and padding on
<p> tags, and set a reasonable height for our image.
.row {
  display: flex;
  justify-content: space-between;
  align-items: center;
}

.card-number {
  font-size: 1.75rem;
}

p {
  margin: 0;
  padding: 0;
}

img {
  height: 45px;
}
With these styles, our component is starting to take shape. Here’s what it should look like so far:
Adding Gradients
Now we get to make use of those fun gradients we set in our global styles! We’re going to let the user choose the gradient, but we want to make sure we limit them to the three gradients we defined. This is why, when we initialized our
gradient prop, we set its type to be one of three specific strings.
@Prop() gradient: 'purple' | 'green' | 'orange';
We can make these three strings serve as class names, and in doing so, we can map background styles to each class.
.purple {
  background: var(--purple-gradient);
}

.green {
  background: var(--green-gradient);
}

.orange {
  background: var(--orange-gradient);
}
These classes won’t have any effect until we add the value of the gradient prop to the front and back of the card as a class using string interpolation.

<div class={`front ${this.gradient}`}>
  // child elements
</div>
<div class={`back ${this.gradient}`}>
  // child elements
</div>
Now we can pass in “purple”, “green”, or “orange” to our gradient prop to use whichever gradient background we prefer.
We also want to make use of our --font-color variable. Since we want our entire component to use this font color, we can set it on our card-wrapper. We can also remove the anchor tag’s default underline with text-decoration: none.

.card-wrapper {
  display: block;
  width: fit-content;
  color: var(--font-color);
  text-decoration: none;
}
Now our component is looking a bit more like our final result:
Make it Spin!
Our credit card component is looking good, but let’s liven it up a bit by adding some animation. In order for us to add animation to our component, there are a few CSS properties we will have to make use of. Let’s take a look at each of them and what they do.
transition – this property allows us to move, or transition, between two styles in a gradual and flowing way, as opposed to an abrupt change. It is actually a shorthand property that allows us to specify the transition property, duration, and more.

transform – this property allows us to modify, or transform, an element by translating, scaling, rotating, or skewing it. We’ll use this property to rotate our card, and we can do so with the rotateY() function.

rotateY() – this function is used to rotate an element around the Y axis. It takes a parameter that represents the angle of rotation.

backface-visibility – this property allows us to set whether or not the back of an element is visible when it faces the user.

perspective – perhaps the hardest to reason about, this property sets the distance between the screen and the user to create a 3D effect. It will make more sense when we see it in action.
With these properties in mind, the first thing we want to do is spin the card. To do that, we can use the transform property and the rotateY() function to turn the card 180 degrees. Because we only want to spin the card when we hover over it or focus on it, we’ll use the :hover and :focus pseudo-classes to specify that.

.card-wrapper:hover .front,
.card-wrapper:focus .front {
  transform: rotateY(180deg);
}
As a quick aside, we are using the :focus pseudo-class in addition to :hover for accessibility purposes. Now, when a user focuses on the credit card element with a screen reader, the contents of the front and back of the card will be read aloud.
While this transformation does work, it has a few issues. First, the change is very abrupt. We can fix this by using the transition property. To use the transition property, we need to provide the property we want to animate and the duration of the animation. For our case, we want to create a transition for the transform property and we will give it a duration of 500 ms.
The second issue is that when the card rotates, the card contents become flipped around, as if we’re looking through the card from the back. While this has its use cases, we want to hide this side of the card when it turns. We can do this by using the backface-visibility property and setting it to hidden. These fixes will need to be made for both the front and the back of the card, so let’s add these changes where we target both the front and back classes.

.front, .back {
  width: 400px;
  height: 200px;
  padding: 20px;
  border-radius: 8px;
  font-size: 1.125rem;
  display: flex;
  flex-direction: column;
  box-shadow: 4px 8px 24px rgba(0, 0, 0, 0.25);
  transition: transform 500ms;
  backface-visibility: hidden;
}
Okay, the front of our card is animated. Now, let’s do the same to the back of the card. The back of the card should start out hidden, and rotate into view when we hover over the card. To do this, we can apply a transformation to rotate the card -180 degrees initially. A negative angle of rotation means the element will rotate counterclockwise. Because backface-visibility is already set to hidden, this rotation will make the back of the card invisible to start.

.back {
  justify-content: flex-end;
  align-items: flex-end;
  transform: rotateY(-180deg);
}
When we hover over the card or focus on it, the front rotates out of view. To rotate the back into view at the same time, we can rotate the back of the card back to 0 degrees.
.card-wrapper:hover .back,
.card-wrapper:focus .back {
  transform: rotateY(0deg);
}
Almost there! The back of our card rotates on hover and focus, but we still need to position it in the correct location. Up until now, the back of the card has sat under the front of the card, but we want it to sit behind the front of the card. To do this, we can use absolute positioning. If we set the card wrapper to have a position of relative, we can give the card back a position of absolute to position it relative to the card wrapper like so.

.card-wrapper {
  display: block;
  width: fit-content;
  color: var(--font-color);
  text-decoration: none;
  position: relative;
}

.back {
  position: absolute;
  top: 0;
  left: 0;
  justify-content: flex-end;
  align-items: flex-end;
  transform: rotateY(-180deg);
}
Finally, we need to set the perspective on our card wrapper to create a 3D effect. I’ve chosen a value of 3000px, which makes the effect fairly subtle, but feel free to play around with different values to see how the perspective property works. The effect becomes clearer the smaller the value.

.card-wrapper {
  display: block;
  width: fit-content;
  color: var(--font-color);
  text-decoration: none;
  position: relative;
  perspective: 3000px;
}
Final Touch
The last visual aspect of our credit card component is the magnetic strip. This is the black bar that runs across the back of the credit card. Because this element is purely decorative and doesn’t contain any content, we can build it using the ::before pseudo-element.

::before inserts content as the first child of the element we attach the selector to. This content, however, is not actually part of the DOM, making it a great tool for adding decorative content like this. To build the magnetic strip, we can add content as a child of the back of the card, and style it to look like a magnetic strip.

.back::before {
  content: '';
  display: block;
  position: absolute;
  top: 50px;
  left: 0;
  right: 0;
  width: 100%;
  height: 40px;
  background: rgba(0, 0, 0, 0.5);
}
And with that, our credit card component is complete! As you can see, there is so much you can do with CSS in Stencil, and this only scratches the surface of what’s possible. When it comes to building a design system, it is critically important to have components that are visually consistent. You can use these design elements to build other components that complement this one. I’d love to see what cool visual effects you have implemented in your Stencil components. Leave a comment below. I’m always excited to see what you build. 😀 | https://ionicframework.com/blog/advanced-stencil-component-styling/ | CC-MAIN-2022-21 | refinedweb | 2,638 | 62.48 |
Use Python for easy VM management
Python continues to grow in popularity, as different use cases are established and new features are added. Follow this guide to manage VMs with the simple programming language.
In the age of DevOps, organizations expect system administrators to have some knowledge of programming. If you don't, you should learn. An administrator who can write code is more valuable and productive than one who can't.
Why use Python?
Put simply: Python is easy to use.
Python is one of the most widely used programming languages. And like JavaScript, it's an old language that's surging in popularity, as people keep finding new uses for it and adding new features.
Most vendors support some kind of Python interface for their product. And if they don't, they support representational state transfer, which Python can do, as well. Whatever you can do with Curl, you can do with Python.
Python is an interpreted language. That means, although you can compile it, there's no need to do so. It interprets code one line at a time. When you run the shell, it works as a read-evaluate-print-loop interactive interpreter. It responds immediately when you type commands, which is the easiest way to write code.
The other reason to use Python is there's an enormous repository of free APIs you can use to interact with everything, from OpenStack to Amazon Web Services to Twitter.
Skills needed to use Python
To use Python, you need deductive reasoning and to understand math, as all programming languages are built on those. You also need to understand abstract concepts like logic, data structures, arrays and lists. At a minimum, you need to understand what integers, decimals and character strings are.
What resources are there?
If you have the time and are comfortable teaching yourself how to program through reading manuals and doing tutorials, you probably don't need to take a class to learn Python.
In many cases, you don't need to write original code to do many tasks, as someone has already done it. Just search the web, copy the code and change it to fit what you are doing -- unless it has a copyright.
You can also find a list of learning resources on the Python website.
Installing Python
One challenging aspect of Python is Linux distributions ship with Python version 2, which is reaching end of life. However, some Linux utilities require Python 2, and changing to version 3 could affect your system. Using Python 3 will ensure your code works in the future, but be sure you understand the implications of upgrading.
Below, I have installed both and have created an alias -- python3 -- to point to /usr/bin/python3.
This dual-version system can cause some troublesome bugs if you get the two environments mixed up. Depending on which version of Python and pip -- used to install Python APIs -- you're using, they can be put in two different places:
ls -d /usr/bin/python*
/usr/bin/python /usr/bin/python2-config /usr/bin/python3m
/usr/bin/python2 /usr/bin/python3 /usr/bin/python-config
/usr/bin/python2.7 /usr/bin/python3.5
/usr/bin/python2.7-config /usr/bin/python3.5m
There are several ways around this problem, like setting the PYTHONPATH environment variable. And you can type pip3 and python3 when you want to use version 3.
With Ubuntu 16+, you probably won't have any issues, but you might with CentOS 6.8 and earlier.
Some simple Python examples
Python isn't a statically typed language. That means when you use Python, you can give a variable a value without declaring its type. That makes programming simple.
In the example below, "a" is an integer. In the first line, I define the value of "a." In the second line, I request the value of "a," and the system responds with the value I set in the earlier line.
>>> a = 2
>>> a
2
See how to declare a function below. The key thing to note here is that four spaces indicate the indentation level, so there are no brackets or semicolons.
>>> def twice(a):
... return a * 2
...
>>> twice(a)
4
Notice above that we made "a" equal two. Then, in the function twice, we define that the function should double the integer value passed to it. There, we also use the letter "a" as the symbol for the parameter passed into the function. Here is where you need to understand one programming concept called scope. The "a" inside the function isn't the same as the "a" outside the function. If you aren't aware of scope, you could spend all day trying to find a bug in your code.
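To make scope concrete, here is a short sketch in the same spirit as the article's example. The inner "a" is a local parameter that shadows the module-level "a":

```python
a = 2

def twice(a):
    # This "a" is a local parameter; it shadows the module-level "a".
    a = a * 2
    return a

print(twice(a))  # prints 4
print(a)         # prints 2 -- the outer "a" was never changed
```

If you expected the outer "a" to change after calling the function, that's the scope bug the article warns about.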
Earlier, we spoke of the power of frameworks. For example, to interact with the OS, you can just write:
import os
Then, you can take other actions, such as list the directory contents:
os.listdir("/tmp")
Or, you could use a variable to define a new command:
>>> os.listdir(dir)
Python also supports objects, but you won't need to use them for most tasks. Python has a lot of features that a beginner would find complicated, like inline functions called lambda and dictionary objects, which are key-value pairs.
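For the curious, here is a tiny illustration of those two features; the names are made up for the example:

```python
# An inline (lambda) function bound to a name.
double = lambda x: x * 2

# A dictionary: a collection of key-value pairs.
vm_ram = {"web01": 4096, "db01": 8192}

print(double(21))      # 42
print(vm_ram["db01"])  # 8192

# Iterating over key-value pairs.
for name, ram in vm_ram.items():
    print(name, ram)
```

You can go a long way in systems scripting before you need either of these, but it helps to recognize them when reading other people's code.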
When you use Python, start simple and build your knowledge over time. You can find editors for Python code that will help you catch errors at code time versus runtime, which is far easier than using vi and Bash. And the interactive Python shell is really the best way to write code, as you can get one small paragraph working at a time.
#include <CutAndFill.h>
This method gives access to the internal data structures, namely to a Zone2 object whose vertices have z-values that correspond to the height differences between the two input meshes (SurfaceBefore minus SurfaceAfter), and to a map that contains, for each vertex, its height in the first and in the second input mesh.
A CAF_Component object represents a connected part of the surface such that
For a quick overview, a PostScript visualization can be created.
A user defined message receiver object (for example your own progress-bar class) can be registered to get progress updates. This step is optional. | https://www.geom.at/fade25d/html/classGEOM__FADE25D_1_1CutAndFill.html | CC-MAIN-2019-22 | refinedweb | 104 | 50.97 |
I am using the Promoted Build plugin, along with some custom Groovy scripts to validate the build. I wanted to access the value of BUILD_NUMBER so I can println it.
If it's in runtime you can use:
def env = System.getenv()

// Print all the environment variables.
env.each { println it }

// You can also access a specific variable, say 'username', as shown below
String user = env['USERNAME']
if it's in system groovy you can use:
// get current thread / Executor and current build
def thr = Thread.currentThread()
def build = thr?.executable

// Get from Env
def stashServer = build.parent.builds[0].properties.get("envVars").find { key, value ->
    key == 'ANY_ENVIRONMENT_PARAMETER'
}

// Get from Job Params
def jobParam = "jobParamName"
def resolver = build.buildVariableResolver
def jobParamValue = resolver.resolve(jobParam)
Any println sends output to the standard output stream, so try looking at the console log. Good luck!
Xavier <xavier at noreality.net> writes: > I pretty much copied the code from the tutorial. Could the part I ^^^^^^^^^^^ Why not an exact copy? > commented out during the boost compile be causing this error? > > On a slightly unrelated note, I downloaded the MSVC 2003 Toolkit from > MS with the 7.1 compiler - Can I just copy the files in there over the > top of my VC6 files, or do I need to do any extra setup? > > Also, as a MSVC project, should I be compiling this sort of thing as a DLL? > > - Xavier > #include <boost/python.hpp> > using namespace boost::python; > > class Base { > public: > virtual int f() = 0; > }; > > int call_f(Base& b) { return b.f(); } > > class BaseWrap : Base ^^^^^-----------^-------- inheritance is private. The tutorial uses "struct", which implies public inheritance by default. -- Dave Abrahams Boost Consulting | https://mail.python.org/pipermail/cplusplus-sig/2004-April/006950.html | CC-MAIN-2014-15 | refinedweb | 135 | 69.07 |
I really need this; I'll have to give up on RN for this project if this is currently unsupported. Could someone spend a minute on this issue? To be more specific, I need to show a vis.js graph in my webview.
Some details:
My webview works correctly when I launch a debug build on a real device using react-native-scripts android. It doesn’t in a release build. Maybe it’s just the assets path that changes (html, css, client-side js). How can I investigate this?
Hi @alfredopacino - you can detach in order to get android/ios directories - see our guide here for more details: .
Another option would be to request your
vis.js script over-the-air in your webview rather than bundling it with your app, which would be simpler, but would require that users have an internet connection before your graph would load.
Honestly, I still do not get what the problem with local (client-side) JavaScript is.
I would avoid detaching, and my app should work offline.
@alfredopacino - the root problem, I believe, is that it’s hard for the react native packager to view JS files as assets that are not part of the main project JS bundle.
This may not work, but have you tried doing something like saving the entire contents of vis.js as a string inside of your project and then importing it statically into your webview?
Importing statically could be a way, but I tried and it still doesn’t work (I escaped all the backticks in the code). I’m sure there is a way to make it work: some plain client-side JavaScript works, so I can’t see a reason this library shouldn’t.
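As a side note, backticks are not the only thing that needs escaping when pasting a whole library's source into a template literal: backslashes and `${` sequences break it too. A small sketch of the escaping rules (the helper name is made up for this example):

```javascript
// Hypothetical helper: escape an arbitrary JS source string so it can be
// embedded inside a backtick template literal without breaking it.
function escapeForTemplateLiteral(src) {
  return src
    .replace(/\\/g, '\\\\')    // escape backslashes first
    .replace(/`/g, '\\`')      // then backticks
    .replace(/\$\{/g, '\\${'); // then interpolation markers
}

const lib = 'var s = `hello ${name}`;';
console.log(escapeForTemplateLiteral(lib));
// prints: var s = \`hello \${name}\`;  -- safe to paste between backticks
```

Missing any one of these three (most often `${`) in a ~2 MB library is enough to make the embedded string silently malformed.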
Anyway, is there a way I can catch all the WebView console errors and logs? I suspect renderError does not do that the way I’m using it.
I tried to make a snack hoping someone way more skilled than me would help…the browser crashes when I upload a ~2MB single js file
This is the whole code
import React, { Component } from 'react';
import { WebView, StyleSheet, ActivityIndicator } from 'react-native';
import style from "./css"
import jsGraph from './jsGraph';
import visjs from './VISJs'; // <-- this is the vis.js lib () wrapped in a <script> tag like jsGraph and backtick-escaped

const fullPost = `
<html>
  <head>
  </head>
  <body>
    <div id="mynetwork"></div>
  </body>
</html>`;

export default class App extends Component {
  render() {
    return (
      <WebView
        style={styles.WebViewStyle}
        source={{ html: fullPost }}
        javaScriptEnabled={true}
        domStorageEnabled={true}
        renderError={(error) => console.log('error:' + error)}
        renderLoading={() => {
          return (
            <ActivityIndicator
              style={{ flex: 1, flexDirection: 'row', justifyContent: 'space-around', padding: 10 }}
            />
          );
        }}
        startInLoadingState
      />
    );
  }
}

const styles = StyleSheet.create({
  WebViewStyle: {
    justifyContent: 'center',
    alignItems: 'center',
    flex: 1
  }
});
jsGraph
const jsGraph = `
    }, {from: 3, to: 3}
  ]);

  // create a network
  var container = document.getElementById('mynetwork');
  var data = {
    nodes: nodes,
    edges: edges
  };
  var options = {};
  var network = new vis.Network(container, data, options);
</script>`;
export default jsGraph;
css
const css = `
<style>
  body {
    background: grey;
  }
  #mynetwork {
    width: 600px;
    height: 400px;
    border: 1px solid lightgray;
  }
</style>
`;
export default css;
DIFFERENT OPTION: This is the solution I was talking about in the second post of this thread (obviously a different solution than the above code). It works when testing on an Android device in a debug build; it doesn’t if I make the Expo release build. Basically this solution doesn’t use plain JavaScript code, but webpack-bundled code. I really can’t tell what magic webpack does in this case.

I would be glad if someone would take the time to try that code.

As of now there is no real external file; it’s just a huge HTML file with inline JavaScript.
I think the problem is some unescaped/problematic characters in the lib code.
The only reliable way I found is to include all css and javascript inline in the html file.
const LocalWebURL = require('./index.html'); // with css and js inline

<WebView
  source={LocalWebURL}
  javaScriptEnabled={true}
  domStorageEnabled={true}
  mixedContentMode='always'
  renderError={(error) => console.log('error:', error)}
  ref={webview => { this.myWebView = webview; }}
/>
Thanks guys I fixed it in class today. I had to set my double monthlypay1=0; and that made it work. As for the hiding my variable's I'll fix that up so my variables aren't hiding. But thanks for...
Thanks guys I fixed it in class today. I had to set my double monthlypay1=0; and that made it work. As for the hiding my variable's I'll fix that up so my variables aren't hiding. But thanks for...
The one at the bottom.
while (monthlypay1 < 0) {
System.out.print("You need to enter positive numerical data!");
break;
}
...
Ok i fixed the error's of catch without a try and vice versa. But now its telling me that my variable "monthlypay1" is not initialized which I don't understand how it isn't. Otherwise the code...
Thanks alot guys. I will look at the tutorial. And My java teacher told me to put everything within the try block to make it more simple. Because it will try through everything and catch any error at...
I'm doing a project where I have to catch a user inputting a string into one of the prompt.
Problem is it seems I always get catch without a try error no matter what I do.
it keeps saying expected...
Thanks a lot!
package monthypayment2;
import java.text.NumberFormat;
import java.util.Scanner;
public class Monthypayment2 {
private static double monthlypay1;
private static...
Sorry. No one in the thread answered my question so I thought I could post a new one. So I suppose a moderator can close this thread.
Well first off fix your sentence... I cannot understand the grammar.. Or it might just be because English is not my main language.
Secondly, post the things you have tried to do in your code to fix...
I need help looping my program that I am doing. First off what I am trying to do is prompt the user if they would like to calculate again. If they type in the string "y" then It will loop them back...
I put the statements in the incorrect place and tried this.
if (monthlypay1 >= 0) {
System.out.print("The Monthly Payment is: $ " + monthlypay1);
System.out.print("Would you...
Any help?
--- Update ---
Guys please. I really need help on this. What I've tried is this.
int repeat = 0;
int error = 0;
What I'm trying to do is say the user types in -100000 for a loan amount
or "two" for the rate. I'm trying to give them an error message and put them back into putting in the loan amount or rate....
Thanks so much!
Hello I have a couple of questions. I'm a newbie at Java programming and I have a project due tomorrow and because I cannot figure out how to do these things it's hindering my ability to complete my...
for the math I tried breaking it down but still doesn't calculate correctly.
double calc1 = loan * rate;
double calc2 = Math.pow(1 + rate, months);
double calc3 = Math.pow(1 +...
Guidelines
This assignment is comprised of modifying your program which will calculate a monthly a moritzed amount (e.g. a house mortgage). For this program, you are to add the logic to prompt the...
I'll try that. Because I noticed every time that I did my math it came out incorrect numbers. When I compare my code to the formula they are the exact same. But maybe that will help Java make the...
Can anyone help me out with the math on here?
//calculation monthly
monthlypay1 = loanamount * rate / (Math.pow(1 + rate, months)) / (Math.pow(1 + rate, months)-1);
Is what I have.....
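For reference, the standard amortization formula is M = P·r / (1 − (1 + r)^−n), where P is the principal, r the monthly rate, and n the number of months. A stand-alone sketch of it in Java (this is illustrative, not the poster's exact code; the class and method names are made up):

```java
public class Amortization {
    /**
     * Monthly payment for an amortized loan.
     * @param principal  loan amount
     * @param annualRate annual interest rate, e.g. 0.05 for 5%
     * @param months     term in months
     */
    static double monthlyPayment(double principal, double annualRate, int months) {
        double r = annualRate / 12.0;      // monthly rate
        if (r == 0.0) {
            return principal / months;     // zero-interest edge case
        }
        return principal * r / (1.0 - Math.pow(1.0 + r, -months));
    }

    public static void main(String[] args) {
        // $200,000 at 5% over 30 years -> about $1073.64 per month
        System.out.printf("%.2f%n", monthlyPayment(200000, 0.05, 360));
    }
}
```

Note the grouping: dividing twice in a row, as in the snippet above, is not the same as dividing once by the whole denominator.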
Ok I used the code you had as a guideline .
public class Monthypayment2 {
private static double monthlypay1;
private static String choice;
private static boolean n;
private...
Ah ok That was going to be my next question. Thanks a lot!
int repeat = 0;
while (monthlypay1 < 0) {
System.out.print("You need to enter positive numerical data!");}
...
Ok so basically in my case I can change "int repeat = 0" to be repeat = y;
and while y.equals(true);?
No sir, For the most part i've just been looking at tutorials on websites but they have not been helping much.
public static void main(String[] args) {
Scanner in = new Scanner(System.in);
//format number and currency
NumberFormat NF = NumberFormat.getCurrencyInstance();
...
Well that and I just realized I was diving by 0.. "(1-1 / math.pow(1 + rate, months));" So I fixed the coding in the math to
monthlypay1 = loanamount * rate / (Math.pow(1 + rate, months)) /... | http://www.javaprogrammingforums.com/search.php?s=d55f2e3059776707fcdb04098b2cd711&searchid=784547 | CC-MAIN-2014-10 | refinedweb | 771 | 78.35 |
Class Names
Class declarations introduce new types, called class names or class types, into programs. Except for forward declarations, these class declarations also act as definitions of the class for a given translation unit. There may be only one definition for a given class type per translation unit. Using these new class types, you can declare objects, and the compiler can perform type checking to verify that no operations incompatible with the types are performed on the objects.
An example of such type checking is:
// class_names.cpp
// compile with: /EHsc
#include <iostream>
using namespace std;

class Point
{
public:
    unsigned x, y;
};

class Rect
{
public:
    unsigned x1, y1, x2, y2;
};

// Prototype a function that takes two arguments, one of type
// Point and the other of type pointer to Rect.
int PtInRect( Point, Rect & );

int main()
{
    Point pt;
    Rect rect;

    rect = pt;   // C2679 Types are incompatible.
    pt = rect;   // C2679 Types are incompatible.

    // Error. Arguments to PtInRect are reversed.
    // cout << "Point is " << PtInRect( rect, pt ) ? "" : "not"
    //      << " in rectangle" << endl;
}
As the preceding code illustrates, operations (such as assignment and argument passing) on class-type objects are subject to the same type checking as objects of built-in types.
Because the compiler distinguishes between class types, functions can be overloaded on the basis of class-type arguments as well as built-in type arguments. For more information about overloaded functions, see Function Overloading and Overloading. | https://msdn.microsoft.com/en-us/library/w1bwzwc4(v=vs.90).aspx | CC-MAIN-2017-30 | refinedweb | 231 | 53.81 |
For this tutorial I've written a simple program in C that overflows a buffer on the stack with whatever it reads from the network. I cross-compiled it for MIPS Linux and ran it using QEMU chrooted into the unpacked filesystem of the Netgear WNDR3700v3 (Firmware 1.0.0.18).
The program, vulnerable.c contains the following function:
/*
 * vulnerable function.
 * reads up to 2048 off a socket onto a small buffer on the stack.
 */
int receive_data(int sockfd)
{
    int read_bytes;
    char buf[512];

    if(sockfd < 0)
    {
        return -1;
    }

    read_bytes=recv(sockfd,buf,2048,0);
    if(read_bytes < 0)
    {
        perror("recv");
    }else
    {
        printf("read %d bytes.\n",read_bytes);
    }

    return read_bytes;
}
This is a contrived example[1] but it should make it easier to focus on the mechanics of using Crossbow without getting bogged down in real-world complications.
The first module to know about when developing a buffer overflow is bowcaster.overflow_development.overflowbuilder. This module contains classes that will be useful for building an overflow buffer. There are two main classes to choose from when bulding your buffer, OverflowBuffer, and EmptyOverflowBuffer. They each represent a different way of solving the same set of problems, and they each have their advantages. For now, we'll use the first; OverflowBuffer. I'll do a subsequent tutorial show how to use the second.
The OverflowBuffer class starts you out with a buffer of a specified length filled with a pattern string consisting of upper and lower alphabetic characters and numbers (to help with debugging). One at a time, you can start replacing sections of that buffer with things like ROP gadgets or your payload.
Here's an example:
buf=OverflowBuffer(LittleEndian,2048)
Here we instantiate the OverflowBuffer object, passing it "LittleEndian" and the size of the buffer we want to create; 2048.
The LittleEndian object is a constant that OverflowBuffer will use whenever data encoding is endianness-sensitive. It is imported and made available for use like so:
from bowcaster.common.support import LittleEndian
The OverflowBuffer object can be converted to a string and sent to the target:
sock.send(str(buf))
With a debugger attached to the vulnerable program we can witness our first crash, controlling the function's return address:
The $ra register contains 0x41367241.
OverflowBuffer provides a function, find_offset(), that takes a string or an integer and will locate that value in the overflow string.
I like to add an option to my exploit program that lets me provide a search string on the command line and find the offset.
if len(sys.argv) == 2:
    search_value=sys.argv[1]
    if search_value.startswith("0x"):
        value=int(search_value,16)
        offset=buf.find_offset(value)
        if(offset < 0):
            print "Couldn't find value %s in the overflow buffer." % search_value
        else:
            print "Found value %s at\noffset: %d" % (search_value,offset)
        exit(0)
When searching for 0x41367241, it is found 528 bytes from the start of the overflow buffer.
Found value 0x41367241 at offset: 528
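To see why find_offset() works, here is a rough, stand-alone re-implementation of the idea (a sketch only, not Bowcaster's actual code): the pattern is built from upper/lower/digit triples, so any aligned 4 bytes that land in a register decode to a unique, searchable substring.

```python
import string
import struct

def pattern_create(length):
    """Build a cyclic Aa0Aa1... pattern from upper/lower/digit triples."""
    chunks = []
    for upper in string.ascii_uppercase:
        for lower in string.ascii_lowercase:
            for digit in string.digits:
                chunks.append(upper + lower + digit)
                if len(chunks) * 3 >= length:
                    return "".join(chunks)[:length]
    return "".join(chunks)[:length]

def find_offset(pattern, value):
    """Locate a crashed register's 32-bit value (little-endian) in the pattern."""
    needle = struct.pack("<I", value).decode("ascii")
    return pattern.find(needle)

pattern = pattern_create(2048)
# 0x41367241 is the value seen in $ra above; little-endian it decodes to "Ar6A".
print(find_offset(pattern, 0x41367241))  # 528
```

Endianness matters here: on a big-endian target the same register value would decode to a different 4-character needle, which is why OverflowBuffer takes an endianness constant.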
Now we can insert the first of a series of ROP gadgets at offset 528.
Controlling several S registers is important for staging ROP gadgets, but unfortunately this vulnerable function doesn't restore any S registers before returning. We can only control the $ra register. That means we need to return into a function epilogue that does restore several S registers.
While bowcaster doesn't provide the capability to search for ROP gadgets, this can be done in IDA Pro or using objdump from your compiler toolchain[2].
In the next part I'll cover how to describe your ROP gadgets and add them to your overflow using Crossbow.
------------------------
[1] Although, in the world of embedded MIPS Linux, not as contrived as one might think. :-/
[2] If anyone wants to contribute, a MIPS ROP finder that can be used independently of IDA would be super double awesome.
UPDATE 3/29/2013: Added syntax highlighting using. Thanks for the tip, @0xKD.
UPDATE 4/8/2013: References to Crossbow have been changed to Bowcaster | https://shadow-file.blogspot.com/2013/03/buffer-overflows-with-crossbow-part-1.html | CC-MAIN-2018-34 | refinedweb | 660 | 55.54 |
Serial Communication Between ATtiny UART and Computer
When we create a project, sometimes we need a debug method.

With this debug method we will send data over the serial port, so we can see the response of our program on the serial monitor.

The ATtiny generally does not have hardware serial communication the way the Arduino Uno does. On the Arduino Uno, pins 0 and 1 are the TX and RX pins.

To work around the missing serial hardware on the AVR ATtiny13, we will bit-bang a software UART using a timer and the INT0 external interrupt (the ATtiny13 has no hardware UART, and unlike the ATtiny25/45/85 it has no USI peripheral either). With it, the ATtiny can talk to the Arduino IDE's serial monitor.
ATtiny13 Serial Schematics
Before you use the circuit, note that we are using two circuits here: one for programming the ATtiny13 and one for serial communication.

To be able to program the ATtiny13 from the Arduino IDE, please read How to Burn the Bootloader and Program the ATtiny13 With Arduino. The program code is shown below.
After you program the ATtiny13 with the program code below, we will then test the ATtiny13 for serial communication.
In this tutorial I use the FTDI module. Apart from that module, you can use other modules such as CH340 or the like.
Then connect the pins between FTDI and ATtiny13 as shown below:
ATtiny13 Serial Progam Code
This program consists of a transmit (TX) and a receive (RX) part. However, it should be noted that this program is not the same as serial programs in general.

To transmit data, this program uses a String. However, to read data, this program is only able to accept integers.

Development still needs to be done. Here is the ATtiny Arduino program.
#include <avr/io.h>
#include <avr/interrupt.h>
#include <util/delay.h>

volatile uint8_t uart;
uint8_t temp;
volatile uint8_t count;
volatile uint8_t start;
volatile uint8_t c;
volatile uint8_t uart_data;
volatile uint8_t Rece_bit;
volatile uint8_t rec;
volatile uint8_t usart_r;
volatile uint8_t baudSpeed;

ISR(INT0_vect) {
  rec = 1; // Interrupt is purely to determine the start bit when receiving,
           // rarely used, you can hang something else here
}

ISR(TIM0_COMPA_vect) {
  TIMSK0 = 0x00;
  TCCR0B = 0x00;     // Single Timer, used to form a clear gap
  OCR0A = 0;         // between bits, both when receiving and when transmitting
  c = 1;
  TCNT0 = 0;
  TIMSK0 = 0x04;
  TCCR0B = 0x02;     // The value "reset on match" is loaded each time from the variable
  OCR0A = baudSpeed; // You can quickly change the UART speeds
  Rece_bit = 1;
}

int send(uint8_t data) {
  if (count >= 8) {
    PORTB |= (1 << 4);
    start = 0;
    temp = 0;
    c = 0;
    count = 0;
    TIMSK0 = 0;
    TCCR0B = 0;
    OCR0A = 0;
    goto nah;
  }
  if (c == 1) {
    if (start == 0) {
      temp = 0x80;
      start = 1;
      count--;
      goto razvet;
    }
    temp = data;
    temp = temp >> count;
    temp = temp << 7;
razvet:
    switch (temp) {
      case 0x80: PORTB &= ~(1 << 4); break;
      case 0x00: PORTB |= (1 << 4); break;
    }
    count++;
    c = 0;
  }
nah:;
}

void mySerial_print(char *text) {
  while (*text) {
    UART_trans(*text++);
  }
}

void mySerial_println(char *text) {
  while (*text) {
    UART_trans(*text++);
  }
  UART_trans('\n');
}

void itoa(uint16_t n, char s[]) {
  uint8_t i = 0;
  do {
    s[i++] = n % 10 + '0';
  } while ((n /= 10) > 0);
  s[i] = '\0';
  // Reversing
  uint8_t j;
  char c;
  for (i = 0, j = strlen(s) - 1; i < j; i++, j--) {
    c = s[i];
    s[i] = s[j];
    s[j] = c;
  }
}

void send_num(char *text, uint16_t n) {
  char s[6];
  itoa((uint16_t)n, s);
  mySerial_print(text);
  mySerial_print(s);
}

int UART_trans(uint8_t data) {
  uint8_t f;
  data = ~data;
  baudSpeed = 123;
  TIMSK0 = 0x04;
  TCCR0B = 0x02;
  for (f = 0; f < 10; f++) {
    while (c == 0);
    send(data);
  }
  start = 0;
  temp = 0;
  c = 0;
  count = 0;
  TIMSK0 = 0;
  TCCR0B = 0;
  OCR0A = 0;
  baudSpeed = 0;
}

int UART_receiv(void) {
  uint8_t a;
  usart_r = 0;
  MCUCR = 0x02;      // INT0 Interrupt
  GIMSK = 0x40;      // INT0 Interrupt
  while (rec == 0);  // Wait until the start bit happens
  MCUCR = 0;         // INT0 Interrupt
  GIMSK = 0;         // INT0 Interrupt
  baudSpeed = 123;
  TIMSK0 = 0x04;
  TCCR0B = 0x02;
  rec = 0;
  TCNT0 = 0xDC;
  for (a = 0; a < 9; a++) {
    while (Rece_bit == 0);
    if (bit_is_set(PINB, 1)) {
      usart_r |= (1 << 7);
    } else {
      usart_r &= ~(1 << 7);
    }
    usart_r = usart_r >> 1;
    Rece_bit = 0;
  }
}

int main(void) {
  DDRB &= ~(1 << 1); // set to RX (INPUT)
  DDRB |= (1 << 4);  // set to TX (OUTPUT)
  asm("sei");
  while (1) {
    UART_receiv();        // Receive a byte first
    _delay_ms(10);        // Pause for clarity
    UART_trans(usart_r);  // Send back to serial
    //mySerial_println("123");
  }
}
UART_receive() is used to receive data. UART_trans() is used to send data. These two functions only receive and send 1 character.
To send String data, use the mySerial_print() or mySerial_println() functions.
Use Serial Monitor to see the response of the arduino ide attiny13a serial communication.
ATtiny13 Datasheet
If you need the ATtiny13 datasheet, you can read the following datasheet:
Hopefully the article ATtiny13 UART How to Serial Communication Between ATtiny and Computer can help your project.
Thank you for visiting the Chip Piko website. May be useful.
Source:
Post a Comment for "Serial Communication Between ATtiny UART and Computer" | https://www.chippiko.com/2020/09/attiny13-uart.html | CC-MAIN-2021-39 | refinedweb | 801 | 54.05 |
Exploring interesting use cases for the Ethereum blockchain by building a simple dice roll DApp game with Truffle Framework.
‘DApp’ is an abbreviation for Decentralized app. DApps are a new paradigm for building apps where a back end centralized server is replaced by a decentralized peer to peer network.
Industry-wide, we’re just beginning to scratch the surface of potential blockchain use cases. Most people associate ‘blockchain’ with ‘cryptocurrencies,’ but new use cases for blockchain technology are emerging everyday.
Today, I’ll show you how to build a simple dice game with Ethereum as a means to explore different and interesting uses cases for blockchain technology.
What is an Ethereum DApp?
Ethereum DApps build on Ethereum blockchain technology, where Ethereum serves as the backend for the application.
One of the most popular DApps, cryptokitties, is collectibles-style game built on Ethereum. When we build a game with Ethereum, essentially, each game action and transaction is stored on the Ethereum blockchain.
Roll the Dice DApp
Let’s create a simple dice roll game.
‘Player’ will roll the dice and chance to win 0.00001 ether. ‘Target’ will be set between 1 to 6.
You can try our Roll the Dice here. Note: Game is deployed on Rinkeby network only.
Now let’s dive into the code.
How To Play The Game
- Click on “Get new bet” to get the target.
- Next, roll the Dice by clicking “Roll it”.
- If you get the same number, 0.00001 ether will be transfer to your account.
Prerequisite
- Understanding of Nodejs
- Basic understanding of Smart contract
- Truffle framework
- Understanding of Html and Javascript
- Metamask wallet
Dice smart contract
Our
Dice.sol smart contract will control the core logic for our game. Let’s have a look at the code:
struct Bet{ uint8 currentBet; // this is target bool isBetSet; //default value is false uint8 destiny; // } mapping(address => Bet) private bets; uint8 private randomFactor;
We will define Bet structure as having 3 variables:
currentBet: This is used to set a new bet.
isBetSet: To check if the bet is set or not?
destiny: Contains the number when you roll the dice.
We will also create mapping which will track the bets of players, and add a
randomFactor, which will be used to randomize our results.
Events: We have 2 events, one will emit when the bet is set for a player and other is for the game result. Events help in conveying state change on the frontend.
event NewBetIsSet(address bidder , uint8 currentBet); event GameResult(address bidder, uint8 currentBet , uint8 destiny);
Getting a new bet
To take a bet from a new player, we will perform the following steps:
- Check if a bet is already set for the player
- Mark the bet as set now
- Get the random number and set the current bet
- Emit ‘ NewBetIsSet’ event and return the current bet
function getNewbet() public returns(uint8){ require(bets[msg.sender].isBetSet == false); bets[msg.sender].isBetSet = true; bets[msg.sender].currentBet = random(); randomFactor += bets[msg.sender].currentBet; emit NewBetIsSet(msg.sender, bets[msg.sender].currentBet); return bets[msg.sender].currentBet; }
Rolling the Dice
Next, we will roll the dice. For this, we will perform the following steps:
- Check if a bet is set for the player
- Get the random number and set the
destiny
- Mark the
isBetSetvariable false
- Check if
currentbetand
destinyis equal, if yes transfer the prize(0.00001 ether) to the player and emit the GameResult event.
- Else only emit the
GameResultevent if
currentBetand
destinyis not equal.
function roll() public returns(address , uint8 , uint8){ require(bets[msg.sender].isBetSet == true); bets[msg.sender].destiny = random(); randomFactor += bets[msg.sender].destiny; bets[msg.sender].isBetSet = false; if(bets[msg.sender].destiny == bets[msg.sender].currentBet){ msg.sender.transfer(100000000000000); emit GameResult(msg.sender, bets[msg.sender].currentBet, bets[msg.sender].destiny); }else{ emit GameResult(msg.sender, bets[msg.sender].currentBet, bets[msg.sender].destiny); } return (msg.sender , bets[msg.sender].currentBet , bets[msg.sender].destiny); }
We are adding
destiny and
currentBet to our
randomFactor every time. This helps us randomize our bets efficiently.
Other than above core functions, we have an
isBetSet function to tell if the bet is set for a player and a random function to get random numbers for our dice.
function isBetSet() public view returns(bool){ return bets[msg.sender].isBetSet; } function random() private view returns (uint8) { uint256 blockValue = uint256(blockhash(block.number-1 + block.timestamp)); blockValue = blockValue + uint256(randomFactor); return uint8(blockValue % 5) + 1; }
Fallback Function: We will also add a fallback function. This function is executed if a contract is called and no other function matches the specified function identifier, or if no data is supplied. These functions are also executed whenever a contract would receive plain Ether, without any data.
function() public payable{}
Note: You should test your smart contract. You can learn more about testing smart contracts in some of our previous tutorials.
Truffle framework
We will use Truffle framework to develop our DApp. Truffle has prebuilt packages which they call boxes. Truffle boxes help in getting the boilerplate code to develop a DApp. You can check more about truffle boxes here. We will use basic pet-box which will give us the boilerplate code for Dice game DApp. You can learn more about truffle pet-box here.
Let’s walk through app.js and understand what's happening on the frontend.
We will define our app variable and declare variables we will use throughout app.js.
We will also add an ‘init’ function in which will initialize web3 provider. Web3 provider allows your application to communicate with an Ethereum Node.
MetaMask, an ethereum chrome extension wallet, will inject web3.js. Here, we will see if any web3 provider already exists. If not, it will try to connect a local blockchain. For testing purposes, we will run ganache on our local machine and connect to it.
App = { web3Provider: null, contracts: {}, account: '0x0', hasVoted: false, init: function() { return App.initWeb3(); }, initWeb3: function() { if (typeof web3 !== 'undefined') { // If a web3 instance is already provided by Meta Mask. App.web3Provider = web3.currentProvider; web3 = new Web3(web3.currentProvider); } else { App.web3Provider = new Web3.providers.HttpProvider(''); web3 = new Web3(App.web3Provider); } return App.initContract(); },
Getting Dice Contract
Truffle petbox gives us truffle-contract.js which gives us the boilerplate code to interact with the contract. We use ABI (Application Programming Interface), a JSON representation of our contractl to interact with our contract on the frontend.
If we don’t use truffle we manually need to change this ABI every time when change and compile our contract. Whenever you compile a solidity smart contract, it will generate a JSON file. This JSON file is ABI to interact with the smart contract.
initContract: function() { $.getJSON("Dice.json", function(dice) { // Instantiate a new truffle contract from the artifact App.contracts.Dice = TruffleContract(dice); // Connect provider to interact with contract App.contracts.Dice.setProvider(App.web3Provider); App.listenForEvents(); return App.render(); }); },
We now get the
Dice.json file, which is JSON representation of our smart contract. We initiate our Dice contract and set the web3.provider. truffle-contract.js are helping us here by providing TruffleContract function. You can check
Dice.json and
truffle-contract.js for more details.
Listening Events
Events are a crucial part of any DApp.
Asynchronous and blockchain transactions take time. Events help us in tracking the status inside the DApp.
We will change our interface to show changes to users. We are listening to both events GameResult and NewBetIsSet and passing event object to render UI accordingly.
listenForEvents: function() { App.contracts.Dice.deployed().then(function(instance) { instance.GameResult({}, {}).watch(function(error, event) { console.log("event triggered", event) // Reload when a new vote is recorded App.render(event); }); instance.NewBetIsSet({}, {}).watch(function(error, event) { console.log("event triggered", event) // Reload when a new vote is recorded App.render(event); }); }); },
Check
render method. We are checking events and showing results to users accordingly and also we are calling
isBetSet method to know if a bet is set for a user or not.
render: function(event) { var gameInstance; // Load account data web3.eth.getCoinbase(function(err, account) { if (err === null) { App.account = account; $("#accountAddress").html("Your Account : " + account ); } }); if(event.event == "NewBetIsSet"){ $("#newBet") .text("Your target is : " + event.args.currentBet.toNumber()); } if(event.event == "GameResult") { var destiny = event.args.destiny.toNumber(); var currentBet = event.args.currentBet.toNumber(); var doWeHaveAWinner = (destiny == currentBet); if(doWeHaveAWinner){ $("#result").text("we have a winner"); }else{ $("#result").text("Sorry bad luck, your got " + destiny); } } // Load contract data App.contracts.Dice.deployed().then(function(instance) { gameInstance = instance; return gameInstance.isBetSet(); }).then(function(isBetSet) { var message = $("#message"); if(isBetSet){ message.text('Bet is Set, Roll the Dice') }else{ message.text('Set New Bet'); } }).catch(function(error) { console.warn(error); }); }
Also, we are defining
roll and
getNewBet method which we are calling from index.html on button clicks.
roll : function(){ App.contracts.Dice.deployed().then(function(instance) { return instance.roll({ from: App.account }); }).then(function(result) { }).catch(function(err) { console.error(err); }); }, getNewBet: function() { $("#result").text(""); App.contracts.Dice.deployed().then(function(instance) { return instance.getNewbet({ from: App.account }); }).then(function(result) { }).catch(function(err) { console.error(err); }); }
Deploying our Dapp
We will use ganache for deploying our DApp locally. If you don’t have ganache, you can download it here. Run the commands below to deploy our contracts.
truffle compile truffle migrate --reset
This will deploy our smart contract to interact with the smart contract you can use
truffle console command.
Full code
You can see full code for our dice game here.
You can play around with the live DApp here.
Improvement
There are a lot of improvements which can be made to our Dice apart from UI.
Here are a few suggested improvements which you add to your version of the Game.
- Optimize the memory. There is no need to story destiny, you can remove that.
- Optimize the amount of Gas we are using.
- Add a withdraw function to take out the extra ether from the contract.
- Make random function better. Generating random numbers on blockchains are itself a challenge as everything on blockchain is public.
- Improve the UI. Make it roll 😃
Conclusion
DApps are a new paradigm to for building applications on the internet, and we’re just scratching the surface. Instead of hosting an app on Heroku, we can host our app on IPFS (decentralize peer to peer file system).
DApps decentralize the way we interact on the internet. DApps run on decentralized networks, in our case Ethereum blockchain, but not every DApp needs to be built with a blockchain.
In the future, you will see more Dapps with awesome UX and better use cases. Now’s the time to explore! | https://blog.crowdbotics.com/building-a-dice-game-dapp-on-the-ethereum-blockchain/ | CC-MAIN-2021-17 | refinedweb | 1,784 | 61.02 |
Stateful Session Beans
In this section we will introduce, how to create Stateful Session Beans and the various lifecycle events that are provided by the bean.
Bean Creation
Same as Stateless beans, Stateful session beans consists of 1 class, the bean class and 2 interfaces, local and remote. The class is required and the interfaces are optional.
Bean Class
Marking a class as a Stateful session bean requires annotating the class by the annotation @Stateful.
Example of bean class:
import javax.ejb.Stateful; @Stateful public class ShopCartBean { }
The previous lines will create a stateless bean and will be registered in the JNDI trees as “ShopCartBean”, you can override the default name by providing a value to the name attribute on the Stateful annotation, as an example:
import javax.ejb.Stateful; @Stateful(name="ShopCart") public class ShopCartBean { }
That way the bean will be registered in the JNDI tree with name “ShopCart” instead of “ShopCartBean”.
Local Interface / Remote Interface
The rules applied to Stateless Session Beans are valid here also, you use the same annotations (@Remote/@Local) to declare the remote and local interfaces.
Stateful Bean Lifecycle
Provided callback methods by the Stateful Beans are:
- : This method is called before destroying the bean object and can be used to release all the reserved resources by the bean object.
- @PrePassivate: This method is called before moving the bean object to passive state. Here the developer should release all the reserved resources of the bean and prepare the bean state for storage persistent.
- @PostActivate: This method is called after the bean moves from the passivated state. Here you should reallocate the resources, released during the passivation action.
- @Remove: This method is the only method that gets called by the client and not by the container. The method marks the bean for removal. Once the bean is marked for removal, the container invokes the method annotated by @PreDestroy if exists and then destroys the bean object.
The following is a state diagram that visualizes the stateful session bean lifecycle:
| http://www.wideskills.com/ejb/stateful-session-beans | CC-MAIN-2018-09 | refinedweb | 334 | 51.99 |
Here's a way to get the server and database name of the content database that a site belongs to using C#:
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administrator;
public class Test
{
private static SPGlobalAdmin oGlobAdmin = new SPGlobalAdmin();
public static SPVirtualServer GetVirtualServerBySite(SPSite oSite) { foreach(SPVirtualServer iServer in oGlobAdmin.VirtualServers) { foreach(SPSite iSite in iServer.Sites) { if(iSite.ID == oSite.ID) { return iServer; } } } throw new IndexOutOfRangeException("Unable to find site in configuration database: " + oSite.Url); }
public static void Main(String[] Args)
SPSite oSite = new SPSite([Your site's URL here]);
SPVirtualServer oVS = GetVirtualServerBySite(oSite);
string SPContentDBServer = oVS.ContentDatabases[0].Server;string SPContentDB = oVS.ContentDatabases[0].Name;
Console.WriteLine(SPContentDBServer);
Console.WriteLine(SPContentDB);
}
Hopefully someone finds this the least bit interesting...Personally I'm trying to recreate the connection string for the SharePoint DB, but shhhhh, don't tell anyone. Mucking w/ the SharePoint databases is not supported by MS.
Anyone out there have a reference to what the best practices for programming SharePoint are? Specifically, how much normalization one should use in their database/lists? Here's my scenario:
I'd like to create a Time Off Request and Reporting application. It would have the following operation:
With a normal database application, I'd have 4 related tables: Employees, Departments, Administrators (but actually, this could be a boolean field for the employee table), and the PTOData table. However, I'm not sure if you can write all of these within the context of SharePoint lists. Anyone have any ideas? Should I just create separate tables, and only use SharePoint as a wrapper? Let me know what you think!
BTW - Dustin - I asked my manager if they'd send me to the dev training. If I can go, this is something I'd like to cover. For those of you who also would like to go, but are afraid of asking your boss for a few grand, I can send you my request document (the sucker was 2 1/2 pages long!) as a template.
I think I found a bug in using SMigrate to move a site - it's only a minor one. I'm putting it here because I'm not sure where to submit bugs to Microsoft anymore, and I'm not willing to spend an hour to figure it out.
Here's the scenario: I used SMigrate to copy our production WSS site collection to some development servers. It worked great, actually: the prod server was mis-configured to use a MSDE database rather than a Web Farm w/ SQL, but when I moved it to a server w/ that configuration, it moved the data into SQL just fine. However, when I went to add a Web Part to the development server, it tried to connect to the old server's web part library. I know this because it prompted me to log into the old server.
I'm not sure how to fix it quite yet, but I'm going to hack into the database in the next few days to figure out where it's storing the old server's name. If I find anything, I'll update you.
I'm starting to feel pretty constricted by SharePoint. There's a lot of little things that I'd like to customize that you just can't with SharePoint right out of the box. For instance, I've gotten requests to make attachments open in a new window when you click on them. In fact, most of my limitations seem to come from the inherent inflexibility of the out of the box web parts. Yes, you can change some of the attributes. However, in order to customize the display of the data, you need to convert the views from web parts to data views. Of course, in a data view, you can't edit the data. It's all very frustrating...I wish you could edit the default web parts...If you know a way, please let me know.
I'd like to be able to automatically create alerts for the people who actually submit an item to issue lists, rather than only send out an email only to the person the problem is assigned to.
Also, there aren't any books on using only SharePoint, at least none I could find. The documentation is pretty sparse. It seems like the primary method for learning SharePoint remains seminars like those put on by SharePoint Experts. They are good classes, don't get me wrong, but they are few and far between. Not only that, but the price for a book would be around $50 - $70, while classes are around $2000 w/o room and board. You do the math...
It looks like in order to get SharePoint to do what we want it to do, we're going to have to write custom web parts and web pages. That isn't too big of a problem, as that the SDK does in deed seem to have a lot of information in it. I'm just ignorant of how to code specifically for SharePoint, and I have a pretty high learning curve to overcome to learn it. Dustin, when's the next coding class?!
Enough ranting for me. If you have any ideas how to correct or work around my problems, let me know, please! ;)
This is actually a problem with the Windows upgrade, but nonetheless the problem manifested itself while trying to install the sharepoint central database.
I had a development computer that was running Windows 2000 Server. I wanted to run SharePoint and use Visual Studio.Net 2003 to create some web parts. However, to do so, I had to upgrade to Windows 2003 Server. Normally I don't upgrade, since problems occur. However, I had heard some good things with the upgrade wizard. Note to self: never believe them. The upgrade itself went smoothly, however, I started having problems once I tried installing SharePoint.
The installation itself was smooth, but once I had to configure/create the central database, I started having problems. I'd fill out all the information, but every time I'd click submit I would be forwarded to a generic error message screen. I then tried creating the DB from the command line, by running: “stsadm -o setconfigdb -databaseserver [name] -databasename [db name]” Then I got a decent error: GetTextExtentPointI could not be located in msdart.dll. I found some good information here, and it fixed my problem for now. Basically, you need to copy over the oledb32.dll from another server that got a fresh install of Windows Server 2003 to the upgrade server.
I hope that helps someone else out.
Like I've mentioned before, I've customized a lot of my sharepoint site by changing the style sheet and backend standard graphics. Unfortunately, the limitation is that you have to rely on Microsoft to properly assign stylesheet classes properly to all the elements. For reference, here's a link to a screen shot of my page, since the gallery still won't work.
My latest issue was that a user suggested the colors in the list view nav bar change from the default blue to white. I didn't have a problem with this one bit - I viewed the source of the list page, found the link, and saw that it belonged to class ms-toolbar (fairly intuitive, huh?). So I opened the style sheet (Program Files\Common Files\Microsoft Shared\web server extensions\60\TEMPLATE\LAYOUTS\1033\STYLES - default location) and changed the sheet to have color: #FFFFFF;. Problem solved.
However, someone coding the web part had a brain fart, because any links within the web part are assigned the same class. However, the background at this point is also white, so you get invisible text. I didn't even realize that until I got another complaint not 5 minutes later. Unfortunately, this wasn't a case of a higher level tag being assigned the ms-toolbar class: it was the a tag itself. Now, will someone please explain why your class would be called ms-toolbar, and you'd have something in the main content window be assigned to the ms-toolbar class? It didn't and still doesn't compute for me.
So, if anyone out there has any ideas for how to work around this, I'd love to hear them. Right now, I just changed the color of ms-toolbar to #990000 so that you can see it in both places. It's still hard to see against my orange background, though. Shoot me a line if you have any ideas.
Addendum: People have been asking me why I wanted to do this. I explained it in a previous blog (I think) but here's the short version:
I have an Issues List for internal system operations. If a phone goes down, or a computer dies, a user could enter a ticket into the system. However, most of our users are ignorant as to who would fix each type of issue. So I have two drop down boxes: a Category and Assigned to. When you select a certain category, the assigned to box automatically changes to the person who handles that type of issue. The “listener“ therefore is used to listen to the onChange event for the category.
------------------------------------------------------------------------------
W00t! I got the listeners to work. Here is the JavaScript, and then I'll explain it below:
window.onload = startup;function startup(){ setAssignment(); document.forms[0].elements[getElementIndex("OWS:Category")].onchange = setAssignment;}
function setAssignment(){ var getCategoryTagIndex, getAssignToTagIndex, getCategoryText; getAssignedToTagIndex = getElementIndex("AssignedTo"); getCategoryTagIndex = getElementIndex("OWS:Category"); getCategoryText = getSelectedElement(document.forms[0].elements[getCategoryTagIndex]); setAssignedTo(document.forms[0].elements[getCategoryTagIndex], document.forms[0].elements[getAssignedToTagIndex]); }
function setAssignedTo(CategoryObj,AssignedToObj){ switch(CategoryObj.options[CategoryObj.selectedIndex].text) { case "LoopNet.com website error": searchAssignedTo("[User Name]", AssignedToObj); break; case "InsideLoop website error/suggestion": searchAssignedTo("[User Name", AssignedToObj); break; case "E-mail": searchAssignedTo("[User Name]", AssignedToObj); break; case "Network file access": searchAssignedTo("[User Name]", AssignedToObj); break; case "Phone problem": searchAssignedTo("[User Name]", AssignedToObj); break; case "Local workstation problem": searchAssignedTo("[User Name]", AssignedToObj); break; case "Other": searchAssignedTo("[User Name]", AssignedToObj); break; default: searchAssignedTo("[User Name]", AssignedToObj); break; }}function searchAssignedTo(userName, AssignedToObj){ for(i = 0; i < AssignedToObj.options.length; i++) { if (userName.toLowerCase() == AssignedToObj.options.text.toLowerCase()) { AssignedToObj.options.selected = true; return; } }}function getSelectedElement(selectObj){ return selectObj.options[selectObj.selectedIndex].text;}function getElementIndex(elementName){ var e; e = -1; for (i = 1; i { var temp; temp = document.forms[0].elements.name; if(temp.indexOf(elementName)>= 0) { e = i; break; } } return e; }
//-->
I hope this shows up, since it's javascript. Basically I put this at the end of my issues form, right after the end table tag. This is so that the page loads before the script runs, allowing time for the web part to load. The first thing I do is set the window.onload event handler to be my startup function. That way the values are set on every page load. Within that I run two other commands. First, I run the setAssignment function, to do the initial setting of the assigned task, just in case the user doesn't change it. Then I set the onchange event for my Category to be the setAssignment again. You'll notice a couple of things. First of all, we aren't sure of the names of the form or the drop down list. So instead of using the name, we use the forms[0] element and then search for the element index for the category drop down list. So, here's a description of the functions:
So that's basically it. Let me know if you have any questions on this, and I'll be happy to explain. It took a team effort here to come up with the right script. Hopefully this will save you some time.
UPDATE: I got this to work - see the next post!
So I'm trying to set up a listener on my Issues list. It's not going well. Microsoft looks to be using some kind of custom drop down box, rather than a stardard select. It's killing me. When I find out how to fix it, I'll post it here. Hopefully it won't take more than a day.
Well, I blew it already on my WSS server. I installed WSS on it before I installed SQL Server, so it installed the MSDE database instead. Well, I tried to upgrade it to SQL Server 2000, SP 3, but I couldn't get it to upgrade correctly. Instead of upgrading the MSDE database, it installed SQL along side it, and I don't know if it transfered the data. I couldn't get the SQL Server service started, and I couldn't get into MSDE via Enterprise Manager. I'm glad 99% of my customization at this point had been to the .CSS and XML files rather than the database. I've backed that up.
I tried uninstalling, well, everything, but there was a problem with the MSSearch service. For whatever reason, it decided it wanted to install itself on my network home directory (H: drive). I couldn't get it off. I removed all the registry entries I could find for SQL, I removed the registry entries under HKLM\Service for MSSearch, and I tried simply reinstalling SQL. Nothing worked. SQL would install OK for me, but when I tried applying SP3, it would fail on MSSearch.
So, I'm doing the only thing left to do now: reinstall Windows 2003 from scratch. Oh well...previously I had used Windows Server 2003 Standard when I wanted to use the Web Server Edition. I couldn't find the disks at the time, but I found them now, so I get to have that right, at least.
A word from the newly experienced: if you plan on using SQL Server, select Web Farm configuration during WSS installation, and more importantly, install SQL Server BEFORE you install WSS. In writing that last sentence, it all seems pretty obvious, but I guess when you feel you know what you're doing, you get careless.
Good luck to you!
Andrew
You know, that's probably the first question that's on your mind. It would be for me, too. Well, this post will hopefully explain my purpose in setting up this blog.
I'm pretty much new to SharePoint - WSS 2003 is the only one I've used - so I'd like to share my experiences with you other newbies out there and hopefully save you some time and heartbreak. I've been charged to use SharePoint to create our company's intranet site. There's some fairly intense stuff I need to do to make it work, so hopefully there will be a little something in it if your bother reading this stuff. One of the things I'll be doing in the next few weeks is migrating a stand-alone WSS server that is currently using an MSDE database to the web-farm model of WSS that talks to a SQL 2000 database on a remote server. Then I need to copy the data from that server to another server, and then sync both servers to have the same data and pages. Fun stuff coming down the pipes. So stay tuned.
BTW: I recently attended the SharePoint BootCamp in Anaheim, Cali. I'll post a longer review in a later blog, but for now know that it was a lot of fun and very informative. Oh, yeah, and if you made it this far into my blog, have some mints (inside joke).
One more thing, if you ever have a comment, correction, question, or answer about a blog I write PLEASE post a comment or e-mail me. This is meant to be a discussion that anyone can add to.
Later! | http://www.sharepointblogs.com/avelez/ | crawl-001 | refinedweb | 2,646 | 64.41 |
Unable to create WLAN access point with old firmware
- Sirajuddin Asjad
Hi, I'm trying to set up an access point and I must use firmware version 1.20.0.rc11 because I'm using an old LoRa library which only works with this specific version. The LoRa network is up and running, however I am unable to set up a network access point. When I upgrade the firmware to a newer version (any of the latest versions), the access point works but the LoRa does not work (because of old mesh libraries).
The code runs fine and prints "Wifi is up!", but the network does not appear in my list of networks when I'm using firmware 1.20.0.rc11. If I upgrade the firmware version, it suddenly appears as a network on my laptop. How can I fix this issue, and how can I get the access point and LoRa both working at the same time?
Code to set up the access point:
from network import WLAN

wlan = WLAN()
wlan.init(mode=WLAN.AP, ssid="hello", auth=(WLAN.WPA2, "eightletters"), channel=1)
print("Wifi is up!")
Link to the LoRa mesh library I am using (which requires firmware 1.20.0.rc11):
I have been trying to upgrade and downgrade versions all day, but can't find a version that works with both LoRa and AP.
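Since everything hinges on the exact firmware build, here is a small helper for keeping track of the Pycom release strings involved, such as 1.20.0.rc11 and 1.20.2.r4. This is my own sketch, not a Pycom API; on the board the running release comes from os.uname().release.

```python
def parse_release(release):
    """Turn a Pycom release string like '1.20.0.rc11' or '1.20.2.r4'
    into a tuple of integers that can be compared."""
    parts = []
    for piece in release.split("."):
        # Keep only the digits, so 'rc11' -> 11 and 'r4' -> 4.
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

# Note: this naive scheme does not rank 'rc' builds below final builds,
# but it is enough to keep track of which build is which.
print(parse_release("1.20.0.rc11"))                               # (1, 20, 0, 11)
print(parse_release("1.20.0.rc11") < parse_release("1.20.2.r4"))  # True
```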
Does anyone know why I cannot see the network on my laptop? Thanks in advance!
- jcaron
@Sirajuddin-Asjad apparently in recent releases you need to add PyMesh support via PyBytes before you can use it.
See for details.
- Sirajuddin Asjad
@jcaron Thanks for your reply!
The code examples from GitHub are verified and tested on firmware 1.20.0.rc11 (this is a legacy version). I have tested the code examples on many different firmware versions and the results are similar.
Running the code on a more recent version (1.20.2.r4):
Results: WLAN.AP works, LoRA does not work.
Comment: The WLAN access point is enabled and appears as a network on my laptop. LoRa crashes and does not start.
rst:0x1 (POWERON
Enabling WiFi...
Enabling Lora...
LoRa MAC: 70b3d54991d40180
Traceback (most recent call last):
  File "main.py", line 320, in <module>
  File "/flash/lib/lorameshlib.py", line 46, in __init__
AttributeError: 'LoRa' object has no attribute 'Mesh'
Pycom MicroPython 1.20.2.r4 [v1.11-ffb0e1c] on 2021-01-12; LoPy4 with ESP32
Type "help()" for more information.
>>>
Running the same code on firmware version 1.20.0.rc11:
Results: LoRA works, WLAN.AP does not work.
Comment: The LoRa mesh network starts and works as expected, but the WLAN access point does not appear on my list of networks and I'm unable to find it on my laptop. It does not work properly, even though it says "Enabling WiFi" in the output results.
rst:0x1 (POWERON_RESET),boot:0x1
Initializing filesystem as FatFS!
Enabling WiFi...
Enabling Lora...
LoRa MAC: 70b3d54991d40180
2: PYMESH_ROLE_DETACHED ... waiting.
4: PYMESH_ROLE_DETACHED ... waiting.
6: PYMESH_ROLE_DETACHED ... waiting.
8: PYMESH_ROLE_DETACHED ... waiting.
10: PYMESH_ROLE_DETACHED ... waiting.
12: PYMESH_ROLE_DETACHED ... waiting.
14: PYMESH_ROLE_DETACHED ... waiting.
16: PYMESH_ROLE_DETACHED ... waiting.
18: PYMESH_ROLE_DETACHED ... waiting.
20: PYMESH_ROLE_DETACHED ... waiting.
22: PYMESH_ROLE_DETACHED ... waiting.
24: connected as PYMESH_ROLE_LEADER. IP: fdde:ad00:beef:0:d06:83c1:e0f0:78bc
opened: 0x3f952260
I'm trying to find out why the WLAN access point stops working on firmware 1.20.0.rc11 and why it does not appear as a connectable network on my laptop.
I'm force-erasing the LoPy4 whenever I upgrade or downgrade firmware versions, so that means any Pybytes configuration should be deleted every time I change versions. Unless there is any manual configuration required to disable Pybytes, however that would not explain why the WLAN works on the latest firmware version but not the old legacy version.
I hope the information provided is helpful, and thanks a lot in advance!
@Sirajuddin-Asjad the link you included is not for a mesh library (which is built into the LoPy firmware if you pick the right version), those are just examples. They should work on any version of the library with mesh support.
What happens when you use a more recent version (please include full logs of the output)?
Also, you may want to make sure Pybytes is disabled, as it tries to do a lot of things behind the scenes both with WiFi and LoRa, so it can lead to confusing results.
Please can someone take some time to read my comments in the below script. Now, I understand how I can use argv. However, I'm having trouble explaining how it actually works. I think it's because there's gaps in my terminology. I was wondering whether someone can tell me if the comments are correct and show examples of how I could have improved my description. Yes, I'm a n00b. Thank you for your time.
from sys import argv # argument variable allows python to unpack variables.
script, user_name = argv # the user will need to type a variable for the script to run
prompt = '>'
print "Hi %s, I'm the %s script." % (user_name, script)
print "I'd like to ask you a few questions"
print "Do you like me %s" % user_name
likes = raw_input(prompt)
print "Where do you live %s?" % user_name
lives = raw_input(prompt)
print "What kind of computer do you have?"
computer = raw_input(prompt)
print """
Alright so you said %r about liking me.
You live in %r. Not sure where that is.
And you have a %r computer. Nice
""" % (likes, lives, computer)
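For reference, the unpacking on the second line can be demonstrated without the command line by standing in for sys.argv (the list below plays the role of the real argv; note the script above is Python 2, while this sketch uses Python 3's print):

```python
# Stand-in for sys.argv as it would look after running: python ex.py Zork
argv = ['ex.py', 'Zork']
script, user_name = argv   # unpacking: one variable per list element
print(script)      # ex.py
print(user_name)   # Zork
```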
Have you ever wondered how your ID tag works? In this tutorial, we will show you how to read an RFID button or tag held up against an ID-12 RFID reader, or any other 125 kHz module. This project is also an easy and fun way to demonstrate the basic concepts of digital identification using the ID-12 together with a Ping ultrasonic sensor and LEDs.
Hardware used:
- ID-12LA RFID Reader
- 2 - RFID buttons 125 kHz (Black)
- ATMEGA32U2 USB Dev Board for ID-12 and ID-20
- Ping Ultrasonic Sensor
- 2 - LEDs (1 Red and 1 Green)
- 2 - 220ohm Resistors
- Jumper Wires
ID-12LA RFID Kit:
In this kit there are three things:
- ID-12LA RFID reader:
It uses radio frequency to identify any object carrying an RFID tag, wirelessly, within a certain range.
i. This reader is going to send out a radio signal that is going to be picked up and answered with a unique string of data from the RFID tag or button. This module has been used in applications such as access control, load identification, alarm system, and so on.
ii. The only things that make the ID-12 different from the other RFID modules (the ID-2 and ID-20) are its built-in antenna and its size. All of these ID readers use the same communication protocols and work with three data output formats. This tutorial uses the ASCII (American Standard Code for Information Interchange) data format. That means that, of the frame encoding each button's unique number, this tutorial concentrates on the 12 characters from the DATA and CHECKSUM fields.
- STX stands for Start-of-Text; it indicates that communication between the RFID button and the RFID reader has started, while ETX marks the End-of-Text.
- DATA is the 10-character ID tag number
- CHECKSUM is computed from the data bytes for the purpose of detecting errors.
- CR is the Carriage Return and LF is Linefeed.
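To make the frame layout concrete, here is a small Python sketch (mine, not part of the original tutorial) that splits a 12-character reading into its DATA and CHECKSUM fields and verifies the checksum. The XOR-of-data-bytes checksum rule comes from the ID-12 datasheet rather than from the text above; the tag ID used is one of the buttons read later in this tutorial:

```python
from functools import reduce

def parse_id12(frame):
    """Split a 12-char ID-12 reading (10 DATA chars + 2 CHECKSUM chars)
    and verify that the checksum equals the XOR of the five data bytes."""
    data, checksum = frame[:10], frame[10:12]
    computed = reduce(lambda a, b: a ^ b, bytes.fromhex(data))
    return data, checksum, computed == int(checksum, 16)

print(parse_id12("78003BDE66FB"))   # ('78003BDE66', 'FB', True)
```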
iii. It's designed with 11 pins, which are shown in the block diagram below. In order for the reader to work, it requires power, ground, and a digital pin connected to the Arduino for serial communication. It draws around 65mA of current and works within about a 100mm range.
- Fig1: Top left is the ID-12 RFID reader, top right is the RFID button and the bottom is the block diagram of ID-12 RFID reader
2. RFID Button: Having a diameter of 30mm, this button holds a unique 32-bit ID. This ID is read by the ID-12LA, or any other reader using a 125 kHz module. The unique serial number of each button makes them useful as a key system, or to track individual objects. Refer to the above figure for more information.
3. ATMEGA32U2 USB Dev: This board is made to be compatible with the RFID reader (ID-12) that we are using in this tutorial. It contains an Atmega32u2 microcontroller and an extra pin that can be used to control other sensors. The board also has an LED and a buzzer to indicate whether the reader made a scan. We only used this board for connection purposes, since the ID-12 has large pins that make it hard to connect directly to a breadboard.
Fig 2: Image of ATMEGA32U2 USB Device
Ping Ultrasonic Sensor:
This sensor is used to measure distance using ultrasonic waves. Ultrasonic describes sound with a frequency above the limit of human hearing, which is about 20,000Hz. The device can measure distances from about 2cm (0.8in) to 3m (3.3yd). It works by sending out an ultrasonic wave, then providing an output pulse that approximates the time it took for the echoed wave to return to the sensor. The distance is the speed of sound multiplied by the time it took the wave to reach the object and echo back, divided by two. These kinds of sensors are widely used in robotic applications.
Distance between two objects = (Speed of Sound * Time) / 2
Fig 3: Image and diagram of the Ping Ultrasonic Sensor
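The conversion the formula describes is a one-liner in code; this Python sketch (illustrative only, separate from the Arduino code later in the tutorial) mirrors it:

```python
def echo_time_to_cm(time_us):
    """Convert a Ping echo pulse width in microseconds to centimeters.
    Sound takes about 29 us per cm, and the pulse spans the round trip,
    so the result is halved."""
    return time_us / 29.0 / 2.0

print(echo_time_to_cm(580))   # 10.0 -> an object roughly 10 cm away
```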
Hardware Assembly
1. Place a black and red wire on the breadboard. Connect the red wire to 5V on the Arduino while the black wire is connected to the GND.
2. Place the Ping Ultrasonic sensor on the breadboard. Connect the left most pin to the GND, the middle to the power supply (+5V), and the right most pin to digital pin 6 on the Arduino.
3. Attach ID-12 reader to the Atmega32U2 USB dev board and place it on the breadboard. Then connect from the board to the Arduino/ breadboard as follows:
- VCC => 5V
- GND => GND
- TX => Pin 3
4. Place the LEDs on the breadboard and connect the resistors from the GND to the short pin of the LED. Connect the long pin to the digital pin 7 for the Red LED and digital pin 9 for the Green LED.
Software program
In this section we are going to program the Arduino to identify the specific person assigned to each RFID button.
First, we need a program to read the RFID button. After compiling and loading, we will use the unique numbers that were read from each RFID button for identification purposes.
/* Using RFID ID-12 to read the unique characters assigned to each RFID button or tag */
char val = 0; // variable to store the char read from the RFID button

void setup() {
  Serial.begin(9600); // connect to the serial port
}

void loop() {
  // read the serial port
  if (Serial.available() > 0) {
    val = Serial.read(); // read one char from the ID-12 and store it
    Serial.print(val);   // display the char in the serial monitor
  }
}
At this point, we will be able to know each RFID button’s unique identifier.
Since we already have the unique identifiers from the earlier code, we are going to store them in the system. Once the user comes near the ultrasonic sensor, the system will ask for an ID. After the reader scans a button, the program checks whether the ID matches one stored in the system. The result is displayed on the serial monitor, and different LEDs are turned on and off to indicate whether the ID matched.
/* This program uses the ID-12 reader to read any RFID button or card, and identify if this ID is known in the system */
// include a header file to...........
#include <SoftwareSerial.h>

SoftwareSerial mySerial(3,2); // virtual serial port
//////////////////////////////////////////////////////////////////
int R_LED = 7;  // pin attached to the red LED
int G_LED = 9;  // pin attached to the green LED
int LEDpin = 13;
int Reset = 12;
int Sonic_pin = 6; // pin from the ultrasonic sensor
/////////////////////////////////////////////////////////////
char val = 0; // what if it's byte val...how many is it storing
// because I already read and stored the IDs and I assigned them to the people
char id_tag1[] = "78003BDE66FB";
char id_tag2[] = "78003BF78B3F";
char* nam_tag[] = {"Derek", "Jay"};
char IDstring[13];
int i = 0;
int p = 0;
////////////////////////////////////////////////////////////
void setup() {
  Serial.begin(9600); // begin serial communication
  mySerial.begin(9600);
  pinMode(Sonic_pin, INPUT);
  pinMode(R_LED, OUTPUT);
  pinMode(G_LED, OUTPUT);
  pinMode(Reset, OUTPUT);
  pinMode(LEDpin, OUTPUT);
  digitalWrite(LEDpin, LOW);
}

void loop() {
  // calculate the distance measured from the ping ultrasonic sensor until the target is less than 4 cm away
  float distance;
  do {
    float time;
    // send out a pulse tone
    pinMode(Sonic_pin, OUTPUT);
    digitalWrite(Sonic_pin, LOW);
    delay(2);
    digitalWrite(Sonic_pin, HIGH);
    delay(5);
    digitalWrite(Sonic_pin, LOW);
    // let the ultrasonic sensor be an input to take back the echoed wave
    pinMode(Sonic_pin, INPUT);
    time = pulseIn(Sonic_pin, HIGH); // get the echoed duration in microseconds
    distance = SecToCm(time);        // call function to calculate the distance
    delay(1000);
  } while (distance > 4.0);
  Serial.println("Hello, please swipe your ID");
  //Serial.println(" ");
  delay(2000);
  //Open_Door();
  Red_tag();
  Iden_tag();
  resetID();
  Serial.println(" ");
}
////////////////////////////////////////////////////////////////////////////
void Access() {
  // turns the green LED on if the ID is known to the system
  digitalWrite(G_LED, HIGH);
  delay(2000);
  digitalWrite(G_LED, LOW);
}
///////////////////////////////////////////////////////////////////////////////
void Denied() {
  // blinks the red LED if the ID is not known to the system
  for (int i = 0; i < 3; i++) {
    digitalWrite(R_LED, HIGH);
    delay(1000);
    digitalWrite(R_LED, LOW);
    delay(300);
  }
}
//////////////////////////////////////////////////////////////////////////////
float SecToCm(float time) {
  // convert the time measured into distance in centimeters
  // the ultrasonic reads 29 microseconds per centimeter
  return time / 29.0 / 2.0;
}
///////////////////////////////////////////////////////////////////////////////////////////////
void Red_tag() {
  // if a reading is available, let the ID-12 read each char and store the value
  while (mySerial.available() > 0) {
    //for(p=0;p<13;p++)
    {
      val = mySerial.read();
      IDstring[p] = val;
      //Serial.println(val);
      p++;
    }
  }
  p = 0;
  delay(500);
  Serial.print("ID Number: ");
  delay(500);
  // print out on the serial monitor the RFID unique ID stored in the string
  for (i = 0; i < 13; i++) {
    Serial.print(IDstring[i]);
  }
  Serial.println();
  delay(1000);
  Serial.print("Please wait while checking");
  Serial.println();
}
////////////////////////////////////////////////////////////////////////////////
void Iden_tag() {
  boolean reading = true;   // to check if the ID is known
  boolean reading1 = false; // to check if the first ID is not known
  boolean reading2 = false; // to check if the second ID is not known
  for (int i = 0; i < 12; i++) // comparing the new button read to ID 2 stored in the system
  {
    if (IDstring[i+1] != id_tag2[i]) {
      reading2 = true; // indicate that the ID is not the same
      reading = false;
      //break;
    }
  }
  if (reading == true) {
    delay(1500);
    Serial.println("Access granted");
    Serial.print("ID belongs to ");
    Serial.println(nam_tag[1]);
    Access(); // call a function to light up the green LED
    delay(2000);
  }
  reading = true;
  for (int i = 0; i < 12; i++) // why 12 not 13? try to figure out
  {
    if (IDstring[i+1] != id_tag1[i]) {
      reading1 = true;
      reading = false;
      //break;
    }
  }
  {
    if (reading == true) {
      delay(1500);
      Serial.println("Access granted");
      Serial.print("ID belongs to ");
      Serial.println(nam_tag[0]);
      Access();
      delay(2000);
    }
    if (reading1 == true && reading2 == true) {
      delay(1000);
      Denied();
      Serial.println("Access Denied");
      Serial.println("Please try again");
    }
  }
}
///////////////////////////////////////////////////////////
void resetID() {
  for (int i = 0; i < 13; i++) {
    IDstring[i] = 0;
  }
}
The first part of the code includes the serial header file that is used to implement serial communication between the Arduino and the RFID reader module. We then create an object, in this case "mySerial", to assign the communication pins. Then we define the global variable pins used by the LEDs and the ping ultrasonic sensor. Also, as global variables, we have created character arrays to hold the two ID numbers that we got from the previous code, as well as two names that are assigned to each ID number. These variables can be used by any function declared in this program.
Inside the setup function we have defined the serial communication between the computer and the Arduino, as well as between the ID-12 and the Arduino, both at the same baud rate (9600). Then we defined which pins are used as outputs or inputs.
When we look at the loop function, the first thing we did was use another loop, known as a do-while loop. Inside this loop, we calculate the distance at least once and check if the target is located less than 4 cm away. In order to calculate the distance, we first have to send out a pulse from the ultrasonic sensor. This is done by sending a Low-High-Low sequence on digital pin 6. After the pulse hits a target and returns to the sensor, the sensor outputs an echo pulse. Using pulseIn(), we can measure the echo pulse width in microseconds and then convert it into distance. According to the datasheet for the ping ultrasonic sensor, the speed of sound is 340 m/s, which works out to about 29 microseconds per centimeter.

Calling the function Red_tag, we scan and store any RFID button; to do this, we created a new array to hold each character. The Iden_tag function compares the button ID stored in that array against the id_tag values created at the beginning of the code. We then see the output on the serial monitor as well as on the LEDs.
Fig 4: Screenshot of the serial montior displaying the output
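The matching logic in Iden_tag, skipping the STX character at position 0 and comparing the remaining 12 characters against each stored tag, can be sketched in Python as follows (the dictionary and function names are illustrative, not part of the Arduino sketch):

```python
KNOWN_TAGS = {"78003BDE66FB": "Derek", "78003BF78B3F": "Jay"}

def identify(raw13):
    """raw13 is the 13-char buffer read from the ID-12 (STX + 12 chars).
    Returns the owner's name, or None when access should be denied."""
    candidate = raw13[1:13]           # drop the STX byte, keep DATA + CHECKSUM
    return KNOWN_TAGS.get(candidate)

print(identify("\x0278003BDE66FB"))   # Derek
print(identify("\x02FFFFFFFFFFFF"))   # None
```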
If the detector doesn't work after running the code, check the connections as well as for any typos in the code. As we conclude this tutorial, we can now use this project for monitoring any object with an RFID tag.
If you have any questions about this tutorial, do not hesitate to post a comment, shoot us an email, or post it in our forum.
io-pkt-v4, io-pkt-v4-hc, io-pkt-v6-hc
Networking manager
Syntax:
io-pkt-variant [-d driver [driver_options]] [-i instance] [-P priority] [-p protocol [protocol_options]] [-t threads] [-v]
where variant is one of v4, v4-hc, or v6-hc.
The BlackBerry 10 OS includes only io-pkt-v6-hc.
Options:
- -d driver [driver_options]
- Start the specified devn-* or devnp-* driver:
- You can specify driver without the devn- or devnp- prefix or the .so extension. If you specify the driver this way, io-pkt* looks for a devnp- version first. If there isn't one, io-pkt* loads the legacy io-net (devn-) version, using a special "shim" layer, devnp-shim.so .
- If you want to load a specific version of a driver, specify the full path of the module (e.g. /lib/dll/devn-i82544.so).
The driver_options argument is a list of driver-specific options that the stack passes to the driver.
Use commas, not spaces, to separate the options.
The stack processes various driver options; for more information, see " Generic driver options," below.
- -i instance
- The stack instance number, which is useful if you're running multiple instances of io-pkt. The io-pkt manager will service mount requests of type io-pkt X, where X is the instance number. For example:
io-pkt-v4 -i1 -ptcpip prefix=/alt
mount -Tio-pkt1 /lib/dll/devnp-i82544.so
- -P priority
- The priority to use for io-pkt's main thread. The default is 21.
- -p protocol [protocol_options]
- The protocol to start, followed by a list of protocol-specific options.
Use commas, not spaces, to separate the options.
The available protocols include:
- -S
- Don't register a SIGSEGV handler to quiesce the hardware if a segmentation violation occurs. This can help with debugging if it isn't possible to get a backtrace to the original code that generated the SIGSEGV through the signal handler.
- -t threads
- The number of processing threads to create. By default, one thread is created per CPU. These threads are the packet-processing threads that operate at Layer2 and may become the stack thread.
- -v
- If any errors occur while loading drivers and protocols, io-pkt sends messages to slogger . If you specify this option, io-pkt also displays them on the console.
If you specify the -p tcpip protocol, the protocol_options list can consist of one or more of the following, separated by commas without whitespace:
- bigpage_strict
- If the value of the pagesize option is bigger than sysconf(_SC_PAGESIZE), it's used only for the mbuf and cluster pools unless you also specify this option, in which case the page size is used for all pools.
- cache= 0
- Disable the caching of packet buffers. This should be needed only as a debugging facility.
- confstr_monitor
- Monitor changes to configuration strings, in particular CS_HOSTNAME. By default, io-pkt gets the hostname once at startup.
- enmap
- Prevent automatic stack mapping of en XX interface names to the actual interface names. By default, the stack automatically maps the first registered interface to en0 (if a real en0 isn't present), the second interface to en1, and so on, in order to preserve backwards compatibility with io-net-style command lines.
- fastforward= X
- Enable (1) or disable (0) fastforwarding path. This is useful for gateways. This option enables, and is enabled by, forward; to enable only forward, specify forward,fastforward=0.
- forward
- Enable forwarding of IPv4 packets between interfaces; this enables fastforward by default. The default is off.
- forward6
- (io-pkt-v6-hc only) Enable forwarding of IPv6 packets between interfaces; off by default.
- ipsec
- (io-pkt-v4-hc and io-pkt-v6-hc only) Enable IPsec support; off by default.
- mbuf_cache= X
- As mbufs are freed after use, rather than returning them to the internal pool for general consumption, up to X mbufs are cached per thread to allow quicker retrieval on the next allocation.
- mclbytes= size
- The mbuf cluster size. A cluster is the largest amount of contiguous memory used by an mbuf. If the MTU is larger than a cluster, multiple clusters are used to hold the packet. The default cluster size is 2 KB (to fit a standard 1500-byte Ethernet packet).
- pagesize= X
- The smallest amount of data allocated each time for the internal memory pools. This quantum is then carved into chunks of varying size, depending on the pool.
- pfil_ipsec
- (io-pkt-v4-hc and io-pkt-v6-hc only) Run packet filters on packets before encryption. The default is to do it after encryption.
- pkt_cache= X
- As mbuf and cluster combinations are freed after use, rather than return them to the internal pool for general consumption, up to X mbufs and clusters are cached per thread to allow quicker retrieval on the next allocation.
- pkt_typed_mem= object
- Allocate packet buffers from the specified typed memory object. For example:
io-pkt -ptcpip pkt_typed_mem=ram/dma
- prefix= /path
- The path to prepend to the traditional /dev/socket. The is useful when running multiple stacks. Clients can target a particular stack by using the SOCK environmental variable. For example:
io-pkt -ptcpip prefix=/alt
SOCK=/alt ifconfig -a
- random
- Use /dev/random as the source of random data. By default, io-pkt uses a builtin pseudo-random number generator.
- recv_ctxt= X
- Specify the size of the receive context buffer, in bytes. The default is 65536; the minimum is 2048.
- reuseport_unicast
- If using the SO_REUSEPORT socket option, received unicast UDP packets are delivered to all sockets bound to the port. The default is to deliver only multicast and broadcast to all sockets.
- rx_prio= X or rx_pulse_prio= X
- The priority for receive threads to use (the default is 21). A driver-specific priority option (if supported by the driver) can override this priority.
- somaxconn= X
- Specify the value of SOMAXCONN, the maximum length of the listen queue used to accept new TCP connections. The minimum is the value in <sys/socket.h>.
- stacksize= X
- Specify the size of each thread's stack, in bytes. The default is 4096.
- threads_incr= X
- If the supply of threads is exhausted, increment their number by this amount, up to the value of threads_max. The default is 25.
- threads_max= X
- Specify the maximum number of threads. The default is 200.
- threads_min= X
- Specify the minimum number of threads. The default is 15, and the minimum is 4.
- timer_pulse_prio= priority
- The priority to use for the timer pulse. The default is 21.
Description:
The io-pkt manager provides support for Internet domain sockets, Unix domain sockets, and dynamically loaded networking modules. It comes in several stack variants:
- io-pkt-v4
- An IPv4 memory-reduced variant that doesn't support:
- IPv6
- Crypto / IPSec
- 802.11 a/b/g Wi-Fi
- Bridging
- GRE / GRF
- Multicast routing
- Multipoint PPP
-.
- You can use umount to unmount legacy io-net drivers, but not io-pkt* drivers. Other drivers may allow you to detach the driver from the stack, by using ifconfig 's destroy command (if the driver supports it).
- If io-pkt runs out of threads, it sends a message to slogger , and anything that requires a thread blocks until one becomes available.
- Native io-pkt and ported NetBSD drivers don't put entries into the /dev/io-net namespace, so a waitfor command for such an entry won't work properly in buildfiles or scripts. Use if_up -p instead; for example, instead of waitfor /dev/io-net/en0, use if_up -p en0.
- If a TCP/IP packet is smaller than the minimum Ethernet packet size, the packet may be padded with random data, rather than zeroes.
The io-pkt manager supports TUN and TAP. To create the interfaces, use ifconfig:
ifconfig tun0 create
ifconfig tap0 create
For more information, see the NetBSD documentation:
-
-
Generic driver options
The stack processes the following generic driver options:
- name= prefix
- Override the default interface prefix used for network drivers. For example:
io-pkt-v4 -di82544 name=en
starts the devnp-i82544.so driver with the io-net-style interface naming convention (en XX). You can also use this option to assign interface names based on (for example) functionality:
io-pkt-v4 -di82544 pci=0,name=wan
- unit= number
- The interface number to use. If number is negative, it's ignored. By default, the interfaces are numbered starting at 0.
These options don't work with legacy io-net legacy drivers. If you attempt to use them with a devn- driver, the driver won't be loaded, and the log will include an "unknown option" error.
The stack also processes PCI device-selection driver options, such as vid=, did=, busno=, and devno=, which select a specific device.
For example:
io-pkt-v4-hc -drum did=0x0020,vid=0x13b1,devno=1,busno=1
Examples:
Last modified: 2014-05-14
#include <CGAL/Combinatorial_map.h>
Inherited by CGAL::Linear_cell_complex_for_combinatorial_map< class, class, class, class, class >.
The class Combinatorial_map represents a dD combinatorial map.
Darts and non-void attributes are stored in memory using Compact_container, using Alloc as allocator.
CombinatorialMap
Complexity
The complexity of sew and unsew is in O(|S| × |c|), S being the set of darts of the orbit ⟨β1, …, βi-2, βi+2, …, βd⟩ for the considered dart, and c the biggest j-cell merged or split during the sew such that j-attributes are non-void. The complexity of is_sewable is in O(|S|).

The complexity of set_attribute is in O(|c|), c being the i-cell containing the considered dart.

The complexity of is_without_boundary(i) is in O(|D|), D being the set of darts of the combinatorial map, and the complexity of is_without_boundary() is in O(|D| × d).

The complexity of unmark_all and free_mark is in O(1) if all the darts of the combinatorial map have the same mark, and in O(|D|) otherwise.

The complexity of is_valid is in O(|D| × d²).

The complexity of clear is in O(|D| × d).

All other methods have constant time complexity.
GenericMapItems
The Items class had to define the type of dart used, and the default class was Combinatorial_map_min_items. This is now deprecated: the Dart type is no longer defined in the items class, but is replaced by the Dart_info type. CGAL_CMAP_DART_DEPRECATED can be defined to keep the old behavior.
The tuple of cell attributes.
Equal to CGAL::cpp11::tuple<> if Attributes is not defined in the items class.
Information associated with darts.
Equal to void if Dart_info is not defined in the items class.
25 July 2007 03:24 [Source: ICIS news]
SHANGHAI (ICIS news)--China's methanol output reached 4.6m tonnes in the first half of the year, up a sharp 34.1% compared with the same period a year ago on growing demand from downstream sectors and emerging industries, the National Bureau of Statistics said late on Tuesday.
Formaldehyde, as the main downstream user of methanol, has seen steady growth propelled by increasing demand from thermosetting resins, methylene diphenyl diisocyanate (MDI) and polymethylene oxide (POM).
Output of acetic acid, another major derivative of methanol, has also grown fast, surging by 56% to 1.4m tonnes in 2006 from a year ago, the National Acetic Acid Industry Association said.
Rising demand from purified terephthalic acid (PTA), acetic ether, vinyl acetate, chloroactic acid and acetic anhydride contributed to the hike.
In addition, huge potential from emerging sectors, such as methanol/dimethyl ether (DME) fuels and methanol-to-olefins, would also play an important role in boosting methanol demand, the China Petroleum and Chemical Industry Association (CPCIA) said.
On the whole, China’s methanol demand was estimated to reach 18-21m tonnes/year by 2010, while the country was expected to bring on stream 12m tonnes/year of new methanol capacity by the same year.
Total national capacity and output are estimated to reach 25m tonnes/year and 18m tonnes/year respectively, CPCIA said.
In the first half of this year, benzene output also saw significant growth of 29% to 2.2m tonnes, the bureau said.
First-half output of calcium carbide, polyvinyl chloride (PVC) and polypropylene (PP) increased by more than 20% to 6.9m, 4.7m and 3.5m tonnes respectively, while PP fibre output fell 6.3% year on year.
*% changes are year on year
Source: National Bureau
C Programming/C Reference/stdlib.h/abort
abort() is a C standard library function used to terminate a program or process abnormally. It is called when the program encounters a condition it cannot handle during execution and must exit immediately.
Introduction
It deletes buffers and closes all open files before ending the program, then returns control to the host environment. This means that the abort call never returns. Because of this characteristic, it is often used to signal fatal conditions in support libraries, situations where the current operation cannot be completed but the main program can perform cleanup before exiting. It is also used if an assertion fails.
Header files & Syntax
#include <stdlib.h>

void abort( void );
Return Value
This function does not return any value; i.e., its return type is void.
Thread safety
It is one of the thread-safe functions of the C standard library; i.e., it can be called by different threads without any problem.
Example
This example tests for successful opening of the file myfile. If an error occurs, an error message is printed, and the program ends with a call to the abort() function.
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
   FILE *stream;

   if ((stream = fopen("mylib/myfile", "r")) == NULL)
   {
      perror("Could not open data file");
      abort();
   }
}
Diophantine Reciprocals
September 19, 2014
Career Cup claims that Amazon asked this as an interview question; it is also Problem 108 at Project Euler:
In the following equation x, y and n are positive integers: 1/x + 1/y = 1/n. For n = 4 there are exactly three distinct solutions: 1/5 + 1/20 = 1/6 + 1/12 = 1/8 + 1/8 = 1/4. What is the least value of n for which the number of distinct solutions exceeds one thousand?
Your task is to solve Amazon’s question; you might also like to make a list of the x, y pairs that sum to a given n. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
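As a quick sanity check of the statement (a brute force of mine, not part of the exercise): for 1/x + 1/y = 1/n with x ≤ y, x must lie in (n, 2n], so the solutions for small n can be enumerated directly:

```python
def solutions(n):
    """All (x, y) with 1/x + 1/y == 1/n and x <= y."""
    out = []
    for x in range(n + 1, 2 * n + 1):   # 1/x < 1/n forces x > n; x <= y forces x <= 2n
        if (n * x) % (x - n) == 0:      # y = n*x/(x-n) must be an integer
            out.append((x, n * x // (x - n)))
    return out

print(solutions(4))   # [(5, 20), (6, 12), (8, 8)] -- the three solutions above
```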
Here’s another way to solve this (taking the problem to be finding the smallest number whose square has at least 2000 divisors).
A number of the form (p1^a1)..(pn^an), for primes p1,…pn, has (a1+1)…(an+1) divisors and its square has (2a1+1)…(2an+1) divisors. For a given a1…an, which we can assume to be in non-increasing order, the smallest such number is where p1 = 2, p2 = 3 etc. Therefore by generating all such smallest numbers in order and checking the number of divisors, we will find the solution we seek. So, we need to generate all non-increasing sequences a1,a2,a3,… in order of (2^a1)(3^a2), and we can do this by using a variant of Dijkstra’s shortest path algorithm on a graph of sequences, with edges (a1,…,an) -> (a1,…,an,1) and (a1,…,ai,…,an) -> (a1,…,ai+1,…an) (as long as that is non-increasing), so, for example, ()->(1)->(2)->(2 1)->(2 2)->(3 2)->(3 2 1) is a path in the graph, representing 1->2->4->12->36->72->360. We maintain a queue of unvisited sequences and each stage we output the smallest unvisited sequence/number and add its successors into the queue (if they aren’t already there).
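That queue-driven search is compact enough to sketch in full. The version below is my own Java rendering of the idea (not the commenter's C++): exponent sequences are popped in increasing order of the number they encode, and the first one whose square has enough divisors is the answer.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashSet;
import java.util.List;
import java.util.PriorityQueue;
import java.util.Set;

public class ReciprocalSearch {
    static final long[] PRIMES = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29};

    // The number encoded by exponents (a1, a2, ...): 2^a1 * 3^a2 * ...
    static long value(List<Integer> exps) {
        long v = 1;
        for (int i = 0; i < exps.size(); i++) {
            for (int j = 0; j < exps.get(i); j++) {
                v *= PRIMES[i];
            }
        }
        return v;
    }

    // Distinct {x, y} solutions of 1/x + 1/y = 1/n: (d(n^2) + 1) / 2.
    static long solutions(List<Integer> exps) {
        long d = 1;
        for (int e : exps) {
            d *= 2L * e + 1;
        }
        return (d + 1) / 2;
    }

    static long smallestWithSolutionsOver(long limit) {
        PriorityQueue<List<Integer>> queue =
            new PriorityQueue<>(Comparator.comparingLong(ReciprocalSearch::value));
        Set<List<Integer>> seen = new HashSet<>();
        List<Integer> start = List.of(1);          // the number 2
        queue.add(start);
        seen.add(start);
        while (true) {
            List<Integer> exps = queue.poll();     // smallest unvisited number
            if (solutions(exps) > limit) {
                return value(exps);
            }
            // Successor 1: append a new prime with exponent 1.
            List<Integer> longer = new ArrayList<>(exps);
            longer.add(1);
            if (seen.add(longer)) queue.add(longer);
            // Successor 2: bump an exponent, keeping the list non-increasing.
            for (int i = 0; i < exps.size(); i++) {
                if (i == 0 || exps.get(i) < exps.get(i - 1)) {
                    List<Integer> bumped = new ArrayList<>(exps);
                    bumped.set(i, exps.get(i) + 1);
                    if (seen.add(bumped)) queue.add(bumped);
                }
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(smallestWithSolutionsOver(1000)); // 180180
    }
}
```

Restricting the search to non-increasing exponents on the smallest primes is safe because swapping any violation would shrink the number without changing the divisor count of its square.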
Here’s a C++ implementation using STL heap operations to manage the queue and an STL set to keep track of seen items; runtime is about 4 ms.
Erlang solution, factoring is done by a 2,3,5,7 factor wheel.
Here’s another solution: if n = (p^k)m where p is prime and doesn’t divide m, then divisors(n) = (k+1)*divisors(m), with m < n, so for each n, we just need to apply trial division until we have found a single prime power factor, then apply the recurrence with a table lookup:
#include <iostream>
#include <vector>

const int K = 2; // We want divisors of n^2

int divisors(int n, const std::vector<int> &a)
{
    // Find a prime power factor, reduce n and
    // apply recurrence.
    int k = 0;
    for (int p = 2; p*p <= n; p++) {
        while (n % p == 0) { n /= p; k++; }
        if (k > 0) return (1+K*k)*a[n];
    }
    return 1+K; // n is prime
}

int main()
{
    int N = 2000;
    std::vector<int> a;
    a.push_back(0); a.push_back(1);
    for (int n = 2; ; n++) {
        a.push_back(divisors(n,a));
        if (a[n] >= N) {
            std::cout << n << "\n";
            break;
        }
    }
}
using Objects and Classes to Add Students to Courses
Kd Martin
Ranch Hand
Joined: Nov 28, 2011
Posts: 58
posted
Nov 28, 2011 00:33:07
0
I have a few questions about this assignment. First, I'm having some trouble figuring out how to add a student to a course and keep an index of the courses the student is enrolled in. I also don't understand what exactly I am supposed to use the Student array for. The instructions said to add the setCapacity method, but I don't know what it is supposed to be used for. We just started covering objects and creating classes, and I feel so lost.
If anyone could shed some light on these questions for me, it would be greatly appreciated!
Here is my assignment:
Part I: (15 pts) Create a Student class with the following:
A private String variable named “name” to store the student’s name
A private integer variable named “UFID” that contains the unique ID number for this student
A private String variable named “DOB” to store the student’s date of birth
A private integer class variable named numberOfStudents that keeps track of the number of students that have been created so far
A public constructor Student(String name, int UFID, String dob)
Several public get/set methods for all the properties
getName/setName
getUFID/setUFID
getDob/setDob
Part II: (15 pts) Create a Course class with the following:
A private String variable named “name” to store the course’s name, such as COP3502 etc.
A private array named “students” Student[] students.
A private integer variable “capacity” for the maximum number of students allowed in this class
A private integer variable “currentEnrollment” for the number of students enrolled in the course right now
A private integer class variable named numberOfCourses that keeps track of the number of courses that have been created so far
A public constructor Course(String n, int cap)
Several get/set methods for all the properties
getName/setName
getCap/setCap
etc.
A public method enrollStudent (Student s) to add s to this course and return true if the student was successfully added, or false if not
A public method removeStudent(Student s) to remove s from the students array
Part III: (20 pts) Create a test class to check whether the above classes work properly. It should be able to create Course and Student objects, enroll or drop students in these courses, and print out the currently enrolled courses for a given student.
Here is the code I have so far:
Course class:
public class Course {
    private String name; //Name of course
    private Student[] students; //Declare array
    private int capacity; //Number of students allowed in course
    private int currentEnrollment; //number of students enrolled
    private static int numberOfCourses; //Number of courses created so far

    //Constructor
    Course(String n, int cap) {
        name = n;
        capacity = cap;
        students = new Student[capacity]; //Create array
    }

    //Get and set methods
    String getName() {
        return name;
    }

    public void setName(String newName) {
        name = newName;
    }

    int getCapacity() {
        return capacity;
    }

    public void setCapacity(int newCapacity) {
        capacity = newCapacity;
    }

    //Method to add a student (s) to the course and return true if added and false if not
    public static boolean addStudent(Student s) {
        if (currentEnrollment >= capacity) { //See if course has room
            currentEnrollment++;
            return true;
        } else {
            return false;
        }
    }

    //Method to remove student from a class
    public static void removeStudent(Student s) {
        currentEnrollment--;
    }
}
Student class:
public class Student {
    private String name;
    private int UFID;
    private String DOB;
    private static int number;

    //Constructors
    Student(String name, int UFID, String DOB) {
        this.name = name;
        this.UFID = UFID;
        this.DOB = DOB;
    }

    //Get and set methods
    String getName() {
        return name;
    }

    public void setName(String newName) {
        name = newName;
    }

    int getUFID() {
        return UFID;
    }

    public void setUFID(int newUFID) {
        UFID = newUFID;
    }

    String getDob() {
        return DOB;
    }

    public void setDob(String newDOB) {
        DOB = newDOB;
    }
}
Thank you so much!
Pranav Raulkar
Ranch Hand
Joined: Apr 20, 2011
Posts: 73
I like...
posted
Nov 28, 2011 01:05:23
1
Hi Kd Martin,
In order to know which courses the student has enrolled to, you need to have something, ideally, an collection of Course objects in your student class.
Hope this helps.
Campbell Ritchie
Sheriff
Joined: Oct 13, 2005
Posts: 36453
15
posted
Nov 28, 2011 04:56:48
1
Why have you got a number field in the student class? You do not appear to use it for anything. Are you supposed to use that field to set the ID number? If so, you would not want to pass the ID to the constructor.
Have you got any way of getting courses from the student class? The question about which course a student enrols for sounds like something out of a database assignment, where you get a relation between student and course and you can query it both ways. You can search through all the courses for a particular student, but that is hardly an elegant or efficient solution. You can try a bidirectional map, but that is probably too complicated for your level of experience.
You can add some way of recording courses inside the Student class, you can have students recorded in each course, and courses recorded in each student. You would have to set some sort of maximum, limiting how many courses each student may take at once. You need a method for each course which records a student, and a method for each student which records courses.
Those methods mustn’t be static. If you have a means of recording students, you must be able to add a student to the array, and also remove a student. Adding is quite easy, removing from an array slightly more difficult. If you go into your Java™ installation folder, you find a file called src.zip. Unzip that, go into the java folder, the util folder, and the ArrayList.java file, and you can see how it is done there.
What happens when you change the capacity of the course? If you increase capacity, are you creating a new array and copying the students from the old array?
If you use an array, you don’t need a capacity field; you can use the length field of the array. Otherwise you are storing the same datum in two places and there is a risk of getting the two values different from each other.
What happens if you reduce the capacity, say from 20 to 15, after 16 students have already enrolled? You should consider what I have struck through and discuss it with your teacher.
Do you really want to change the ID in a student object?
How are you going to print out the details of a student or a course? Have you got toString() methods ready to write? I would remind you not to use the + operator on Strings in more than one line; use a StringBuilder instead.
I have made lots of suggestions. Lots and lots. You will have to implement them. Take it easy and don’t let a 2-inch block of solid text on screen scare you. You implement those suggestions one at a time, not writing more than 5 lines of code before you compile and run it. You need a class with some sort of method which creates Student objects, prints their details, changes their details, enrols them on courses, etc., etc. You build up this method one line at a time, running it frequently . . . and when you finish that, you have got half your “testing class” ready-made
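One concrete shape this two-way record-keeping can take, sketched with illustrative names (this is not the assignment's required design, just a demonstration of recording each enrolment on both sides):

```java
import java.util.ArrayList;
import java.util.List;

class Student {
    final String name;
    final List<Course> courses = new ArrayList<>();   // courses this student takes
    Student(String name) { this.name = name; }
}

class Course {
    final String name;
    final int capacity;
    final List<Student> students = new ArrayList<>(); // students enrolled here
    Course(String name, int capacity) {
        this.name = name;
        this.capacity = capacity;
    }

    // Instance method, not static: it reads and writes this course's list.
    boolean enrollStudent(Student s) {
        if (students.size() >= capacity || students.contains(s)) {
            return false;                 // full, or already enrolled
        }
        students.add(s);
        s.courses.add(this);              // record the link on both sides
        return true;
    }

    void removeStudent(Student s) {
        if (students.remove(s)) {
            s.courses.remove(this);       // keep both sides consistent
        }
    }
}

public class EnrollmentDemo {
    public static void main(String[] args) {
        Course cop3502 = new Course("COP3502", 2);
        Student ann = new Student("Ann");
        System.out.println(cop3502.enrollStudent(ann)); // true
        System.out.println(ann.courses.size());         // 1
    }
}
```

Because each link is stored twice, printing a student's current courses is just a walk over s.courses, and dropping a student cannot leave a stale entry on either side.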
Kd Martin
Ranch Hand
Joined: Nov 28, 2011
Posts: 58
posted
Nov 29, 2011 19:39:29
0
Great thank you!!
Also, how do I go about passing a student object I created to the array created in the course class?
Campbell Ritchie
Sheriff
Joined: Oct 13, 2005
Posts: 36453
15
posted
Nov 30, 2011 08:17:38
1
Same way you would do it for any other method
myObject.foo(myArray[123]);
Obviously you have to ensure that your object has the appropriately named method, which takes an argument of the type of the elements of that array, that the array actually has a member No 123, etc etc.
Kd Martin
Ranch Hand
Joined: Nov 28, 2011
Posts: 58
posted
Nov 30, 2011 17:31:18
0
I ended up figuring it out!! Thanks so much!
Campbell Ritchie
Sheriff
Joined: Oct 13, 2005
Posts: 36453
15
posted
Dec 01, 2011 11:37:30
0
Well done
It is sorta covered in the JavaRanch Style Guide.
I am new to developing network apps, and I am currently developing a Metro app for Windows 8.
I have a DLL that uses the Winsock API to support raw sockets. I found out that Winsock is not supported in Windows 8 Metro apps.
So I am planning to rewrite my own DLL in C++ using Windows.Networking.Sockets.
My questions are...
(1) What is the alternative function to create a socket object like the one below?
sock = socket(iFamily, iType, iProtocol);
(2) What is the alternative function for connect?
connect((SOCKET)sockfd, (SOCKADDR*)&addr, sizeof(SOCKADDR_IN));
It would be appreciated if someone could provide samples or a webpage I can refer to.
Thanks much!
Take a look at the DatagramSocket sample and the StreamSocket sample, depending on what type of socket you need to use.
Also see Connecting to peers, web and network services for a non-socket-specific overview of networking support; Connecting to network services is the socket-specific sub-section.
--Rob
Thank you Rob,
Let me ask it this way: how do I create a socket descriptor object using StreamSocket?
I need to create a socket descriptor, which I could create using the Winsock API.
I need it because I have a library that accepts a raw socket, and I have to pass it the socket descriptor that I can create with the socket function under Winsock.
Thank you for your help.
StreamSocket does not expose a socket descriptor. The Windows::Networking::Sockets namespace abstracts the connection quite differently from how winsock did. You will need to modify your library not to rely on winsock structures or objects. | https://social.msdn.microsoft.com/Forums/en-US/49aaa678-73f1-47e3-989c-c411d9bbc106/windows-socket-2-and-windows-8-metro-app?forum=winappswithnativecode | CC-MAIN-2020-40 | refinedweb | 261 | 59.5 |
rbb 00/08/05 19:21:06
Modified: src/include mpm_common.h
Log:
Update the mpm_common.h file with docs to use ScanDoc
Revision Changes Path
1.8 +41 -0 apache-2.0/src/include/mpm_common.h
Index: mpm_common.h
===================================================================
RCS file: /home/cvs/apache-2.0/src/include/mpm_common.h,v
retrieving revision 1.7
retrieving revision 1.8
diff -u -r1.7 -r1.8
--- mpm_common.h 2000/08/02 05:25:29 1.7
+++ mpm_common.h 2000/08/06 02:21:06 1.8
@@ -74,14 +74,55 @@
extern "C" {
#endif
+/**
+ * @package Multi-Processing Modules functions
+ */
+
#ifdef HAVE_NETINET_TCP_H
#include <netinet/tcp.h> /* for TCP_NODELAY */
#endif
+/**
+ * Make sure all child processes that have been spawned by the parent process
+ * have died. This includes process registered as "other_children".
+ * @warning This is only defined if the MPM defines
+ * MPM_NEEDS_RECLAIM_CHILD_PROCESS
+ * @param terminate Either 1 or 0. If 1, send the child processes SIGTERM
+ * each time through the loop. If 0, give the process time to die
+ * on its own before signalling it.
+ * @tip This function requires that some macros are defined by the MPM: <PRE>
+ * MPM_SYNC_CHILD_TABLE -- sync the scoreboard image between child and parent
+ * MPM_CHILD_PID -- Get the pid from the specified spot in the scoreboard
+ * MPM_NOTE_CHILD_KILLED -- Note the child died in the scoreboard
+ */
void ap_reclaim_child_processes(int terminate);
+
+/**
+ *
+ */
void ap_wait_or_timeout(ap_wait_t *status, apr_proc_t *ret, apr_pool_t *p);
+
+/**
+ * Log why a child died to the error log, if the child died without the
+ * parent signalling it.
+ * @param pid The child that has died
+ * @param status The status returned from ap_wait_or_timeout
+ */
void ap_process_child_status(apr_proc_t *pid, ap_wait_t status);
+
void ap_sock_disable_nagle(int s);
#else
#define ap_sock_disable_nagle(s) /* NOOP */ | http://mail-archives.apache.org/mod_mbox/httpd-cvs/200008.mbox/%3C20000806022106.37964.qmail@locus.apache.org%3E | CC-MAIN-2016-50 | refinedweb | 268 | 59.8 |
Did you get an answer for this problem? (being 7 days old and all)
You might want to post this to the geronimo mailing list for XBean. It's
really hard to get any answers from anyone except David Jencks and Dain
Sundstrom. You can also try the IRC channel.
Alex
On Wed, Nov 19, 2008 at 4:50 PM, Emmanuel Lecharny <elecharny@gmail.com>wrote:
> Hi,
>
> as I have added some properties into the AbstractProtocolService class, I'm
> now trying to get them listed on the web site (documentation effort ...).
>
> The problem I have is that all the setXXX() methods are immediately seen as
> configuration parameters, even if it's not the case.
>
> We are using a generic XBean annotation :
>
> * @org.apache.xbean.XBean
> */
> public class NtpServer extends AbstractProtocolService
> ...
>
> which uses reflection to construct the XSD file from the Java class. What I
> would like to do is to remove the useless parameters from this XSD file when
> the maven-xbean-plugin is run. For instance, I don't want :
> - DatagramAcceptor
> - SocketAcceptor
> - DirectoryService (we don't use it for the NtpServer)
> - ServiceID (This is a technical info which will never change)
> - ServiceName (This is a technical info which will never change)
> - started (it's a protected boolean set by the server itself, no need to
> configure it)
> - TransportProtocols
>
> Anyone knows how to get those elements not generated as part of the XSD
> file ?
>
> I have looked at the very sparse xbean doco, but didn't find anywhere
> something helpful. What I would like to do is to add some annotation to tell
> XBean not to use a setter as a configuration element. Or the opposite :
> declare all the configurable element in the top level class, telling xbean
> not to dig into the class and its parents for a new configurable element.
>
> Is it possible ?
>
> Thanks !
>
> --
> --
> cordialement, regards,
> Emmanuel Lécharny
>
> directory.apache.org
>
>
> | http://mail-archives.apache.org/mod_mbox/directory-dev/200811.mbox/%3Ca32f6b020811261146j13ef60b4lea1cb9255e9778b0@mail.gmail.com%3E | CC-MAIN-2019-18 | refinedweb | 312 | 62.38 |