text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
I have the following code: #include <iostream> using namespace std; int main(int argc, char *argv[]) { string &s1 = argv[0]; // error const string &s2 = argv[0]; // ok argv[0] = "abc"; cout << argv[0] << endl; // prints "abc" cout << s2 << endl; // prints program name } s1 invalid initialization of reference of type 'std::string& {aka std::basic_string<char>&}' from expression of type 'char*' s2 argv[0] s2 -std=c++11 Then why does the compiler accept s2? Because a constant reference can bind to a temporary, and std::string has a constructor that can take a char * for a parameter. That assignment constructs a so-called "temporary" object, and binds a constant reference to it. Interestingly, when I assign a new value to argv[0] then s2 does not change. Why should it change? s2 is a separate object. A std::string object does not maintain a pointer to a char * that created it. There are many ways to create a std::string. The actual string is owned by the object. If the std::string is constructed from a literal character string, it gets copied into the std::string. So, if the literal character string comes from a buffer, and the buffer is subsequently modified, it has no effect on the std::string.
https://codedump.io/share/9Zi2HrKJd7Lu/1/convert-char-to-stringamp
CC-MAIN-2017-26
refinedweb
210
62.27
reviews,. Oracle Developer Cloud Service is included as a free entitlement with Oracle Java Cloud Service and Oracle Messaging Cloud Service. The Developer Cloud Service includes all the tools you need to support the team development lifecycle. There are popular open source tools such as Git, Maven and Hudson. There's also task management, code reviews and a wiki. The easiest way to experience the Oracle Developer Cloud Service is through a trial of the Oracle Java Cloud Service - SaaS Extension (click the "Try It" button). In this article I will introduce the Developer Cloud Service by using Maven to create a new web application, Git to manage the application source and Hudson to build and deploy my application to the Java Cloud Service. If you plan to follow along, this tutorial also assumes you already have Maven and Gitinstalled. I will be using the Git Console from my desktop to interface with the Developer Cloud Service. The tutorial also assumes, of course, that you have access to the Developer Cloud Service! As a first step, log into the Developer Cloud Service and create a new project. Give the project a Name, Description and select the Privacy setting (for team projects you should select Organization Private): For this example, we will not be using a template: And finally, chose the Wiki markup style you prefer and click Create Project to begin: As the project is being created, you can watch as the services are being provisioned for you. This should only take a couple of seconds: Soon, project service provisioning is complete and you're ready to start using your new project: Here we'll use Maven to quickly create a web application. I'm going to do this from the Git Console so I'll have a consistent local user interface throughout this tutorial. This is the Maven command I'll be running, which creates a fully functional Hello World web application: mvn archetype:generate -DgroupId=HelloDeveloperCloudService -DartifactId=HelloDeveloperCloudService -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false Let's now put the application under source control management. We begin by creating a Git repository for our web application. If you're not already familiar with Git, their Try Git tutorial provides an excellent introduction. Change into the newly created application directory and run: git init Next we need to add the files to the repository index and stage them for commit: git add . Now commit the new files to the local repository. git commit -m "Add initial webapp files" Now we need to add our remote repository (the one in the Developer Cloud) to Git. For this you'll need to copy the Git source code repository URL from your Developer Cloud Service project. 'dcs' is the name we will use to refer to this repository going forward: git remote add dcs {copy https URL from Developer Cloud Service project} Now push our web application project files to the Developer Cloud Service. The name of our remote is 'dcs' and the default local branch name is 'master'. The -u tells Git to remember the parameters, so that next time we can simply run git push and Git will know what to do. git push -u dcs master At this point you'll also be prompted for your Developer Cloud Service password: Now that our source code is in the Oracle Cloud, we can run a Hudson build Job. The Developer Cloud Service comes pre-configured with a sample Maven build job that will properly build this application. If you first want to see your code, click the Browse tab and click View Commits: Click the Build tab, where you'll see the provided Sample_Maven_Build: Click the Sample_Maven_Build link to open the job: Feel free to review the configuration (click the Configure link). When you're ready, click the Build Now link. The build will get queued, waiting for an executor: Soon thereafter, the build will begin: Feel free to click the console icon to track the Console Output. In my case, the build has already completed, so the build indicator ball has turned green: .......... Now that we have a successful build, we can configure deployment. Click the Deploy tab and then click New Configuration: Complete the Deployment Configuration. As I'll be using this configuration for development, I'll set it to deploy automatically after a successful build: Click Save to see your new Deployment Configuration: This deployment will run any time there's a new build. To run this build manually, click the gear icon and select Start: And wait a few seconds for the deployment to succeed: Click the Java Service link to navigate to your Java Service. There, under Jobs, you'll also find that the Deploy Application job is Complete: On the same page, under Applications, click the Test Application icon: Then click the Application URL in the window that pops up: Which will open your application in a new tab: As this application stands right now, it would require cloud credentials in order to run it. Let's quickly remedy that. Back in the Git Console, add and empty <login-config/> tag to the web.xml. You can read about the details of Updating web.xml in the Cloud Documentation. While were are here, let's also make a visible change to the index.jsp: Then commit: Push the changes to the Developer Cloud Service: Return to the Developer Cloud Service Console and optionally view the changed files: Switch to the Build tab and click the Schedule Build icon (just to the right of the Console icon). In the true spirit of continuous integration, it is also possible to configure Hudson to poll for changes to the repository, but that's now how this sample build has been configured. When the build completes, remember deployment is automatic. Just return to your browser and refresh the page: There is much more to experience with the Developer Cloud Service - Tasks, Code Reviews, Wikis. The plan for this tutorial was to give you enough to started. Are you ready?Updating web.xml in the Cloud Documentation.! I'm expanding on an earlier post where I explained how to deployJAX-RS Web Services to the Oracle Cloud. In that entry the web service simply returned a hard-coded "Hello World". In reality, you most likely want your web service to expose something more meaningful. In this post I'm going to expand the HelloJerseyApp to display the results from a table, which I'll expose using a JPA entity. Downloadthe HelloJerseyApp, which was created in the previous blog post, and open it in JDeveper. As I already have the Oracle HR sample database set up in the cloud, I will create an entity to represent the COUNTRIES table. Note, I also have a local copy of the HR sample database that I will initially use the generate the entity. I will switch the configuration to use the cloud datasource just before I deploy. Open the New Gallery and select Entities from Tables: Select EJB 3.0 - JPA Entities: Select Next to accept the default Persistence Unit: Select Online Database Connection as the Connection Type: Create a new (or copy an existing) connection to the HR schema: Select the COUNTRIES table: Optionally set the Package Name: And optionally change the entity name. I like to keep them singular: And click Finish to generate the Entity. The Java Service Facade has a Main method, which will allow you to easily test your methods before deploying to the cloud. Right-click the persistence.xml (that was generated when we created the entity) and select New Java Service Facade. I'm updating the Service Class Name tofacade.CountryFacade. For reasons beyond my understanding, this wizard does not recognize the persistence unit that was just created as part of the Create Entities from Tables wizard. No worries, we can have 2 persistence units. Select the defaults provided to create a new persistence unit: You can leave the Java Service Facade Method defaults: And click Finish to generate the facade class. public String[] getCountryList() { List<Country> allCountries = getCountryFindAll(); String[] countryList = new String[allCountries.size()]; for (int i = 0; i < allCountries.size(); i++) { countryList[i] = allCountries.get(i).getCountryName(); System.out.println(countryList[i]); } return countryList; } With the System.out.println, you can then call this method from the Main method to quickly test the entity: public static void main(String [] args) { final CountryFacade countryFacade = new CountryFacade(); // TODO: Call methods on countryFacade here... countryFacade.getCountryList(); } Right-click the source and select Run. Look for the results in the Output window: Add the @Path annotation to the class: import javax.ws.rs.Path; @Path("/country") public class CountryFacade { At the @Get, @Path and @Produces annotations to the method and delete the System.out.println: @GET @Path("/countries") @Produces(MediaType.APPLICATION_JSON) public String[] getCountryList() { List<Country> allCountries = getCountryFindAll(); String[] countryList = new String[allCountries.size()]; for (int i = 0; i < allCountries.size(); i++) { countryList[i] = allCountries.get(i).getCountryName(); } Now that the service method is behaving as we like, we need to update the persistence unit's datasource to the Oracle Cloud. Remember when we ran the New Java Service Facade wizard, a new persistence unit was created, HelloJerseyProj-1. We are going to switch the CountryFacade class to use the original persistence unit, HelloJerseyProj. @Path("/country") public class CountryFacade { private EntityManagerFactory emf = Persistence.createEntityManagerFactory("HelloJerseyProj"); We will then update the HelloJerseyProj persistence unit to call the cloud datasource, which is database. <persistence-unit <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider> <jta-data-source>javatrial0129db</jta-data-source> <class>entity.Country</class> <properties> <property name="eclipselink.target-server" value="WebLogic_10"/> <property name="javax.persistence.jtaDataSource" value="javatrial0129db"/> </properties> </persistence-unit> Deploy as before. Once deployed, you'll be able to access the list of countries at the following URL (swapping out my Identity Domain for yours): Web Forms is one of the exciting new features of the recently released BPM 11.1.1.7. To really embrace the power of Web Forms, one must understand Forms Rules, which can give your forms dynamic behavior, such as showing a field or calculating a sum total. In this example, I'm going to show you how to use Form Rules to dynamically populate a dropdown (or list). Using Business Process Composer, create a new BPM Process calledDynamicDropdown. Then create a new Web Form calledDropdownForm. Add 2 dropdowns on the form:Country and City. Your form should look as follows: Click the rules icon to toggle to the Form Rules: Create a new rule and click the edit Rule icon , then create the rule PopulateCountry as follows: If you're playing along at home, you are free to use the same public URL as it returns a list of 5 countries as JSON output (if you want your country added to the list, just let me know):. Just note that you will also need to add the Oracle Cloud's certificate to your WebLogic KeyStore See Calling an Oracle Cloud Service from Java for details. Note, the JavaScript editor validates when you tab out of the Rule field. So if there's an error for example: If you want to test run your form at this point, click the blue running man icon : It's very common to have the results of one field determine the contents of another. In this case we only want to see cities that apply to the selected country. Create a second Form Rule as follows: And that's all there is to it:: Exception in thread "main").:. Deploying the HRSystem application to the Oracle Cloud took 4 easy steps.. The HRSystem tutorial application uses the Oracle HR schema. SeeOracle HR Schema Objects in the Cloud for an example of how to do this step. you can find on the My Account Dashboard To update the connection information, Edit the Application Module: OK, I took the easy road here and just made my application publicly available. This is achieved by simply adding the empty <login-config/> tag to the bottom of web.xml (see Hello Oracle Cloud for instructions). However, I realize that's not realistic for most enterprise deployments. TheOracle Cloud documentation has a nice section on Managing Application Security: Managing Application Security This section describes how to secure Java EE and ADF applications targeted for a Java Cloud Service instance. Topics: - About Built-in Authentication - About Java EE Application Security - About ADF Application Security See Also: - "Securing Oracle Cloud" in Getting Started with Oracle Cloud There's also a nice tutorial available: Securing a Web Application on a Java Cloud Service. scripts directly. One thing to note when working with the Oracle Cloud is that you are given a single database schema, which appears to be just some randomly generated GUID. For example, my assigned schema is QBNVJDSFBKHK: So, technically, we will not be creating the HR schema, but rather the HR schema objects (tables, views, indices, ...) in our Oracle Cloud provided schema. The scripts to create the HR demo schema are included with your installation of the Oracle Database. I have provided copies of them here as well. Right click the scripts and save them to your hard drive: Sign in to access your Oracle Cloud Services and launch the database service. Then select SQL Workshop > SQL Scripts. Use the Upload button to Upload your scripts. At this time you need to upload them one at a time. When completed it should look as follows: Click the arrow to run the the hr_cre.sql script. You'll get a notice about the statements that will be ignored, but these are mostly for output formatting which are not a concern in our case: Click Run Now at the bottom of the page to run the script. After a couple seconds the job will complete: View the Results and most importantly, note the bottom of the page which summarizes the number of successful statements and statements with errors: Run the remaining scripts in the following order: And that's it. Now you have the Oracle HR schema objects available to you in the. Looking at the MySQL web site, the instructions for Installing MySQL Community Server seem more complicated then they need to be. Maybe that's because there are no instructions for OpenSolaris (yet - I hope). Here are the easy steps that got me up and running.:
https://community.oracle.com/blogs/bleonard?page=3
CC-MAIN-2017-43
refinedweb
2,401
52.6
OpenGL Spec Now Controlled by Khronos Group 245 Posted by ScuttleMonkey from the passing-the-torch dept. from the passing-the-torch dept.." Kronos? (Score:5, Funny) Re:Kronos? (Score:5, Funny) That would be the Q'onos group, you spineless p'taq! BTW, I included the "spineless p'taq" comment in order to keep with the theme, not because I'm trying to be insulting. I think of it as an "insensitive clod" joke, only with more glory and honor. Qapla! Re:Kronos? (Score:3, Funny) (Besides, this is Joy, Sorrow? (Score:2, Funny) Great! (Score:3, Insightful) Re:Great! (Score:4, Informative) [1] Note that nVidia actually have more than one namespace for their extensions, depending on how stable they are. What "affect" ** (Score:5, Insightful) Well reading TFA and not finding Microsoft on either their promoters [khronos.org] page or their contributors [khronos.org] page I'm cautiously optimistic. ** affect? effect? I can never keep this one straight either. of course... (Score:5, Informative) Although Microsoft has not been openly hostile. They distribute OpenGL with Windows. And although there are concerns that they are "crippling" the implementation they are shipping with Vista (of which I, personally, am skeptical), hardware vendors ATI and nVidia will be shipping the latest versions with their cards. Re:of course... (Score:2, Informative) It was up to the user to download proprietary drivers for their brand of video card. Re:of course... (Score:2) Re:of course... (Score:2) "I could care less about the pirated XP Home disk your Packard Bell came with..." LOL. Re:of course... (Score:2) Uh, they distributed OpenGL with NT4 and Windows 2000, without any 3d acceleration. Microsoft provided a full and compliant (to whatever version) software implementation of OpenGL at least since NT 3.51, the first version I ever used. Actually, I think it was the last version worth a shit, too, but it doesn't support modern hardware so it wouldn't be an option even if you could get programs to run on it :/ It's total news to me that the OpenGL layer in XP even has acceleration, not that I don't believe y Re:of course... (Score:2) Re:of course... (Score:3, Informative) So? OpenGL is an API specification, not a processor architecture. If GL-on-DX does what the spec says then it's every bit as much a 'real' OpenGL implementation as any other. Re:What "affect" ** (Score:2) Don't they just call that the Slashdot front page? (swish) Seriously though, affect/effect is just about the best known common error - how hard is it to pay attention when you post? What's next, confusing then and than? Re:What "affect" ** (Score:2) Ah, dont be so harsh... their not teh worst mistakes one's could make. Re:What "affect" ** (Score:2) For a complete treatment of the subject, read this [teachersfirst.com]. Re:What "affect" ** (Score:2) Actually "effect" and "affect" can both be either a verb or a noun. This is off-topic, of course, but these words are geniunely confusing. The way we use 'affect' as a verb sometimes seems to have more to with the noun 'effect' than either the verb 'effect' has to do with the noun 'effect' or the verb 'affect' has to do with the noun 'affect'. However, they aren't really connected linguistically (at least as far as I'm aware). In fact, you can effect an effect, affect an affect, effect an affect, or affe Google also a member (Score:5, Insightful) The fact that Google and Apple are involved gives me hope that people will start making applications for Linux and Macs soon. Also, since DirectX 10 is only available for Vista, this may be the prime time for OpenGL to start stealing some market share. Re:Google also a member (Score:5, Insightful) What would be so different about the exclusivity of DX10 on Vista as opposed to the exclusivity of DXs 1, 2, 3, 4, 5, 6, 7, 8, and 9 on Win 95, 98, NT (DX3 only), 2K, and XP that makes now the proper time for OpenGL to become dominant? DX wins out in terms of "market share" (as if an API can be measured against something like that) becuase of two things...the dominance of Windows in the marketplace and the fact that DirectX has pretty much wiped the floor with OpenGL when it comes to support for contemporary rendering hardware features. Extensions be damned, the OpenGL ARB moves *way* to slowly to be competitive. Maybe the Khronos group will help with that...Lord knows they can't be any worse. Will OpenGL have a ratified spec for equivilent DX10 features like geometry shaders by the time DX 10 comes out? Re:Google also a member (Score:2, Troll) The difference would be that none of them REQUIRED you to update all your hardware in order to run it. DX10 requires it. Vista requires it. The Monitor DRM requires it. As such, adoption of DirectX10 may be a long time coming. Sure hardcore gamers will probably get a new machin Re:Google also a member (Score:2) Re:Google also a member (Score:5, Informative) All of them required you to update if you wanted to use the features. You can't run a DX9 app on DX3 hardware and get the advantages of DX. The necessary transistors aren't on the DX3 board. There's nothing different on the OpenGL side. To run OpenGL 1.x along with you *need* given board. If you don't have it the extension won't work. Vista does not require DX10. It runs just fine under DX9. It will ship with both DX9 and DX10. The UI rendering layer is not DX10 specific. I've run Vista on a two year old machine with integrated Intel graphics (pixel shader 2.x, vertex shaders handled by the CPU) and Vista worked 100%, including Aero Glass. Re:Google also a member (Score:2) To run OpenGL 1.x along with a given extension that exposes a feature on a given board you need that board installed. If you don't have it the extension won't work in hardware. Re:Google also a member (Score:2) Because it would break both the flow of conversations and the moderation system entirely? This one should be intuitively obvious to pretty much anyone. Frankly, I don't think ANY message base systems should allow comment editing. Re:Google also a member (Score:2) Whats yer point? The current openGL works on my board from 1997 as well as modern boards. Re:Google also a member (Score:2, Insightful) Funny. Didn't have to update my Win98 box to move to Win2k. Just needed more RAM. Didn't have to update my Win2K to move to XP. So by all of them... were you talking only Vista or were you merely talking outta your ass? Re:Google also a member (Score:3, Informative) In terms of DirectX, you are quite correct. When programming a DirectX game engine you have to query the given hardware capability (hwcaps) for the PC on which your game engine is being installed. There's nothing different on the OpenGL side. To run OpenGL 1.x along with you *need* given board. If you don't have it the e Re:Google also a member (Score:2) If you're writing a game, and you want it to have 3D features beyond what DX9 offers, but also work on Windows XP or 2K, OpenGL may be the only choice. The advanced features will require the user have appropriate hardware, but only under OpenGL will all its capabilities be accessible. Re:Google also a member (Score:5, Informative) DX3 shipped on NT4 DX4, DX5, DX6, DX7, DX8, and DX9 ship for Windows 2k DX4, DX5, DX6, DX7, DX8, and DX9 ship for Windows XP DX10 only ships on Vista. So the difference that would allow OpenGL to become dominant is that, at least overnight with the release of Vista, you can install OpenGL on your Windows2k, WindowsXP, and your Vista OSes, whereas a game written in DX10 is only playable in Vista. A game coded in OpenGL is therefore open to more users, a bigger customer base, and potentially more profit, than a game coded in DX10. As per support for geometry shaders, I guess it depends on whether the GLSL has it or not, I wouldn't know myself. Re:Google also a member (Score:2) Oddly though all versions of DirectX have a version number starting with 4.0x with an unusual numbering scheme... 1.0 = 4.02 2.0 = 4.03 3.0 = 4.04 4.0 = N/A - Did it ever really exist? 5.0 = 4.05 6.0 = 4.06 7.0 = 4.07 8.0 = 4.08 9.0 = 4.09 10.0 = Just reports DirectX 10 in current Betas. Re:Google also a member (Score:2) Expect to see games companies developing for DX9 and targeting Windows 2000, Windows XP, Windows 2003 and Windows Vista for a while yet. Re:Google also a member (Score:2) Re:Google also a member (Score:4, Informative) 1.Microsoft implementation. This is basicly a layer that translates to Direct3D and supports up to OpenGL 1.4 plus a few selected extentions on top of that. (there is talk that MS deliberatly picked 1.4 over 1.5 because the main difference is that 1.5 supports Vertex Buffer Objects which are important for high speed games but not for stuff like CAD and 3D) 2.Existing OpenGL ICD provided by the hardware vendor. This will work just fine and give the same full OpenGL interface as you get now on windows XP (including all provided extentions). However, when this is used, the Vista Aeroglass interface is disabled. and 3.A new OpenGL ICD built to cooperate with DirectX and Aeroglass. This option is the prefered option however microsoft has so far refused to provide graphics card vendors with all the information and specs required to make it happen (again, there is speculation that this is to "cripple" OpenGL). Of course, microsoft may provide (or may have already provided) the necessary information that the vendors require. Anyone running games knows to install the latest graphics card drivers for their card (and game readme files often say to do that anyway) so gamers who choose to upgrade to Vista will just download and install an ICD written by the display card manufacturer following option 3 and everyone is happy. Not my fault! (Score:3, Funny) I voted for Kodos. Re:Not my fault! (Score:2) Well, kudos to you Good -- maybe now it will progress faster! (Score:5, Insightful) Part of the reason Direct3D took off (aside from Microsoft's market influence) is that the ARB worked too damn slow and caused OpenGL to lag behind in terms of capability. If Khronos can make decisions faster such that OpenGL can keep feature parity with (or even get ahead of) Direct3D, it'll be great! It would also probably help if they form close ties with the people making OpenAL, SDL, etc. so that there can be a big, open, complete solution to compete with the whole of DirectX. They need to partner with video card companies (Score:2) Re:They need to partner with video card companies (Score:2, Interesting) Re:They need to partner with video card companies (Score:2) Re:They need to partner with video card companies (Score:2) Re:They need to partner with video card companies (Score:2) Re:They need to partner with video card companies (Score:2) Re:Maybe OpenGL and DirectX need to diverge (Score:2) Re:Maybe OpenGL and DirectX need to diverge (Score:5, Insightful) The problem with that is that DirectX isn't a standard -- it's a proprietary Microsoft technology. We'd still need a standard to use for gaming on Mac, Linux, PS3, Wii, etc. FYI (Score:3, Interesting) Should assure future of OpenGL for a while (Score:3, Interesting) On the negative side, this probably means that yes, SGI is going to be asset-stripped and wound up in short order. One must remember that the writing was on the wall a long time ago. Like CBM before them, Microsoft placed a "mole" in an executive position to wreak havoc, and SGI never really recovered from that period of moronic rebranding and windows NT workstations. Re:Should assure future of OpenGL for a while (Score:2) Evidence, please. Especially for CBM -- Irving and Medhi were selfish gits, but I never heard a suggestion that they were secretly working for Redmond. Schwab Re:Should assure future of OpenGL for a while (Score:2) Re:Should assure future of OpenGL for a while (Score:2) I dunno about that; I suspect the VIC-20 was designed so that they could make as many mistakes as possible so that they could learn from them when putting together the C-64 :-). I held no love for Atari, but it was clear even at the time that the VIC-20 was way, way below the required standard for power, RAM, and graphics. I think perhaps they thought they were designing a game console with a built-in keyboard. COLLADA (Score:5, Informative) I did a little more looking after submitting this article and while I was not familiar with the Khronos group's work aside from mobile applications, it seems they are also responsible for the COLLADA standard Sony is promoting for open exchange of graphics/models primarily for video games. Perhaps with OpenGL, COLLADA, and some multimedia standards all under the same roof, we'll see development directed to be a better alternative to OpenGL aimed at multiple platforms (Windows, PS3, Mac, and Nintendo?) to offset the threat of MS's DirectX development aimed at Windows and Xbox simultaneously. Hopefully this will be good for OpenGL (Score:2) Re:Hopefully this will be good for OpenGL (Score:3, Interesting) a This might be good (Score:2, Interesting) OpenGL, IMHO, has no place on mobile phones... not yet anyway. Poor Java s Re:This might be good (Score:5, Interesting) OpenGL, IMHO, has no place on mobile phones... not yet anyway... How on earth can OpenGL grow if it always has to support the lowest common denominator. I agree. Since Khronos already maintains OpenGL ES for phones, hopefully they will not unify them. Re:This might be good (Score:2, Informative) Re:This might be good (Score:3) Unless your calculator is vastly different than the ones I'm familiar with, that's a bit of an exaggeration, don't you think? Re:This might be good (Score:3, Informative) How much better support for pixel shaders do you want? glsl is a quite ni Re:This might be good (Score:2) Granted, OpenGL extensions aren't that bad, for the reasons you cited. I'm just tired of supporting ATI-specific *and* Nvidia-specific extensions. The vendor-specific junk has got to go. Re:This might be good (Score:3, Interesting) I've been reading about some of the proposals for "OpenGL 3.0." Apparently, there is talk of a "primitive shader." Haven't been able to find much information about it yet, but it may well allow arbitrary curved surfaces defined by shaders. I could also just be reading Re:This might be good (Score:2) Increased Costs (Score:2) Re:Increased Costs (Score:2) Re:Increased Costs (Score:3, Informative) Some notes to people that may not know a whole lot (Score:5, Informative) Doesn't realy matter, for a couple reasons: Only games are written using Direct3D/DirectX. It is very rarely used for anything beyond that. If given a choice no developer would ever use Direct3d for anything.. but if your making games for technically challenged people and your target platform is Windows then writing it to use Direct3D/directX makes more sense since it's more likely to work well in Windows. All the major gaming engines already run on Linux. They already run using either OpenGL or Direct3D.. All except HL2/Steam stuff. The reason Linux doesn't have more games isn't because of DirectX. It's because of lack of ease of use for OpenGL acceleration and market share.. If your programming a 3d application and it's not a game and your not Microsoft.. Then your using OpenGL or OpenGL-based system. Period, end of story. 2. OpenGL ARB is 'Advanced Review Board'. They create a set of extensions to the current OpenGL standard to create proven/established OpenGL-related stuff that they can then wrap up together and place into the next generation OpenGL standard. This is were all that extra stuff goes that people say that OpenGL lacks and DirectX has. OpenGL has a much more formal review system then DirectX/3D has. It needs to be carefull as any standard they create will need to be replicated by multiple people on multiple platforms and be sustainable into the forseeable future. Microsoft and Direct3D/DirectX doesn't have to deal with that. They can abtrarially make decisions becasue they only have to worry about one platform. 3. Kronos group is partially responsable for the OpenGL-EGL extensions which allow for easier OpenGL based displays for embedded devices. This is required for a stand-alone XGL-based X Windows server. Current AIGLX (Redhat) and XGLX (Novel) require you to either run a OpenGL-based X server on top of a normal X server (XGLX) or run OpenGL extensions to a normal X server (AIGLX). This approach has numerious issues. Instead of making a clean break and going with pure OpenGL system your dealing with multiple legacy drivers that can only do a fraction of what OpenGL can do in addition to OpenGL acceleration drivers. To put it another way.. The current driver model for X is broken. Right now we have 2-3 drivers acting on the same video card at the same time and they need to share resources. These drivers come from different vendors. This is technically difficult and doesn't lead to good acceleration or performance. Another point: Legacy 2D X drivers (EXA, XAA) can only provide 2D acceleration. OpenGL 3d drivers can provide 2D AND 3D acceleration. OpenGL 3d drivers can provide faster 2D acceleration then what the legacy 2D drivers can do. (due to the nature of the hardware GPU, not so much the drivers) Having 2D and 3D drivers at the same time makes things much more complecated then just having 3D that can do everything. 3D acceleration is a hard requirement for a modern desktop. So obviously having OpenGL-based X server is the way to go. And stuff like GLITZ (Xrender replacement) and other things means we can move to a pure OpenGL X server and still keep binary compatability. It's quite a acheivement. Now the reason we cna't have a pure OpenGL-based display yet is because OpenGL lacks the API hooks to allow you to control the display and other items like that. There is nothing in OpenGL that says "Set the monitor at this resolution". That has to be handled by other stuff. Kronos had to solve this same exact problem for it's embedded OpenGL display stuff. So they created the OpenGL-EG Re:Some notes to people that may not know a whole (Score:2) >> your not going to be able to run DirectX 9 >> If your programming a 3d Dude, your is actually spelt you're. Re:Some notes to people that may not know a whole (Score:2) Re:Some notes to people that may not know a whole (Score:2) Ok, then a better way to word it would be: HL2/Steam stuff only runs on Win32 (Wine is a Win32/etc layer for linux). Microsoft's DirectX/Direct3d implementation (the official one) is tied directly to the hardware. He never said it would be impossible to make an implementation that wasn't. Re:Some notes to people that may not know a whole (Score:3, Insightful) Besides the spelling gaffe (your is the possessive of you, you're is a contraction of "you are"), this statement is not 100% correct. DirectX/Direct3D developers can mandate that certain API features are handled in hardware in order for the application to run, but they can just as easily allow DirectX to emulate in so Re:Some notes to people that may not know a whole (Score:2) Re:Some notes to people that may not know a whole (Score:5, Informative) ." This is VERY misleading. Presuming scenario 1 where the developer (for either D3D or OpenGL) has coded a support for only a particular version of the API, neither API will run partially in software if the driver does not support that level of the API. D3D9 will not run in software unless you're going to use a debugging rasterizer (highly unlikely), and OpenGL 2.0 WILL NOT RUN on a card with a 1.0, 1.1, 1.2, 1.4 driver. Now, there are some 1.4 drivers which were written so that people like myself could write 2.0 code and execute before the hardware was available, in which case the 2.0 distinctions were supported via software emulation, but this was for developers. You're confusing the ability of a specific OpenGL implementation supporting a specification to the maximum of its ability. For example, if a I have an OpenGL 1.4 driver but the card I'm running on doesn't have Hardware T&L, OpenGL's pipeline is quite capable of transparently deciding whether or not it should offload the lighting to the card or doing it in software. This is not the same as some future version of OpenGL running on my old OpenGL card with an old driver. "If your programming a 3d application and it's not a game and your not Microsoft.. Then your using OpenGL or OpenGL-based system. Period, end of story" - I certainly hope you're not in a decision making capacity at your job (or that your job is doing something other than writing rendering code) because you're screwing your company over. Right tool for the right job, every time. It's a toolbox not a religious jihad. (2)"OpenGL has a much more formal review system then DirectX/3D has" - No it doesn't. Crimony. Do you know what the specification process for DirectX is? You can say they're different, but it certainly isn't less "formal." You could say it is less open, but that's because it isn't an open API. Re:Some notes to people that may not know a whole (Score:2) Why can't you use nvidia on linux with OpenGL? I do dual monitor (seperate X screens, you need to throw a big IMHO on there... (Score:2) For example, your argument about the difference of piecemeal acceleartion in OpenGL versus presence or absence of capabilities in Direct X. This is known as the caps bits (or caps flags) argument. You present the factual part, then you skip over some of the intermediate steps and go straight to the (incorrect) conclusion that you can't run Direct X 9 games if you don't have a Direct X 9 card. First of all, you can run Direct X 9 games on Direct X 8 cards as long as the games check Close ties to mobility affecting OpenGL? (Score:3, Insightful) Khronos? (Score:2) Re:Ha. (Score:2) CAN WE PLEASE NOT HAVE THIS DISCUSSION?! (Score:5, Insightful) Whether Apple contributes back to Free Software isn't really relevant here, and it's been beaten to death in other threads already. Could we please save it for the next KHTML article, at least?! Besides, the more relevant thing regarding Apple is their behavior regarding other standards (as opposed to software implementations), such as USB, WebDAV, ZeroConf (aka Rendesvous, Bonjour), etc. Effect as verb (Score:4, Insightful) Re:Effect as verb (Score:2) Pretty hard, I guess.... (Score:3, Funny) It amazes me that you have such a grasp of the use of "affect" and "effect" but don't seem to grasp that the word "fucking" should only be used as a verb or adverb and not an adjective. Unless your really meant to express that Jesus is copulating with God, which to answer your question, would seem to be pretty hard to do. In the interest of meta-meta-nitpicking... (Score:3, Informative) fucking Pronunciation Key (fkng) Vulgar Slang adv. & adj. Used as an intensive. Re:In the interest of meta-meta-nitpicking... (Score:3, Informative) Too bad there is no Top Web results for "fucking" [reference.com]. What is this world coming to? Re:Pretty hard, I guess.... (Score:3, Informative) Say what? "fucking" is a gerund. Like all gerunds, that means it can be a noun (when referring to the act itself), or an adjective ("which one?" "The fucking one!"), among other things [wikipedia.org]. Re:Pretty hard, I guess.... (Score:2) In this case, "fucking" is acting as a participle [wikipedia.org], not a gerund. But in English, present participles and gerunds look the same. Like the wikipedia article says, participles are adjectives and are often used in front of nouns. In this case, the entire phrase "Jesus fucking God" is an interjection. Depending on how you look at it, either Jesus is being used as an adjective to describe God (in which case it should have been followed by a comma), or more likely the entire thing is bein Re:Pretty hard, I guess.... (Score:4, Funny) I dunno, I think that'd be a pretty good expletive. Certainly better than things like "By the balls of Zeus!" or "May Apollo rape me in the night!" or "By Aphrodite's breasts!" (OT: at least one of these expletives is genuine.) In any case, since they're supposed to be "consubstantial" with one another, it'd be at most just a kinky kind of masturbation (which also wouldn't be offensive in most pagan religions, incidentally). Re:Pretty hard, I guess.... (Score:3, Funny) I disagree. Fucking should and will be used in any part of any sentence at any time. It simply transcends all grammatical boundaries and can fucking well fucking go fucking where the fucking hell it wants to, fucking. Not to be crass, of course. Re:ITM effects. (Score:2) What is the emotional affect this sentence will affect on the way you affect your affection for the slashdot affect. Re:ITM effects. (Score:2) Effect is a verb too (Score:2) Effect == verb "Effect a change in your face" verb [ trans. ] (often be effected) cause (something) to happen; bring about : nature always effected a cure | budget cuts that were quietly effected over four years. Wow this is OT. Parent got modded Informative? Re:ITM effects. (Score:2) (Effect as a verb is a bit more common, but I still hold to what I see above. And when used as a verb, effect is almost always followed by "change".) Re:ITM effects. (Score:4, Funny) Yes, your tiredness had an effect on your affect and affected the effect of your post. Re:ITM effects. (Score:2) Re:ITM effects. (Score:2) I speak three languages, and was raised biligually. I would say that English is (barely) my native tongue, but I don't see any particular reason why English is an order of magnitude more nuanced than other languages. As far as I know, its more accurate to define languages as particularly good or particularly poor at expressing details on a certain topic. Tea, for example, is best discussed in Japanese, Science is primarily English, while Latin would be the appropriate language for Weste Re:ITM effects. (Score:2) Re:ITM effects. (Score:2) You're wrong about that - Infix [wikipedia.org]: Re:ITM effects. (Score:2) Re:ITM effects. (Score:2) Re:ITM effects. (Score:2) Okay, I agree that English is a big piece of shit, and it's literally the only human language I speak. (computer languages don't count) But... it's highly useful. How long that will last is a great question; Once upon a time the whole fucking world spoke Greek, well into Roman civilization, because it was the language of the learned. Later, it was French; later, science went somewhat German. Now, it's Re:ITM effects. (Score:2) Re:SGI was considering it an asset to sell. (Score:3, Insightful) I'm with you, as long as nVidia doesn't lock it up and throw away the key. I have a longstanding fondness for OpenGL but it doesn't work if it stays on just one graphics platform either. It's for portability. So by that reckoning, Apple would make a better steward. Apple has good reason not to tie itself to any one component vendor, and OpenGL helps it in that purpose.
http://slashdot.org/story/06/08/01/1856259/opengl-spec-now-controlled-by-khronos-group
CC-MAIN-2013-48
refinedweb
4,875
72.36
Connect Azure to ITSM tools using IT Service Management Connector . However, the work items related to an issue typically reside in an ITSM product/service. The ITSM connector provides a bi-directional connection between Azure and ITSM tools to help you resolve issues faster.. Read more about the legal terms and privacy policy. You can start using the ITSM Connector using the following steps: - Add the ITSM Connector Solution - Create an ITSM connection - Use the connection Adding the IT Service Management Connector Solution Before you can create a connection, you need to add the ITSM Connector Solution. In Azure portal, click + New icon. Search for IT Service Management Connector in the Marketplace and click Create. In the OMS Workspace section, select the Azure Log Analytics workspace where you want to install the solution. Note - As part of the ongoing transition from Microsoft Operations Management Suite (OMS) to Azure Monitor, OMS Workspaces are now referred to as Log Analytics workspaces. - The ITSM Connector can only be installed in Log Analytics workspaces in the following regions: East US, West Europe, Southeast Asia, Southeast Australia, West Central US, East Japan, South UK, Central India, Central Canada. In the OMS Workspace Settings section, select the ResourceGroup where you want to create the solution resource. Note As part of the ongoing transition from Microsoft Operations Management Suite (OMS) to Azure Monitor, OMS Workspaces are now referred to as Log Analytics workspaces. Click Create. When the solution resource is deployed, a notification appears at the top right of the window. Creating an ITSM connection Once you have installed the solution, you can create a connection. For creating a connection, you will need to prep your ITSM tool to allow the connection from the ITSM Connector solution. Depending on the ITSM product you are connecting to, use the following steps: Once you have prepped your ITSM tools, follow the steps below to create a connection: Go to All Resources, look for ServiceDesk(YourWorkspaceName). Under WORKSPACE DATA SOURCES in the left pane, click ITSM Connections. This page displays the list of connections. Click Add Connection. Specify the connection settings as described in Configuring the ITSMC connection with your ITSM products/services article. Note By default, ITSMC refreshes the connection's configuration data once in every 24 hours. To refresh your connection's data instantly for any edits or template updates that you make, click the Sync button on your connection's blade. Using the solution By using the ITSM Connector solution, you can create work items from Azure alerts, Log Analytics alerts and Log Analytics log records. Create ITSM work items from Azure alerts Once you have your ITSM connection created, you can create work item(s) in your ITSM tool based on Azure alerts, by using the ITSM Action in Action Groups. Action Groups provide a modular and reusable way of triggering actions for your Azure Alerts. You can use Action Groups with metric alerts, Activity Log alerts and Azure Log Analytics alerts in Azure portal. Use the following procedure: In Azure portal, click Monitor. In the left pane, click Action groups. The Add action group window appears. Provide Name and ShortName for your action group. Select the Resource Group and Subscription where you want to create your action group. In the Actions list, select ITSM from the drop-down menu for Action Type. Provide a Name for the action and click Edit details. Select the Subscription where your Log Analytics workspace is located. Select the Connection name (your ITSM Connector name) followed by your Workspace name. For example, "MyITSMMConnector(MyWorkspace)." Select Work Item type from the drop-down menu. Choose to use an existing template or fill the fields required by your ITSM product. Click OK. When creating/editing an Azure alert rule, use an Action group, which has an ITSM Action. When the alert triggers, work item is created/updated in the ITSM tool. Note For information on pricing of ITSM Action, see the pricing page for Action Groups. Visualize and analyze the incident and change request data Based on your configuration when setting up a connection, ITSM connector can sync up to 120 days of Incident and Change request data. The log record schema for this data is provided in the next section. The incident and change request data can be visualized using the ITSM Connector dashboard in the solution. The dashboard also provides information on connector status which can be used as a starting point to analyze any issues with the connections. You can also visualize the incidents synced against the impacted computers, within the Service Map solution. Service Map automatically discovers the application components on Windows and Linux systems and maps the communication between services. It allows you to view your servers as you think of them – as interconnected systems that deliver critical services. Service Map shows connections between servers, processes, and ports across any TCP-connected architecture with no configuration required other than installation of an agent. Learn more. If you are using the Service Map solution, you can view the service desk items created in the ITSM solutions as shown in the following example: More information: Service Map Additional information Data synced from ITSM product Incidents and change requests are synced from your ITSM product to your Log Analytics workspace based on the connection's configuration. The following information shows examples of data gathered by ITSMC: Note Depending on the work item type imported into Log Analytics, ServiceDesk_CL contains the following fields: Work item: Incidents ServiceDeskWorkItemType_s="Incident" Fields - ServiceDeskConnectionName - Service Desk ID - State - Urgency - Impact - Priority - Escalation - Created By - Resolved By - Closed By - Source - Assigned To - Category - Title - Description - Created Date - Closed Date - Resolved Date - Last Modified Date - Computer Work item: Change Requests ServiceDeskWorkItemType_s="ChangeRequest" Fields - ServiceDeskConnectionName - Service Desk ID - Created By - Closed By - Source - Assigned To - Title - Type - Category - State - Escalation - Conflict Status - Urgency - Priority - Risk - Impact - Assigned To - Created Date - Closed Date - Last Modified Date - Requested Date - Planned Start Date - Planned End Date - Work Start Date - Work End Date - Description - Computer Output data for a ServiceNow incident Output data for a ServiceNow change request Troubleshoot ITSM connections If connection fails from connected source's UI with an Error in saving connection message, take the following steps: - For ServiceNow, Cherwell and Provance connections, - ensure you correctly entered the username, password, client ID, and client secret for each of the connections. - check if you have sufficient privileges in the corresponding ITSM product to make the connection. - For Service Manager connections, - ensure that the Web app is successfully deployed and hybrid connection is created. To verify the connection is successfully established with the on premises Service Manager machine, visit the Web app URL as detailed in the documentation for making the hybrid connection. If data from ServiceNow is not getting synced to Log Analytics, ensure that the ServiceNow instance is not sleeping. ServiceNow Dev Instances sometimes go to sleep when idle for a long period. Else, report the issue. If Log Analytics alerts fire but work items are not created in ITSM product or configuration items are not created/linked to work items or for any other generic information, look in the following places: - ITSMC: The solution shows a summary of connections/work items/computers etc. Click the tile showing Connector Status, which takes you to Log Search with the relevant query. Look at the log records with LogType_S as ERROR for more information. - Log Search page: view the errors/related information directly using the query *ServiceDeskLog_CL *. Troubleshoot Service Manager Web App deployment - In case of any issues with web app deployment, ensure you have sufficient permissions in the subscription mentioned to create/deploy resources. - If you get an "Object reference not set to instance of an object" error when you run the script, ensure that you entered valid values under User Configuration section. - If you fail to create service bus relay namespace, ensure that the required resource provider is registered in the subscription. If not registered, manually create service bus relay namespace from the Azure portal. You can also create it while creating the hybrid connection from the Azure portal. For any queries or feedback on the IT Service Management Connector, contact us at omsitsmfeedback@microsoft.com. Next steps Add ITSM products/services to IT Service Management Connector. Feedback
https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-overview
CC-MAIN-2019-47
refinedweb
1,379
51.28
Source cpython_sandbox / Doc / howto / pyporting.rst Porting Python 2 Code to Python 3 Abstract With Python 3 being the future of Python while Python 2 is still in active use, it is good to have your project available for both major releases of Python. This guide is meant to help you choose which strategy works best for your project to support both Python 2 & 3 along with how to execute that strategy. If you are looking to port an extension module instead of pure Python code, please see :ref:`cporting-howto`. Choosing a Strategy When a project chooses to support both Python 2 & 3, a decision needs to be made as to how to go about accomplishing that goal. The chosen strategy will depend on how large the project's existing codebase is and how much divergence you want from your current Python 2 codebase (e.g., changing your code to work simultaneously with Python 2 and 3). If you would prefer to maintain a codebase which is semantically and syntactically compatible with Python 2 & 3 simultaneously, you can write :ref:`use_same_source`. While this tends to lead to somewhat non-idiomatic code, it does mean you keep a rapid development process for you, the developer. If your project is brand-new or does not have a large codebase, then you may want to consider writing/porting :ref:`all of your code for Python 3 and use 3to2 <use_3to2>` to port your code for Python 2. Finally, you do have the option of :ref:`using 2to3 <use_2to3>` to translate Python 2 code into Python 3 code (with some manual help). This can take the form of branching your code and using 2to3 to start a Python 3 branch. You can also have users perform the translation at installation time automatically so that you only have to maintain a Python 2 codebase. Regardless of which approach you choose, porting is not as hard or time-consuming as you might initially think. You can also tackle the problem piece-meal as a good portion of porting is simply updating your code to follow current best practices in a Python 2/3 compatible way. Universal Bits of Advice Regardless of what strategy you pick, there are a few things you should consider. One is make sure you have a robust test suite. You need to make sure everything continues to work, just like when you support a new minor/feature release of Python. This means making sure your test suite is thorough and is ported properly between Python 2 & 3. You will also most likely want to use something like tox to automate testing between both a Python 2 and Python 3 interpreter. Two, once your project has Python 3 support, make sure to add the proper classifier on the Cheeseshop (PyPI). To have your project listed as Python 3 compatible it must have the Python 3 classifier (from): setup( name='Your Library', version='1.0', classifiers=[ # make sure to use :: Python *and* :: Python :: 3 so # that pypi can list the package on the python 3 page 'Programming Language :: Python', 'Programming Language :: Python :: 3' ], packages=['yourlibrary'], # make sure to add custom_fixers to the MANIFEST.in include_package_data=True, # ... ) Doing so will cause your project to show up in the Python 3 packages list. You will know you set the classifier properly as visiting your project page on the Cheeseshop will show a Python 3 logo in the upper-left corner of the page. Three, the six project provides a library which helps iron out differences between Python 2 & 3. If you find there is a sticky point that is a continual point of contention in your translation or maintenance of code, consider using a source-compatible solution relying on six. If you have to create your own Python 2/3 compatible solution, you can use sys.version_info[0] >= 3 as a guard. Four, read all the approaches. Just because some bit of advice applies to one approach more than another doesn't mean that some advice doesn't apply to other strategies. This is especially true of whether you decide to use 2to3 or be source-compatible; tips for one approach almost always apply to the other. Five,. So choose the newest version of Python which you believe can be your minimum support version and work from there. Six, target the newest version of Python 3 that you can. Beyond just the usual bugfixes, compatibility has continued to improve between Python 2 and 3 as time has passed. This is especially true for Python 3.3 where the u prefix for strings is allowed, making source-compatible Python code easier. Seven, make sure to look at the Other Resources for tips from other people which may help you out. Python 3 and 3to2 If you are starting a new project or your codebase is small enough, you may want to consider writing your code for Python 3 and backporting to Python 2 using 3to2. Thanks to Python 3 being more strict about things than Python 2 (e.g., bytes vs. strings), the source translation can be easier and more straightforward than from Python 2 to 3. Plus it gives you more direct experience developing in Python 3 which, since it is the future of Python, is a good thing long-term. A drawback of this approach is that 3to2 is a third-party project. This means that the Python core developers (and thus this guide) can make no promises about how well 3to2 works at any time. There is nothing to suggest, though, that 3to2 is not a high-quality project. Python 2 and 2to3 Included with Python since 2.6, the 2to3 tool (and :mod:`lib2to3` module) helps with porting Python 2 to Python 3 by performing various source translations. This is a perfect solution for projects which wish to branch their Python 3 code from their Python 2 codebase and maintain them as independent codebases. You can even begin preparing to use this approach today by writing future-compatible Python code which works cleanly in Python 2 in conjunction with 2to3; all steps outlined below will work with Python 2 code up to the point when the actual use of 2to3 occurs. Use of 2to3 as an on-demand translation step at install time is also possible, preventing the need to maintain a separate Python 3 codebase, but this approach does come with some drawbacks. While users will only have to pay the translation cost once at installation, you as a developer will need to pay the cost regularly during development. If your codebase is sufficiently large enough then the translation step ends up acting like a compilation step, robbing you of the rapid development process you are used to with Python. Obviously the time required to translate a project will vary, so do an experimental translation just to see how long it takes to evaluate whether you prefer this approach compared to using :ref:`use_same_source` or simply keeping a separate Python 3 codebase. Below are the typical steps taken by a project which tries to support Python 2 & 3 while keeping the code directly executable by Python which 2to3 cannot handle but are known to cause issues. porting to Python 3. But if you project must keep support for Python 2.5 (or even Python 2.4) then it is still possible to port to. from __future__ import print_function This is a personal choice. 2to3 handles the translation from the print statement to the print function rather well so this is an optional step. This future statement does help, though, with getting used to typing print('Hello, World') instead of print 'Hello, World'. from __future__ import unicode_literals Another personal choice. You can always mark what you want to be a (unicode) string with a u prefix to get the same effect. But regardless of whether you use this future statement or not, you must make sure you know exactly which Python 2 strings you want to be bytes, and which are to be strings. This means you should, at minimum mark all strings that are meant to be text strings with a u prefix if you do not use this future statement. Python 3.3 allows strings to continue to have the u prefix (it's a no-op in that case) to make it easier for code to be source-compatible between Python 2 & 3. Bytes literals This is a very important one. The ability to prefix Python 2 strings that are meant to contain bytes with a b prefix help to very clearly delineate what is and is not a Python 3 string. When you run 2to3 on code, all Python 2 strings become Python 3 strings unless they are prefixed with b. This point cannot be stressed enough: make sure you know what all of your string literals in Python 2 are meant to become everyone one of your string literals and you should mark them as appropriate. There are some differences between byte literals in Python 2 and those in Python 3 thanks to the bytes type just being an alias to str in Python 2. Probably the biggest "gotcha" is that indexing results in different values. In Python 2, the value of b'py'[1] is 'y', while in Python 3 it's 121. You can avoid this disparity by always slicing at the size of a single element: b'py'[1:2] is 'y' in Python 2 and b'y' in Python 3 (i.e., close enough). You cannot concatenate bytes and strings in Python 3. But since Python 2 has bytes aliased to str, it will succeed: b'a' + u'b' works in Python 2, but b'a' + 'b' in Python 3 is a :exc:`TypeError`. A similar issue also comes about when doing comparisons between bytes and strings. Supporting Python 2.5 and Newer Only If you are supporting Python 2.5 and newer there are still some features of Python that you can utilize. from __future__ import absolute_import Implicit relative imports (e.g., importing spam.bacon from within spam.eggs with the statement import bacon) does Unicode strings where appropriate. That leaves all unmarked string literals to be considered byte literals in Python 3. Handle Common "Gotchas" There are a few things that just consistently come up as sticking points for people which 2to3 cannot handle automatically or can easily be done in Python 2 to help modernize your code. :mod: :func:`io.open`. Since :func:`io.open` is essentially the same function in both Python 2 and Python 3, it will help iron out any issues that might arise. - If pre-2.6 compatibility is needed, then you should use :func: handling this issue is to make sure that every string literal in your Python 2 code is either syntactically :class: :class:). There are two ways to solve this issue. One is to use a custom 2to3 fixer. The blog post at specifies how to do this. That will allow 2to3 to change all instances of def __unicode(self): ... to def __str__(self): .... This does require that you define your __str__() method in Python 2 before your __unicode__() method. The other option is to use a mixin class. :attr:`BaseException.args` attribute which is a sequence containing all arguments passed to the :meth:`__init__` method. Even better is to use the documented attributes the exception provides. Don't use __getslice__ & Friends Been deprecated for a while, but Python 3 finally drops support for __getslice__(), etc. Move completely over to :meth:`__getitem__` and friends. Updating doctests 2to3 will attempt to generate fixes for doctests that it comes across. It's not perfect, :mod:`unittest`. Update map for imbalanced input sequences With Python 2, map would pad input sequences of unequal length with None values, returning a sequence as long as the longest input sequence. With Python 3, if the input sequences to map are of unequal length, map will stop at the termination of the shortest of the sequences. For full compatibility with map from Python 2.x, also wrap the sequences in :func: 2to3 cannot handle automatically (e.g., modules that have been removed). Try to eliminate those warnings to make your code even more portable to Python 3. Run 2to3 Once you have made your Python 2 code future-compatible with Python 3, it's time to use 2to3 to actually port your code. Manually To manually convert source code using 2to3, you use the 2to3 script that is installed with Python 2.6 and later.: 2to3 <directory or file to convert> This will cause 2to3 to write out a diff with all of the fixers applied for the converted source code. If you would like 2to3 to go ahead and apply the changes you can pass it the -w flag: 2to3 -w <stuff to convert> There are other flags available to control exactly which fixers are applied, etc. During Installation When a user installs your project for Python 3, you can have either :mod:`distutils` or Distribute run 2to3 on your behalf. For distutils, use the following idiom: try: # Python 3 from distutils.command.build_py import build_py_2to3 as build_py except ImportError: # Python 2 from distutils.command.build_py import build_py setup(cmdclass = {'build_py': build_py}, # ... ) For Distribute: setup(use_2to3=True, # ... ) This will allow you to not have to distribute a separate Python 3 version of your project. It does require, though, that when you perform development that you at least build your project and use the built Python 3 source for testing. Verify & Test At this point you should (hopefully) have your project converted in such a way that it works in Python 3. Verify it by running your unit tests and making sure nothing has gone awry. If you miss something then figure out how to fix it in Python 3, backport to your Python 2 code, and run your code through 2to3 again to verify the fix transforms properly. Python 2/3 Compatible Source While it may seem counter-intuitive, you can write Python code which is source-compatible between Python 2 & 3. It does lead to code that is not entirely idiomatic Python (e.g., having to extract the currently raised exception from sys.exc_info()[1]), but it can be run under Python 2 and Python 3 without using 2to3 as a translation step (although the tool should be used to help find potential portability problems). This allows you to continue to have a rapid development process regardless of whether you are developing under Python 2 or Python 3. Whether this approach or using :ref:`use_2to3` works best for you will be a per-project decision. To get a complete idea of what issues you will need to deal with, see the What's New in Python 3.0. Others have reorganized the data in other formats such as . The following are some steps to take to try to support both Python 2 & 3 from the same source code. Follow The Steps for Using 2to3 All of the steps outlined in how to :ref:`port Python 2 code with 2to3 <use_2to3>` apply to creating a Python 2/3 codebase. This includes trying only support Python 2.6 or newer (the :mod:`__future__` statements work in Python 3 without issue), eliminating warnings that are triggered by -3, etc. You should even consider running 2to3 over your code (without committing the changes). This will let you know where potential pain points are within your code so that you can fix them properly before they become an issue. Use six The six project contains many things to help you write portable Python code. You should make sure to read its documentation from beginning to end and use any and all features it provides. That way you will minimize any mistakes you might make in writing cross-version code. Capturing the Currently Raised Exception One change between Python 2 and 3 that will require changing how you code (if you support Python 2.5 and earlier) is accessing; Python 2.6 will "leak" pass Because of this syntax change you must change to capturing the current exception to: try: raise Exception() except Exception: import sys exc = sys.exc_info()[1] # Current exception is 'exc' pass You can get more information about the raised exception from :func: :term:`garbage collection` pass. In Python 2, this problem only occurs if you save the traceback itself (e.g. the third element of the tuple returned by :func:`sys.exc_info`) in a variable. Other Resources The authors of the following blog posts, wiki pages, and books deserve special thanks for making public their tips for porting Python 2 code to Python 3 (and thus helping provide information for this document): - - - - - - - - If you feel there is something missing from this document that should be added, please email the python-porting mailing list.
https://bitbucket.org/ncoghlan/cpython_sandbox/src/ae7fef62b462/Doc/howto/pyporting.rst
CC-MAIN-2015-11
refinedweb
2,813
69.62
MonkeeSage wrote: > Proposal: > > When an attribute lookup fails for an object, check the top-level > (and local scope?) for a corresponding function or attribute and apply > it as the called attribute if found, drop through to the exception > otherwise. This is just syntactic sugar. > > > Example: > > a = [1,2,3] > > a.len() > # -> fails, > # -> finds len() in the top-level symbol table, > # -> applies len(a) > # -> 3 > > a.foobar() > # -> fails, > # -> no foobar() in scope, > # -> raise NameError > > > Benefits: > > - Uniform OO style. Top-levels can be hidden as attributes of data. > Most of the top-level functions / constructors can be considered as > attributes of the data; e.g., an int() representation of a string can > be considered as _part_ of the semantics of the string (i.e., one > _meaning_ of the string is an int representation); but doing it this > way saves from storing the int (etc) data as part of the actual > object. The trade-off is speed for space. > > - Ability to "add" attributes to built-in types (which is requested > all the time!!) without having to sub-class a built-in type and > initialize all instances as the sub-class. E.g., one can simply define > flub() in the top-level (local?) namespace, and then use "blah".flub() > as if the built-in str class provided flub(). > > - Backwards compatible; one can use the top-level functions when > desired. No change to existing code required. > > - Seemingly trivial to implement (though I don't know much C). On > attribute lookup failure, simply iterate the symbol table looking for > a match, otherwise raise the exception (like current implementation). > > > Drawbacks: > > - Could hide the fact that an extra (On?) lookup on the symbol table > is necessary for attribute lookup failure. (Maybe there could be a > switch/pragma to enable (or disable) the functionality?) > > - As above, attribute lookup failure requires an extra lookup on the > symbol table, when normally it would fall through directly to > exception. > > - ??? > > > Disclaimer: > > I realize that very often what seems good to me, ends up being half- > assed, backwards and generally bad. So I'd appreciate input on this > proposition. Don't take it that I think the idea is wonderful and am > trying to push it. I am just throwing it out there to see what may > become of it. It would be unoriginal of me to suggest that this violates the explicit is better than implicit maxim. But it does. Also, you have picked the perfect use case for a counter argument: py> class NoLen(object): ... pass ... py> len(NoLen) ------------------------------------------------------------ Traceback (most recent call last): File "<ipython console>", line 1, in <module> <type 'exceptions.TypeError'>: object of type 'type' has no len() So this proposal would send the interpreter through two cycles of trying to find the proper attribute before it failed. Plus, I probably haven't even raised the best arguments against it, but my feeling is that it has serious problems and is better left out of the language. But it is an interesting idea nonetheless. James -- James Stroud UCLA-DOE Institute for Genomics and Proteomics Box 951570 Los Angeles, CA 90095
https://mail.python.org/pipermail/python-list/2007-November/457792.html
CC-MAIN-2014-10
refinedweb
511
66.33
MapLongPressToPinDrop Since: BlackBerry 10.0.0 #include <bb/cascades/maps/MapLongPressToPinDrop> To link against this class, add the following line to your .pro file: LIBS += -lbbcascadesmaps A utility action class for performing a pin drop, which is the creation of a new point of interest (pin) triggered by a user's action. This class connects to the MapView::mapLongPressed() signal. When a user performs a long-press on an empty map space, this class creates the new pin. Create a new GeoLocation at the point the map was pressed. Asynchronously initiate a reverse geocode to get the street address. Set the map's focus to the new pin. Emit a pinCreated() signal. Update the name of the GeoLocation with the address information, when the reverse geocode has completed. This action is connected to a MapView instance. When the action and a map are associated, the MapView object becomes the parent of the action object. Thus, when an instance of this class has been created and associated with a MapView instance, the instance should not be explicitly destroyed by the client. MapView* theMapView = root->findChild< MapView* >( "mapview" ); MapLongPressToPinDrop* action = new MapLongPressToPinDrop( theMapView ); connect( action, SIGNAL( pinCreated( const QString& ) ), this, SLOT( onPinCreated( const QString& ) ) );When a new pin is created, the corresponding new GeoLocation object is added to the associated MapView's MapData object. To retrieve the new GeoLocation when the pinCreated() signal is emitted: Q_SLOT void onPinCreated( const QString& newId ) { bb::platform::geo::Geographic* newPin = mapView->mapData()->geographic( newId ); // Do something with the new pin. } Overview Public Functions Index Signals Index Public Functions Constructor. BlackBerry 10.0.0 virtual Destructor. BlackBerry 10.0.0 Signals void Emitted when a new pin is created. BlackBerry 10.0.0 void Emitted when the reverse geocode of the address of a new pin has completed. BlackBerry 10.0.0 Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
https://developer.blackberry.com/native/reference/cascades/bb__cascades__maps__maplongpresstopindrop.html
CC-MAIN-2017-13
refinedweb
321
57.77
Hello i am trying to make a simple program that counts the words in a given .txt file, the .txt file will be included when running the program like: ./a.out < test.txt I am very new to programming and this is my first program using another file, so any help would be greatly appreciated The wordcounting part of the program is only going to count the blank spaces between the words and should be alright, but the reading of the file is were i think the problem is. am i using the correct way to opening the file? right now its not even compiling. I kinda messed that up trying to fix it, but when it did it froze when you tried to run it.am i using the correct way to opening the file? right now its not even compiling. I kinda messed that up trying to fix it, but when it did it froze when you tried to run it.Code: #include <stdio.h> #include <stdlib.h> #include <string.h> int main(int argc, char* argv[]) { int c; int i; FILE* fopen(const char* argc,const char* "r"); /* openign the file */ if (fopen==0) /* error check */ { printf("Could not open file!\n"); exit(0); } while (argv[c] != EOF) /* word counting part, the part i think is ok */ { if (argv[c]==' ') { i++; c++; } else c++; } fclose(argc);/* close file */ printf("the number of words in the file are %i",i+1);/* print result */ system("pause");/* to get the program to stay open in the program im using (bloodshed dev-c++) */ return 0; as i said any help would be greatly appreciated! sincerly hemulen
http://cboard.cprogramming.com/c-programming/131622-wordcounting-program-problem-filereading-printable-thread.html
CC-MAIN-2014-23
refinedweb
275
80.82
* "Position" Concept as it applies to Data Structures Dario Romani Greenhorn Joined: Oct 03, 2003 Posts: 13 posted Nov 01, 2003 00:11:00 0 I am trying to understand the concept of "position", as it relates to data structures and as it is defined in the book "Data Structures and Algorithms in Java", 2nd. Ed., by Goodrich & Tomassia. For those who don't have access to this textbook, the concept is also explained in an article by same authors at the following link: I am a little confused and I was wondering if someone could help me out. Sorry for the long post but I wanted to provide background for those who don't have the textbook. Reference 1 -------------- According to the textbook, "A position is itself an abstract data type that supports the following simple method: element():Return the element stored at this position Input: NoneOutput: Object Reference 2 --------------- Code fragment 5.3 (p.199) in the textbook shows a class called DNode that realizes a node of a doubly-linked list and implements Position. class DNode implements Position { private DNode prev, next;// References to the nodes before and after private Object element;// Element stored in this position // Constructor public DNode(DNode newPrev, DNode newNext, Object elem) { prev = newPrev; next = newNext; element = elem; } // Method from interface Position public Object element() throws InvalidPositionException { if ((prev == null) && (next == null)) throw new InvalidPositionException("Position is not in a list!"); return element; } // get and set methods..... } Reference 3 --------------- In Section 5.3.3 (p.209), in section entitled "Implementing a Sequence with an Array", the book defines a soultion where "..we store a new kind of position object in each cell of A, and we store elements in positions. The new position object p holds the index i, and the element e associated with p." Reference 4 --------------- In section 6.4.1, a section entitled "A Vector-Based Structure for Binary Trees", on page 264, states: "The level numbering function p suggests a representation of a binary tree T by means of a vector S such that node v of T is associated with the element of S at rank p(v). ........ Such an implementation is simple and efficient, for we can use it to easily perform the methods root, parent, leftChild, ......... by using simple arithmetic operations on the numbers p(v) associated with each node v involved in the operation. That is, each position object v is simply a wrapper for the index p(v) into the vector S." Reference 5 --------------- In Section 12.2 entitled "Data Structures for Graphs", subsection 12.2.1 entitled "The Edge List Structure", on page 547, it states: "In this representation, a vertex v of G storing an element o is explicitly represented as a vertex object. All such vertex objects ares tored in a container V, which would typically be a list, vector, or dictionary. If we represent V as a vector, for example, we would automatically think of the vertices as being numbered. ..... Note that the elements of container V are the vertex positions of graph G. Vertex Objects The vertex object for a vertex v storing element o has instance variables for: - A reference to o - Counters for the numbers of incident undirected edges, incoming directed edges, and outgoing edges - A reference to the position of the vertex object in container V. End of references ----------------- I am tryng to set up Java classes to implement a graph using a vector implementation of an edge-list data structure. My first step is to set up a class representing a Vertex. I will ignore the instance variables for edge counters mentioned above to keep it simple. I also want to add an instance variable to give the Vertex a name (e.g., vA, vB, etc.). So following fromt he textbook (see Ref.5 above) I have set up the following class: public class Vertex implements Position{ private String vName; private Object vCargo; private Position vPos; //constructor public Vertex(String n, Object o, Position p) { vName = n; vCargo = o; vPos = p; } //get and set methods go here //implementation of element() method of Position interface goes here } At some point I am going to have to use the constructor to create new vertex objects. Question 1: What I will be passing to the constructor to represent parameter p ? Question 2: What exactly should the element() method return? Just vCargo? or an object that includes vName and vCargo? Or an object that includes vname, vCargo and vPos? I think I understand the concept of Position as it is used in the doubly-linked-list context, where you have to define nodes in relation to previous and next position. But when it comes to a vector context, where you can make direct reference to cells, I feel like I just don't have a full grip of the Position concept. I would really appreciate if someone could help me, both conceptually and practically as it is applied above. William Barnes Ranch Hand Joined: Mar 16, 2001 Posts: 984 I like... posted Nov 01, 2003 08:20:00 0 Your "position" parameter in the class Vertex wouldn't be a "position" within a data structure. Normally you create a class which represents something, anything. You than use a data structure to hold multiple of those. So you create a class to represent a car, than have a data structure to represent a parking garage to hold one or more "cars". So the first thing you need to do is figure what defines a vector and those things are a "vector" class. You than pick a data structure to hold one or more vectors. Each data sturcture has different ways of holding and accessing the data members which it contains. This is the what you are refering to when you are talking about "postition", I think. Don't worry so much about all the different ways different data structures access their data. You need to pick the data structure which works best for you than start dropping in those "vectors". Please ignore post, I have no idea what I am talking about. Ernest Friedman-Hill author and iconoclast Marshal Joined: Jul 08, 2003 Posts: 24166 30 I like... posted Nov 01, 2003 08:34:00 0 William, You have to go read the article. The authors are presenting some specific concepts as alternatives to traditional iterators. They give a specific, subtle meaning to the term "position". Dario is just trying to understand this dry academic paper. Dario, I looked at the paper, didn't study it in detail, but I don't follow their arguments very well, either, and have a hard time understanding how you would extend what they're doing to vector-like containers. I think you could do it by introducing a layer of indirection -- the Position object could be (for example) a key into a hashtable that contained the index of the object in an vector container. But I have no idea why anybody would want to do this -- the time and space complexity are awful compared to a traditional vector. My personal, possibly misinformed opinion: if you're in a CompSci class and a professor told you to read this paper and implement a vector version, then OK, go ahead and do it. But otherwise, file this article in the "Interesting, but whatever" pile and go on with your life. [Jess in Action] [AskingGoodQuestions] Dario Romani Greenhorn Joined: Oct 03, 2003 Posts: 13 posted Nov 01, 2003 11:10:00 0 William, I understand what you are saying, and the next step will be to build an EdgeListGraph class to represent the entire graph, and it will contain a vector of vertex objects (along with other things). But if you look at Reference 5 in my initial post, you will notice that the authors specifically identify as a Vertex instance variable "A reference to the position of the vertex object in container V." However the position of a particular vertex object in the container will not be known until I create a Vertex object and use the insertVertex method in EdgeListGraph to add the vertex object to the Vector. So perhaps this is a field that stays "null" until it is populated by the insertVertex method. But my question is stil the same: what value or reference would I be placing into this field? Ernest, Thank you for your reply. It is gratifying that someone else at least understands what it is that I don't understand. There is some fundamental concept here that is eluding me, perhaps due to the subtlety of the concept (or not?). I would love to just "get on with my life", but this Position concept seems to come back with every assignment! Follow-ups and additional opinions welcome! William Barnes Ranch Hand Joined: Mar 16, 2001 Posts: 984 I like... posted Nov 01, 2003 17:45:00 0 You have two objects a graph, and a vertex. The graph uses a vector data structure to hold the vertex(s). The vertex object wll contain (among other things) its location on/in the graph. Your problem is that you don't understand how to provide a value for the vertex location. So we need to somehow map the location of the vertex on the graph to its location with in vector. If we could change the problem a little it would be easyer to solve. (Is that allowed?) If the data structure to hold the vertex(s) was a 2d array than the mapping from the location in the data structure to its location on the graph would be the same. So if vertex Z was at graph location 3,4 (using x - y coridinates) that would also be its location within the 2d array. So you would have a 1 to 1 mapping between the location within the data stucture to the location on the graph. Is it required that you use a vector to hold these vertex(s)? Dario Romani Greenhorn Joined: Oct 03, 2003 Posts: 13 posted Nov 01, 2003 18:47:00 0 william, Just to clarify, the type of graph I am talking about is defined as a collection of vertices and edges. In this context, it is not at all related to Cartesian co-ordinates or to other common uses of the word graph, such as "graphing a line" or "graphing data". Here is a link that explains the kind of graph I am talking about: I am developing a java class to implement the Graph ADT mentioned in the link. I am following one of the approaches described in the textbook for doing this. The approach I am using is called using an edge list structure to represent a graph. It involves defining a Vertex class and an Edge class. The Graph class, which represents the entire graph, will include a container to hold the Vertex objects and a container to hold the Edge objects. There are several types of containers you can use. I have chosen to use a Vector implementation for the Vertices container and a Vector implementation for the Edges container. I didn't mention edges in my earlier post because I wanted to keep it simple. But now that edges have come up, the concept of position applies to edges too: the textbook lists the following description fo one of the instance variables of an edge object: "- a reference to the position of the edge-object in container E." My questions relate specifically on how to implement the concept of "position" (in the sense that it is described in the textbook, which I have tried to convey above) in the context of the approach I have taken to implement the Graph ADT. Have a look at the following link (from my initial post) if you want a better idea of the concept of "position" in this context. Are we still on the same wavelength? William Barnes Ranch Hand Joined: Mar 16, 2001 Posts: 984 I like... posted Nov 01, 2003 19:52:00 0 Hopefully someone smart will show up and help you out. 8-> Dario Romani Greenhorn Joined: Oct 03, 2003 Posts: 13 posted Nov 01, 2003 21:07:00 0 Thanks for trying to help me out William. I think you were headed down the right path - what you described is a type of abstration, which is the stated purpose of the Position ADT - to allow client to use Position when referring to Verices and Edges, instead of using references that require the client to know the underlying implementation(e.g., indexed reference to a vector cell). My problem is that I don't know how to accomplish this abstraction in the context of my implementation. Regarding your last comment - I'll settle for someone who knows Data Structures and Algorithms in Java William Barnes Ranch Hand Joined: Mar 16, 2001 Posts: 984 I like... posted Nov 01, 2003 23:56:00 0 Well I see no one has showed up yet. I guess all the smart folks have more interesting things to do than sit around at home reading Javaranch.com. I will get it another try here. And this time I actually read some of the original post, and followed some of the links! You have a Map which holds pointers to the Vertex[s] stored in the Vector data structure. This will give you that layer of redirection you are looking for. The Vextex[s] can be moved around the Vector all you want, the pointer in the Map will always be able to find the exact Vertex you are looking for. The Key entry in the Map is the name of the Vertex, and the Value is the Vertex (a pointer into the Vector). class Vertex() { Vertex(){} //ctor } class Graph() { // Associative array. Key is name of Vertex, // Value is "position" of Vertex (pointer to Vertex in Vector). HashMap vertexMap = new HashMap() ; // Pointers to Vertex(s). Vector vertexVec = new Vertor() ; // Vector of Vertex(s). Graph(){} // ctor void AddVertex( String key, Vertex value) { vertexMap.add( key, value) ; vertexVec.add( ver) ; } } William Barnes Ranch Hand Joined: Mar 16, 2001 Posts: 984 I like... posted Nov 02, 2003 21:28:00 0 The code below shows how you can create a level of redirection using two vectors. Both of the vectors maintain pointers to Vertex instances. You can move the elements around in the �PostionV� vector and the elements in the �LocationV� vector will not lose track of the elements. I think that this is one of the items you are looking for. Sorry about the ugly code. And I am sure that someone else will show up and give a better example. class Vertex { int Xcord ; int Ycord ; Vertex( int X, int Y) { Xcord = X ; Ycord = Y ; } public String toString() { return("Xcort = " + Xcord + ", Ycord = " + Ycord) ; } } import java.util.Vector ; class Graph { Vector LocationV ; Vector PositionV ; Graph() { LocationV = new Vector() ; PositionV = new Vector() ; } void AddVertex( Vertex V) { LocationV.add( V) ; PositionV.add( V) ; } public void MixUpPositionV() { // Exchange element 0 with 2. // Vertex temp = new Vertex( 4, 4) ; // Cheating. temp = (Vertex)PositionV.elementAt( 0) ; PositionV.set( 0, (Vertex)PositionV.elementAt( 2)) ; PositionV.set( 2, temp) ; } void ShowAll() { Vertex temp ; System.out.println("Location vertex(s)") ; for( int i = 0; i < LocationV.size(); i++) { temp = (Vertex)LocationV.elementAt( i) ; System.out.println("At " + i + ", " + temp) ; } System.out.println("Position vertex(s)") ; for( int i = 0; i < PositionV.size(); i++) { temp = (Vertex)PositionV.elementAt( i) ; System.out.println("At " + i + ", " + temp) ; } } } class testGraph { public static void main( String arg[]) { Graph g = new Graph() ; Vertex v = new Vertex( 1, 1) ; g.AddVertex( v) ; Vertex v2 = new Vertex(2, 2) ; g.AddVertex( v2) ; Vertex v3 = new Vertex(3, 3) ; g.AddVertex( v3) ; g.ShowAll() ; g.MixUpPositionV() ; g.ShowAll() ; } } I agree. Here's the link: subject: "Position" Concept as it applies to Data Structures Similar Threads Graph problem July Newsletter Puzzle (Maze Solver) Exception in thread "main" java.lang.NullPointerException how to move and reverce a geometric object? All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
http://www.coderanch.com/t/394728/java/java/Position-Concept-applies-Data-Structures
CC-MAIN-2014-15
refinedweb
2,692
62.27
From: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>The below warning was added in place of pte_mkyoung(); if (is_write)pte_mkdirty();In fact, if the PTE is not marked young/dirty, our dirty/accessed bit emulationwould cause the TLB permission not to be changed, and so we'd loop, and given wedon't support preemption yet, we'd busy-hang here.However, I've seen this warning trigger without crashes during a loop ofconcurrent kernel builds, at random times (i.e. like a race condition), and Irealized that two concurrent faults on the same page, one on read and one onwrite, can trigger it. The read fault gets serviced and the PTE gets markedwritable but clean (it's possible on a shared-writable mapping), while thegeneric code sees the PTE was already installed and returns without action. Inthis case, we'll see another fault and service it normally.Signed-off-by: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>--- arch/um/kernel/trap_kern.c | 9 +++++++++ 1 files changed, 9 insertions(+), 0 deletions(-)diff --git a/arch/um/kernel/trap_kern.c b/arch/um/kernel/trap_kern.cindex 95c8f87..0d4c10a 100644--- a/arch/um/kernel/trap_kern.c+++ b/arch/um/kernel/trap_kern.c@@ -95,7 +95,16 @@ survive: pte = pte_offset_kernel(pmd, address); } while(!pte_present(*pte)); err = 0;+ /* The below warning was added in place of+ * pte_mkyoung(); if (is_write) pte_mkdirty();+ * If it's triggered, we'd see normally a hang here (a clean pte is+ * marked read-only to emulate the dirty bit).+ * However, the generic code can mark a PTE writable but clean on a+ * concurrent read fault, triggering this harmlessly. So comment it out.+ */+#if 0 WARN_ON(!pte_young(*pte) || (is_write && !pte_dirty(*pte)));+#endif flush_tlb_page(vma, address); out: up_read(&mm->mmap_sem);-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
https://lkml.org/lkml/2005/11/12/100
CC-MAIN-2016-44
refinedweb
313
55.95
DarksshadesMember Content count4 Joined Last visited Community Reputation125 Neutral About Darksshades - RankNewbie Luabind Property not working when from luabind::globals Darksshades replied to Darksshades's topic in Engines and MiddlewareSorry about that, I didn't actually compile that pasted code. But anyways I compile it now the result is: 8 function: 0056D180 Basically, .def_readwrite and .property don't really work. I found 2 similar questions with google and it appears that its a wrong build with either boost or luabind. here: and here: The thing is... I can't seem to compile boost or luabind very well... I'm using Code::Block with gcc mingw. What I did was compile luabind from a dynamic lib project with defined LUABIND_DYNAMIC_LINK As for the boost I simply extrated boost 1_44_0 and added it to the includes directories and it seemed to work fine. Can someone help me compile boost and luabind correctly?? Unity Where can I find tutorials/examples/etc. on how to integrate C++ and Lua? Darksshades replied to AniMerrill's topic in Engines and MiddlewareJust to follow up, LuaBind an tolua++ are REALLY easy and simple to bind C++ classes to lua. Here is a tutorial of how to bind classes with LuaBind: You might have some problems in windows if you're new to programming thought. I didn't find any compiled luabind for windows, but even so, its fairly easy to compile it from source code and the luabind manual has every instructions you might need to help you with that. Luabind Property not working when from luabind::globals Darksshades posted a topic in Engines and MiddlewareEDIT 2: Solved! As I couldn't compile luabind with gcc using bjam, I created a new static library project for Lua 5.1.4 and Luabind and it finally worked as it should. I was using the installer version of Lua 5.14 for windows before, which was a dynamic library and I guess it was messing with the static lib project for luabind I created previously. But anyways, its working now.... Thanks o/ And nice tip Gambini, that helps a lot when debugging. EndEdit2 Edit: It really is the build of luabind. I download VisualStudio and compiled luabind with: "bjam toolset=msvc-10.0 variant=debug threading=multi link=static" -> This doesnt solve anything, the same result and "bjam toolset=msvc-10.0 variant=debug threading=multi link=static define=_BIND_TO_CURRENT_VCLIBS_VERSION" -> This makes it work. The only problem is that I use gcc mingw with code::blocks so that doesn't actually solve it. The define that solves the problem is specific to msvc. just to make sure, now compiling luabind with gcc(gcc-mingw-4.4.1) with the define produces: "bjam toolset=gcc variant=debug threading=multi link=static define=_BIND_TO_CURRENT_VCLIBS_VERSION" -> Bunch of undefined references to lua functions "bjam toolset=gcc variant=debug threading=multi link=shared define=_BIND_TO_CURRENT_VCLIBS_VERSION" -> Luabind works but with .def_readwrite and .property not working still Does anyone know how to properly compile luabind with gcc? EndEdit: I'm trying to make lua recognize a local variable in his global scope and makes chances in it. So I'm creating a Test ts; and passing it to lua with luabind::globals(l)["test"] = &ts; It works fine, lua makes changes to his variable 'test' and the changes are transferred to the local variable 'ts'. The only problem is that the Property is not working: ".property("stack", &Test::getStack, &Test::setStack)" When calling the getter it spills out some random numbers I'm assuming is an memory addres and the setter does nothing. I notice thought that this doesnt happen when the variable is created inside luaString, only when getting a variable with luabind::globals. Inside luaString: "t = Test()\n print(t.stack)" <- this works fine, just doesnt when variable from luabind::globals This is not a major problem but it would be nice to know how to solve this... am I doing something wrong or am missign something? The class using namespace std; class Test { private: int stack; public: Test(){stack = 0; valor = 2;} ~Test(){} void addStack(){stack++;} void sayStack(){cout << stack << endl;} int getStack(){return stack;} void setStack(int stk){stack = stk;} }; Example main and Lua code int main() { lua_State* l = luaL_newstate(); luaL_openlibs(l); luabind::open(l); //Bind the class to lua luabind::module(l) [ luabind::class_<Test>("Test") .def( luabind::constructor<>( ) ) .def("sayStack", &Test::sayStack) .def("addStack", &Test::addStack) .property("stack", &Test::getStack, &Test::setStack) ]; //Create a Test class Test ts; //Make lua global test equal to local ts luabind::globals(l)["test"] = &ts; //First, make a new variable and test property, it WORKS luaL_dostring(l, "t = Test()\n" "t.stack = 8\n" "print(t.stack)\n" ); //Then, using the variable grom luabind::globals... DOESN'T WORK. luaL_dostring(l, "print(ts.stack)\n" "ts.stack = 8\n" "print(ts.stack)\n" ); //Close lua lua_close(l); return 0; } Thanks Luabind - .property getter not working Darksshades posted a topic in Engines and MiddlewareHey, I just began learning luabind and I'm trying to get the .property getter to work but with no luck. When I set the getter to a lua variable and try to print it... it prints the function address instead of the value. I made a simple program to test it here: [code] using namespace std; class Test { private: int stack; public: Test(){stack = 0;} ~Test(){} void addStack(){stack++;} void sayStack(){cout << stack << endl;} int getStack(){return stack;} void setStack(int stk){stack = stk;} }; int main() { lua_State* l = luaL_newstate(); luaL_openlibs(l); luabind::open(l); //Bind the class to lua luabind::module(l) [ luabind::class_<Test>("Test") .def("sayStack", &Test::sayStack) .def("addStack", &Test::addStack) .property("stack", &Test::getStack, &Test::setStack) ]; //Create a Test class Test ts; //Make lua global test equal to local ts luabind::globals(l)["test"] = &ts; //Should make rs = stack, //Say stack and then print the rs(print same as sayStack) luaL_dostring(l, "rs = test.stack\n" "test:sayStack()\n" "print(rs)\n" ); //Close lua lua_close(l); return 0; } [/code] The output here: [code] 0 function: 005FC568 [/code] I tried the setter and it works just fine.... its just the getter that doesnt work. Does anybody have an idea why and how to get the getter to work?
https://www.gamedev.net/profile/197495-darksshades/
CC-MAIN-2018-05
refinedweb
1,039
54.22
GXT Image Viewer: w/Pan and zoom GXT Image Viewer: w/Pan and zoom I did this class to manage images in an any-size content panel, it works for me... Last edited by jeroni; 24 Nov 2010 at 4:35 AM. Reason: error uploading classes I have written a layout and plugins for this very purpose, I just asked my manager if I can post it and I'm waiting for him to discuss it with legal to see what I can post or not. I will start a new thread and link it here if they allow me to. is there a javacsript extjs version of this ;-) Since I haven't gotten a response on what I am or am not allowed to post from our code, here is my Suggestion on how to simply accomplish this pan and zoom image viewer. you will likely have to build on this idea. (haven't tested this snippet either) what you would do then, is create a LayoutContainer that is the same size ratio as your image you want to display, set the layout to a PanZoomLayout, then set the zoom and position of your layout, call layout and it should do the trick. Code: /** * Copied from FitLayout mostly */ public class PanZoomLayout extends Layout { Point position = new Point(0, 0); float zoom = 1.0f; /** * Creates a new fit layout instance. */ public PanZoomLayout () { monitorResize = true; } public void setPosition(Point position) { this.position = position; setLayoutNeeded(container, true); } public void setZoom(float zoom) { this.zoom = zoom; setLayoutNeeded(container, true); } @Override protected void onLayout(Container<?> container, El target) { if (container.getItemCount() == 0) { return; } activeItem = activeItem != null ? activeItem : container.getItem(0); super.onLayout(container, target); Size size = new Size(target.getStyleSize().width, target.getStyleSize().height); target.makePositionable(); setItemSize(activeItem, size); activeItem.makePositionable(true); activeItem.el().setPagePosition(-position.x, -position.y); } protected void setItemSize(Component item, Size size) { if (item != null && item.isRendered()) { size.width -= getSideMargins(item); size.height -= item.el().getMargins("tb"); setSize(item, size.width, size.height); } } } thanks. A little futher searching and a found a place to start. above is useful nonetheless. Hi jeroni, Whats the value of the constant that is used in the attached file ResourcePreview. That is IconStyle.ZOOM_IN, IconStyle.ZOOM_OUT....etc Don mind plz... Thanks Naveen. Those are just CSS icon styles: IconStyle.ZOOM_IN is "icon-zoom-in". The CSS style is: .icon-zoom-in { background-image: url('images/zoom-in.png') !important; } use any icons you like (16x16 pixels) Similar Threads Image Rotate/ Zoom/ PanBy durlabh in forum Community DiscussionReplies: 4Last Post: 1 Dec 2010, 9:09 AM [Ext JS 3+] Ext.ux.grid.plugins.Pan - Pan support for LockingGridViewBy mankz in forum Ext 3.x: User Extensions and PluginsReplies: 0Last Post: 9 Aug 2010, 4:47 AM Image zoom-in questionBy Liminality in forum Ext 2.x: Help & DiscussionReplies: 2Last Post: 16 Jun 2009, 9:22 AM
http://www.sencha.com/forum/showthread.php?108577-GXT-Image-Viewer-w-Pan-and-zoom
CC-MAIN-2014-52
refinedweb
480
50.02
11 November 2012 00:16 [Source: ICIS news] RIO DE JANEIRO (ICIS)--?xml:namespace> “The project is progressing well. Site preparation is complete and basic engineering finished. We have also made orders to procure around 50% of all the equipment,” said Jose Luis Uriegas, CEO of Grupo Idesa. Mexico-based Idesa owns 35% of Braskem Idesa, the joint venture building Ethylene XXI, with Braskem owning 65%. Uriegas made his comments on the sidelines of the Latin American Petrochemical Association (APLA) annual meeting. “One week ago, we also put our first structures in place on site – the racks for the pipes. Previously it had been levelling, piling and work on the foundations,” he added. Ethylene XXI. The $3.2bn (€2.5bn) financing package for the project is expected to close on 30 November, said Uriegas. This represents around 70% of the total cost of $4.5bn, which includes construction costs of $3.7bn, plus working capital and interest. The project is expected to make a substantial dent in “By 2015, this deficit will be 1.7m tonnes/year, so we are basically supplying a significant part of that with Ethylene XXI,” Uriegas said. The APLA conference ends on Tuesday. ($ =
http://www.icis.com/Articles/2012/11/11/9612966/APLA-12-Mexico-Ethylene-XXI-on-track-for-July-2015.html
CC-MAIN-2014-42
refinedweb
198
67.25
Optimize with a SATA RAID Storage Solution Range of capacities as low as $1250 per TB. Ideal if you currently rely on servers/disks/JBODs Java has always had many different faces to its security model. It has a strongly typed compiler to eliminate programming bugs and help enforce language semantics, a bytecode verifier that makes sure the rules of Java are followed in compiled code, a classloader that's responsible for finding, loading, and defining classes and running the verifier on them, and the security manager -- the main interface between the system itself and Java code. We'll be concentrating on the age-old security manager and the new addition to the JDK, the access controller. To refresh our memories, the security manager in Java is composed of a series of checkXXX methods that we can override, defining the logic we desire. In JDK 1.1, this logic either disallows the request (by throwing a java.lang.SecurityException) or allows the request (either via some ornate logic scheme or by simply returning). Security managers exist to enforce the rules of the sandbox. The sandbox concept is fairly simple: When you run a piece of Java code, you may want the sandbox to provide an area for the code to do what it needs to do. But in many situations, you need to restrict the bounds of this area. Code that is trusted resides outside the sandbox; code that is untrusted is confined within it. Trusted code is the code in the Java API and code loaded from the classpath. Untrusted code is code loaded from outside the classpath, usually from the network. So Java applications, by default, live outside the sandbox and Java applets, by default, are confined within it. The need for this "play area" is obvious when you think of an applet. Suppose a site called decided to create an applet that ostensibly showed slides of Sunday cartoons. You might enjoy that. But behind the scenes, while you were lazily chuckling at the sideshow, the applet would really be scouring your hard drive for private information. Without the sandbox, this scenario is all too possible. The sandbox represents the limits a program has put upon it. The sandbox is another term for a security policy. As Java grows, the need for varying security policies increases. It is agreed by almost everyone that the original sandbox model of JDK 1.0.x, though safe, was too restrictive. With JDK 1.1 the addition of digital signatures allowed expansion of the original sandbox policy. If the user trusted the digitally signed code, users could allow normally untrusted code to access resources. (Though their discussion is beyond the scope of this article, digital signatures still play an important role in JDK 1.2.) The sandbox, though, is just the security policy -- the equivalent of a law. And a policy or law must be enforced to be effective. The government can pass a law in your town tomorrow outlawing red shoes, but if nobody enforces it, it's not much of a law. The sandbox won't effectively confine code and its behavior without an enforcement mechanism. This is where the security manager comes into the picture. Security managers make sure all restricted code stays in the sandbox. Here's an example of how to create a security manager in JDK 1.1 that allows reading files, but disallows writing files: public class MySecurityManager extends java.lang.SecurityManager { public void checkRead(String file) throws SecurityException { // reading is allowed, so just return return; } public void checkWrite(String file) throws SecurityException { // writing is not allowed, so throw the exception throw new SecurityException("Writing is not allowed"); } } // end MySecurityManager The MySecurityManager class does its job, but in a purely binary fashion: Either you can perform the requested action or you cannot. Often it is desirable to maintain a little more granularity in the system. Perhaps you want to permit reading, but only from files ending in txt. This might be achieved if we change the previous code as follows: public class MySecurityManager2 extends java.lang.SecurityManager { public void checkRead(String file) throws SecurityException { //check the file extension to see if it ends in ".txt" int index=file.lastIndexOf('.'); String result=file.substring(index, file.length()); if(result.equalsIgnoreCase(".txt")){ return; }else{ throw new SecurityException("Cannot read file: "+file); } } } // end MySecurityManager2 This version of the code offers a little more fine-tuned control than the original. With work, we could create a security manager that had very a granular security policy. This format -- subclassing java.lang.SecurityManager and overriding the appropriate methods -- was the only way security was controlled prior to JDK 1.2. This system offers many advantages: SecurityManagerclass are called for you by the Java API; there's no need for you to call the code at all The traditional model of Java security falls short, however, when it comes to flexibility and intricate granularity. Its disadvantages as follows: These disadvantages are erased with the security architecture of JDK 1.2 while the advantages remain. Our discussion so far indicates that the traditional sandbox model, though useful, is simply not flexible enough for most systems. What may be a good policy today may not be appropriate tomorrow. A user could need his access level changed over time, not an easy thing to accomplish with the traditional security model. To provide greater control to both the developer and the end user, the java.security package was expanded to include classes like AccessController, Permission, and Policy. We'll discuss the Permission and Policy objects a bit later. The AccessController is the muscle of the security manager. The sandbox, remember, is the policy in effect. The security manager enforces this policy by making use of the AccessController. In JDK 1.2, the SecurityManager class has the same interface as prior releases (allowing for backward compatibility), but the logic has changed. Now each checkXXX method makes a call to a static method inside the AccessController class. When you call the checkRead(String file) method now, it defers the work to the AccessController. Let's assume a user, Grover, wants to access a file called joe.rec. Compile-time isn't the best time to set security boundaries, as it isn't yet known who that user will be. It makes more sense to discover whether or not Grover can access the file at runtime. Someone, probably a security administrator, could create a policy file containing Grover's permissions. The flow of this process is given in the diagram below. Security Flow Diagram I've simplified the procedure here; the code samples aren't literally from the source code, but the functionality is exactly the same. The important thing to understand here is that the SecurityManager is no longer solely responsible for the logic of the desired security policy. In fact, it could be said that the SecurityManager is no longer needed in the JDK. There are a few reasons it has remained part of the system, but the main one is to ensure that code written with previous versions of the JDK will still function. Remember that the methods inside the SecurityManager are just that: methods. You can perform any logic you choose inside them. If you're using a Java program from an earlier release of the JDK and there is a security manager installed for it, the code does not have to be changed to still work. The original logic will apply for all the methods you overrode and the AccessController will be called for the rest. Now we'll look at how the AccessController works with user-defined policies. If you look at the above diagram, you'll see that when the SecurityManager's checkRead(String file) method is called, it creates a FilePermission object. It then passes this FilePermission object to the AccessController, which compares it to the current Policy object for acceptance or denial. Whew! The good news is, all of this calling of methods and passing of objects is handled for you by the Java API. Though there are times when you may wish to create your own Permission objects or call the AccessController yourself, we'll leave discussion of that for a future article. What we need to define now are the system-provided permissions we wish to allow a program to have and the policy file those permissions will be stored in. A permission in Java represents a specific access privilege. The abstract java.security.Permission class is subclassed to create specific permission types. To represent a file access permission, the java.io.FilePermission class is used. To represent access to a property, the java.util.PropertyPermission class is used. Socket access is encapsulated in the java.net.SocketPermission class and so on. Let's take a look at the FilePermission class. As a demonstration of this whole security system, we'll write a Java app that is allowed to read any file in the /tmp directory. The code for the application is called FileApp.java. When we run this code, the doit method is called and an attempt is made to create a FileInputStream. This attempt results in a check-in with the SecurityManager to see if one is loaded. If one is loaded, it will call the AccessController and check whether or not read access to the specified file is allowed. If you run this code directly, as normal, you'll get no errors. That's because Java applications are still run without any sandbox limitations at all. To force the system to load a security manager and enforce the permissions defined, you have to pass the -usepolicy flag to the interpreter. (Note: In JDK 1.2 beta 3, to use the -usepolicy flag, you also must pass the -new flag.) java -new -usepolicy FileApp test.txt This -usepolicy flag tells the virtual machine to make use of the policy files on the system. In the /lib/security directory of your Java installation, you'll find a file called java.security. This is a list of properties that are used for security. You'll find a section toward the middle of the file that contains these lines: # The default is to have a single systemwide policy file, # and a policy file in the user's home directory. policy.url.1=file:${java.home}/lib/security/java.policy policy.url.2=file:${user.home}/.java.policy You may specify as many policy files as you wish in the java.security file, but they must be numbered sequentially. The first one listed (policy.url.1) specifies the system security policy, java.policy. If you look closely at this file, you'll see that it's a very limited sandbox. If an application makes use of only this policy file, it has no access to system resources. Notice, for example, that there are no permissions granted for reading files. That's right! An application that uses this default security policy is as restrained as the normal applet. The next policy entry in the java.security file is the .java.policy file (notice the leading period [.]). This is the default name given the user security policy. By default, this policy file is stored in the user's home directory. More policy files can be added sequentially, perhaps related to a specific application. The virtual machine will create a java.security.Policy object from all these defined files. It forms a union of the files, so all the policies in .java.policy are first, then the user-specific policy, then the third file, and so on, until there are no more policy files to read. Since policy files only grant privileges, there is no danger of clashing. In other words, there is no provision for denying a privilege except to simply not grant it. A policy file entry is composed of one or more grant entries. A grant entry has the form: grant [signedBy >signer<] [, codeBase >url of code source<] { permission >permission class< [>name< [, >actions<] ]; permission >permission class< [>name< [, >actions<] ]; permission >permission class< [>name< [, >actions<] ]; ... } The:
http://www.javaworld.com/javaworld/jw-08-1998/jw-08-sandbox.html?page=1
crawl-003
refinedweb
2,014
56.45
Can Kruskal-Wallis test be used to test significance of multiple groups within multiple factors? I have tried to read what I can on Kruskal-Wallis and while I have found some useful information, I still seem to not find the answer to my question. I am trying to use the Kruskal-Wallis test to determine the significance of multiple groups, within multiple factors, in predicting a set of dependent variables. Here is an example of my data: ID Date Point Season Grazing Cattle_Type AvgVOR PNatGr NatGrHt 181 7/21/2015 B22 late pre Large 0.8 2 20 182 7/21/2016 B32 early post Small 1.0 4 24 In this example, my dependent variables are "AcgVor", "PNatGR" and"NatGrHt" while the independent variables (factors) are "Season', 'Grazing", and "Cattle_Type". As you can see, each of my factors has 2 group levels each. What I am trying to accomplish is to run a non-paramatric test that looks at the separate and combined importance of my factor groups to each of my dependent variables. I chose Krukal-Wallis and it seems to work for testing one of my grouping factors at a time. Here is the result for AvgVor ~ Grazing kruskal.test(AvgVOR ~ Grazing, data = Veg) Kruskal-Wallis rank sum test data: AvgVOR by Grazing Kruskal-Wallis chi-squared = 94.078, df = 1, p-value < 2.2e-16 This tells me that AvGVor is significantly different according to whether they were recorded pre or post grazing. Is there a way to build a similar model using Kruskal-Wallis that includes all of my grouping factors? Even if I have to run a separate one for each dependent variable. I attempted the following code, but it is flawed. lapply(Veg[,c("Grazing", "Cattle_Type", "Season")]),function(AvgVOR) kruskal.test(AvgVOR ~ Veg) See also questions close to this topic - R Package Dependencies in Conflict I currently have Mac 10.13.1 High Sierra and when I go to install the ff package for R on Anaconda using the Terminal command conda install -c conda-forge r-ff I get this... Solving environment: failed UnsatisfiableError: The following specifications were found to be in conflict: - r-drr - r-ff Use "conda info " to see the dependencies for each package. To determine the dependencies I typed in conda info but I a new to R so not sure what to do or look for when I get this..... active environment : None user config file : /Users/johnchristospanagiotopoulos/.condarc populated config files : /Users/johnchristospanagiotopoulos/.condarc conda version : 4.4.7 conda-build version : 3.0.27 python version : 2.7.14.final.0 base environment : /Users/johnchristospanagiotopoulos/anaconda2 (writable) channel URLs : package cache : /Users/johnchristospanagiotopoulos/anaconda2/pkgs /Users/johnchristospanagiotopoulos/.conda/pkgs envs directories : /Users/johnchristospanagiotopoulos/anaconda2/envs /Users/johnchristospanagiotopoulos/.conda/envs platform : osx-64 user-agent : conda/4.4.7 requests/2.18.4 CPython/2.7.14 Darwin/17.2.0 OSX/10.13.1 UID:GID : 501:20 netrc file : None offline mode : False I have looked to other places to resolve this issue, but I am new to R. If anyone has advice on how to get rid of error like these it would be much appreciated. -? - How to accurately calculate a running average in JavaScript without summing entire set Say I am tracking the time it takes for a function to execute, and am showing the average, updating each time a function completes. The array of times would be like: var completionTimes = [123, 1234, 128, 1000, ...] But it could get very large, into the millions or billions of runs. Averaging that every frame would be expensive. var sum = completionTimes.reduce(function(a, b) { return a + b }) var avg = sum / completionTimes.length Wondering if there is a trick of some sort to perform this running average without having to sum up all the values each time. Wondering if there is a way to do this without loss of accuracy/precision, but if not, knowing how to do it with small loss of accuracy works too. Maybe there is a way to sort them and group them into chunks, average the chunks, then do it that way. Not sure what best practices are here. - Is there a C++ library for Truncated gamma distribution? I have been looking for C++ libraries for Normal Distribution, Gamma Distribution, Truncated Normal Distribution and Truncated Gamma Distribution. I found libraries for the first three here but couldn't find any for Truncated Gamma Distribution. Is there a Truncated Gamma Distribution library for C++? Thanks. - How do I implement multiple linear regression in Python? I am trying to write a multiple linear regression model from scratch to predict the key factors contributing to number of views of a song on Facebook. About each song we collect this information, i.e. variables I'm using: df.dtypes clicked int64 listened_5s int64 listened_20s int64 views int64 percentage_listened float64 reactions_total int64 shared_songs int64 comments int64 avg_time_listened int64 song_length int64 likes int64 listened_later int64 i'm using number of views as my dependent variable and all other variables in a dataset as independent ones. The model is posted down below: import scipy.stats as stats import matplotlib import matplotlib.pyplot as plt import sklearn from sklearn import linear_model from sklearn.cross_validation import train_test_split #df_x --> new dataframe of independent variables df_x = df.drop(['views'], 1) #df_y --> new dataframe of dependent variable views df_y = df.ix[:, ['views']] names = [i for i in list(df_x)] regr = linear_model.LinearRegression() x_train, x_test, y_train, y_test = train_test_split(df_x, df_y, test_size = 0.2) #Fitting the model to the training dataset regr.fit(x_train,y_train) regr.intercept_ print('Coefficients: \n', regr.coef_) print("Mean Squared Error(MSE): %.2f" % np.mean((regr.predict(x_test) - y_test) ** 2)) print('Variance Score: %.2f' % regr.score(x_test, y_test)) regr.coef_[0].tolist() Output here: regr.intercept_ array([-1173904.20950487]) MSE: 19722838329246.82 Variance Score: 0.99 Looks like something went miserably wrong. Trying the OLS model: import statsmodels.api as sm from statsmodels.sandbox.regression.predstd import wls_prediction_std model=sm.OLS(y_train,x_train) result = model.fit() print(result.summary()) Output: R-squared: 0.992 F-statistic: 6121. coef std err t P>|t| [95.0% Conf. Int.] clicked 0.3333 0.012 28.257 0.000 0.310 0.356 listened_5s -0.4516 0.115 -3.944 0.000 -0.677 -0.227 listened_20s 1.9015 0.138 13.819 0.000 1.631 2.172 percentage_listened 7693.2520 1.44e+04 0.534 0.594 -2.06e+04 3.6e+04 reactions_total 8.6680 3.561 2.434 0.015 1.672 15.664 shared_songs -36.6376 3.688 -9.934 0.000 -43.884 -29.392 comments 34.9031 5.921 5.895 0.000 23.270 46.536 avg_time_listened 1.702e+05 4.22e+04 4.032 0.000 8.72e+04 2.53e+05 song_length -6309.8021 5425.543 -1.163 0.245 -1.7e+04 4349.413 likes 4.8448 4.194 1.155 0.249 -3.395 13.085 listened_later -2.3761 0.160 -14.831 0.000 -2.691 -2.061 Omnibus: 233.399 Durbin-Watson: 1.983 Prob(Omnibus): 0.000 Jarque-Bera (JB): 2859.005 Skew: 1.621 Prob(JB): 0.00 Kurtosis: 14.020 Cond. No. 2.73e+07 Warnings: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified. [2] The condition number is large, 2.73e+07. This might indicate that there are strong multicollinearity or other numerical problems. It looks like somethings went seriously wrong just by looking at this output. I believe that something went wrong with training/testing sets and creating two different data frames x and y but can't figure out what. This problem must be solvable by using multiple regression. Shall it not be linear? Could you please help me figure out what went wrong? - In R how do you label the x-axis for T2 plots using mqcc? - Data Science Analyze questionaires with multiple responses Let's say we have a questionaires with several responses possible for certain questions. Example What are your reasons of going to work early and leave early ? (3 choices possible) a. Avoid traffic in public transport b. Have time with family in the evening c. Sport in the evening d. You are a morning person e. Job nature etc. What would be your strategy to normalize and analyze this questionaire, taking into consideration that people may put from 0-3 responses. Should we convert this to nominal data ? - How to generate multivariate normal distribution in J Can anyone tell me how to generate multivariate distribution in J given the mean value vector and the covariance matrix? For example, in Python, np.random.multivariate_normal([0,0],[[1,.75],[.75,1]],1000) generate multivariate distribution with [0,0] as mean value vector and [[1,.75],[.75,1]] as the variance-covariance matrix? Thanks - Maximum Likelihood Hypothesis Definition? I need the definition of ML Hypothesis ; I know that the maximum likelihood estimation is a statistic method to perform fitting parameters for different models. So can I say that the ML Hypothesis is the hypothesis with parameters found using ML estimation ? Thanks for the help - How to compare two Probability Density Functions? I have two Probability Density Functions and I want to know if their distributions are similar or not. I know that KS test in R can do this, but when I run the code, an error occurs. Thanks for any help. set.seed(100) a=density(sample(x=1:30,size = 30,replace = T)) b=density(sample(x=1:40,size = 35,replace = T)) plot(a) lines(b) ks.test(a,b) Error in ks.test(a, b) : 'y' must be numeric or a function or a string naming a valid function - Testing the null hypothesis of zero skew I need to test the null hypothesis that my steady returns have a zero skewness with a confidence level of 95%. Do you have any ideas which formula I can take for this kind of test ? I tried the Agostino test for skewness, but think it's not the best way, because I can't set a confidence level. library(moments) ?agostino.test - how can I implement kruskal-wallis test in spark-scala - If ignoring repeated measures data in kruskal wallis test, is it assumption violated? This data is repeated measures data and groups are included.(Mixed Design) I want to Non-parametric test between Group effect (Kruskal-Wallis Test) and Within effect(Friedman Test) separatedly. (Don't want Parametric Test. and Friedman Test is testing Later.) Like testing 'difference between Grand mean effect' & 'difference between Column mean effect'. Data is Stacked by repeated measures. If ignoring repeated measures data in Kruskal-Wallis test, Is it assumption violated? (Assumption: Random Sample or Independent Sample) My data is below here. # From Table 1 of Deal et al (1979): Role of respiratory heat exchange in production of exercise-induced asthma. J Appl Physiol 46:467-475 dat <- data.frame(ID = c(1,2,3,4,5,6,7,8), temp1 = c(74.5,75.5,68.9,57,78.3,54,72.5,80.8), temp2 = c(81.5,84.6,71.6,61.3,84.9,62.8,68.3,89.9), temp3 = c(83.6,70.6,55.9,54.1,64,63,67.8,83.2), temp4 = c(68.6,87.3,61.9,59.2,62.2,58,71.5,83), temp5 = c(73.1,73,60.5,56.6,60.1,56,65,85.7), temp6 = c(79.4,75,61.8,58.8,78.7,51.5,67.7,79.6)) # stacked Data dat.s <- stack(dat[,-1]) SUBJNO <- rep(seq(1,8),6) GROUP <- rep(rep(seq(1,4),2),6) dat.s2 <- cbind(dat.s,SUBJNO) dat.s2 <- cbind(dat.s,GROUP) head(dat.s2) # Kruskall-Wallis Test # I think this is like difference between grand mean # Is it violated assumption 'Random Samplle (or Independent Sample'? kruskal.test(values ~ GROUP, data = dat.s2) # I think this is like difference between column mean. # maybe this is not violated assumption (Random Sample or Independent Sample) kruskal.test(values ~ GROUP, data = dat.s2[dat.s2$ind=='temp1',]) kruskal.test(values ~ GROUP, data = dat.s2[dat.s2$ind=='temp2',]) kruskal.test(values ~ GROUP, data = dat.s2[dat.s2$ind=='temp3',]) kruskal.test(values ~ GROUP, data = dat.s2[dat.s2$ind=='temp4',]) kruskal.test(values ~ GROUP, data = dat.s2[dat.s2$ind=='temp5',]) kruskal.test(values ~ GROUP, data = dat.s2[dat.s2$ind=='temp6',]) Thanks a lot! - Kruskal-Wallis in R with large dataset (n = 106000) - Any alternative? I am using the Kruskal-Wallis test on my dataset, but it's now been running for about 20 minutes. My dataset contains 160000 observations. Does anyone has experience with running this test on these kind of large datasets? Will the test be completed eventually and how long does it take? Are there any alternatives?
http://codegur.com/48215526/can-kruskal-wallis-test-be-used-to-test-significance-of-multiple-groups-within-m
CC-MAIN-2018-05
refinedweb
2,161
60.72
🚀 Interested in making these for a project? Get in touch If you are making an application that requires some basic true/false, yes/no, a/b/c/d input from a customer, a very reliable way to achieve this is by making USB push-buttons they can press. When plugged into your machine, these buttons will appear as a generic keyboard. Each different button-press will appear as a different keyboard keystroke. This is a really reliable, reuseable and easy-to-install way of mapping hardware input to your application. Overview To do this, you’re going to need to buy a few parts and be comfortable with some basic soldering (if not, start here!). We will have two big arcade-style push buttons that connected to a USB HID device. A HID (“human interface device”) is a tiny microcontroller that is powered by a USB cable and can be programmed to perform various operations. These microcontrollers usually have input/output pins to allow you to interact with the device. We are going to use the “Teensy” USB HID. This is a really cheap and easy to use device that can be programmed using the Arduino framework. In our case we are going to connect our two buttons to the USB HID device’s input pins and program the device to listen for presses from our buttons and then output particular keystrokes. These keystrokes can then be collected by your application for input. In our case, our two buttons will output either a t for true, or a f for false. Shopping List - x2 USB buttons. We use the “Massive Arcade Button with LED” from Adafruit.com. In the EU, you can get them from the Pimoroni.co.uk shop. Each of these buttons come in two parts - the big button itself as well as the actual underlying switch (that includes an LED that we won’t be using). - x1 Teensy USB HID controller. We use the 2.0 board that can be bought on the PRJC.com website or Adafruit.com. You can also get the Teensy 3.2 from Pimoroni.co.uk - x1 USB mini-b cable. You probably have one lying around, but if not you should pick one up from PRJC.com or Adafruit.com when you are buying the other parts. - Some male/female jumper wires (optional) - Some wire (single core is probably best although we are using stranded wire below). - A soldering iron, solder, flux, wire cutters, electrical tape, … Setting up the Teensy header pins First thing to do is to set up our Teensy. The Teensy comes with some included header pins. If you want to reuse the Teensy board for other projects, it’s a good idea to first solder these header pins onto the Teensy instead of soldering wires straight onto the board itself. You can then use the jumper wires to more easily connect to the input pins. Wiring up our physical buttons With the header pins soldered to our Teensy we can now wire up our physical buttons. We are simply going to solder some wire to our buttons and a then a jumper cable to our wire to make it easy to connect to the Teensy. We’ll need four sections of equal-length wire as well as four jumper cables (two for each button). First, connect the wire to the button. On this button there are four connections, two are for the actual switch itself and the other two are for an included LED. We are going to ignore the two LED connections and just connect directly to the switch: Now take the jumper cables and cut the male end off. Strip the wire back and wrap it into our existing wire for soldering. Note that we are directly connecting our buttons to the input pins. Usually you would have to make a pull-up or pull-down circuit if you wanted to use buttons as input. Luckily, the Teensy already has pull-up circuits build into the input pins: All of the pins have a pullup resistor which may be activated when the pin is an input. Just use pinMode() with INPUT_PULLUP. … The pullup resistors are useful when connecting pushbuttons that can connect the pin to ground (low), but when the button is not pressed there is no connection at all. The pullup resistor causes the voltage to be high when nothing is connected. so we can simply connect our buttons directly. Programming the Teensy With all of our hardware set up, we can now program the Teensy to actually take the input from the buttons and output keystrokes. To interact with our device, we need to first download the Teensy loader application. Once we have the application installed we can connect our device to our laptop via USB. Communicating with the Teensy The Teensy operates in two different modes: the default mode - which is what runs when you initially plug it in - is the application mode where whatever program you have installed runs in the background. For example, the default program installed on the Teensy simply blinks the little LED on the board. So when you first connect the Teensy you will see the LED start blinking: The second mode is the program mode where instead of executing the on-board program you can actually upload new programs to the device. To enter this mode, simply press the small little button on the Teensyboard itself. Writing our program Once the board is ready to accept new programs, you have to actually create and upload them. To do this, there is the hard way and the easy way: - The hard way is to write your application in C, compile it on your laptop and upload the compiled binary to the Teensy using the Teensy loader application that you download and installed a minute ago. - The easy way is to install the Arduino IDE as well as the custom Teensyduino add-on libraries that make the Teensy device compatible and accessible via the Ardunio IDE. We’re going to use the latter and make life easy. Follow the PJRC guide on installing both the Arduino IDE and the Teensyduino libraries, then open the Arduino IDE and create a new sketch. We will setup two buttons. In both cases, when a button is pressed, the little LED on the board will be lit. Furthermore, when one button is pressed it will output a t (true) character, while the other will output a f (false) character. #include <Bounce.h> // Define our two buttons. We are using the // 3rd party "bounce" library to more reliably // read the button clicks. // // Read more: // Bounce buttonTrue = Bounce(PIN_D4, 10); Bounce buttonFalse = Bounce(PIN_B1, 10); // Setup the two buttons with the pins we // have connected to and make sure to // make use of the internal pull-up circuitry void setup() { pinMode(PIN_D4, INPUT_PULLUP); // True button pinMode(PIN_B1, INPUT_PULLUP); // False button pinMode(PIN_D6, OUTPUT); // LED } void loop() { buttonTrue.update(); buttonFalse.update(); // Our 'true' button; outputs a 't' keypress if (buttonTrue.fallingEdge()) { Keyboard.print("t"); digitalWrite(PIN_D6, HIGH); // LED ON } if(buttonTrue.risingEdge()){ digitalWrite(PIN_D6, LOW); // LED OFF } // Our 'false' button; outputs a 'f' keypress if (buttonFalse.fallingEdge()) { Keyboard.print("f"); digitalWrite(PIN_D6, HIGH); // LED ON } if(buttonFalse.risingEdge()){ digitalWrite(PIN_D6, LOW); // LED OFF } } Hit verify to compile the program and upload to send it to the Teensy: You should see some successful output in the Arduino IDE console: Setting the device type The last thing we need to do is to tell the Teensy in what capacity it should operate when we connect it to our laptop. Should it be a mouse? A serial device? A disk device? A toaster? To do this, we simply go to the Tools > USB Type in the Arduino IDE file menu and select Keyboard + Mouse + Joystick:
https://timmyomahony.com/blog/making-usb-push-buttons
CC-MAIN-2021-17
refinedweb
1,307
71.24
URL dispatcher This document is for Django's SVN release, which can be significantly different from previous releases. Get old docs here: 0.96, 0.95. in your settings file, but if the incoming HttpRequest object has an attribute called urlconf,. - news would not match any of these patterns, because each pattern requires that the URL end with a slash. - /articles/2003/03/3/ would match the final pattern. Django would call the function news.views.article_detail(request, '2003', '03', '3'). Named groups: urlpatterns = patterns('', (r'^articles/2003/$', 'news.views.special_case_2003'), (r'^articles/(?P<year>\d{4})/$', 'news.views.year_archive'), (r'^articles/(?P<year>\d{4})/(?P<month>\d{2})/$', 'news.views.month_archive'), (r'^articles/(?P<year>\d{4})/(?P<month>\d{2})/(?P<day>\d+)/$', 'news.views.article_detail'), ) This accomplishes exactly the same thing as the previous example, with one subtle difference: The captured values are passed to view functions as keyword arguments rather than positional arguments. For example: - A request to /articles/2005/03/ would call the function news.views.month_archive(request, year='2005', month='03'), instead of news.views.month_archive(request, '2005', '03'). - A request to /articles/2003/03 [, optional name]]) …where optional dictionary and optional name are optional. (See Passing extra options to view functions below.)). url New in Django development version='') See Naming URL patterns for why the name parameter is useful. The prefix parameter has the same meaning as the first argument to patterns() and is only relevant when you’re passing a string as the view parameter. captured argument is sent to the view as a plain Python string, regardless of what sort of match the regular expression makes. For example, in this URLconf line: (r'^articles/(?P<year>\d{4})/$', 'news.views.year_archive'), …the year argument to news.views.year_archive() will be a string, not an integer, even though the \d{4} will only match integer strings. A convenient trick is to specify default parameters for your views’ arguments. Here’s an example URLconf and view: # URLconf urlpatterns = patterns('', (r'^blog/$', 'blog.views.page'), (r'^blog/page(?P<num>\d+)/$', 'blog.views.page'), ) # View (in blog/views.py) def page(request, num="1"): # Output the appropriate page of blog entries, according to num. In the above example, both URL patterns point to the same view — blog.views.page — but the first pattern doesn’t capture anything from the URL. If the first pattern matches, the page() function will use its default argument for num, "1". If the second pattern matches, page() will use whatever num value was captured by the regex. Performance.news.views'. Instead of typing that out for each entry in urlpatterns, you can use the first argument to the patterns() function to specify a prefix to apply to each view function. With this in mind, the above example can be written more concisely as: from django.conf.urls An included URLconf receives any captured parameters from parent URLconfs, so the following example is valid: # In settings/urls/main.py urlpatterns = patterns('', (r'^(?P<username>\w+)/blog/', include('foo.urls.blog')), ) # In foo/urls/blog.py urlpatterns = patterns('foo.views', (r'^$', 'blog.index'), (r'^archive/$', 'blog.archive'), ) In the above example, the captured "username" variable is passed to the included URLconf, as expected. Passing extra options to view functions URLconfs have a hook that lets you pass extra arguments to your view functions, as a Python dictionary. Any URLconf tuple can have an optional third element, which should be a dictionary of extra keyword arguments to pass to the view function. For example: urlpatterns = patterns('blog.views', (r'^blog/(?P<year>\d{4})/$', 'year_archive', {'foo': 'bar'}), ) In this example, for a request to /blog/2005/, Django will call the blog.views.year_archive() view, passing it these keyword arguments: year='2005', foo='bar' This technique is used in generic views and() Similarly, you can pass extra options to include(). When you pass extra options to include(), each line in the included URLconf will be passed the extra options. For example, these two URLconf sets are functionally identical: Set one: # main.py urlpatterns = patterns('', (r'^blog/', include('inner'), {'blogid': 3}), ) # inner.py urlpatterns = patterns('', (r'^archive/$', 'mysite.views.archive'), (r'^about/$', 'mysite.views.about'), ) Set two: # main.py urlpatterns = patterns('', (r'^blog/', include('inner')), ) # inner.py urlpatterns = patterns('', (r'^archive/$', 'mysite.views.archive', {'blogid': 3}), . Passing callable objects instead of strings Some developers find it more natural to pass the actual Python function object rather than a string containing the path to its module. This alternative is supported — you can pass any callable object as the view. For example, given this URLconf in “string” notation: urlpatterns = patterns('', (r'^archive/$', 'mysite.views.archive'), (r'^about/$', 'mysite.views.about'), (r'^contact/$', 'mysite.views.contact'), ) You can accomplish the same thing by passing objects rather than strings. Just be sure to import the objects: from mysite.views import archive, about, contact urlpatterns = patterns('', (r'^archive/$', archive), (r'^about/$', about), (r'^contact/$', contact), ) The following example is functionally identical. It’s just a bit more compact because it imports the module that contains the views, rather than importing each view individually: from mysite import views urlpatterns = patterns('', (r'^archive/$', views.archive), (r'^about/$', views.about), (r'^contact/$', views.contact), ) The style you use is up to you. Note that if you use this technique — passing objects rather than strings — the view prefix (as explained in “The view prefix” above) will have no effect. Naming URL patterns New in Django development version permalink() decorator or the {% url %} template tag). Continuing this example, if you wanted to retrieve the URL for the archive view, Django’s reverse URL matcher would get confused, because two URLpatterns. Utility methods reverse() If you need to use something similar to the {% url %} template tag in your code, Django provides the django.core.urlresolvers.reverse(). The reverse() function has the following signature: reverse(viewname, urlconf=None, args=None, kwargs=None) viewname is either the function name (either a function reference, or the string version of the name, if you used that form in urlpatterns) or the URL pattern name. Normally, you won’t need to worry about the urlconf parameter and will only pass in the positional and keyword arguments to use in the URL matching. For example: from django.core.urlresolvers import reverse def myview(request): return HttpResponseRedirect(reverse('arch-summary', args=[1945])) permalink() The permalink() decorator is useful for writing short methods that return a full URL path. For example, a model’s get_absolute_url() method. Refer to the model API documentation for more information about permalink(). Questions/Feedback If you notice errors with this documentation, please open a ticket and let us know! Please only use the ticket tracker for criticisms and improvements on the docs. For tech support, ask in the IRC channel or post to the django-users list.
http://www.djangoproject.com/documentation/url_dispatch/
crawl-001
refinedweb
1,140
58.58
When automating some tasks in Windows OS, you may wonder how to automatically close Windows process if you do not have the direct control of the running application or when the application is just running for too long time. In this article, I will be sharing with you how to close the Windows process with some python library, to be more specific, the pywin32 library. Prerequisites You will need to install the pywin32 library if you have not yet installed: pip install pywin32 Find the process name from Windows Task Manager You will need to first find out the application name which you intend to close, the application name can be found from the Windows task manager. E.g. If you expand the “Windows Command Processor” process, you can see the running process is “cmd.exe”. Let’s get started with the code! Import the below modules that we will be using later: from win32com.client import GetObject from datetime import datetime import os And we need to get the WMI (Windows Management Instrumentation) service via the below code, where we can further access the window processes. For more information about WMI, please check this. WMI = GetObject('winmgmts:') Next, we will use the WMI SQL query to get the processes from the Win32_Process table by passing in the application name. Remember we have already found the application name earlier from the task manager. for p in WMI.ExecQuery('select * from Win32_Process where Name="cmd.exe"'): #the date format is something like this 20200613144903.166769+480 create_dt, *_ = p.CreationDate.split('.') diff = datetime.now() - datetime.strptime(create_dt,'%Y%m%d%H%M%S') There are other properties such as Description, Status, Executable Path, etc. You can check the full list of the process properties from this win32-process documentation. Here we want to base on the creation date to calculate how much time the application has been running to determine if we want to kill it. Assuming we need to close windows process after it is running for 5 minutes. if diff.seconds/60 > 5: print("Terminating PID:", p.ProcessId) os.system("taskkill /pid "+str(p.ProcessId)) With this taskkill command, we will be able to terminate all the threads under this Windows process peacefully. Conclusion The pywin32 is super powerful python library especially when dealing with the Windows applications. You can use it to read & save attachments from outlook, send emails via outlook , open excel files and some more. Do have a check on these articles. As per always, welcome any comments or questions.
https://www.codeforests.com/category/resources/page/5/
CC-MAIN-2022-21
refinedweb
424
64
Is there a script to delete all children of an object? Need this fast…thanks Is there a script to delete all children of an object? Need this fast…thanks import bge cont=bge.logic.getCurrentController() own=cont.owner() if object.children!=None: for objects in children: objects.endObject() import bge cont=bge.logic.getCurrentController() own=cont.owner if own.children!=None: for objects in own.children: objects.endObject() Ok, but why doe this happen? (press play and press 1,2,3 multiple times) the guns glitch up…I checked everything and don’t know what’s wrong I’ll attach a google drive one Ok, I see this setup and it’s a little hard to handle I will try and cook you up something less “divided” as it’s hard to tell what is going on. ok check this out, this does the exact same thing, but the gun is self contained. mind you you will have to setup each gun, but it should make it much easier WeaponSelfContained.blend (460 KB) I am going to build the full system, as I need the same thing for wrectified, here is a updated version (all I am doing for tonight) this manages reloading (R key) the player can activate firing, but firing is in the gun the player activates reloading but reloading is in the gun the player gun swichty system removes the ammo, and adds it to stock, then ends the gun WeaponSelfContainedV2.blend (494 KB) @BluePrintRandom, there is an EDIT button. We told you multiple times to use it. It is left of the reply button. Even a child is able to find it. Your subsequent posts are flooding the forum. It does get a little obnoxious, so I agree with Monster. Still looks good though, BPR. But I bet my dog could find the edit button. IIRC, you also told him that “objects” is not a proper name when referencing a single object, among other things. In reading (or trying to read) a number of his posts, I am now convinced that he suffers from a learning disability of some sort. I feel sorry for him, but I don’t think he should be allowed to spam threads, and the forum in general. PS: The function that should actually end all children (his code doesn’t deal with children’s children): def end_children(parent): for c in parent.children: end_children(c) c.endObject() isn’t that what I’ve already done? the eyes trigger the gun controller to add the gun, and shooting and reload are in the gun. I just haven’t made reload yet though. I’ll post a more complete version of what I’ve done… Oh and there’s an annoying upload problem I’m having…My game is 5MB compressed so every time I want to upload the problem, I have to delete most of the objects, textures and materials to do so…any ideas on how to eliminate this problem? Mediafire. Or Google Drive. Or Dropbox. Or other. Off topic: While I do agree that reading multiple tiny posts can be a hassle, I don’t mind it. I think he’s just using it like a chat Some posts, like #7 are meant to be just a single post. I kind of don’t like reading one giant post that might take up the whole page On topic: The function that should actually end all children (his code doesn’t deal with children’s children): Doesn’t that already happen automatically? all children are input into the scene if the parent is and deleted when the parent is deleted right? I would connect this post and the previous one I made, but there’s no “connect” button or something. Just a suggestion…can you guys put one in there? I think it would benefit more people than just me Oh, yes, of course (my head was stuck in a different engine). Still, I would recommend you use this code, instead of his: def end_children(parent): for c in parent.children: c.endObject() It’s better, due to proper naming, and no redundant checks. I don’t feel sorry for me, where are your games btw? back to ON Topic. this is far less complicated. his whole system is a mess of states, this is not. WeaponSelfContainedLogicV4.blend (497 KB) Point of information, objects do also have a childrenRecursive attribute, though I suspect you’ve already considered this case, given that ending objects linearly will likely cause reference errors when an object is ended before its children (as you iterate through, unless the attribute traverses bottom up) Jeez, a little extreme, don’t ya think? All the blatant name callings, and insults… Sure, he posts a lot (I never noticed until now, tbh) but he’s incredibly helpful. OT: I have nothing to contribute to this thread, so I’ll be on my way I would only parent one item to the gun manager ever, and that way if there is a gun, its own.children[0] (check my solution) it uses .02ms->.04ms peak logic btw you can use the property “Fire” to handle animations fire and reload, and “Init” can handle “drawing” animation I’ll need to rethink “gun remove” as it is instant at the moment. *** Moderation *** Your complete game is not needed as long as your complete game is not the problem. I strongly suggest you recreate your problem in a small demonstration .blend. The benefits: Drawback: Unfortunately this is not a chat and we do not want to be one. BluePrintRandom has to follow the forum rules as everybody else here. One rule says: “Don’t flood the forums with … unnecessary posts …” He knows the solution: edit button You can edit even your older posts. But if there are other posts already it is better to post a new one, except you want to correct the information of the old post. If you add information I recommend to mark it somehow that a reader knows it was added later. *** End of Moderation *** To provide some help on topic (not moderating): It took me a while to identify the object that execute the deleteChildren.py code. An demonstration .blend as mentioned above would surely be an help. I do not even know how to test this game. You mention you loop backwards through the code to avoid issues on deleting. This is not the case in this situation as you do not remove elements from the list. You end game objects. This has no influence on the list. The list will stay the same so you are save to loop forward. Anyway the mentioned code from BluePrintRandom is sufficient (but a little bit too much). def endChildren(object): for child in object.children: child.endObject() Attribute children always returns a list. If there is no child the list is empty. You can test this situation. You can but you do not need to end the “children of the children”. As you already expect the BGE ends objects together with their children. You can test that either. Do you still have problems?
https://blenderartists.org/t/delete-children/614577
CC-MAIN-2019-30
refinedweb
1,195
72.76
#include <gromacs/utility/textstream.h> Interface for reading text. Concrete implementations can read the text from, e.g., a file or an in-memory string. The main use is to allow unit tests to inject in-memory buffers instead of writing files to be read by the code under test, but there are also use cases outside the tests where it is useful to abstract out whether the input is from a real file or something else. To use more advanced formatting than reading raw lines, use TextReader. Both methods in the interface can throw std::bad_alloc or other exceptions that indicate failures to read from the stream.. Implemented in gmx::TextInputFile, gmx::StringInputStream, and gmx::StandardInputStream. Reads a line (with newline included) from the stream. falseif nothing was read because the stream ended. On error or when false is returned, line will be empty. Implemented in gmx::TextInputFile, gmx::StringInputStream, and gmx::StandardInputStream.
https://manual.gromacs.org/documentation/2019-beta2/doxygen/html-full/classgmx_1_1TextInputStream.xhtml
CC-MAIN-2021-17
refinedweb
154
50.02
I2C master troublesarunas.straigis_1530751 Dec 6, 2015 10:56 AM Hello, I started exploring the world of PSoC with PSOC 4 BLE kit and it was great until I ran into trouble making i2c bus work. I am trying to hook up i2c character LCD, however it seems like I am unable to initialize i2c properly. I tried different pins/speeds, using low level i2c api and high level - nothing helps. I check the SCL pin with oscilloscope and don't see any signal. I tried example projects - still no signal on SCL and SDA.. I attach my code and project: <pre><code> #include <project.h> #include "LiquidCrystal_I2C.h" __IO uint32_t TimingDelay = 0; uint32_t DeviceAddr = 78; uint16 shit=0; int i=0; int main(void) { CyGlobalIntEnable; PWM_1_Start(); I2C_M_Start(); PWM_1_WriteCompare(0); while(1){ I2C_M_I2CMasterWriteBuf(DeviceAddr, (uint8 *)DeviceAddr, 1, 0); I2C_M_write_byte(DeviceAddr,67); I2C_M_I2CMasterSendStart(DeviceAddr,0); CyDelay(1000); } } </code> - i2c_lcd.zip 2.1 MB 1. Re: I2C master troublebob.marlowe Dec 6, 2015 11:49 AM (in response to sarunas.straigis_1530751) Welcome in the forum, Sarunas. I would suggest you to use P5_0 and P5_1 for I2C, the required pullup resistors are already provided on the board and the FRam is on the same bus Your first access to the I2C interface must fail: I2C_M_I2CMasterWriteBuf(DeviceAddr, (uint8 *)DeviceAddr, 1, 0); When the device address of the I2C-LCD really is 0x78 you are trying to submit a databyte from the address of 0x78 which is from flash, Probably not what you want. You did not provide a link to the LCD's datasheet, so I cannot check what you have to send. The following WriteByte() function will fail because you did not issue a Start() condition. The next SendStart() is not followed by a SendStop() All I2C-APIs return a status indicating success or failure. Best is to check that. For both the APIs SendStart() and WriteBuf() the last parameter has a meaningful #define (to be found in the .h-file), better to use that, makes it clear what you want to do. Bob 2. Re: I2C master troublesarunas.straigis_1530751 Dec 6, 2015 11:57 AM (in response to bob.marlowe)thank you for your suggestions. I probably did not make myself clear. shouldnt there be a signal in SCL even though the data is not being transmitted? I am not initializing lcd yet, just checking if I get a signal on I2c and any combination seems not to be working... 3. Re: I2C master troubleuser_242978793 Dec 6, 2015 12:04 PM (in response to sarunas.straigis_1530751) Sarunas: Well I have look over your program and I think there is a few issue with it. The I2C pins are P3-4 and P3-5 these are set for the Arduino I2C output pins on the Pioneer board J3. Also you need to have pullup resistors on these pins. Also I have no ideal what I2C LCD display you are using. So I can't check your address and commands. Also if you look at the PSOC Ble example program they have both an I2C device and also an I2C LCD device in their program. There is also a I2C test program in the examples that should help also. 4. Re: I2C master troublebob.marlowe Dec 6, 2015 12:04 PM (in response to sarunas.straigis_1530751) You will not see a signal without pullups. Bob 5. Re: I2C master troublesarunas.straigis_1530751 Dec 6, 2015 1:50 PM (in response to bob.marlowe) Thanks! I will try your suggestions :) i think i tried all allowed ports already, but will do it again. UPDATE: I managed to see the signal. The problem was indeed passing an argument to I2C_MASTERWRITEBUFFER. I tried sending the pointer of the variable, while it wants an array :) P.S. Are there pins on the pioneer baseboard to which pins 5.0 and 5.1 are connected? Can't find anything in documentation.. also doesn't seem to be on the board. 6. Re: I2C master troublebob.marlowe Dec 6, 2015 2:03 PM (in response to sarunas.straigis_1530751) You find documentation and schematics for all the kits you successfully installed under Programs(x86)\Cypress\,,,\Hardware The port5 pins fur I2C are on header J10 pins 12 and 14 Bob 7. Re: I2C master troublesarunas.straigis_1530751 Dec 7, 2015 3:34 AM (in response to bob.marlowe) I got it working on other pins. Thanks for your help. It turns out, that address had to be shifted by one bit (compared to stm32 i2c configuration). 8. Re: I2C master troublebob.marlowe Dec 7, 2015 5:12 AM (in response to sarunas.straigis_1530751) Fine! Happy coding Bob
https://community.cypress.com/thread/15188
CC-MAIN-2018-22
refinedweb
775
76.11
Python Generate random integers with random from 0 to 10 The simplest generation of a series of random integers is done by from random import randint. Below you can find example of generating 1000 integers in interval from 0 to 10: from random import randint import collections nums = [] for x in range(1000): nums.append(randint(0, 10)) print (collections.Counter(nums)) result: Counter({7: 112, 9: 101, 4: 100, 2: 93, 5: 93, 1: 92, 8: 88, 10: 88, 0: 85, 3: 74, 6: 74}) In order to verify the randomness we are using group by and count from Collections. AS you can see the result seems to be random. Could we do more random numbers? We could try with from secrets import randbelow Python 3.6 Generate random integer numbers with secrets - randbelow If you have Python 3.6 you can use secrets - randbelow to generate numbers. Producing 1000 integers in interval from 0 to 10: import collections from secrets import randbelow nums = [] for x in range(1000): nums.append(randbelow(10)) print (collections.Counter(nums)) result: Counter({7: 113, 0: 108, 2: 106, 5: 102, 3: 101, 9: 97, 4: 97, 6: 95, 8: 94, 1: 87}) This seems to be better in terms of randomness. And the spread of numbers looks much better. This can be considered as "cryptographically strong" random. You can read more here: PEP 506 -- Adding A Secrets Module. Python Generate random integer numbers with random choice You can generate number from a list of values using: random.choice(nums) import random import collections nums = [x for x in range(10)] rand = [] for x in range(1000): rand.append(random.choice(nums)) print (rand) print (collections.Counter(rand)) result: Counter({4: 115, 8: 110, 2: 106, 6: 106, 9: 104, 1: 94, 3: 93, 5: 92, 0: 91, 7: 89}) Python Generate random numbers from predefined list random.choice(nums) can be used when you want to get random elements of list: import random import collections nums = [2,4,67,3] rand = [] for x in range(1000): rand.append(random.choice(nums)) print (rand) print (collections.Counter(rand)) result: Counter({4: 268, 2: 260, 3: 252, 67: 220}) Python Generate random float numbers with uniform Below you can find example of generating 1000 floats in interval from 0 to 10: from random import uniform nums = [] for x in range(1000): nums.append(uniform(0, 10)) result: 0.18487854282652982: 1, 0.6016200481265632: 1, 2.3920220120668825 ... Python Generate random list from 0 to 20 random.choice(nums) can be used when you want to get random elements of list: import random nums = [x for x in range(20)] random.shuffle(nums) print(nums) result: [6, 7, 1, 2, 17, 0, 10, 3, 5, 15, 14, 11, 19, 8, 18, 12, 4, 16, 9, 13]
https://blog.softhints.com/python-random-number-examples/
CC-MAIN-2021-25
refinedweb
469
64.81
TAG This page is for maintaining a record of changes between each revision of "Architectural Principles of the World Wide Web, produced by the TAG. If you find the list is incomplete or inaccurate please send email to the TAG at www-tag@w3.org. Last modified: $Date: 2004/12/15 01:48:53 $ Changes between 5 November 2004 Proposed Recommendation and the 15 December 2004 Recommendation (diff). ---------------------------- revision 1.809 date: 2004/12/14 20:18:36; author: ijacobs; state: Exp; lines: +4 -4 some minor tweaks based on DC suggestions ---------------------------- revision 1.808 date: 2004/12/13 22:45:40; author: ijacobs; state: Exp; lines: +2 -2 tweak ---------------------------- revision 1.807 date: 2004/12/13 22:45:26; author: ijacobs; state: Exp; lines: +3 -3 editorial ---------------------------- revision 1.806 date: 2004/12/13 22:44:58; author: ijacobs; state: Exp; lines: +2 -2 editorial tweak (missing comma) ---------------------------- revision 1.805 date: 2004/12/13 22:40:15; author: ijacobs; state: Exp; lines: +5 -2 added stuff missing from status section: - link to changes - mailing lists for comments, discussion ---------------------------- revision 1.804 date: 2004/12/13 22:37:40; author: ijacobs; state: Exp; lines: +9 -2 added info about two mailing lists. ---------------------------- revision 1.803 date: 2004/12/13 22:27:14; author: ijacobs; state: Exp; lines: +2 -2 link fix ---------------------------- revision 1.802 date: 2004/12/13 22:06:04; author: NormanWalsh; state: Exp; lines: +2 -2 Fixed a typo ---------------------------- revision 1.801 date: 2004/12/13 21:56:50; author: NormanWalsh; state: Exp; lines: +9 -7 Updated definition of link per Nokia comment, discussed with TimBL/DanC on 13 Dec. ---------------------------- revision 1.800 date: 2004/12/13 21:12:12; author: NormanWalsh; state: Exp; lines: +6 -4 Updated namespace document per Stickler, decided at 13 Dec telcon. ---------------------------- revision 1.799 date: 2004/12/13 20:54:32; author: NormanWalsh; state: Exp; lines: +4 -9 Change for Stickler per 13 Dec telcon ---------------------------- revision 1.798 date: 2004/12/12 19:38:32; author: ijacobs; state: Exp; lines: +2 -2 updated errata link ---------------------------- revision 1.797 date: 2004/12/12 19:35:55; author: ijacobs; state: Exp; lines: +8 -1 added tentative links for errata, translations ---------------------------- revision 1.796 date: 2004/12/12 19:33:30; author: ijacobs; state: Exp; lines: +2 -2 tweak ---------------------------- revision 1.795 date: 2004/12/12 19:33:14; author: ijacobs; state: Exp; lines: +2 -2 dated update ---------------------------- revision 1.794 date: 2004/12/12 19:32:42; author: ijacobs; state: Exp; lines: +21 -35 updated status ---------------------------- revision 1.793 date: 2004/12/09 18:44:21; author: ijacobs; state: Exp; lines: +5 -27 updated status section ---------------------------- revision 1.792 date: 2004/12/09 18:40:49; author: ijacobs; state: Exp; lines: +2 -2 added missing period ---------------------------- revision 1.791 date: 2004/12/09 18:40:06; author: ijacobs; state: Exp; lines: +2 -2 made xml-id cap and consistent with other keys ---------------------------- revision 1.790 date: 2004/12/09 18:31:25; author: ijacobs; state: Exp; lines: +7 -7 updated for 9 Dec 2004 editor's draft, also added link to my people page ---------------------------- revision 1.789 date: 2004/12/09 18:24:27; author: ijacobs; state: Exp; lines: +11 -10 editorial fixes based on NM comments: ---------------------------- revision 1.788 date: 2004/12/09 18:20:57; author: ijacobs; state: Exp; lines: +15 -10 editorial fixes in story about representation reuse based on ---------------------------- revision 1.787 date: 2004/12/09 18:02:35; author: ijacobs; state: Exp; lines: +2 -2 editorial tweak ---------------------------- revision 1.786 date: 2004/12/09 17:59:14; author: ijacobs; state: Exp; lines: +5 -5 attempt to incorporate comment from HT (number 3 in the exec summary) ---------------------------- revision 1.785 date: 2004/12/09 17:56:00; author: ijacobs; state: Exp; lines: +4 -3 attempt to incorporate comment from HT (number 2) ---------------------------- revision 1.784 date: 2004/12/09 17:51:29; author: ijacobs; state: Exp; lines: +2 -2 s/Representation/representation per HT: ---------------------------- revision 1.783 date: 2004/12/09 17:43:28; author: ijacobs; state: Exp; lines: +4 -4 adopted NM's editorial rewrite: ---------------------------- revision 1.782 date: 2004/12/09 17:42:40; author: ijacobs; state: Exp; lines: +8 -7 based on Patrick Stickler comment: And discussion on tag@w3.org: Adopted PS's first sentence as the new definition text. ---------------------------- revision 1.781 date: 2004/12/09 17:34:39; author: ijacobs; state: Exp; lines: +34 -37 more edits to CL/HWL's text (use of active voice, e.g.) ---------------------------- revision 1.780 date: 2004/12/09 17:20:44; author: ijacobs; state: Exp; lines: +14 -5 some edits to section on separation of presentation/content based on HWL comments: In particular these changes make his point: "As discussed above, the "size" argument is false, but the "computation" is correct." ---------------------------- revision 1.779 date: 2004/12/07 03:14:26; author: ijacobs; state: Exp; lines: +7 -7 changed edition to volume per 6 Dec TAG teleconf Changes between 5 November 2004 Proposed Recommendation and the 3 December 2004 Editor's Draft (diff). ----------------------- revision 1.778 date: 2004/12/03 19:57:43; author: NormanWalsh; state: Exp; lines: +4 -4 Updated metadata ---------------------------- revision 1.777 date: 2004/12/03 19:51:46; author: NormanWalsh; state: Exp; lines: +8 -4 Changes per mail from Yuxiao Zhao discussed at 29 Nov 2004 f2f. Updated Acknowledgements. ---------------------------- revision 1.776 date: 2004/12/03 19:44:01; author: NormanWalsh; state: Exp; lines: +27 -15 Change definition of namespace; change presentation/content text per CL proposal Changes between 2 November 2004 Editor's Draft and the 5 November 2004 Proposed Recommendation (diff since 2 November; diff since Last Call, 16 Aug, with deletes and without). ---------------------------- revision 1.773 date: 2004/11/05 00:35:29; author: ijacobs; state: Exp; lines: +2 -2 added rel="disclosure" ---------------------------- revision 1.772 date: 2004/11/05 00:34:02; author: ijacobs; state: Exp; lines: +2 -2 made one link a mailto: link for comments ---------------------------- revision 1.771 date: 2004/11/05 00:33:09; author: ijacobs; state: Exp; lines: +3 -3 updated status, removed @@s ---------------------------- revision 1.770 date: 2004/11/05 00:31:49; author: ijacobs; state: Exp; lines: +2 -2 uri fix ---------------------------- revision 1.769 date: 2004/11/05 00:31:14; author: ijacobs; state: Exp; lines: +2 -2 fixed link to changes ---------------------------- revision 1.768 date: 2004/11/05 00:31:01; author: ijacobs; state: Exp; lines: +1 -3 removed: A list of current W3C Recommendations and other technical documents can be found at the W3C Web site.---------------------------- revision 1.755 date: 2004/11/04 18:01:08; author: ijacobs; state: Exp; lines: +7 -7 moved a para ---------------------------- revision 1.754 date: 2004/11/04 18:00:41; author: ijacobs; state: Exp; lines: +17 -12 italicized boilerplate ---------------------------- revision 1.753 date: 2004/11/04 17:56:54; author: ijacobs; state: Exp; lines: +6 -7 updated ipr policy ---------------------------- revision 1.752 date: 2004/11/04 17:51:03; author: ijacobs; state: Exp; lines: +2 -0 added $Id: changes.html,v 1.71 2004/12/15 01:48:53 ijacobs Exp $ ---------------------------- revision 1.751 date: 2004/11/04 17:24:43; author: NormanWalsh; state: Exp; lines: +2 -1 Checkpoint; still fiddling with the status section ---------------------------- revision 1.750 date: 2004/11/04 17:23:51; author: NormanWalsh; state: Exp; lines: +6 -2 Checkpoint; still fiddling with the status section ---------------------------- revision 1.749 date: 2004/11/04 17:21:41; author: NormanWalsh; state: Exp; lines: +4 -4 Checkpoint; still fiddling with the status section ---------------------------- revision 1.748 date: 2004/11/04 17:17:42; author: NormanWalsh; state: Exp; lines: +3 -2 Checkpoint; still fiddling with the status section ---------------------------- revision 1.747 date: 2004/11/04 17:15:14; author: NormanWalsh; state: Exp; lines: +14 -5 Checkpoint; still fiddling with the status section ---------------------------- revision 1.746 date: 2004/11/04 17:09:06; author: NormanWalsh; state: Exp; lines: +9 -7 Checkpoint; still fiddling with the status section ---------------------------- revision 1.745 date: 2004/11/04 17:05:24; author: NormanWalsh; state: Exp; lines: +17 -14 Checkpoint ---------------------------- revision 1.744 date: 2004/11/04 16:54:52; author: NormanWalsh; state: Exp; lines: +9 -4 Checkpoint ---------------------------- revision 1.743 date: 2004/11/04 16:45:15; author: NormanWalsh; state: Exp; lines: +3 -2 Snapshot ---------------------------- revision 1.742 date: 2004/11/04 16:44:09; author: NormanWalsh; state: Exp; lines: +7 -7 Snapshot ---------------------------- revision 1.741 date: 2004/11/04 16:14:19; author: NormanWalsh; state: Exp; lines: +14 -9 Added information resource to the glossary; fixed apostrophes Changes between 1 November 2004 Editor's Draft and the 2 November 2004 Editor’s Draft (diff since 1 November). ---------------------------- revision 1.740 date: 2004/11/02 20:16:39; author: NormanWalsh; state: Exp; lines: +172 -147 Updated status; editorial changes, mostly suggested by IanJ Changes between 28 October 2004 Editor's Draft and the 1 November 2004 Editor’s Draft (diff since 28 October). ---------------------------- revision 1.739 date: 2004/11/01 22:13:02; author: NormanWalsh; state: Exp; lines: +3 -2 Change negotiated at PR request telcon, XLink 'should' to 'may' ---------------------------- revision 1.738 date: 2004/11/01 21:42:21; author: NormanWalsh; state: Exp; lines: +3 -4 Change negotiated at PR request telcon, XLink 'should' to 'may' ---------------------------- revision 1.737 date: 2004/11/01 19:14:32; author: NormanWalsh; state: Exp; lines: +3 -3 Tweaked pubdate ---------------------------- revision 1.736 date: 2004/11/01 19:12:45; author: NormanWalsh; state: Exp; lines: +10 -9 Small editorial changes to 2.2 in response to IanJ Changes between 26 October 2004 Editor's Draft and the 28 October 2004 Editor’s Draft (diff since 26 October). ---------------------------- revision 1.735 date: 2004/10/28 18:20:54; author: NormanWalsh; state: Exp; lines: +45 -50 Editorial tweaks Changes between 21 October 2004 Editor's Draft and the 26 October 2004 Editor’s Draft (diff since 21 October). ---------------------------- revision 1.734 date: 2004/10/26 16:01:30; author: NormanWalsh; state: Exp; lines: +50 -119 Editor's attempt at edits from 2004-10-25 telcon Changes between 19 October 2004 Editor's Draft and the 21 October 2004 Editor’s Draft (diff since 19 October). ---------------------------- revision 1.733 date: 2004/10/21 19:55:08; author: NormanWalsh; state: Exp; lines: +114 -48 Fixed typo in abstract; integrated Noah's suggested changes to generalize 'octet streams' Changes between 14 October 2004 Editor's Draft and the 19 October 2004 Editor’s Draft (diff since 14 October). ---------------------------- revision 1.731 date: 2004/10/19 16:42:40; author: NormanWalsh; state: Exp; lines: +23 -11 Editorial changes from 18 Oct telcon Changes between 28 September 2004 Editor's Draft and the 14 October 2004 Editor’s Draft (diff since 28 September). ---------------------------- revision 1.730 date: 2004/10/14 20:47:11; author: NormanWalsh; state: Exp; lines: +9 -5 Tweak metadata; publish new draft ---------------------------- revision 1.729 date: 2004/10/14 17:28:01; author: NormanWalsh; state: Exp; lines: +32 -3 Tweaked acknowledgements ---------------------------- revision 1.728 date: 2004/10/14 17:07:59; author: NormanWalsh; state: Exp; lines: +9 -10 Pubrules tweaks ---------------------------- revision 1.727 date: 2004/10/14 16:48:06; author: NormanWalsh; state: Exp; lines: +86 -150 Norm's end-to-end editorial pass ---------------------------- revision 1.726 date: 2004/10/14 14:13:03; author: NormanWalsh; state: Exp; lines: +83 -59 Changes from the f2f meeting record ---------------------------- revision 1.725 date: 2004/10/13 18:31:43; author: NormanWalsh; state: Exp; lines: +111 -103 Editorial suggestions from Ian Jacobs; thanks Ian ---------------------------- revision 1.724 date: 2004/10/12 21:27:01; author: NormanWalsh; state: Exp; lines: +2 -2 Whitespace ---------------------------- revision 1.723 date: 2004/10/12 20:36:56; author: NormanWalsh; state: Exp; lines: +3 -3 Completed: ACTION NDW: to fix cross references to 'uri allocation' that read 'uri assignment' ---------------------------- revision 1.722 date: 2004/10/09 13:33:34; author: NormanWalsh; state: Exp; lines: +11 -164 Checkpoint ---------------------------- revision 1.721 date: 2004/10/07 14:45:52; author: NormanWalsh; state: Exp; lines: +227 -41 Minor editorial changes and significant revamping of information resource text Changes between 16 August 2004 Working Draft and the 28 September 2004 Editor’s Draft (diff since 16 August). ---------------------------- revision 1.720 date: 2004/09/29 13:08:12; author: NormanWalsh; state: Exp; lines: +1 -1 Updated pubdate ---------------------------- revision 1.719 date: 2004/09/29 12:57:15; author: NormanWalsh; state: Exp; lines: +39 -6 Worked on the extensibility section per the 27 Sep TAG telcon minutes ---------------------------- revision 1.718 date: 2004/09/28 19:57:20; author: NormanWalsh; state: Exp; lines: +55 -37 Editorial changes based on Graham Klyne's comments ---------------------------- revision 1.717 date: 2004/09/24 18:09:10; author: NormanWalsh; state: Exp; lines: +1 -1 Fix pubdate ---------------------------- revision 1.716 date: 2004/09/24 18:08:26; author: NormanWalsh; state: Exp; lines: +118 -91 More drafting; mostly in response to Tim Bray's comments ---------------------------- revision 1.715 date: 2004/09/23 20:57:35; author: NormanWalsh; state: Exp; lines: +55 -46 Integrate SKW's text on URI ownership ---------------------------- revision 1.714 date: 2004/09/23 20:45:06; author: NormanWalsh; state: Exp; lines: +3 -1 Added reference to CHIPS ---------------------------- revision 1.713 date: 2004/09/03 16:23:12; author: NormanWalsh; state: Exp; lines: +19 -16 Editorial fixes from public comments The pervasive nature of the changes in this draft make generating a colorized "diff" version impractical. ---------------------------- revision 1.708 date: 2004/08/17 18:01:35; author: NormanWalsh; state: Exp; lines: +41 -109 Editorial changes from IJ ---------------------------- revision 1.707 date: 2004/08/17 17:31:58; author: NormanWalsh; state: Exp; lines: +43 -41 Another slug of editorial changes suggested by SW (mid:41218F7C.5000704@hp.com) ---------------------------- revision 1.706 date: 2004/08/17 14:43:47; author: NormanWalsh; state: Exp; lines: +29 -26 Editorial suggestions from SW (mid:E864E95CB35C1C46B72FEA0626A2E80803E3BC2E@0-mail-br1.hpl.hp.com) except ToC reorg ---------------------------- revision 1.705 date: 2004/08/16 19:29:22; author: ijacobs; state: Exp; lines: +1 -1 one more fix ---------------------------- revision 1.704 date: 2004/08/16 19:26:59; author: ijacobs; state: Exp; lines: +3 -3 tweaks to make a 16 Aug editor's copy ---------------------------- revision 1.703 date: 2004/08/16 19:21:32; author: ijacobs; state: Exp; lines: +23 -23 converted two-byte chars back to ascii ---------------------------- revision 1.702 date: 2004/08/13 17:15:41; author: NormanWalsh; state: Exp; lines: +1 -1 Fixed typo in URI ---------------------------- revision 1.701 date: 2004/08/13 15:37:28; author: NormanWalsh; state: Exp; lines: +4 -4 Fixed copyright, per pubrules ---------------------------- revision 1.700 date: 2004/08/13 15:32:30; author: NormanWalsh; state: Exp; lines: +141 -141 TOC reorg ---------------------------- revision 1.699 date: 2004/08/13 15:19:08; author: NormanWalsh; state: Exp; lines: +15 -108 Minor editorial tweaks ---------------------------- revision 1.698 date: 2004/08/13 12:17:40; author: NormanWalsh; state: Exp; lines: +2 -0 Snapshot ---------------------------- revision 1.697 date: 2004/08/12 19:44:06; author: NormanWalsh; state: Exp; lines: +17 -16 Pubrules tweaks to copyright and last version ---------------------------- revision 1.696 date: 2004/08/12 19:36:29; author: NormanWalsh; state: Exp; lines: +40 -23 Begin the pubrules dance ---------------------------- revision 1.695 date: 2004/08/11 17:18:33; author: NormanWalsh; state: Exp; lines: +103 -65 More f2f fiddling ---------------------------- revision 1.694 date: 2004/08/11 12:59:25; author: NormanWalsh; state: Exp; lines: +313 -112 Updated with 10 Aug changes: I couldn’t check these in individually because of connectivity problems. I fixed: MSM5 -- editorial; provide an antecedent for "this property"; note that there are multiple kinds of processing; make it clear that the subset is extensible MSM6, 5.2, make it clear that there should be a default action; default depends on context (country code on a phone number in a purchase order) --- diwg1 New section in 3.6: Supporting Navigation Add examples of URIs:;userid=abcde Inability to link to sub-pages of financial information --- diwg2 Chris to write text; will address content negotiated language selection --- diwg3 Incorporate text from 2.2.3 of into our 4.3 and add a reference to di-princ. --- manola27 Add to 3.6.2 that "that doesn't mean everyone has to know every URI" or words to that effect. "Security considerations may require you to keep some URIs secret" change "but it is unreasonable to prohibit others from merely identifying the resource" to "but merely identifying the resource is like referring to a book by title. Except when people have agreed to keep titles or URIs confidential, they are free to exchange them. --- --- In 2.2, suggest changing "We use the term information resource to refer to those resources for which you can retrieve a representation over the network." to "A special class of resources, information resources, is discussed in section 3.1. Information Resources and Representations" In 3.1, Information Resources are resources that convey information. If a resource has a representation then it is an information resource. --- Although there are benefits (such as naming flexibility) to URI aliases, there are also costs. Deployment of URI aliases raises the cost or may even make it impossible for software to determine that the URIs identify the same resource. URI owners should thus be conservative about the number of different URIs they associate with the same resource. --- [URI] --- In 2.3.2, change the story to to say that Dirk asks Nadia if it matters which one she bookmarks. --- In 2.5, remove (i.e., sequence of characters) --- Rename, 2.5 URI Allocation The URI spec[RFC2717,RFC2396] is an agreement about how the internet community allocates names and associates them with protocols by which they take on meaning; for example, the HTTP URI scheme[RFC2616*] uses DNS in such a way that the names such as take on meaning by way of messages from the domain holder (or somebody they delegate to). While other communications (documents, messages, ...) may suggest meanings for such names, it's a local policy decision whether those suggestions should be heeded, while the meaning obtained thru HTTP GET is, by internet-wide agreement, authoritative. 2.5.1 URI Ownership A widely deployed technique to avoid URI overloading is delegated ownership. 2.5.2 Other Allocation Schemes UUID/MID; news:comp.text.xml has a social process for creating them... --- 2.4 URI Collision As discussed above, a URI identifies one resource. To use the same URI to identify different resources produces a collision.." Principle: URIs identify a single resource Assign distinct URIs to distinct resources. --- 3.3.1 In one of his XHTML pages, Dirk creates a hypertext link to an image that Nadia has published on the Web. He creates a hypertext link with Nadia's hat. Emma views Dirk's XHTML page in her Web browser and follows the link. The HTML implementation in her browser removes the fragment from the URI and requests the image "" Nadia serves an SVG representation of the image (with Internet media type "image/svg+xml"). Emma's Web browser starts up an SVG implementation to view the image. It passes it the original URI including the fragment, "" to this implementation, causing a view of the hat to be displayed rather than the complete image. Xref orthogonality from the following paragraph. --- 3.4 Incorporate Tim's text. Remove "documents" from expiry date. Add pointers to relevant sections of http: and Apache docs. --- 3.5 reasons: document might be confidential, uri might be impractically large --- 3.6 Delete "There are applications where it may be impossible or impractical to provide a representation for every resource (consider, for example, a system that used URIs to identify memory locations in a running program). The fact that such applications cannot provide representations should not discourage designers from developing applications that treat them as web resources." Change: Principle: Reference does not imply dereference An application developer or specification author SHOULD not require networked retrieval to representations each time they are referenced. Dereferencing a URI has a cost, potentially a significant cost, perhaps in terms of security, network latency, or other factors. Attempting to retrieve representations of resources when such retrieval is not necessary should be avoided. --- 4.3 Change: For instance digital signature technology, access control, and other technologies are appropriate for controlling access to content. to: Designers should consider appropriate technologies, such as encryption and access control, for limiting the audience --- 4.5.1 Salt again: The data's usefulness should outlive the tools currently used to process it (though obviously XML can be used for short-term needs as well) Chris proposes: Need for data that can outlinve the applicatins that produced it --- Principle: Orthogonality Orthogonal abstractions benefit from orthogonal specifications. Experience demonstrates that problems arise where orthogonal concepts occur in a single specification. Consider, for example, the HTML specification which includes the orthogonal x-www-form-urlencoded specification. Software developers (for example, of [CGI] applications) might have an easier time finding the specification if it were published separately and then cited from the HTTP, URI, and HTML specifications. Problems also arise when specifications attempt to modify orthogonal abstractions described elsewhere. An historical version of the HTML specification specified an HTTP header field ("Refresh"). The authors of the HTTP specification ultimately decided not to provide this header and that made the two specifications awkwardly at odds with each other. HTML eventually removed the header. (The problem here is the use of http-equiv with the value "refresh" which *has no equivalent* in the http spec!) --- Glossary: Resource [delete: A general term for ]Anything that might be identified by a URI. ---------------------------- revision 1.693 date: 2004/08/10 11:30:24; author: NormanWalsh; state: Exp; lines: +1 -1 Bumped date ---------------------------- revision 1.692 date: 2004/08/10 11:27:19; author: NormanWalsh; state: Exp; lines: +6 -6 Tweaked definition of information resource ---------------------------- revision 1.691 date: 2004/08/10 11:03:07; author: NormanWalsh; state: Exp; lines: +14 -5 Rearranged the definitional use of the word 'resource'; added text from 2396bis ---------------------------- revision 1.690 date: 2004/08/10 10:52:34; author: NormanWalsh; state: Exp; lines: +6 -0 Added an explicit note about using POST for safe operations ---------------------------- revision 1.689 date: 2004/08/10 10:47:26; author: NormanWalsh; state: Exp; lines: +8 -4 Added note about using URIs even when representations cannot be provided ---------------------------- revision 1.688 date: 2004/08/10 10:34:33; author: NormanWalsh; state: Exp; lines: +12 -33 Updated 5.1 to address nottingham1 ---------------------------- revision 1.687 date: 2004/08/10 10:21:52; author: NormanWalsh; state: Exp; lines: +1 -1 Changed 'unknown elements' to 'unknown tags' to address msm7 ---------------------------- revision 1.686 date: 2004/08/10 10:20:17; author: NormanWalsh; state: Exp; lines: +16 -0 Added good practice about server managers allowing users control of metadata to address msm4 ---------------------------- revision 1.685 date: 2004/08/10 10:09:43; author: NormanWalsh; state: Exp; lines: +13 -0 Added good practice about unneccessary network access to address schema12 ---------------------------- revision 1.684 date: 2004/08/10 10:00:35; author: NormanWalsh; state: Exp; lines: +27 -1 Added 2.3.2 Representation reuse to address schema3 ---------------------------- revision 1.683 date: 2004/08/10 09:33:26; author: NormanWalsh; state: Exp; lines: +6 -0 Address klyne21 ---------------------------- revision 1.682 date: 2004/08/08 18:54:59; author: NormanWalsh; state: Exp; lines: +36 -22 Addressed schema16, parsia11, klyne7, and klyne9. Integrated CL's updates to Section 3.3.1 ---------------------------- revision 1.681 date: 2004/07/07 16:58:28; author: ijacobs; state: Exp; lines: +2 -1 tweak ---------------------------- revision 1.680 date: 2004/07/05 17:09:26; author: ijacobs; state: Exp; lines: +21 -21 link fixes ---------------------------- revision 1.679 date: 2004/07/05 16:51:50; author: ijacobs; state: Exp; lines: +1 -1 link fix ---------------------------- revision 1.678 date: 2004/07/05 16:50:03; author: ijacobs; state: Exp; lines: +2 -2 editorial ---------------------------- revision 1.677 date: 2004/07/05 16:49:37; author: ijacobs; state: Exp; lines: +2 -1 added statement that doc expected to become a rec ---------------------------- revision 1.676 date: 2004/07/05 16:47:55; author: ijacobs; state: Exp; lines: +2 -0 added statement that not covered by any w3c patent policy. ---------------------------- revision 1.675 date: 2004/07/05 16:31:42; author: ijacobs; state: Exp; lines: +1 -1 removed some nbsp ---------------------------- revision 1.674 date: 2004/07/05 15:48:51; author: ijacobs; state: Exp; lines: +1 -1 spell fix Changes between 8 June 2004 Editor's Draft and the 5 July 2004 Working Draft (diff since 8 June). ---------------------------- revision 1.673 date: 2004/07/05 15:45:47; author: ijacobs; state: Exp; lines: +12 -9 updated status; attempt at 5 July publication ---------------------------- revision 1.672 date: 2004/07/05 15:36:57; author: ijacobs; state: Exp; lines: +11 -10 per skw comments, changed some instances of license to allow or specify ---------------------------- revision 1.671 date: 2004/07/05 15:32:17; author: ijacobs; state: Exp; lines: +6 -4 moved a Note per ---------------------------- revision 1.670 date: 2004/07/05 15:31:26; author: ijacobs; state: Exp; lines: +5 -4 Turned from negative to positive ---------------------------- revision 1.669 date: 2004/07/05 15:28:22; author: ijacobs; state: Exp; lines: +2 -2 s/create/use ---------------------------- revision 1.668 date: 2004/07/05 15:25:18; author: ijacobs; state: Exp; lines: +2 -1 Sentence changed to "Globally adopted assignment policies make some URIs appealing as general-purpose identifiers." ---------------------------- revision 1.667 date: 2004/07/05 15:23:36; author: ijacobs; state: Exp; lines: +2 -1 Attempted to improve a GPN based on: However, I agree with SKW that this GPN requires more review. ---------------------------- revision 1.666 date: 2004/07/05 15:18:24; author: ijacobs; state: Exp; lines: +3 -2 Change to GPN language per ---------------------------- revision 1.665 date: 2004/07/05 15:15:26; author: ijacobs; state: Exp; lines: +2 -2 Per Changed GPN: A URI owner SHOULD NOT create arbitrarily different URIs for the same resource. to: A URI owner SHOULD NOT associate arbitrarily different URIs with the same resource. ---------------------------- revision 1.664 date: 2004/07/05 15:13:55; author: ijacobs; state: Exp; lines: +0 -7 Per skw30, deleted: "Are there resources that are not identified by any URI? In a system where the only resource identification system is the URI, the question is only of philosophical interest. The advent of other resource identification systems may change the nature of this question and answer. ---------------------------- revision 1.663 date: 2004/07/05 15:12:05; author: ijacobs; state: Exp; lines: +13 -13 Per skw: 1) s/assign/associate in many instances I found an instance of "associate" in RFC2396bis (1.2.2) and thus felt comfortable making this change. ---------------------------- revision 1.662 date: 2004/07/04 20:46:58; author: ijacobs; state: Exp; lines: +20 -15 Based on Global examination of the usage of link. In many situations, s/link/hypertext link where it made sense. Also added a note next to the definition of "link": Note: In this document, the term "link" generally means "relationship", not "physical connection". ---------------------------- revision 1.661 date: 2004/07/04 20:40:34; author: ijacobs; state: Exp; lines: +1 -1 one more instance of s/identification mechanism/identification system per skw ---------------------------- revision 1.660 date: 2004/07/04 20:39:17; author: ijacobs; state: Exp; lines: +16 -16 Per comments from swk, reduced use of word "mechanism" In particular: s/identification mechanism/identification system/ ---------------------------- revision 1.659 date: 2004/07/04 20:35:40; author: ijacobs; state: Exp; lines: +1 -1 Per d/mechanism ---------------------------- revision 1.658 date: 2004/07/04 20:34:51; author: ijacobs; state: Exp; lines: +1 -1 Per s/used/used consistently/ ---------------------------- revision 1.657 date: 2004/07/04 20:31:10; author: ijacobs; state: Exp; lines: +1 -1 Per s/sequence/sequencing constraints ---------------------------- revision 1.656 date: 2004/07/04 20:30:41; author: ijacobs; state: Exp; lines: +1 -1 For HTTP 404, s/message/status code ---------------------------- revision 1.655 date: 2004/07/04 20:27:48; author: ijacobs; state: Exp; lines: +1 -1 Inspired by s/transparent/evident ---------------------------- revision 1.654 date: 2004/07/04 20:27:00; author: ijacobs; state: Exp; lines: +1 -1 Per Added to end of para: "by addressing the fact that the error has occurred." ---------------------------- revision 1.653 date: 2004/07/04 20:24:17; author: ijacobs; state: Exp; lines: +1 -1 Per Added HTTPEXT ---------------------------- revision 1.652 date: 2004/07/04 20:14:47; author: ijacobs; state: Exp; lines: +8 -9 - Dissolved paragraph in question. - Moved first sentence to first sentence of following para. - Rewrote second paragraph and moved to after bulleted list. ---------------------------- revision 1.651 date: 2004/07/04 20:08:58; author: ijacobs; state: Exp; lines: +2 -1 Per Added xref to coneg ---------------------------- revision 1.650 date: 2004/07/04 19:57:48; author: ijacobs; state: Exp; lines: +21 -13 Some edits to the introduction based on In particular, added this note: Note: In this document, the noun "representation" means "octets that encode resource state information". These octets do not necessarily describe the resource, or portray a likeness of the resource, or represent the resource in other senses of the word "represent". ---------------------------- revision 1.649 date: 2004/07/04 19:29:28; author: ijacobs; state: Exp; lines: +4 -4 Minor editorial changes based on this comment: ---------------------------- revision 1.648 date: 2004/07/04 19:27:08; author: ijacobs; state: Exp; lines: +2 -1 Took into account s/a travel scenario/ Examples such as the following travel scenario/ ---------------------------- revision 1.647 date: 2004/07/04 19:23:55; author: ijacobs; state: Exp; lines: +3 -3 In an attempt to satisfy an swk comment: Changed first para of abstract to avoid the word "link": "The World Wide Web is a network-spanning information space of interrelated resources. This information space is the basis of, and is shared by, a number of information systems. Within each of these systems, people and software retrieve, create, display, analyze, relate, and reason about resources." Also, checked for other uses of "connect" in the document; replaced one instance with "relate" ---------------------------- revision 1.646 date: 2004/07/04 18:45:41; author: ijacobs; state: Exp; lines: +10 -9 Per 28 June teleconf, Made these changes: 1) moved note before 4.2 to third para of 4.1 2) Added this sentence for following para: In practice, some types of content (e.g., audio and video) are generally represented using binary formats. ---------------------------- revision 1.645 date: 2004/07/04 18:42:55; author: ijacobs; state: Exp; lines: +16 -0 Per 28 June teleconf, adopted text from NW With some edits for readability and length. ---------------------------- revision 1.644 date: 2004/07/04 17:54:54; author: ijacobs; state: Exp; lines: +47 -29 Per DC observation, To address this comment: - See TAG issues abstractComponentRefs-37 and DerivedResources-43. - Made these changes: 1) Globally added more context to these refs. See also previous comments from Martin Duerst about clearer idea why tag issues referenced. Still more probably needs to be done here. 2) In the specific place DC identified, removed ref to issue 43 since current TAG state is "we don't know what this issue is about." ---------------------------- revision 1.643 date: 2004/07/04 17:39:41; author: ijacobs; state: Exp; lines: +4 -1 Per DC observation, To address this comment: - Overly brief: For example, the assignment of more than one URI for a resource undermines the network effect. how so? no suggestion handy yet. - Made these changes: Instead of adding an example, added an xref to error handling section. ---------------------------- revision 1.642 date: 2004/07/04 17:35:12; author: ijacobs; state: Exp; lines: +3 -3 Per DC observation, To address this comment: - Unclear: Thus, the "http" URI specification licenses applications to conclude that authority components in two "http" URIs are equivalent you haven't made the connection between "are equivalent" and "identify the same resource". no suggestion handy just yet. - Changed "are equivalent" to "identify the same resource" ---------------------------- revision 1.641 date: 2004/07/04 17:30:08; author: ijacobs; state: Exp; lines: +8 -9 s/character-for-character/character-by-character (so that only one phrase is used) Also, per DC: Wordy: "Software developers should expect that it will prove useful to be able to share a URI across applications, even if that utility is not initially evident." suggest: "Software developers should expect that sharing a URI across applications will be useful, even if that utility is not initially evident." Also, per DC: "The most straightforward way of establishing that two parties are referring to the same resource is to compare, character-by-character, the URIs they are using. Two URIs that are identical (character for character)" suggest striking the 2nd "(character for character)" Instead: changed "(character for character)" to "in this way" since I'm not sure that syntactic identity is the only kind. ---------------------------- revision 1.640 date: 2004/07/04 17:25:58; author: ijacobs; state: Exp; lines: +4 -4 add some classes so that destination sections numbered automatically. ---------------------------- revision 1.639 date: 2004/07/04 17:25:16; author: ijacobs; state: Exp; lines: +4 -4 add some classes so that destination sections numbered automatically. ---------------------------- revision 1.638 date: 2004/07/04 16:49:53; author: ijacobs; state: Exp; lines: +3 -3 editorial: changed one e.g., to a for example ---------------------------- revision 1.637 date: 2004/07/04 16:47:12; author: ijacobs; state: Exp; lines: +1 -1 Per suggestions from DC and NW, s/MUST NOT/SHOULD NOT in: Agents making use of URIs SHOULD NOT attempt to infer properties of the referenced resource except as licensed by relevant specifications. Furthermore, this is more consistent with the text in the penultimate paragraph of 3.3.1 ---------------------------- revision 1.636 date: 2004/07/04 16:46:23; author: ijacobs; state: Exp; lines: +4 -2 per suggestions from DC and NW, added forward ref from 2.2 to 2.4 ---------------------------- revision 1.635 date: 2004/07/04 16:42:48; author: ijacobs; state: Exp; lines: +1 -1 Per DC suggestion, removed "agents" from abstract and left "people and software" (though it could also be hardware). Agents is defined later. ---------------------------- revision 1.634 date: 2004/07/04 16:41:44; author: ijacobs; state: Exp; lines: +536 -534 moved section on general arch principles to back per DC suggestion ---------------------------- revision 1.633 date: 2004/07/04 16:00:30; author: ijacobs; state: Exp; lines: +6 -6 ensure utf; prepare for 7 July publication ============================================================================= Changes between 10 May 2004 Editor's Draft and the 8 June 2004 Editor's Draft (diff since 10 May). ---------------------------- revision 1.624 date: 2004/06/08 23:08:33; author: ijacobs; state: Exp; lines: +12 -7 some changes in how we discuss authoritative representations ---------------------------- revision 1.623 date: 2004/06/08 23:05:48; author: ijacobs; state: Exp; lines: +20 -20 state in section on URI ownership that representations from URI owners are authoritative for those URIs. ---------------------------- revision 1.622 date: 2004/06/08 22:59:49; author: ijacobs; state: Exp; lines: +34 -64 put back some missing info on derference details ---------------------------- revision 1.621 date: 2004/06/08 21:24:53; author: ijacobs; state: Exp; lines: +9 -8 editorial, some based on comments from Jacek Kopecky: ---------------------------- revision 1.618 date: 2004/06/08 18:35:48; author: ijacobs; state: Exp; lines: +2 -3 editorial tweak regarding URI overloading based on comments from Kendall Clark ---------------------------- revision 1.617 date: 2004/06/08 18:24:21; author: ijacobs; state: Exp; lines: +20 -22 edits to section on XML namespaces basd on comments from MSM: ---------------------------- revision 1.616 date: 2004/06/08 18:10:54; author: ijacobs; state: Exp; lines: +20 -16 Took into account the following editorial suggestions from Mario Jeckle The order of the techniques mentioned in brackets in the first sentence of section 4 should be changed to "XHTML, RDF/XML, XMIL, XLink, CSS, and PNG" to be consistent with the following section. After the first paragraph of section 4 insertion of an additional comment recommending the re-use of at least the meta-format (such as XML) is helpful even if a concrete instance format (e.g, myFunnyML) is not possible or intended. The sentence "textual formats also have the considerable advantage that they can be directly read and understood by human beings" should be formulated more restrictive. Proposed addition: "? can be understood if sufficient knowledge about the underlying semantics is present or available". Section 4.5.3 sketches the "p" element as an example of a XML element which is "defined in two or more XML formats". Who did we not refer to the "set" element which is already present in both MathML and SVG? This would make the statement more precise and also give a useful example. Below the good practice "Namespace adoption" there is the term "fully qualified" introduced. Why isn't "qualified" sufficient here also? In section 4.5.4 there is a bullet-point list collecting reasons for provide information about a namespace. Should we add "wish to retrieve the namespace policy" here? Sect. 4.5.6: It could be helpful to expand "ID" to "unique identification" the first time it is being used. Sect. 4.5.6 a type named "xs:ID" is used there. Should this not read "ID within the namespace assigned to XML Schema" here? [This one subsumed] ---------------------------- revision 1.615 date: 2004/06/07 22:47:18; author: ijacobs; state: Exp; lines: +7 -18 Per 7 June 2004 tag teleconf, - Edits to section to 2.2 to remove large number discussion. - Left in mid/cid as examples where there is hierarchical delegation. - Note rationale for establishing uri/social entity relationship: "It is useful for a URI scheme to...". Not sure if that' sufficient. ---------------------------- revision 1.614 date: 2004/06/07 22:38:29; author: ijacobs; state: Exp; lines: +17 -8 Per 7 June 2004 teleconf, and - Some rewrites regarding mappings in section on xml namespaces - link from qname mappings to this section. ---------------------------- revision 1.613 date: 2004/06/07 19:02:21; author: ijacobs; state: Exp; lines: +108 -109 Per, - s/namespace representation/namespace document. - Based on discussion with TBL, a number of changes about the definitions and relations among: xml namespace namespace document namespace uri ---------------------------- revision 1.612 date: 2004/06/07 03:59:03; author: ijacobs; state: Exp; lines: +11 -7 Per, for issue, Some tweaks regarding links and net effect. ---------------------------- revision 1.611 date: 2004/06/07 03:50:34; author: ijacobs; state: Exp; lines: +4 -1 Per, for issue In section on with primary/secondary resource, included ref to dfns in RFC2396bis. ---------------------------- revision 1.610 date: 2004/06/07 03:47:46; author: ijacobs; state: Exp; lines: +5 -2 Per, for issue: In section 4.5.6, new fourth bullet: <li>In practice, applications may have independent means (such as those defined in the XPointer specification, [<a>XPTRFR</a>] <a href="">section 3.2</a>) of locating identifiers inside a document. ---------------------------- revision 1.609 date: 2004/06/07 03:43:10; author: ijacobs; state: Exp; lines: +2 -2 ediotrial ---------------------------- revision 1.608 date: 2004/06/07 03:41:12; author: ijacobs; state: Exp; lines: +11 -12 Per, for issue: In section 4.5.3, changed para to start: . " ---------------------------- revision 1.607 date: 2004/06/07 03:30:13; author: ijacobs; state: Exp; lines: +3 -2 Per, for issue In section on xml namespaces (4.5.3), - Modified last sentence of para after story to say: "Namespaces in xml use URIs in order to obtain the properties of a global namespace." - Included cross ref to section on URI ownership. ---------------------------- revision 1.606 date: 2004/06/07 03:23:43; author: ijacobs; state: Exp; lines: +25 -16 Per, for issue, Introduced new section 4.6. ---------------------------- revision 1.605 date: 2004/06/07 03:04:56; author: ijacobs; state: Exp; lines: +2 -3 Per, for issue Third bullet in section 4.2.4 changed to: <li>The semantics of combining RDF documents containing multiple vocabularies is well-defined. ---------------------------- revision 1.604 date: 2004/06/07 03:03:03; author: ijacobs; state: Exp; lines: +1 -1 Previous change should close ---------------------------- revision 1.603 date: 2004/06/07 03:02:38; author: ijacobs; state: Exp; lines: +1 -3 Per, - In section 4.2.3, deleted "As part of defining an extensibility mechanism, specification designers should set expectations about agent behavior in the face of unrecognized extensions." ---------------------------- revision 1.602 date: 2004/06/07 03:00:51; author: ijacobs; state: Exp; lines: +2 -3 Per, for issue GPN now reads: A data format specification SHOULD provide for version information. ---------------------------- revision 1.601 date: 2004/06/07 02:58:01; author: ijacobs; state: Exp; lines: +12 -5 Per, for issue, Added para to section 3.3: "Internet media type mechanism does have its limitations. For instance, media type strings do not support versioning or other parameters. The TAG issue mediaTypeManagement-45 concerns the appropriate level of granularity of the media type mechanism." ---------------------------- revision 1.600 date: 2004/06/07 02:52:25; author: ijacobs; state: Exp; lines: +12 -6 Per, in section 4 tried to talk about protocol extensibility in section 1.2 ---------------------------- revision 1.599 date: 2004/06/07 02:38:50; author: ijacobs; state: Exp; lines: +3 -3 Per, for issue, - Deleted "falling back to default behavior" in section "Extensibility" ---------------------------- revision 1.598 date: 2004/06/07 02:36:23; author: ijacobs; state: Exp; lines: +2 -2 Per, In section 4, first sentence, changed order to "XHTML, RDF/XML, SMIL, XLink, CSS and PNG" per suggestion from MJ. ---------------------------- revision 1.597 date: 2004/06/07 02:35:11; author: ijacobs; state: Exp; lines: +1 -3 Per, change text in 2.7 to "peer-to-peer systems. ---------------------------- revision 1.596 date: 2004/06/07 02:32:32; author: ijacobs; state: Exp; lines: +3 -2 Per, In 2.2, changed sentence about delegation to read: "This document does not address how the benefits and responsibilities of URI ownership may be delegated to other parties, such as to a server manager or to someone who has been delegated part of the URI space on a given Web server." ---------------------------- revision 1.595 date: 2004/06/07 02:31:07; author: ijacobs; state: Exp; lines: +3 -2 Per, In 3.6.2: s/authorities servicing URI/URI owner/ ---------------------------- revision 1.594 date: 2004/06/07 02:29:58; author: ijacobs; state: Exp; lines: +2 -1 Per, added for 3.6.1: "there is a benefit to the community in providing representations." ---------------------------- revision 1.593 date: 2004/06/07 02:27:38; author: ijacobs; state: Exp; lines: +1 -1 Per, s/unpredictable/unreliable or unpredictable/ ---------------------------- revision 1.592 date: 2004/06/07 01:43:17; author: ijacobs; state: Exp; lines: +17 -27 Per, simplified discussion in 3.5.1 and distinguished transaction requests from results. ---------------------------- revision 1.591 date: 2004/06/07 01:25:52; author: ijacobs; state: Exp; lines: +2 -4 Deleted: Note that even though the response to an HTTP POST request may contain a representation, the response to an HTTP POST request is not necessarily a representation of the resource identified in the POST request. From earlier section since looks like it will fit in 3.5.1 ---------------------------- revision 1.590 date: 2004/06/07 01:24:10; author: ijacobs; state: Exp; lines: +3 -2 Per, in 3.5: added "necessarily" to "the word "unsafe" does not necessarily mean "dangerous"; s/results/requests in "It is a breakdown of the Web architecture if agents cannot use URIs to reconstruct a "paper trail" of transaction requests," ---------------------------- revision 1.589 date: 2004/06/07 01:22:23; author: ijacobs; state: Exp; lines: +1 -1 chnaged title of GPN in 3.4 ---------------------------- revision 1.588 date: 2004/06/07 01:21:40; author: ijacobs; state: Exp; lines: +79 -80 Per, significant edits to 3.4 and 3.4.1: - New title: Inconsistencies between Representation Data and Metadata - New first para. - No more discussion of "authoritative metadata" - Added example of html + text/plain as not being an inconsistency. ---------------------------- revision 1.587 date: 2004/06/07 00:21:27; author: ijacobs; state: Exp; lines: +68 -59 Per, significant edits to 3.3.2. Fragment Identifiers and Multiple Representations: - New title: Fragment Identifiers and Content Negotiation - Defined coneg up front. - Per ftf meeting, cited three possible outcomes (consistent, inconsistent, defined in only one spec). - Distinguish server management error from error on agent side in case three. - Refer to CUAP in case there. - Removed story. - Removed GPN. - Removed reference to httpRange-14 ---------------------------- revision 1.586 date: 2004/06/06 23:23:39; author: ijacobs; state: Exp; lines: +7 -15 Per, edits in 3.6.2. URI Persistence: Removed bulleted list. Now talk about inconsistency, and that defined from perspective of representation provider (or URI owner). ---------------------------- revision 1.585 date: 2004/06/06 23:07:25; author: ijacobs; state: Exp; lines: +1 -1 Per, in 3.3.2: s/SHOULD NOT/MUST NOT in GPN ---------------------------- revision 1.584 date: 2004/06/06 23:00:44; author: ijacobs; state: Exp; lines: +6 -5 Per, edits in 3.3.1: - Edited the sentence "Parties that draw" to be about syntactic analysis rather than representations. - n story, section 3, deleted "by /satimage/oaxaca". ---------------------------- revision 1.583 date: 2004/06/06 22:53:38; author: ijacobs; state: Exp; lines: +10 -10 Per, edits in 3.3.1: - Remove "during a retrieval action" in penultimate para. - Delete from "Note..." to end of same paragraph. HOWEVER: I think that the point made at the ftf meeting that a resource can be both a primary and secondary resource is worth making, so I made it at end of section. ---------------------------- revision 1.582 date: 2004/06/06 22:47:25; author: ijacobs; state: Exp; lines: +11 -6 Per, Added: <li>One cannot carry out an HTTP POST operation using a URI that identifies a secondary resource. Combined it in OL in 3.3.1 with: All Information Resources are primary resources. ---------------------------- revision 1.580 date: 2004/06/06 22:40:22; author: ijacobs; state: Exp; lines: +21 -23 Editorial: moved example of dereference (SVG) to own subsection. ---------------------------- revision 1.579 date: 2004/06/06 22:31:44; author: ijacobs; state: Exp; lines: +64 -85 more editing about info resources, some based on: ---------------------------- revision 1.578 date: 2004/06/06 22:07:05; author: ijacobs; state: Exp; lines: +54 -28 starting to incorporate Information resource (new section near beginning of 4) and also that all Info Resources are primary resources. ---------------------------- revision 1.574 date: 2004/06/06 21:14:19; author: ijacobs; state: Exp; lines: +0 -11 Per, deleted third para. ---------------------------- revision 1.572 date: 2004/06/06 21:13:04; author: ijacobs; state: Exp; lines: +8 -7 Moved a para about multiple reprs. from different URI owners form 2.7.2 to 2.3 since not specific to sem web technologies. ---------------------------- revision 1.571 date: 2004/06/06 21:08:15; author: ijacobs; state: Exp; lines: +5 -2 Some edits to sentence on delection of URI ownership authority. ---------------------------- revision 1.570 date: 2004/06/06 21:05:00; author: ijacobs; state: Exp; lines: +7 -6 Per, Section 2.3: - Added "For example, " before ". When the HTTP protocol is used to provide representations, the HTTP origin server (defined in [RFC2616]) is the software agent acting on behalf of the URI owner." - Moved previous sentence to section on authoritative metadata. Section 2.4: - Removed "normative" in first para. ---------------------------- revision 1.569 date: 2004/06/05 01:18:24; author: ijacobs; state: Exp; lines: +28 -17 Per, rewrite of 2.2.2 in terms of indirect identification. Included rhetorical analogy. Also included this attempt at a definition, taken from RF comments: "A URI acts as an indirect identifier when it is a component in a string of references that, taken together, identify something." ---------------------------- revision 1.568 date: 2004/06/05 00:34:46; author: ijacobs; state: Exp; lines: +32 -28 Per, recast GPN 2.2: Avoiding URI Overloading: Agents SHOULD find out what resource a URI identifies before using that URI. removed from 2.2: "In many contexts, inconsistent use may not lead to error or cause harm. However, in some contexts such as the Semantic Web, much of the value of a relies on consistent use of URIs." The Semantic Web does not fail in the face of inconsistency, though its value certainly increases with consistent usage. ---------------------------- revision 1.567 date: 2004/06/05 00:09:26; author: ijacobs; state: Exp; lines: +1 -1 RFC2396bis uses the term "percent-encode" instead of "escape"; that change was made here as well. ---------------------------- revision 1.566 date: 2004/06/05 00:08:56; author: ijacobs; state: Exp; lines: +18 -26 Deleted this example from 2.1.1 since it is NOT about URI Aliases; these URIs identify different resources. -Publish a URI (such as "") for a resource with multiple representations in different human languages. Agents use content negotiation to select a representation of this resource according to user preferences for language. - Publish one URI per available language of the previous resource, such as "" (the Italian resource) and "" (the Spanish resource). One may wish to refer to the language-independent resource or to a language-specific resource; they are all different resources identified by different URIs. ---------------------------- revision 1.565 date: 2004/06/04 23:49:34; author: ijacobs; state: Exp; lines: +2 -6 removed more oaxaca example from 2.1.1 ---------------------------- revision 1.564 date: 2004/06/04 23:48:11; author: ijacobs; state: Exp; lines: +18 -28 Per, in section 2.1, removed two http URI examples, leaving instead a reference to section 6 of [URI]. ---------------------------- revision 1.563 date: 2004/06/04 23:44:00; author: ijacobs; state: Exp; lines: +2 -2 In 2.1: s/false negative comparision/false negative same for positive ---------------------------- revision 1.561 date: 2004/06/04 23:42:22; author: ijacobs; state: Exp; lines: +4 -9 Per, deleted the "URI multiplicity" constraint from the beginning of 2.1. This text was moved (as an ordinary sentence) to the new subsection: URI/Resource Relationships. ---------------------------- revision 1.560 date: 2004/06/04 23:40:10; author: ijacobs; state: Exp; lines: +1 -1 editorial: removed an instance of "people" where not necessarily people. ---------------------------- revision 1.559 date: 2004/06/04 23:39:11; author: ijacobs; state: Exp; lines: +120 -64 Per, a number of changes to the beginning of sections 2 and 2.1. - Replaced initial constraint with a new principle and a new GPN: Principle: Global Identifiers: Global naming leads to global network effects. GPN: Identify with URIs: To benefit from and increase the value of the World Wide Web, agents should provide URIs as identifiers for resources. - Created a new subsection specifically about the benefits of the URI mechanism. Moved subsequent text up to that section. - Introduced a little more text about "other mechanisms" for identifying resources, with some forward links. Also about the costs of new mechanisms that replicate URIs. - Created a new subsection about URI/Resource Relationships that states clearly: - URI identifies one resource. - More than one URI may identify same resource. Also provided a little (new) rationale for each of those constraints. - In that subsection, asked some frequently asked questions with forward pointers to sections with discussion about the answers. In particular, tried to address the question of whether resource can have zero identifiers by saying (a) in world with only URIs, doesn't matter and (b) with advent of other systems, might change. - This simplified the initial discussion of 2.1; in second paragraph, added last sentence about generally higher cost of determining whether two different URIs identify the same resource. - Also trying to introduce more 'pro and con' discussion per the ftf meeting. - Other text (e.g., on format network effects) was moved to other sections. ---------------------------- revision 1.558 date: 2004/06/03 23:33:37; author: ijacobs; state: Exp; lines: +2 -2 tweak re: 404 (not found) ---------------------------- revision 1.557 date: 2004/06/03 23:32:55; author: ijacobs; state: Exp; lines: +39 -24 Per, in section 1.2.3, distinguish error correction from error recovery. Related issues: ---------------------------- revision 1.556 date: 2004/06/03 22:17:19; author: ijacobs; state: Exp; lines: +6 -5 Per, editorial: - 1.1.1, bullet 1: deleted "i.e., " to end of bullet. - 1.1.2, para 3: s/document involve human activity/document that involve human activity/ - 1.1.2, para 3: Added ref to voicexml2 ---------------------------- revision 1.555 date: 2004/06/03 22:08:46; author: ijacobs; state: Exp; lines: +1 -1 editorial: s/periodically-updated/periodically updated/ ---------------------------- revision 1.554 date: 2004/06/03 22:07:59; author: ijacobs; state: Exp; lines: +32 -25 Per, in 1.2.1: - Changed "independent" back to "orthogonal" (globally) - Deleted "loosely coupled" - Expanded this paragraph after the first bulleted list: When two specifications are orthogonal, one may change one without requiring changes to the other, even if one has dependencies on the other. For example, although the HTTP specification depends on the URI specification, the two may evolve independently. This orthogonality increases the flexibility and robustness of the Web. For example, one may refer by URI to an image without knowing anything about the format chosen to represent the image. This has facilitated the introduction of image formats such as PNG and SVG without disrupting existing references to image resources. ---------------------------- revision 1.553 date: 2004/06/03 21:28:12; author: ijacobs; state: Exp; lines: +8 -2 Issue Per, In section 1.2.4, modified first paragraph to talk more about large-scale protocols v. traditional software APIs. ---------------------------- revision 1.552 date: 2004/06/03 21:13:49; author: ijacobs; state: Exp; lines: +19 -12 Issue Per, In section 1.2.2, * Extended language: A+B * Extension: B ---------------------------- revision 1.551 date: 2004/06/03 20:37:54; author: ijacobs; state: Exp; lines: +18 -6 Per, 2.1.1: Expanded example to show N resources: one language-independent and N-1 language-specific. ---------------------------- revision 1.550 date: 2004/06/03 20:18:10; author: ijacobs; state: Exp; lines: +0 -3 Per, 1.1.3: Deleted 'This categorization is derived from Roy Fielding's work on "Representational State Transfer" [REST].' Changes between 7 May 2004 Editor's Draft and the 10 May 2004 Editor's Draft (diff since 7 May; diff since 9 Dec 2003). Changes between 9 Dec 2003 Last Call Working Draft and the 7 May 2004 Editor's Draft (diff). ---------------------------- revision 1.547 date: 2004/05/10 21:32:07; author: ijacobs; state: Exp; lines: +16 -16 Per 9 Feb 2004 TAG teleconf: s/namespace document/namespace representation/ ---------------------------- revision 1.546 date: 2004/05/10 21:25:10; author: ijacobs; state: Exp; lines: +1 -1 Per TAG decision, s/unreliable/unpredictable ---------------------------- revision 1.531 date: 2004/05/07 23:22:42; author: ijacobs; state: Exp; lines: +4 -1 Per 3 May meeting, added : Note also that since dereferencing a URI (e.g., using HTTP) does not involve sending a fragment identifier to a server or other agent, certain access methods (e.g., HTTP PUT, POST, and DELETE) cannot be used to interact with secondary resources. ---------------------------- revision 1.530 date: 2004/05/07 23:12:06; author: ijacobs; state: Exp; lines: +2 -7 Per 26 April teleconf, deleted:.). ---------------------------- revision 1.528 date: 2004/05/07 23:01:11; author: ijacobs; state: Exp; lines: +51 -50 moved a section around to try to tell a story of disambiguation ---------------------------- revision 1.507 date: 2004/05/07 20:22:52; author: ijacobs; state: Exp; lines: +13 -8 Maybe also ---------------------------- revision 1.506 date: 2004/05/07 20:10:03; author: ijacobs; state: Exp; lines: +0 -4 Deleted: "On the other hand, it is considered an error if the semantics of the fragment identifiers used in two representations of a secondary resource are inconsistent." Added text from RFC2396 per ---------------------------- revision 1.505 date: 2004/05/07 20:08:52; author: ijacobs; state: Exp; lines: +2 -1 Added "necessarily" per ---------------------------- revision 1.504 date: 2004/05/07 19:24:08; author: ijacobs; state: Exp; lines: +11 -5 Editorial clarification for ---------------------------- revision 1.503 date: 2004/05/07 19:18:13; author: ijacobs; state: Exp; lines: +22 -19 Editorial changes based on NOT treated: (a) Illustration The shadows in this graphic convey no information; they are (in the sense defined by Edward Tufte) chartjunk. Please remove them! ---------------------------- revision 1.502 date: 2004/05/07 18:59:13; author: ijacobs; state: Exp; lines: +9 -8 Editorial changes based on However, did NOT address these: [Section 1] The initial part of section 1 is good, but section 1.1 is very jarring following it. It doesn't flow well at all. [3.5.1] We are surprised to not see a best practice recommendation here. [4.5.3] (And elsewhere) If namespace prefixes are used, there should be a table indicating their bindings to URIs. [4.5.6] and [4.5.8] highlight a lot of problems, but make no recommendations about what to do about them. ---------------------------- revision 1.501 date: 2004/05/07 18:51:38; author: ijacobs; state: Exp; lines: +6 -7 Adopted proposed text:." ---------------------------- revision 1.500 date: 2004/05/07 18:50:12; author: ijacobs; state: Exp; lines: +3 -2 Adopted proposal: "The xsi:type attribute, provided by W3C XML Schema for use in XML instance documents, is an example of a global attribute." ---------------------------- revision 1.499 date: 2004/05/07 18:48:23; author: ijacobs; state: Exp; lines: +1 -2 s/ that can be understood in any context// Per ---------------------------- revision 1.498 date: 2004/05/07 18:46:24; author: ijacobs; state: Exp; lines: +1 -1 s/JPEG/SVG [My understanding is: no binary in XML. The JPEG could be encoded (base64), but happy to put svg instead] ---------------------------- revision 1.497 date: 2004/05/07 18:39:18; author: ijacobs; state: Exp; lines: +38 -17 Added some text from RFC2396 bis per 3 May teleconf. The new text does NOT say "don't use content negotiation". ---------------------------- revision 1.496 date: 2004/05/07 18:22:58; author: ijacobs; state: Exp; lines: +23 -23 Editorial changes based on ---------------------------- revision 1.495 date: 2004/05/07 18:15:56; author: ijacobs; state: Exp; lines: +4 -5 - Removed legal case as example. - changed "application context may required" to "favor" ---------------------------- revision 1.494 date: 2004/05/07 18:12:05; author: ijacobs; state: Exp; lines: +1 -1 This text satisfies ---------------------------- revision 1.493 date: 2004/05/07 18:09:01; author: ijacobs; state: Exp; lines: +6 -0 added good practice for URI owners in response to: ---------------------------- revision 1.492 date: 2004/05/07 18:01:24; author: ijacobs; state: Exp; lines: +3 -6 Edited this sentence: Formats that allow content authors to use URIs instead of local identifiers foster the "network effect": the value of these formats grows with the size of the deployed Web. ---------------------------- revision 1.491 date: 2004/05/07 17:54:02; author: ijacobs; state: Exp; lines: +1 -1 I think edited text in 3.4 might address ---------------------------- revision 1.490 date: 2004/05/07 17:52:02; author: ijacobs; state: Exp; lines: +1 -1 Attempt to soften claim about cost of overloading by adding "often" ---------------------------- revision 1.489 date: 2004/05/07 17:50:19; author: ijacobs; state: Exp; lines: +12 -4 No longer talks about silent recovery, but rather recovery without user consent. Some text taken." This GPN also updated: Authoritative metadata: Agents MUST NOT ignore authoritative metadata unless the user given consent to this behavior. ---------------------------- revision 1.488 date: 2004/05/04 23:41:45; author: ijacobs; state: Exp; lines: +4 -2 Added "or transformed dynamically to the hardware or software capabilities of the recipient" to section 3.4 ---------------------------- revision 1.487 date: 2004/05/04 23:33:46; author: ijacobs; state: Exp; lines: +14 -0 NEW: Added a paragraph to scope of document on voice browsing and other interaction contexts. ---------------------------- revision 1.486 date: 2004/05/04 22:49:27; author: ijacobs; state: Exp; lines: +1 -1 Based on comment from Voice WG [1], changed: "It is a breakdown of the Web architecture if agents cannot use URIs to reconstruct a "paper trail" of transactions" to "It is a breakdown of the Web architecture if agents cannot use URIs to reconstruct a "paper trail" of transaction results" [1] (see discussion of 3.4.2) ---------------------------- revision 1.485 date: 2004/05/04 22:07:27; author: ijacobs; state: Exp; lines: +2 -1 added link to summary ---------------------------- revision 1.480 date: 2004/05/03 15:57:34; author: ijacobs; state: Exp; lines: +30 -27 Per Martin Duerst Comment: Now only using principle, constraint, good practice (in that order). Also highlighted in abstract ---------------------------- revision 1.479 date: 2004/05/03 15:46:57; author: ijacobs; state: Exp; lines: +5 -5 Changed indicated titles of GPNs ---------------------------- revision 1.478 date: 2004/04/28 20:31:40; author: ijacobs; state: Exp; lines: +26 -25 - Moved story from beginning of 3.5 to a few paragraphs in. - Moved one sentence from 3.5 story to 3.5.1 story. ---------------------------- revision 1.477 date: 2004/04/28 20:24:43; author: ijacobs; state: Exp; lines: +18 -15 Did the following editorial (or they were subsumed): Secondary resource definition doesn't parse, probably should drop the second "that". Did NOT do: 5 A Link does not need to be internal to a representation of any of the two (or more) resources between which there is a relationship, the definition might want to mention that. ---------------------------- revision 1.476 date: 2004/04/28 20:02:55; author: ijacobs; state: Exp; lines: +7 -2 Per, added to end of 3.6.1: "For example, the owner of an XML Namespace should provide a Namespace Document; below we discuss useful characteristics of a Namespace Document." ---------------------------- revision 1.475 date: 2004/04/28 19:54:45; author: ijacobs; state: Exp; lines: +7 -6 In GPNs, s/server manager/representation provider/ which I think is true in its more general form. There is also a connection between URI owner and "providing a representation" in the section on authoritative metadata. Not sure yet whether there needs to be a more formal definition of a representation provider. ---------------------------- revision 1.474 date: 2004/04/28 19:49:58; author: ijacobs; state: Exp; lines: +34 -33 Once more, in an attempt to reduce the number of subjects of GPNs: s/language|format designer/[format] specification/ in the GPNs. ---------------------------- revision 1.473 date: 2004/04/28 19:33:37; author: ijacobs; state: Exp; lines: +42 -35 --- * "authors of a specification" vs "language designer" vs "format designer" The distinction between format and language is said to be null in the document; I believe that usually, format is associated to the syntactic part of a language (which also includes the semantics); I think that at least the terms should be used consistently (ie either 'language designer' or 'format designer') in the conformance requirements, if only for ease of reading. Also, the terms "authors of a specifications" seems to be bound to the same type of subjects, but probably with a wider scope - maybe is there a way to merge all these terms in one? -- 1) In GPNs, s/Language designer/Specification designer/ 2) In document, s/specification author/specification designer/ s/author/content author/ s/developer/software developer/ 3) Moved note about format v. language to section on audience, and introduce phrase "specification designer" as an encompassing term. ---------------------------- revision 1.472 date: 2004/04/28 18:59:21; author: ijacobs; state: Exp; lines: +3 -3 s/user agent/agent in "Agents should detect such inconsistencies but should not resolve them without involving the user." ---------------------------- revision 1.471 date: 2004/04/28 18:58:29; author: ijacobs; state: Exp; lines: +3 -3 Consistent with "Authoritative Metadata" finding and this reviewer's comment: Changed "User agents MUST NOT silently ignore authoritative metadata. " to "Agents MUST NOT silently ignore authoritative metadata. " ---------------------------- revision 1.470 date: 2004/04/28 18:56:16; author: ijacobs; state: Exp; lines: +6 -6 s/uri publisher/uri owner per This is also consistent with eliminating "resource owner", also suggested by the reviewer. ---------------------------- revision 1.469 date: 2004/04/28 18:15:04; author: ijacobs; state: Exp; lines: +11 -10 Agreed with reviewer based on "Authoritative Metadata" Finding. Removed "server" from GPN. Also tweaked some text regarding "authority responsible for domain X": In this document, the phrase "authority responsible for domain X" indicates that the same entity owns those URIs where the authority component is domain X. ---------------------------- revision 1.468 date: 2004/04/28 17:19:41; author: ijacobs; state: Exp; lines: +6 -5 Editorial changes to text on Unicode. ---------------------------- revision 1.467 date: 2004/04/27 23:07:02; author: ijacobs; state: Exp; lines: +3 -3 Did not implement the following suggestions from sandro hawke: =========================================== == Comment 2, 1. Introduction: The server responds with a representation that includes XHTML data and the Internet Media Type "application/xml+xhtml". In the graphic, you show the media type as text/html, which is probably the better choice for simplicity's sake. Rationale: Adjusted image to application/xhtml+xml. =========================================== == Comment 3, 1.1.3 Principles, Constraints, and Good Practice This categorization is derived from Roy Fielding's work on "Representational State Transfer" [REST]. Authors of protocol specifications in particular should invest time in understanding the REST model and consider the role to which of its principles could guide their design: statelessness, clear assignment of roles to parties, uniform address space, and a limited, uniform set of verbs. The first sentence is fine, the second reads rather like a paid product placement. Is Fielding's thesis that much better than every other work ever written on distributed systems design that it merits strong recommendation in the section introducing labeling terms? If you want to save this text, put it in a Recommended Reading section. =========================================== == Comment 7, 1.2.4. Protocol-based Interoperability Did NOT add a GPN. ======================= I would suggest instead [of "secondary resource"] that you: (1) Name the the portion of the URI up the the "#". TimBL has called this the "racine", but I like "stem", "trunk", or maybe even "non-fragment portion". (2) Call the resource identified by a URI's stem the "stem resource", or something like that. ======================= ---------------------------- revision 1.466 date: 2004/04/27 23:06:03; author: ijacobs; state: Exp; lines: +6 -3 Tried to clarify meaning of "safe" by adding note: "Note: In this context, the word "unsafe" does not mean "dangerous"; the term "safe" is used in section 9.1.1 of [RFC2616] and "unsafe" is the natural opposite." ---------------------------- revision 1.465 date: 2004/04/27 22:59:59; author: ijacobs; state: Exp; lines: +4 -4 Followed In 3.2, s/electronic data/data ---------------------------- revision 1.464 date: 2004/04/27 22:59:17; author: ijacobs; state: Exp; lines: +5 -5 Followed suggestion of s/representation of the state of a resource/representation of a resource/ However, left "resource state" elsewhere. ---------------------------- revision 1.462 date: 2004/04/27 22:43:01; author: ijacobs; state: Exp; lines: +17 -6 markup fixes, added more example of sameAs per ---------------------------- revision 1.461 date: 2004/04/27 22:16:37; author: ijacobs; state: Exp; lines: +36 -40 Proposed fix to in previous edits. In this draft, adopted SH proposal to s/URI ambiguity/URI overloading/ As a result I *also* deleted the paragraph on natural language ambiguity; this seems less and less relevant. I substituted a paragraph at the end of the section on authoritative metadata: "Note that the choice and expressive power of a format can affect how precisely a representation provider communicates resource state. The use of natural language to communicate resource state may lead to ambiguity about what the associated resource is. This ambiguity can in turn lead to URI overloading." ---------------------------- revision 1.460 date: 2004/04/27 21:56:05; author: ijacobs; state: Exp; lines: +10 -11 =========================================== == Comment 9, 2.2. URI Ownership ... the "uuid" scheme ... and ... the "md5" scheme ... but you don't give references. They are not on IANA's list. I pay some attention, and I'm not aware of a stable specification for either one. The spec on DanC's list for UUID has long since expired; the reference for MD5 is simply to a hypothetical use of it. For uuid you could use urn:nid-5, but that's technically not a "URI scheme": Maybe you can says "such as a possible 'UUID' scheme", etc, or you could use WebDAV's unique-lock-token scheme. Instead: Merged "Random number" and "Checksums" list items into one list item: "Large numbers. The generation of a fairly large random number or a checksum reduces the risk of ambiguity to a calculated small risk. A draft "uuid" scheme adopted this approach; one could also imagine a scheme based on md5 checksums." ---------------------------- revision 1.458 date: 2004/04/27 21:47:31; author: ijacobs; state: Exp; lines: +5 -5 Agreed with and implemented ---------------------------- revision 1.457 date: 2004/04/27 21:38:14; author: ijacobs; state: Exp; lines: +12 -7 As suggested by SH, added forward link. However, did not add a GPN. Added "View source effect" to the glossary. =========================================== == Comment 7, 1.2.4. Protocol-based Interoperability. This seems out of place. I get the point, but it's never summed up. And I don't see how it belongs in 1.2 General Architecture Principles. I think you mean: Good practice: design protocols and data formats which people can view and reproduce with a minimum of special tools and effort. [ Ahhh, this is finally covered in Section 4.1; maybe a forward link? ] and maybe: Good practice: user agents should allow user to look "inside" to see (and even manipulate) the protocol interactions the agent is performing on behalf of the user. ---------------------------- revision 1.456 date: 2004/04/27 21:15:18; author: ijacobs; state: Exp; lines: +7 -7 Based on comment from SH: QUOTE =========================================== == Comment 6, 1.2.2. Extensibility: The following applies to languages, in particular the specifications of data formats, of message formats, and URIs. Note: This document does not distinguish in any formal way the terms "format" and "language." Context has determined which term is used. I can't really parse the first sentence. Maybe you mean something like: The data formats and (more generally) formal languages used in the bodies of messages and even in the text of URIs can be defined to have certain properties to promote evolution and interoperation. /QUOTE Changed to: "Below we discuss the property of "extensibility," exhibited by URIs and some data and message formats, which promotes technology evolution and interoperability. Note: This document does not distinguish in any formal way the terms "format" and "language." Context determines which term is used." ---------------------------- revision 1.455 date: 2004/04/27 21:06:03; author: ijacobs; state: Exp; lines: +3 -3 Accepted proposal from SH: =========================================== == Comment 4, 1.2.1. Orthogonal Specifications: ... agents can interact with any identifier ... That's ambiguous. Replace "with" with "using" and I think you're okay. Otherwise it sounds rather like the identifier is one of the parties doing something in an interaction. ---------------------------- revision 1.454 date: 2004/04/27 21:04:26; author: ijacobs; state: Exp; lines: +5 -5 Accepted proposal from SH: =========================================== == Comment 1, 1. Introduction: Identification. Each resource is identified by a URI. In this travel scenario, the resource is about the weather in Oaxaca and the URI... ^^^^^ This was jarring to read. The text up to that point is simple and direct, but suddenly here there's handwaving with "about." What *is* the resource identified by that URI? Fortunately, in the picture you answer this question. Suggested text: Each resource is identified by a URI. In this travel scenario, the resource is a periodically-updated report on the weather in Oaxaca, and the URI ... ---------------------------- revision 1.453 date: 2004/04/27 20:44:32; author: ijacobs; state: Exp; lines: +24 -21 More clean-up of resource owner v. URI owner. I hope this resolves ---------------------------- revision 1.452 date: 2004/04/27 20:35:03; author: ijacobs; state: Exp; lines: +3 -3 Previous changes take into account these issues: NOT DONE from Patrick's comments: ------------ Section 3.2, para 1, last sentence: Consider changing to "A message may even include metadata about the message itself (for message-integrity checks, for instance). Rationale: This is about the message metadata, not the message, I believe. ------------ Section 3.3.1, last para, last sentence: This sentence seems misleading, as if one can infer something about the nature of a secondary resource by interpreting a URI reference with fragement identifier. One cannot infer the nature of any URI denoted resource based either on the URI *or* based on any representation obtained by dereferencing that URI, either directly, or for URI references with fragment identifiers, by first dereferencing the base URI and interpreting the fragment in terms of the MIME type of the returned represenatation. This last sentence could either be removed or clarified/reworked. ------------- These issues: ---------------------------- revision 1.451 date: 2004/04/27 20:24:43; author: ijacobs; state: Exp; lines: +3 -3 Per stickler comments: s/cell phone/mobile phone ---------------------------- revision 1.450 date: 2004/04/27 20:21:13; author: ijacobs; state: Exp; lines: +13 -13 Deleted "Resource owner" from the document, replacing it with URI owner. ---------------------------- revision 1.449 date: 2004/04/27 20:11:32; author: ijacobs; state: Exp; lines: +13 -7 Clarified per suggestion of stickler1 Section 3.6, para 1: - Per suggestion from PS, added two more examples of unreliability. - Continue to move away from "resource owner" (since nobody owns the weather in Oaxaca) to "URI owner". ---------------------------- revision 1.448 date: 2004/04/27 20:05:51; author: ijacobs; state: Exp; lines: +9 -7 Clarified per suggestion of stickler1 Story in 3.5.1 on unsafe interactions and accountability. ---------------------------- revision 1.447 date: 2004/04/27 19:34:16; author: ijacobs; state: Exp; lines: +24 -17 Per suggestion of stickler1 "Section 3.4, para 2: The text of this paragraph is a bit too strong regarding URI owner's rights. The owner of a URI has the right to decide which representations of the denoted resource are accessible via that URI -- but in fact anyone has the license to create a representation of that resource, and indirectly associate that representation via another URI that is declared (e.g. using own:sameAs) as semantically equivalent. I.e. the rights of the owner of a URI are limited to the access of representations via that particular URI, not (necessarily) to total control of the resource denoted as well as any and all representations of that resource (accessible via other URIs)." Did two things: 1) Added this to the section on future directions and URIs: "One consequence of this direction is that URIs syntactically different can be used to identify the same resource. This means that multiple parties may create representations of the (same) resource, all available for retrieval using multiple URIs. The URI owner's rights (e.g., to provide authoritative representation metadata) extend only to the representations served for requests given that URI." 2) Changed: In our travel scenario, the authority responsible for "weather.example.com" has license to create representations of this resource. Which representation(s) Nadia receives depends on a number of factors, including: to: In our travel scenario, the owner of "" provides the authoritative metadata for representations retrieved for that URI. Precisely which representation(s) Nadia receives depends on a number of factors, including: ---------------------------- revision 1.446 date: 2004/04/27 19:15:55; author: ijacobs; state: Exp; lines: +7 -5 stickler1 Section 3.4, para 1, last sentence: The phrase "authoritative interpretation of representations of the resource" is a bit unclear. The owner of the URI can specify the denotation of the URI and what representations of that resource are accessible, but is it not the case that the MIME type specifications define the interpretation of any given representation -- insofar as the web architecture is concerned? I.e., for a given representation, it is the MIME type specification that defines the interpretation of that representation, not the owner of the URI denoting the represented resource. ??? FIXED (though might require additional editing) according to the "Authoritative Metadata" finding. ---------------------------- revision 1.445 date: 2004/04/27 19:09:00; author: ijacobs; state: Exp; lines: +9 -13 Per suggestion of stickler1 Removed first part of para after story, since it said almost the same thing as following paragraph and was less clear. ---------------------------- revision 1.444 date: 2004/04/27 19:00:56; author: ijacobs; state: Exp; lines: +3 -3 tweak in section 3.2, para 1: added "itself" ---------------------------- revision 1.443 date: 2004/04/27 18:59:33; author: ijacobs; state: Exp; lines: +12 -6 Per suggestion of stickler1 Adopted suggested replacement sentence in 3.1: Access may take many forms, including retrieving a representation of the state of the resource (for instance, by using HTTP GET or HEAD), adding or modifying a representation of the state of the resource (for instance, by using HTTP POST or PUT, which in some cases may change the actual state of the resource if the submitted representations are interpreted as instructions to that affect), and deleting some or all representations of the state of the resource (for instance, by using HTTP DELETE, which in some cases may result in the deletion of the resource itself)." ---------------------------- revision 1.442 date: 2004/04/27 18:57:46; author: ijacobs; state: Exp; lines: +25 -11 Per suggestion of stickler1 Added example in 2.3 on URI ambiguity. Also moved some content around to try to better explain what it means. ---------------------------- revision 1.441 date: 2004/04/23 19:45:48; author: ijacobs; state: Exp; lines: +4 -4 Also tried to make clearer that "URI ambiguity" means that the URI is used to refer to more than one resource in a context of Web protocols and formats. ---------------------------- revision 1.440 date: 2004/04/23 19:44:35; author: ijacobs; state: Exp; lines: +11 -11 per - s/Web resource/resource globally to avoid confusion. - In 2.3, changed last sentence to: "URI ambiguity arises when a URI is used to identify two different resources outside the context of Web protocols and formats." ---------------------------- revision 1.439 date: 2004/04/23 19:35:32; author: ijacobs; state: Exp; lines: +7 -4 per Added example in 2.1: "By following the "http" URI specification, agents are licensed to conclude that "" and "" identify the same resource." ---------------------------- revision 1.438 date: 2004/04/23 16:33:45; author: ijacobs; state: Exp; lines: +82 -65 Editorial changes based on In particular, reorganized 2.1 to read more clearly, and made second half a new subsection. ---------------------------- revision 1.437 date: 2004/04/23 15:32:22; author: ijacobs; state: Exp; lines: +9 -5 Added sentence to GPN in 4.5.4 per "When a namespace representation is provided by the authority responsible for the namespace, that material is authoritative." Also changed "Resource owner" to "Authority responsible for [an XML Namespace Name]" ---------------------------- revision 1.436 date: 2004/04/21 23:54:53; author: ijacobs; state: Exp; lines: +29 -21 Added forward link to section on representation management (3.6.3). Also reworked the text to state more clearly: URI owners get to serve authoritative metadata. ---------------------------- revision 1.435 date: 2004/04/21 23:24:17; author: ijacobs; state: Exp; lines: +4 -4 Issue Source Minor editorial fixes: 1.2.2 "Context has determined which term" should read "Context determines which term" for agreement of tense with sentence that precedes this ones. 1.2.3 "error condition so that a an agent" "so that an agent" ---------------------------- revision 1.434 date: 2004/04/21 23:22:53; author: ijacobs; state: Exp; lines: +8 -4 Issue Source Improved glossary entry for "secondary resource" ---------------------------- revision 1.433 date: 2004/04/21 23:18:35; author: ijacobs; state: Exp; lines: +9 -7 Issue Source 4.5.6, number 3: changed "might reveal the attributes of type ID" to "might reveal the attributes declared to be of type ID". Made some other minor tweaks as well. ---------------------------- revision 1.432 date: 2004/04/21 23:15:42; author: ijacobs; state: Exp; lines: +9 -5 Issue Source 3.4.1: Added examples to both bullets. ---------------------------- revision 1.431 date: 2004/04/21 23:01:45; author: ijacobs; state: Exp; lines: +13 -14 Issue Source Globally changed "orthogonal" to "independent" ---------------------------- revision 1.430 date: 2004/04/21 22:59:47; author: ijacobs; state: Exp; lines: +7 -5 Issue Source 1.1.3: s/<p>/ palso made the explanation a little more self-contained. ---------------------------- revision 1.429 date: 2004/04/21 22:57:02; author: ijacobs; state: Exp; lines: +3 -3 Issue Source 1.1 s/at least// ---------------------------- revision 1.428 date: 2004/04/21 21:56:35; author: ijacobs; state: Exp; lines: +4 -3 added clarification of what the "authority component" part of an http URI is. Cf: ---------------------------- revision 1.427 date: 2004/04/21 21:49:44; author: ijacobs; state: Exp; lines: +3 -3 bug fix in media type of intro story ---------------------------- revision 1.426 date: 2004/04/21 20:09:10; author: ijacobs; state: Exp; lines: +4 -4 Comment: ." Solution: Changed title to "URI nonuniqueness" ---------------------------- revision 1.424 date: 2004/03/31 22:56:51; author: ijacobs; state: Exp; lines: +15 -14 Editorial changes based on - All typos fixed. IMPLEMENTED: * Sect 5 - Term Index. Maybe missing some terms? Would be useful to see 'Web' (and 'WWW', 'World Wide Web'), 'URI' (as a 'see Unifo...' cross-reference), 'Data Format', 'Media Type' (maybe). * Sect 2.4.1, 2nd para, 3rd bullet, 'One should not expect...' - suggest change from 'will do anything useful with URIs of this scheme' to something like 'will do anything with URIs of this scheme beyond comparison' or some other wording. * Sect 4.1. Suggest to interchange 1st and 2nd paras to reflect order in section title.. We used to have that and then chose the current organization instead. >*? We have examples of other schemes. No need to use exotic schemes if not motivated by story. >* Sect 6 - References. Still minded to have a division between normative and >informative refs. Otherwise seems rather haphazard like the Web itself. Cf. >the [IETFXML] entry: 'This IETF Internet Draft is available at .... If this >document is no longer available, refer to...' And BTW, my understanding is >that I-Ds should ony be referenced as a work in progress. Same with [IRI] >entry below. TAG did not feel the need to have normative refs. >* Sect 2.4, 3rd para, 1st sentence, 'While the Web architecture...' - change >'is costly' to 'can be costly'? Not sure about this one. >* Sect 2.4, 3rd para, 3rd sentence, 'Introducing a new URI scheme...' - >change 'requires' to 'may require'? Not sure about this one. >(There is the problem here that unregistered scheme URIs may not be >authoritatively compared. OTOH if we have a registered scheme URI and an >unregistered scheme URI *using the same scheme* - can we authoritatively >compare them? Anyway, the point I am trying to bring out in this comment is >that URI identity/comparison is in and of itself a powerful utility, beyond >dereference.) > >*. Nonetheless, the statement is true. >*. The TAG did not agree to that definition. >* Sect 3.6.2, 1st para. Should clarify here that 'URI persistence' actualy >refers to persistence of the referenced resource, not to the URI. (That >point is made in the [Cool] reference entry but should be made here and not >in the refrence section.) Having reread the sentence, I don't believe that's necessary. It's defined clearly. >*? I think that's a qname rather than a qualified name. ---------------------------- revision 1.423 date: 2004/03/31 21:21:55; author: ijacobs; state: Exp; lines: +15 -2 Per comments from Tony Hammond Added World Wide Web, WWW, Web, and URI to the term index. Changes between 5 Dec 2003 Editor's Draft and the 9 Dec 2003 Last Call Working Draft. Changes between 3 Dec 2003 Editor's Draft and the 5 Dec 2003 Editor's Draft (diff). Changes between 28 Nov 2003 Editor's Draft and the 3 Dec 2003 Editor's Draft (diff). functionalPropertyper TBL suggestion.
http://www.w3.org/2001/tag/webarch/changes.html
crawl-001
refinedweb
14,040
54.12
Hey Haim, Thursday, June 28, 2001, 4:42:46 PM, you wrote: HD> Kevin, >> If you apply Dave Fuchs' patch to make a '.' a valid character (but making '/' >> and invalid one), then that becomes a valid Cyrus username. Search the Cyrus >> IMAP mailing list archives for it. He sent it out for 2.0.14 some time last >> week when I requested it (but I don't have it on me here) :) HD> So using that patch makes the "." part of a valid username. What do I do HD> about the '@' in the email address? AFAIK, the '@' is already a valid character in the Cyrus mailbox namespace. "Taken from an email to the cyrus list: cyrus-imapd-2.0.12 - imap/mboxname.c - line #187: I believe this is what you're looking for... #define GOODCHARS " +,-.0123456789:=@ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz~" -David Fuchs" Technically, the '.' is already a legal character in mailbox names, but it does something funky (I don't recall quite what it is/was), but the patch curbs that behaviour. HD> Thanks a lot (especially for answering so fast) Np. I've been doing a lot of research into this lately. You caught me at a good time ;) Btw, I have to agree with the LDAP recommendation. -- Kevin
https://lists.debian.org/debian-isp/2001/06/msg00309.html
CC-MAIN-2016-07
refinedweb
208
83.86
We are proud to announce the preview release of the Elastic APM .NET agent! To make sure everyone is on the same page I’d like to start by defining what we exactly mean by “Preview Release”: this release is the first properly packaged version of the .NET APM agent. The main goal is to collect feedback and to share our progress with the community. If you are interested in this work, please try the agent in your test environment and let us know how it works for you and what features you miss! The best way to give feedback and ask questions is in our discussion forum or if you find an issue or you would like to submit a pull request, jump to our GitHub repository. The .NET agent is Apache licensed and we are more than happy to receive contributions from the community! Elastic APM is an Application Performance Monitoring solution from Elastic and alongside the .NET agent, there are official agents available for Java, Node.js, Python, Ruby, JavaScript/RUM, and Go. Elastic APM helps you to gain insight into the performance of your application, track errors, and gauge the end-user experience in the browser. I will dive into this new APM .NET agent below, but if you'd prefer to watch a preview rather than read about it, check out this video: If you would like to know more details, please take a look at the Elastic APM .NET agent documentation. Supported frameworks We started the the design of the agent by asking our potential user base about what frameworks and .NET flavors they use for their .NET applications. Based on that feedback the preview release has auto instrumentation features for the following libraries: - ASP.NET Core 2.x - Entity Framework Core 2.x - Outgoing web requests with the HttpClient class on .NET Core Additionally, the APM .NET agent offers a Public Agent API that enables you to manually instrument your application for other frameworks, or for custom tagging. This API is shipped as a .NET Standard 2.0 library, which means you can use it on every .NET flavour that supports .NET Standard 2.0. In the case of .NET Full Framework this is version 4.6.1 or newer, and in case of .NET Core this is version 2.0 or newer. The is no restriction in terms of operating systems, the agent packages should work on any operating system that is supported by the .NET Framework you use. Downloading the agent The agent ships as a set of NuGet packages via nuget.org. The following packages are available: - Elastic.Apm.All: This is a meta package that references every other Elastic APM .NET agent package. If you plan to monitor a typical ASP.NET Core application that depends on the Microsoft.AspNetCore.All package and uses Entity Framework Core then you should reference this package. In order to avoid adding unnecessary dependencies in applications that aren’t depending on the Microsoft.AspNetCore.All package we also shipped some other packages - those are all referenced by the Elastic.Apm.All package. - Elastic.Apm: This is the core of the agent, which we didn’t name “Core”, because someone already took that name :) This package also contains the Public Agent API and it is a .NET Standard 2.0 package. We also ship every tracing component that traces classes that are part of .NET Standard 2.0 in this package, which includes the monitoring part for HttpClient. Every other Elastic APM package references this package. - Elastic.Apm.AspNetCore: This package contains ASP.NET Core monitoring related code. The main difference between this package and the Elastic.Apm.All package is that this package does not reference the Elastic.Apm.EntityFrameworkCore package, so if you have an ASP.NET Core application that does not use EF Core and you want to avoid adding additional unused references, you should use this package. - Elastic.Apm.EntityFrameworkCore: This package contains EF Core monitoring related code. ASP.NET Core + Elastic APM The focus of this release is clearly ASP.NET Core monitoring. The first step to enable the agent is to add the Elastic.Apm.All package to your application: The next step is to call the UseElasticApm method in the Configure method within the Startup.cs file: public class Startup { public void Configure(IApplicationBuilder app, IHostingEnvironment env) { app.UseElasticApm(Configuration); //…rest of the method } //…rest of the class } And that’s it! With this the agent will automatically capture incoming requests, outgoing database and HTTP calls and it will also monitor unhandled exceptions. You can also configure the agent with environment variables or by using the IConfiguration interface. For configuration options please see the Elastic APM .NET agent documentation. What about non-web .NET applications? Public Agent API As already hinted, even if you don’t use ASP.NET Core, you can still monitor your application with the public Agent API. For this, all you need to do is to reference the Elastic.Apm package. Once you did that you will have access to the public Agent API. The entry point of the API is the Agent.Tracer property. For more details about the public Agent API please refer to the documentation. This small sample shows how to monitor an application that listens to incoming HTTP requests with the HttpListener class: using System; using System; using System.Net; using System.Net.Http; using System.Threading.Tasks; using Elastic.Apm; using Elastic.Apm.Api; using Elastic.Apm.DiagnosticSource; using Newtonsoft.Json.Linq; namespace HttpListenerSample { class Program { static async Task Main(string[] args) { //We enable outgoing HTTP request capturing: Agent.Subscribe(new HttpDiagnosticsSubscriber()); // Create a listener. var listener = new HttpListener(); // Add the prefix listener.Prefixes.Add(""); listener.Start(); Console.WriteLine("Listening..."); while (true) { // Note: The GetContext method blocks while waiting for a request. var context = listener.GetContext(); // Capture the incoming request as a //transaction with the Elastic APM .NET Agent await Agent.Tracer.CaptureTransaction("Request", ApiConstants.TypeRequest, async () => { var request = context.Request; // Obtain a response object. var response = context.Response; // Construct a response. var responseString = $"<HTML><BODY> <p> Hello world! Random number:" + $" < {await GenerateRandomNumber()} </p>" + $" <p> Number of stargazers on <a" + $" href=\"\">" + $" GitHub for the APM .NET Agent </a>:" + $" {await GetNumberOfStars()} </p>" + $" </BODY></HTML>"; var buffer = System.Text.Encoding.UTF8.GetBytes(responseString); // Get a response stream and write the response to it. response.ContentLength64 = buffer.Length; var output = response.OutputStream; output.Write(buffer, 0, buffer.Length); // You must close the output stream. output.Close(); }); } listener.Stop(); } static readonly Random Random = new Random(); private static async Task<int> GenerateRandomNumber() { // Get the current transaction and then capture // this method as a span on the current transaction return await Agent.Tracer.CurrentTransaction .CaptureSpan("RandomGenerator", "Random", async () => { // Simulate some work await Task.Delay(5); return Random.Next(); }); } private static async Task<int> GetNumberOfStars() { var httpClient = new HttpClient(); httpClient.DefaultRequestHeaders.Add("User-Agent", "APM-Sample-App"); //This HTTP request is automatically captured by the Elastic APM .NET Agent: var responseMsg = await httpClient .GetAsync(""); var responseStr = await responseMsg.Content.ReadAsStringAsync(); return int.TryParse(JObject.Parse(responseStr)["stargazers_count"] .ToString(), out var retVal) ? retVal : 0; } } } One thing I’d like to point out is that in case of an ASP.NET Core application by executing the UseElasticApm() method the agent automatically starts capturing database and outgoing HTTP calls. This is not the case in non-ASP.NET Core applications therefore you have to subscribe to those events, which can be done with the Agent.Subscribe method. This is how you can subscribe to capture outgoing HTTP calls: Agent.Subscribe(new HttpDiagnosticsSubscriber()); And this is how you can subscribe to capture Entity Framework Core calls (for this the Elastic.Apm.EntityFrameworkCore package must be referenced): Agent.Subscribe(new EfCoreDiagnosticsSubscriber()); More complex configuration can be done through Agent.Setup() if so desired. Summary and future We would be thrilled to get feedback in our discussion forum or in our GitHub repository. Please keep in mind that the current release is a preview and we may introduce breaking changes based on feedback or in case we find some issue with the current release.Of course we are just in the beginning of this journey with .NET within the Elastic APM solution. We have lots of exciting things we are currently working on. Some examples are distributed tracing support, support for ASP.NET Classic, and many other things. Additionally, we are also always open for contributions, so feel free to check out the source code over at GitHub and to open a pull request.
https://www.elastic.co/blog/elastic-apm-dot-net-agent-preview-released
CC-MAIN-2019-30
refinedweb
1,423
52.46
2CommentsJoined May 29th, 2016 - nbattan commented on DanM5's instructable Moving Rainbow Arduino Sign7 months ago - nbattan enrolled in Robots Class1 year ago - nbattan commented on Magesh Jayakumar's instructable Quick Start to Nodemcu (ESP8266) on Arduino IDE1 year agoView Instructable » Very nice write-up for a Quick Start! Helped me confirm my hardware and software environment are set up correctly for first experience with a NodeMCU/ESP8266. One suggestion I'd like to make: Since the ESP8266 saves the SSID and password to non-volatile storage, I define them in preprocessor directives rather than variables. I then use a #ifdef/#else/#endif block to call WiFi.begin(MYSSID,MYPASSWORD) if MYSSID is defined or WiFi.begin() if it isn't. Once I've successfully connected the board to my WiFi I delete the preprocessor directives from my code. This greatly reduces the possibility that I'll accidentally store my WiFi credentials in plaintext on GitHub, Codebender or any other code-sharing site. Are you powering the entire string directly from the Nano's 5V output pin? WS2811 pixel modules at max brightness can draw 20-60 mA each, so 42 of them can easily exceed the Nano's max current specification. How have you gotten around this?
http://www.instructables.com/member/nbattan/
CC-MAIN-2018-05
refinedweb
208
62.48
Introduction Almost all SAP implementations require to certain degree a custom ABAP development. Some custom development projects are very small and involve one or two ABAP programmers implementing user exits or enhancements. Some are very large and include custom development of complete, complex bolt-on subsystems by a large onsite and/or offshore development teams. No matter whether your custom ABAP development is small, medium or large in size, it always pays off to approach it in well organized manner with potential future expansion in mind. When doing any software development, it is important to focus on source code reusability by building libraries of well defined, tested and documented components. Developing software libraries might increase initial development time for small projects but it would pay off quickly when undertaking additional assignments that would reuse those libraries. For medium and large size projects, the development and usage of software component libraries would have a positive effect on the development timeline and better quality of the source code. The library of reusable components might be useful in any project and could start with the development of widely used components for error handling, string manipulation, file i/o and alerts to name few areas. The road to success is not only in the development of reusable components but also in superior documentation that fully describes them, shows their signatures, examples of using them as well as provide a comprehensive search capabilities. Only then, the programmers will be fully aware of them and will appreciate and use them to the full extend rather than write similar ABAP code over and over again. Usually, the development process starts with preparing Blueprint, Functional Specification and Technical Specification documents. Once Technical Specification for custom development is ready and approved, the ABAP programmers start writing ABAP code implementing user requirements. The Technical Specification document describes how to implement a specific requirement described in the Functional Specification document. Usually, it is one to one relation. In real life, the process described in Technical Specification could be divided into number of separate components. Many of those components could be used when implementing objects described in more than one Functional Specification document. Adding Component Documentation Step The standard project documentation ends with Technical Specification document that is a basis for ABAP programming. It misses n:n relation between Technical Specification and Development Components where one Technical Specification may utilize many Development Components and one Development Component may be utilized by many Technical Specifications. The following diagram emphasizes the importance of the Development Components in building libraries of well documented and tested reusable programming objects: Figure 1 – Custom ABAP Development Methodology The traditional approach with one Functional Specification document followed by one Technical Specification document and its ABAP Programming misses the reusability advantage that might streamline many projects. The above diagram adds the additional step – the Development Component Documentation – shown as Apps & Libs with Inline Documentation. Often, the Technical Specification object may be divided into one or more Development Components that form libraries of reusable ABAP components; e.g., function modules, ABAP forms, ABAP object classes and methods, ABAP macros, … etc. The Technical Specification document is usually prepared in MS Word format and saved with other documentation in Solution Manager. The Development Component Documentation should be as close as possible to the development system. Since it would be mainly used by programmers it should be an integral part of ABAP components’ programs as Java documentation is a part of Java programs. To make ABAP Component Documentation easy to find and benefit from it during the development process, the Java Docs concept should be used to handle it. The generation program is needed to convert inline component documentation to online documentation and presentation program is needed to display it. The bottom layer on the preceding diagram shows components’ inline documentation in HTML format presented with help of ABAP Docs system. The entire custom development process is shown in more detail on the following diagram: Figure 2 – Custom Development Process with ABAP Docs Beside standard development process flow with Functional Specification, Technical Specification and Development steps, notice splitting the Technical Specification object into Development Components, creating their documentation with ABAP Docs as well as Development and Testing of those components. Those additional steps to the standard development process would allow for greater usability of ABAP code. Especially important step is documenting components to be developed with ABAP Docs so the functionality of the new and already available components could be available for reference online to the entire development team. Team Lead and Programmer Roles in Component Development The creation of the Development Component documentation should be delegated to team lead or chief architect/developer. The process consists of the following tasks: - Defining component scope that could be useful for this and future use - Selection of component name - Selection of component location; e.g., ABAP include where it will be implemented - Definition of component signature - Writing component inline documentation including code examples - Generation of component online documentation The component implementation should be delegated to a programmer in the development team. The process consists of the following tasks: - Development of component ABAP code - Writing of component test program - Preparing component test cases - Performing component unit testing - Updating component inline documentation - Generation of component online documentation ABAP Docs To help architects/developers/programmers with documenting Development Components and finding information about them, the ABAP Docs system was implemented. The ABAP Docs is a premier ABAP software documentation system intended for software developers. It is based on assumption that the documentation is the last thing that programmers want to do unless they can really benefit from it. It is why ABAP Docs, with its superior search capabilities, has developers in mind and gives them instantaneous access to software components’ online documentation, making the development process faster and easier. The software documentation created and generated with ABAP Docs provides very useful information on software components’ signatures, functional and technical description, example of their use, links to related components’ documentation as well as status information on the components’ development process. It can be also easily incorporated into MS Word based documents. ABAP Docs is not the artificial intelligence system and it does not write documentation for you. It only helps to write it and use it providing useful documentation templates, HTML generator and documentation cockpit. You are responsible for writing documentation. The documentation will be as good as written by architects, team leads and/or programmers. Once you start using it, you will quickly appreciate its benefits and write documentation that would be very helpful for the entire development team and speedup the implementation process. The complete free ABAP Docs source code and documentation is available through the link to the “Upgrade to ABAP Docs 2.0 …” whitepaper – the latest version of ABAP Docs that introduces its many new exciting features: Upgrade to ABAP Docs 2.0 and Contribute to Its Development in Google™ Code Community Project The ABAP Docs 2.0 system was also installed on the internal SAP Consultant’s E60 system; i.e., Application Server: tsphl815, System Number: 02, System Id: E60 I have long been used to JavaDoc and PHPDoc type comment blocks and am looking forward to having the same type of functionality within ABAP. I have scanned through the documentation and I look forward to trying it out in a sandpit. Inline documentation is much better that separate word docs as it is easier to write and easier to read. I am looking forward to see where the community takes this. Could it spell the end of the dreaded ‘No documentation available in language EN’? Many Thanks, Nigel Thank you, Adam – keep up the good work! Peter It will make a difference in developer´s tasks. Best regards. – anto nice tool! In my eyes it makes sense to make this a community project in order to get a broader audience similar to SAPLink and the downport of the new ABAP editor. Midterm goal could be a better integration into the ABAP editor. What about creating a project at code.google.com? Thanks, Stephan Having a component based development approach along with inline documentation would really help the development and support of the custom ABAP. Thanks for sharing the tool.I would try the tool and look forward to implementing in my next implementation project. Highly appreciate your contribution. I welcome your blog spot and I agree with requirement of some Custom ABAP Development Methodology. I know realization/delivery/providing of customer development is sometimes made ineffectively with impact on increased costs and unnecessarily too much utilized project team members. From your blog spot I understand that you are focused mainly on developer’s work. In other words “everything is in ABAP code and code must be well documented”. OK.. I agree with process steps illustrated in Figure 2 (Custom Development Process) and as a most important I see Meeting with Functional Lead, Functional Consultant, Development Lead and Developer to reviewing Technical Specification in contrast to customer requirement and prepared Functional Specification. Then the Fuctional Consultant with Fuctional Lead should sign off Technical Specification. Naturally. May be a question what should be contained/described in the technical specification because Fuctional Consultant with Fuctional Lead aren’t strong in technical issues. By the way your blog spot is oriented to promotion of ABAP Docs mainly. And you mean ABAP Docs like a tool for helpful online documentation generator based on inline comments in ABAP code. And this is what I can’t support in my professional ABAP developer’s practice (and with 6+ years of experience) because I often write technical specification via using UML schemes (in my object-oriented reality) what are used like a nice documentation after development process. That’s why I see ABAP Docs activity like back way. And as you know most of objects of ABAP Workbench and ABAP Dictionary are invested with well-known documentation possibility. For example on Function Modules, Programs, Global Classes, Transparent tables, Transport requests,.. etc. and from general experience this documentation aren’t maintained/updated in most of cases (and by SAP developers “occasionally” as well) on real projects. Finally it’s hardly to say how we can use your tool in WebDynpro, Persistent Classes, PDF-based Interactive Forms, Web Services (ABAP&J2EE collaboration) and other new frameworks. That’s why I don’t see ABAP Docs tool like helpful and I don’t prefer inline comment systems in modern development platform. Best regards, Ondrej Thank you for your comments. In my blog I differentiate between a technical specification document and a development component document. The technical specification describes requirements defined in functional specification from technical point of view and usually it tries to follow the business process structure so it can be understood and approved by functional consultants. However, without dividing technical specification into set well defined entities; e.g., function modules, ABAP forms, macros, etc., it is difficult to talk about reusability. When you look closer at it you will find out that many technical specification documents describe combination of new functionality and some functionality that was already developed and described in earlier functional/technical specification documents. Often, that already developed functionality was not encapsulated in a separate component so it cannot be easily reused. The concept of the development component is to divide the technical specification document into set of well defined programming components. When doing it, you will find out that some of those components were already defined. It would be easier to notice it, if those components are well documented and documentation is easy to access. The ABAP Docs system allows maintenance of that documentation inline where it is easily available to the programmer for quick update if necessary. As I understand your usage of UML for writing technical specification does not contradict the concept of the development component. You use UML to describe business process in technical document. The usage of development components; e.g., function module, class method, ABAP form, …, etc., divides it into separate entities that are well described with ABAP Docs providing information about signature, functional description, usage examples and references to similar components. Keeping component documentation inline with the source code is not a new concept. It is used for many years in Java development and Java documentation can be generated and accessed using JavaDocs. You are right that SAP provides tools for documenting function modules, classes, programs and other objects. As you know those tools are not used very often. One of the reasons is that they are not very convenient to use. Additionally, there is no tool that could display on one list information about; e.g., function modules, class methods, macros and ABAP forms. The ABAP Docs allows you to do this so you can have all type of objects on the same list. At the moment ABAP Docs supports function modules, ABAP forms, macros, ABAP types. It will be extended to support global and local class methods, constants and simple types. Of course there are limitations to ABAP Docs use. It does not support directly PDF-based interactive forms, WebDynpro, WebServices, BAdIs and some other component types. However, if appropriate, you can encapsulate programming logic of those components into function modules or subroutine pool ABAP forms. Then their functionality can be documented with ABAP Docs and reused in other places; e.g, function module called in BAdI could be reused in WebDynpo. Best regards, Adam Very good contribution! thank you! We will use it in our company… I also like the discussion about building library of components so we could reuse them in the future. Your blog is a good one and threw light in many area which are also vital for quality. In our development process, the developer details the components as well as main program logic in the TS. I am not very clear on what differentiates DC from TS. I also notice that, you suggest for splitting the TS into DCs after the TS is signed off by the FC. In this case technically the TS is distorted after sign-off. I may be confused here since I dont understand the difference between TS and DC. It would be useful if you take a simple case and eloborate your methodology right from FS, TS, DC, inline documentation and developed object. Regards Suresh Radhakrishnan Thank you for your comments. As you know the Functional Consultant signs off on the Technical Specification. For Functional Consultant to be comfortable with signing off the Technical Specification, he/she has to be comfortable with its content. Therefore, the Technical Specification should follow the structure of Functional Specification. It should be one to one relation between FS and TS. The Technical Specification should present information like function modules to be used, database tables to retrieve data from, configuration parameters considered, …, technology used; e.g., WebDynpro, standard ABAP or SAP Controls. With information like that and a little effort to verify it, the Functional Consultant should be able to understand the Technical Specification and eventually sign off on it. The Development Component Documentation is all about packaging the content of FS and TS into reusable, mostly custom components that Functional Consultant is usually not aware of. By dividing Technical Specification into one or more Development Components, some already developed, the development process could be better organized and benefit from code already developed and tested. Best regards, Adam It was clear & descent , for Programmers it is helpfull to know the big picture of Any Impelmentation , as you told it will enjoyable work , if everyone Knows about each level of work beginning from Blue Print to Coding… Many of Projects are not following , ebcs of running Out of time , so the work gets distracted . finally likage will be there @ every level . if we try to follow your Methodalogy it will great . Girish I am not finding the object ZABAP_DOCS_LIST in the nugget, I am sure I am missing something. It is supossed to be part of ZABAP_DOCS_LIST_REC structure. Kindly let me know what I need look into. Thanks Please, follow closely instructions in the Installation Manual. How to handle ZABAP_DOCS_LIST table is described on page 36. I had a several problems with SAPlink and was not able to download that table. You have to create it manually by copying and pasting information from manual. Some other objects are also not included in NUGG files; e.g., Development Classes/Packeage, Message Classes. I hope that this helps. Best regards, Adam Before i would like to say thanks for this blog and I was fortunate to work with you in my recent project. But as a developer I am not comfortable with Component Development. This methodology is killing freedom of choice for a developer. Technical architect should propose component development before the Realization phase.There should be discussion sessions between technical architect and developers. But this is a great methodology to optimize development efforts Best regards, Pradeepvonti. ZFLC is already defined as structure or table But there is no structure/table defined with this name. What could be done for that? Thank you for your comment. I have developed ABAP Docs on one system and then I used SAPlink to download it to NUGG files and upload it to another system – just to make sure that it will work. While doing this I was writing the Installation Manual. As you can see there, the process was not smooth. I had a few problems and many times I had to force activation of objects that did not compile. Once everything was activated the recompilation worked without any problem. Please, follow closely the Installation Manual – the order of steps might matter. I am not sure whether your problem could be fixed by forced activation of code that does not compile. If not, please, try to reinstalled everything selecting SAPlink option to delete/overwrite existing code. You might also try to create ZFLC type group before installing ZFLC nugget. I will try again to reinstall ABAP Docs on another system and verify Installation Manual. I hope that you would be able to install it and use it. Best regards, Adam Generation errors in program ————————————————————————- Source code %_CZFLC Line 0 Program %_CZFLC does not exist. I don’t know what to do, because if a reimport the nugget again there will be no change. Please, look at comment on 2008-09-04 20:41:20 Eswar Rao Boddeti. He had similar problems and found out how to go around them. It looks that the Installation Manual does not describe well how to install ABAP Docs. Best regards, Adam I just want to question about the development procedure. I dont see any Functional spec sign off, do we normally do that as well in the project? Thanks, Lim… a remarkable tool indeed. And the HUGE effort to build this is well understood! My doubt is: can such a methodology be deployed in real life projects? It takes 3 meetings per development according to your example. I would find it very hard to convince a client pay all these manhours. Maybe in the long run it is in favour of the budget for custom developments. But this has to be proven first. So it would be nice to have some kind of ROI study (or generic case study) for deploying such a methodology. Best Regards Spyros Regards, Worawit R. Thanks for the blog. Have some basic doubts, which I would appreciate if you could clarify. Lets say for eg that we are developing a custom report in which we call a custom function module (newly developed for the custom report that we are developing). What you are saying is that we should have seperate documentation for the FM using ABAP DOCS. Should we not have the details of the FM documented in the technical specification (TS) of the custom report? Also if we document details of the FM in the TS and seperately using ABAP DOCS, then arent we duplicating effort? Regards, Mick Great way to start blogging with such a nice tool. ABAP Docs is an informative tool and helps us developers provide inline documenation with quick and easy reference. To know that the tool is one man’s effort is simply amazing. However i would like to add few points to avoid the installation errors and please feel free to make your comments. 1. Change the approach for installation as: i) Create Type Groups ii) Create Dictionary Object: ZABAP_DOCS_LIST. iii) Extract each nugget individually starting the session afresh for each nugget iv) Ensure that all local and transportable objects are active v) Create Message Classes and Transactions 2. Add a condition in function module: ZADOC_HELP_P1LIST_DPARM_HSCLCK,Line No: 78 to check for positive offset. This is avoid short dump for negative offset while drilling the report output to ABAP Internal Types like ‘C’. Eg., if li_pos > 0. lc_suffix = lc_obj_name+li_pos. endif. Waiting for more informative blogs and utilities from you. Warmest Regards, Eswar Thank you for explaining how to go around the installation problems. I think that it should help others with installing the system. Obviously, the installation manual did not describe the process correctly. Thank you again for your help. The blog is really good one. I would look forward to use it soon, in near future. Thanks & Regards, Wajid Hussain P. Seems to be a nice tool. But I have the one or other question: 1. Are customer namespaces supported also like e.g. “/acme/myreport”? 2. Will this tool become part of SAP standard software delivery (as part of SAP_ABA or SAP_BASIS? 3. If not what about upgrade stability? Thanx for this blog, the toolset and for answering my questions. Regards, Volker Thank you for you comments. Here are the answers for your questions: 1. I did not test it but I cannot see why not. I guess / character in component name should not cause any problem. Please, try and let me know. If you want, you can also use it to document standard SAP components; e.g., function modules, using explicit enhancements. 2. I do not think that SAP has plans to include it in standard SAP. It was developed by me as Z* project after regular consulting hours. 3. I do not use anything specific in my code and as long as SAP Controls are supported, ABAP Docs should work fine Best regards, Adam 4. What about BSPs? 5. What about WebDynpro ABAP? 6. What about Enhancements? Regards, Volker Thank you for you comments. Here are the answers for your questions: 4. No BSPs at the moment – ABAP Docs is extendable software. At the moment, it supports ABAP Forms, ABAP Macros, ABAP Types, Function Modules. There is also there some code to support ABAP Classes and ABAP programs. However, it is not fully implemented. You can add more component types if you wish on your own. To do this, you would need to develop component editor template, component HTML template and component parser subroutine and integrate it with ABAP Docs engine. I am also considering to make out of ABAP Docs a community project on – if this would happen then possible many programmers could contribute developing functionality for new components, additional sections for existing components and completely new functionality 5. No ABAP WebDynpro at the moment 6. No Enhancements at the moment Even so components you mentioned are not supported by ABAP Docs, you might consider wrapping business or technical functionality that you implement in those components into function modules that you could document with ABAP Docs. When doing this you would be able to reuse those function modules in multiple components; e.g., BSPs and ABAP WebDynpros Best regards, Adam how can i fix the installation problem when defining the type group ZFLC. I had first imported the ZFLC-nugg-file. Have i to delete all objects of the nugg file? Is there any hidden object. I can’t understand the sap error message when creating type group ZFLC, because there is no equal named ddic object. Thanx for help Ullrich Burghoff Thank you for you comments. When writing ABAP Docs Installation Manual I mixed the order of steps needed to install ABAP Docs cleanly. Please, look at [2008-09-04 20:41:20 Eswar Rao Boddeti] comment to my blog. He explains how to go through the installation avoiding those problems. Best regards, Adam As a workaround to avoid unisntalling/deleting the ZFLC nugget, you can do as below: 1. Place a break-point in program: LSD31F01, Line No: 166. –> SUBRC check after selection from table DDTYPES. 2. Now try to create ZFLC type group via SE11. When you reach the break-point, change the SUBRC value, since the select statement checks for the pattern name on a global perpesctive rather than the object kind. Hope that helps 🙂 Regards Eswar shed some light, I am missing some thing … Thanks, Shaik – There is no installed SAPlink plugin for object type FUGR – There is no installed SAPlink plugin for object type TABL I don’t think that you can expect widespread adoption / use of the ABAP Docs tool – without it being a part of the SAP standard solution. As the effort taken to start tagging comments in all custom development is a significant one – and one that an SAP customer is unlikely to take without a guarantee of ongoing support for the toolset. Is there any chance of you pursuing this further within SAP? Thank you for you comments. At the moments are no plans to incorporate ABAP Docs into standard SAP ABAP Workbench. However, ABAP Docs as custom development is independent of standard SAP and you should not have any problems with upgrades to new SAP releases or installation of support packs. You are also free to make your own enhancements and if you would send them back to me I might incorporate them into next release. I plan to continue ABAP Docs development on my own and consider organizing community project on to get a broader audience. I am thinking about adding the following new features in the next release of ABAP Docs: • Support for ZMETH component for local and global classes • Automatic support for related components defined in the same ABAP include without listing them in component’s documentation • Support for ZCLAS component for local classes • Fixing problems with Installation Manual • Fixing some bugs Best regards, Adam one shed some light, I am missing some thing … – There is no installed SAPlink plugin for object type FUGR – There is no installed SAPlink plugin for object type TABL Thanks in advance, Shaik Please, read comment s to blog, especially Eswar Bodetti comments. Yhey might help you with the installation problems. Best regards, Adam I have exactly the same problem trying to install the ZFCL nugget: There is no installed SAPlink plugin for object type FUGR There is no installed SAPlink plugin for object type TABL So, I tried the proposal of Eswar Bodetti and I: – created the type groups (but without details because for ex. the inlcude zflc_tg was not found) – Created the table ZABAP_DOCS_LIST and when I try to import the nugget again, i have the same error as before. Please help Herbert In one of the comments to blog there is link that allows you downloading SAPlink including TABL & FUGR plugin. Please, follow Eswar explanation on how to deal with installation problems In next two weeks or so ABAP DOcs 2.0 is coming This is a good tool. Actually we have a plan to include this tool and customize it for our purpose. But while installing the tool, i found some issues. So, i seek some clarifications from you. Is the tool only applicable for Ecc 6.0 and above? Because while executing “ABAP Docs 2.1 Cockpit” transaction, it goes for a short dump, stating that SAP_BOOL (data type used in public section of class ZFLC_ALV_GRID_GC) is not available. SAP_BOOL data type doesnot exist in SAP v4.7. But it is available in SAP ECC 6.0. I have checked it. Since, this is one of the errors i found, i seek your valuable clarification regarding the following: Is it possible to install the tool in SAP v4.7 or is it only applicable for Ecc 6.0 and above? What are the pros and concerns to be taken care, when i need to install the tool in SAP v4.7? What are all the issues i will face? Thanks and Best Regards, Suresh Sorry for replying so late. I missed your comment. Even so ABAP Docs was developed on ECC 6.0 you should be able to port it back to SAP 4.7 with minimal effort. Please, let me know how did it go. Best regards, Adam Baryla
https://blogs.sap.com/2008/07/31/custom-abap-development-methodology/
CC-MAIN-2018-30
refinedweb
4,795
54.73
Suppose, we have given an integer array. The problem “Count of index pairs with equal elements in an array” asks to find out the no of pair of indices (i,j) in such a way that arr[i]=arr[j] and i is not equal to j. Example arr[] = {2,3,1,2,3,1,4} 3 Explanation Pairs of indices are: (0, 3), (1, 4), (2, 5) arr[] = {3, 3, 1, 4} 1 Explanation Pairs of indices are: (0, 1) Algorithm - Declare a Map. - Traverse the array and count and store the frequency of each element into the map. - Set output to 0. - Traverse the map and get the frequency of each element from the map. - Do output += (VAL * (VAL – 1))/2, VAL is the frequency of each element. - Return output. Explanation We have given an array of integers, we have asked to find out the total no. of pairs present in an array such that their indices are not the same but the elements on those indices should be the same. So we are going to use a Hashing for it. Hashing is a better way than the brute force method in which we have to visit all those elements using extra time of O (n2). So we are avoiding that. We will declare a map, picking up each element, count and store the frequency of each element into the map, if it is present already into the map, make a place for it, if it present already just increase its frequency by 1. To use a combination formula, we have to count the frequency of each number. We are going to select several pairs that satisfy the given condition of equal elements but not their indices. We can assume that any number which is present in an array appears k times at any index up to kth index. Then pick any two indices ai and ay which will be counted as 1 pair. Similarly, ay and ax can also be pair. So, nC2 is the formula for finding out the number of pairs for which arr[i]=arr[j] also equal to x. After the traversal of the array, and putting each element and its occurrence in a map, we will traverse the map, picking up each frequency of element and applying a formula on it, adding up with the output and store it in the output. We have to keep repeating it until we traverse all the elements and their frequencies and performing an operation on it. And at last, we will return that output. C++ code to find count of index pairs with equal elements in an array #include<iostream> #include<unordered_map> using namespace std; int getNoOfPairs(int arr[], int n) { unordered_map<int, int> MAP; for (int i = 0; i < n; i++) MAP[arr[i]]++; int output = 0; for (auto it=MAP.begin(); it!=MAP.end(); it++) { int VAL = it->second; output += (VAL * (VAL - 1))/2; } return output; } int main() { int arr[] = {2,3,1,2,3,1,4}; int n = sizeof(arr)/sizeof(arr[0]); cout << getNoOfPairs(arr, n) << endl; return 0; } 3 Java code to find count of index pairs with equal elements in an array import java.util.HashMap; import java.util.Map; class countIndexPair { public static int getNoOfPairs(int arr[], int n) { HashMap<Integer,Integer> MAP = new HashMap<>(); for(int i = 0; i < n; i++) { if(MAP.containsKey(arr[i])) MAP.put(arr[i],MAP.get(arr[i]) + 1); else MAP.put(arr[i], 1); } int output=0; for(Map.Entry<Integer,Integer> entry : MAP.entrySet()) { int VAL = entry.getValue(); output += (VAL * (VAL - 1)) / 2; } return output; } public static void main(String[] args) { int arr[]= {2,3,1,2,3,1,4}; System.out.println(getNoOfPairs(arr,arr.length)); } } 3 Complexity Analysis Time Complexity O(n) where “n” is the number of elements in the array. Space Complexity O(n) where “n” is the number of elements in the array.
https://www.tutorialcup.com/interview/hashing/count-of-index-pairs-with-equal-elements-in-an-array.htm
CC-MAIN-2021-49
refinedweb
659
62.17
User talk:Un-Algorithm/archive2 From Uncyclopedia, the content-free encyclopedia More Fisher Price gripes I made a category for Fisher Price so that we wouldn't have to have a sub-heading. Having a subheading in the article defeats the meaasge which is, Fisher Price was made, and remains as, a short petty vandalism. Mr. Briggs Inc. 00:13, 1 December 2006 (UTC) Eh? - Without the link, new users wouldn't get the joke. Granted, they probably won't get the whole joke even with the link, but at least they'll find it funny. I really don't think the addition of two extra, obviously separate, lines adversely affects the content of the page. --Algorithm 01:06, 1 December 2006 (UTC) Help me? You are an Admin right? So I was trying to register but I typed the spam blocker code inccorectly twice. I can't register now, when I try it says I allready have two accounts but when I try to login I am not able to. Can you help me?-Absurdism Headliners Template Thanks for that edit, I've been wandering round trying different things trying to work out how to acheive that, cheers! Now if you could just edit the css on the UnNews Main page so that external links don't show that little logo... :P --Olipro Co-Anc (Harass) 11:58, 29 April 2006 (UTC) - Use <span class="plainlinks"></span> (or a <div> for the whole page). --Splaka 06:10, 30 April 2006(UTC) God I’ve come here because you’re the first admin on the list and I couldn’t be bothered looking any further. Several people are insulted, on religious and political grounds no doubt, at the article about God. The article is continuously being reverted to a crap version with sections like “how to contact God” with a phone number on it. This is funny? It may seem like a drastic measure, but would it be possible to put a warning sign on the article similar to the one on the article about George Bush? If this is a step too far could you do anything else to help prevent this? Thanks. Weri long wang 17:54, 16 April 2006 (UTC) - I've come here because I'm one, of several, users who've reverted the "God is a twat" version of God because, while the "fuzzy bunnies" version has weak spots (on the "how to contact god" Mr. Wang and I are in agreement), the "twat" version is mean and unnecessary. - I'm not offended on religious or political grounds, I'm offended that, "God is possibly the best known fictional entity on Earth after the Beatles." is being overwritten by, "God is a vindictive psychopath who supports George Dubya Bush and his 'crusade' in Iraq, a country filled with swarthy skinned non-Christians, usually referred to as "terrorists" in the southern states of the USA and on Fox News.". One is occasionally, funny. The other is consistently bad-tempered. - For a possible solution I suggest moving the "twat" version to Yahweh and leaving the "fuzzy bunnies" version where it is. That would put the Old Testament in Yahweh with the smiting and the wrath, and the New Testament in God with the smiting and the wrath, not so much. - Of course, there's always Thunderdome...I could play the little guy. Maybe User:Chronarion or User:Carlb could play the big guy. Modusoperandi 02:26, 17 April 2006 (UTC) - The funny version of the God article. Apt title I'm sure you'll agree. Weri long wang 14:14, 17 April 2006 (UTC) - Or mayb you might to rename it the angry version of the God article?Weri long wang 14:16, 17 April 2006 (UTC) Dealing with an insult from an admin I made something inadequate;was looking for what I think were legit ways of linking other articles to one of mine. One admi warned me and I immediately stopped. But another left an insulting message, where I took the liberty of deleting the insult alone. How should i react to this? Elmicael 02:59, 11 April 2006 (UTC) - I've looked at the "insult", and honestly I don't think it was meant to be insulting as much as it was meant for emphasis. People use foul language here on a regular and consistent basis, so if that sort of thing truly bothers you, this may not be the best site to visit. Try not to get your feathers ruffled over it. - Also, you should keep in mind that deleting or altering other people's messages is generally considered bad wiki etiquette. Report it to an admin if it's truly offensive, but please don't delete it yourself, ok? --Algorithm (talk) 03:11, 11 April 2006 (UTC) Quick question Is there any function that gives you the number of articles in a particular category?--Rataube 09:56, 14 April 2006 (UTC) - Rat: You can try browsing through Special:Mostlinkedcategories and Special:Wantedcategories though. --Splaka 03:16, 15 April 2006 (UTC) Fisher Price Instead of reverting it every time it gets modified, why not just protect it? --Mindsunwound: (MUN) Heterocidal Tendencies Vacuum 18:47, 19 April 2006 (UTC) Title-left Hi. I was wondering what the purpose of {{PAGENAME}} was in Template:Title-left. Is it just so people can use {{title-left}} without specifying a page name (in which case it has no effect anyway)? Angela 03:14, 21 April 2006 (UTC) - Strictly speaking, it has an effect: it strips off the namespace. Mostly, though, it's just there for consistency across the three templates. --Algorithm (talk) 03:36, 21 April 2006 (UTC) NRV rewording Hi, I made a new version of NRV that's less intimidating as the result of discussion in the Dump. Isra said to ask you your thoughts. --Hobelhouse 02:03, 25 April 2006 (UTC) - Yeah, that's fine. My only objection to the earlier change was the removal of the phrase "no redeeming value" from the No Redeeming Value tag. --Algorithm (talk) 07:02, 25 April 2006 (UTC) PFP, FI, and all the likes Hiya, Algo. I noticed you removed the negatively voted featured images from PFP and the FI-template. Now there's nothing wrong with that (of course), but may I request a wee favour for next time? Namely update the scores and such... For you see, when updating PFP, I copy the coding from the last update into a Word document, and compare it to the coding from the most recent PFP version (using the special feature Word offers to its Microsoftian slaves). Though this method requires a bit of finetuning, I have noticed that because the removal of some sections, it's all shot to shambles... Of course I couldn't expect you to know of my method, so I thought I'd just give you a heads-up. :D Thanks in advance, and take care. --⇔ Sir Mon€¥$ignSTFU F@H|UotM|+S 09:38, 29 April 2006 (UTC) Carmen I restored it... looks like it was a previous revision... but I don't really know where it came from. I guess I just kind of took ZB's word for it... which is never the right thing to do... Anyway... sorry fo any confusion or wrongdoing, and spare the lectures because I've heard them, 16 May 2006 (UTC) Nerves... Shattered... On Edge... Can't think of clever heading... Sorry to have resurrected that old Google-ban thread, Algo - I just figured it made more sense to keep my latest bit in context with the original reference (to Morton d's sporkification) so that the newer people wouldn't think I was whining about something completely out of the blue. Not that the whining isn't bad enough, mind you... I also wanted to thank you (and Spintherism, too) for being voices of reason last month when that whole business with Tompkins happened... I didn't even see any of that until more than two weeks later, when I finally decided to start checking Uncyclopedia again, and realized I'd only really been blocked for about 10 minutes! And by the time I came back the page was already protected, which was just as well, really - it gave me a fairly decent excuse to try and just forget the whole incident. So I'm hesitant to post this for fear of needlessly stirring things up again, but I feel like you deserve at least some sort of explanation. Obviously my behavior leading up to that point was confrontational, for which I'd apologize if anyone were to ask me to, or if I thought it would do any good. The reason for the whole business was that it was shortly after Wikitruth.info first appeared, and that of course led to a witch-hunt over at Wikipedia, which apparently is still going on. The Wikitruthers made some sort of coy little jibe (since deleted) "thanking" David Gerard for "mirroring" their site, which led to misunderstandings among some clue-deprived types about David being directly involved, which in turn led to this talk page entry. (The "two sockpuppets of one troll" are User:Mahroww and myself, in this entry.) And shortly thereafter, David started this forum, in which he pointedly speculates about who's responsible for Wikitruth based on their "writing style," and since AFAIK he still thinks I'm the person he referred to there on his WP talk page (it's the same person who started this thread on wikipediareview.com, which seems to be as far as they've gotten with this), I basically put two and two together and decided David was essentially trying to pin the whole thing on me! Which is not only waaaay beyond ridiculous but, well... you saw how I reacted - badly. Maybe I wouldn't have minded so much though, if I didn't consider wikitruth.info to be completely reprehensible. I know it's ludicrously complicated, insane even, and that's why I haven't made such a big deal over it - there hardly seems to be much point in asking people to follow up on all those links and absorb all that material, especially if they're among the 98% of users here who are unfamiliar with the situation to begin with. AAAAAAAAA! Meanwhile, Wikitruth has since been proven to be the work of "rouge admins" over at Wikipedia, so I guess I'm off the hook for that at least. (Sheesh!) Anyway, all I seem to be good for these days is getting in a snit about sporkings by Wikipedians, both incoming and outgoing, and that's only constructive up to a point — which I'm finding is very easily reached! But either way, thanks again, man, and keep up the good work. c • > • cunwapquc? 06:55, 3 June 2006 (UTC) BENSON IS HERE TOO! Heh, sorry about Forum:BENSON, it was just really pissing me off in Forum:Village_Dump (and feel free to remove the message there in a day or so, or make it less ugly right now, or ban benson and huff all the shit). ♥ ♥ ♥ --Splaka 06:51, 7 June 2006 (UTC) Pie Someone told me that admins like pie. Here you are: Failed to parse (unknown function\Large): \Large{\pi} Hey Algorithm, I am contacting you because you're the first one in the sysop-list. The link to the German uncyclopedia on the main page doesn't lead anywhere. I can't change it, nobody reacted to my post on the discussion page :-( and I don't know whom else to tell. So I'm telling you. I think this is the correct URL:. Will that get us anywhere? -- Krankman 09:38, 14 June 2006 (UTC) (I forgot to add a new section, durr) That's actually a good idea (a list of top level forums in the forum). And since Wikia hasn't installed the newest version of your extension yet, you could add an optional feature: hideself=true (that is, it would exclude itself if it was in the listed category), so you could just the contents of Forum:Index in a table at the top with that added parameter to list all *other* forums. Just a thought *grin* --Splaka 11:30, 14 June 2006 (UTC) Dr. Phil Hi, why did you huff my UnBook on Dr. Phil? I spent quite some time on the cover art.--Shandon 08:54, 9 July 2006 (UTC) - Sorry, but it was poorly made and unfunny, as was the article. Please note that we are currently in a Forest Fire Week, so anything that isn't high-quality is at risk. --Algorithm 21:21, 10 July 2006 (UTC) Forum Archives How about, to delay having to make a final decision, we start archiving forum topics by putting them into [[Category:{{{1}}}/Archive]] or [[Category:Forum archive/{{{1}}}]]? Either by editing {{Forumheader}} to accept a parameter, or by creating a new template like {{Forumheader/archive}} (and then get Hymie to go through and change all the topics older than 30 days). This would allow easy restoration later. And then, any topics older than say 1 year could be huffed? --Splaka 05:42, 10 July 2006 (UTC) - Once the new version of 1.7 is conclusively installed, the upgraded forum software combined with DynamicFunctions makes this problem moot, as each forum page will be able to serve as its own archives. --Algorithm 21:23, 10 July 2006 (UTC) Quote Template Could you check the quotes template talk page when you have a chance? Thanks! User:Alexjohnc3/sig 01:56, 9 August 2006 (UTC) New PFP I just checked the new PFP page (I've been out of town most of the past month) and I have to say I think it makes things more complicated than they need to be. Having a separate page for every image is sort of a mess - it's already that way for template:FI and it's a headache (though in that case I don't know if there's a more elegant solution). I think it would also discourage voting by forcing you to click to another page. The worst thing is that, unless I'm missing something, there's no way to get a list of up-to-date scores - meaning to sync with template:FI we'd have to check every individual page. The only real advantage to this system that I see is that no images should languish at the bottom, like the oldest stuff sometimes did on PFP. Overall I don't think this change is a good one at all. I know the old PFP was bloated, but it still seems like a much easier way to do things - and additionally, some people have petitioned to raise the standards for PFP, which could cut out a lot of stuff. —rc (t) 19:58, 16 August 2006 (UTC) - On further consideration, you're right; I went too far. Still, I think it was important to both a) split the very large PFP listing into individual votes for maintenance purposes, and b) upgrade the PFP template to match the current format. I've restored the main PFP page to something closely resembling the original layout, and moved the forum monitoring to a separate page. - Additionally, the new layout may be sufficient to render Template:FI/all obsolete. Should I go ahead and merge these pages together? --Algorithm 23:27, 16 August 2006 (UTC) Original Jesus Dunno if you're busy, or not, but when you have some time can you look at Original Jesus. A new user has been, in my opinion, butchering the page. He's trying to be blasphemous, but instead his edits are just puerile (as with problems with a different user's edits on God, I have no problem with someone else's take on the same subject, except when it changes the vibe of a page). The problem initially started when he cut my favourite line (tho' it be mine, which makes me biased). We sort of had a revert war, I backed off and put a message on the user's talkpage. He didn't reply, but his next edit had "(Oh, wow! All the blood has gone to Dick's head! Oy vey!)" as its comment line...which seems kind of rude. Anyway, time moves on, he decrufts a bit, makes more jokes about shit, abortion and mary's tits...yada yada yada...but now he's gone too far. He cut the section on "Bend it like Bethlehem", my second favourite bit. Long story short, do you mind checking the page out? In particular, the His Birth section after "What many don't know...". I just need to know if I'm off base here. A second opinion is valued and you helped out on God with the recent unpleasantness. If you say the changes are good I'll abandon the page to the mad clutches of whomever (I might be too close to the trees...), but if we agree then it means (potentially) that we can take up ladders and storm the gates at dawn. Or something. Preferably something that doesn't involve ladders, running, or getting up early. But I digress. I've also pestered Chronarion, as he too got involved in the unpleasantness and a three opinions are better than two. Even if one of them is mine. --Sir Modusoperandi Boinc! 20:38, 18 August 2006 (UTC) - Frankly, Modus, Original Jesus is a very bloated article, and such articles are almost never consistently well-written. I can't really take users to task for eliminating small sections of bloated articles they don't find funny, since they're generally honestly trying to improve the article's quality. With this in mind, I'd recommend creating new pages out of the sections that have been excised, and adding back links where appropriate. This way, the sections can be judged on their own merits, not by whether they bog an article down. - In fact, you may want to split some existing sections out of this article as well; it really is quite a mess. --Algorithm 23:23, 18 August 2006 (UTC) - Seen. I figure it's not worth throwing a tantrum over ("We were here first!" just doesn't ring true with the anonymity of the interweb). My main complaint is that the rewrit bits are worse than what came before; OJ was unfocussed but funny, now it's still unfocussed but the focussed bits have been stretched and distorted into non-funny. That, of course, is just my opinion and, as long as it remains just mine I'll leave the page alone, potty humor and all. --Sir Modusoperandi Boinc! 23:29, 18 August 2006 (UTC) - I suspect that I'd feel differently if he was behaving in a rational, adult manner. But he's being a dink about it. For his most recent edit he cut a witty joke about the Beatles being bigger than Jesus (flipped around to have Jesus smaller than the Beatles) and put in the edit comment "(*blows raspberry*)". It reminds me of when I was little and had a neighbour that tossed his dog's shit into my yard. If I remember correctly my father eventually got pissed and chased him around with a shovel...ah, good times. Sir Modusoperandi Boinc! 02:03, 19 August 2006 (UTC) - Fact of the matter is, he hasn't done anything banworthy, so there isn't much I can do at the moment. My advice: Calm down, wait until he's finished, and add back what you like. Anything you do right now will just lead to higher blood pressures. --Algorithm 02:30, 19 August 2006 (UTC) - Don't worry about it. I'm not going to resort to dinkery. In fact I'm not going to do anything at all. Ahh...is this what it feels like to be the bigger man? Kinda nice, really.--Sir Modusoperandi Boinc! 02:47, 19 August 2006 (UTC) - ...oh, and I wasn't trying to get anyone banned. I was asking your opinion as a user vice admin to see if my opinion was way off base because we'd agreed on the bad edits in God awhile ago. --Sir Modusoperandi Boinc! 21:55, 19 August 2006 (UTC) Favor Since your online could you do me a favor? I resently rewrote Darth Hitler for an AFD in which the vote was rewrite. It was huffed anyway despite the vote. I was wondering if you could put a copy in my user space? Thanks, --Darkfred 04:31, 9 September 2006 (UTC) Zork/room2 I changed it back because the point of this room is that you can't get past the Grue until after your knife wound has healed. The "blood...ravenous" serves as an explanation for this. --L 10:17, 6 October 2006 (UTC) - While that's true, it's my feeling that this is already conveyed by the "dripping blood" remark, and the extra explanation makes it less funny. But to each their own. --Algorithm 10:21, 7 October 2006 (UTC)! Wow, that guy really loved his page. His page may not have been funny, but his outburst was. DPLForum extension problem? Hi Algorithm. There seems to be a little problem with that extension, and I was told that this might be the best way to contact you. If you have a moment, please see this page. I'm trying "addfirstcategorydate=true", but the date doesn't show up correctly (instead it reads "2006--1-0-"). Is this a problem with the extension, or am I the problem? :) You can reach me via the talk page of that article or leave a note on memoryalpha:User talk:Cid Highwind. Thanks. -- CID Can we move Asian People to Yellow People? cf. Black People and White People.--Mrasdfghjkl 14:47, 15 October 2006 (UTC) For lack of knowledge on who to ask... I'll ask you who to ask. I believe the article on binary ought to be reverted to its sensible state, but am at a loss for who to talk to about un-locking, copying in the good binary code, and re-locking. Just my 2c, though. Or is there somebody else you'd like to refer me to bug? Like i said, imma n00b. 04:48, 9 November 2006 (UTC) My first attempt. It was on the Un-news page, and it was called Mad Stander stikes again. If it was shit, fair enough. But I didn't get a warning and I'm not really sure why it's been deleted. Is there different rules to follow on the un-news compared to the rest of the site? If you could let me know why I was shite, that would be nice. I can then try not to do it again, or kill myself. --Jake Justice. 18:18, 8 December 2006 (UTC) Xmas Mail A great year. Thanks for all you've done. --Sir Todd GUN WotM MI UotM NotM MDA VFH AotM Bur. AlBur. CM NS PC (talk) 16:19, 18 December 2006 (UTC) page deletion Hi Basically, there is a page called finlay_physics. Although the page doesn't really name an exact Finlay, the Finlay in my school that generally follows the guidelines of the page is threatening to go to the principle if the page isn't deleted. The original author also attends my school. Is it possible to have the page deleted (or moved to an obscure location where said finlay can't find it) so that me and the original author can escape being punished by the evil principle overlord... Basically, can you delete it? Thanks (I would of followed the normal deletion system, but a) I don't know how it works and b) there isn't much time between now and it getting reported :() Leo (--86.138.193.252 16:21, 20 December 2006 (UTC)) Hi, i've been googling about the "choose/option" tags, and your name came up :-) Am I mistaken or you are responsible for them ? If that's the case, I have a question for you (else, you can forget about me, ;-) ) I was trying (on the portuguese-speaking uncyclo, Desciclopédia) to create a template. Inside each option tag, there was a image tag, and each image tag would have a parameter (left/right). Example: ... <option> [[Image:Norrisss.jpg|thumb|{{{position|left}}}|100px|Chuck Norris, o poderoso]] * [[{{CURRENTDAY}}]] de [[{{CURRENTMONTHNAME}}]] - Dia em que Chuck Norris encarou o mundo. </option> .... It seems that, when the template argument (in this case, "position") is used inside choose/option tags, it gets ignored. (i've tried the same code, without choose/option, and the paremeter works like a charm). Is that some known issue with your extension ? Or Desciclopédia may be just using some old version (in this case a new version would be working and i'd just tell a sysop over there about it) ? Thanks in advance. WendelScardua 02:30, 11 January 2007 (UTC) - Yes, this is a known issue, but not with my extension. I'm afraid that no MediaWiki extension is allowed to use template arguments, as they are replaced and purged before extensions have a chance to access them. May I instead suggest you encase the entire <choose> block with <div align={{{position}}}>? With a little CSS tinkering, this should provide much the same effect. --Algorithm 06:25, 11 January 2007 (UTC) - Thanks for the help :-) I'll try that... - WendelScardua 03:31, 10 February 2007 (UTC) Admin help I’ve got a picture of the Flying Spaghetti Monster on my user page. Can somebody please give me some advice on how to scale it down to a sensible size, and push it to the right hand side of the screen? Thanks. Newze rules 22:30, 7 March 2007 (UTC) signatures how do you put pictures and other things in your signature?--Vfdtyler 21:20, 11 March 2007 (UTC) - Please refer to the How To Get Started Editing page. --Algorithm 03:22, 13 March 2007 (UTC) deleted article How do I find out what happened to my article? I wrotwe an article entitled "he's gaining." Also is this the best way to contact an admin? Or is there a general forum for that? Also, is there a template to put on my page, when I have a question, (like {{helpme}} on wikipedia)? Appreciate any help you can give. thanks. --Sm8900 12:39, 21 March 2007 (UTC) What happened to my article? I wrote one called "Motörheadbanger". It was funny and everything, and had a good size to it and links to other articles and stuff, the only thing it didn't have was a picture. So why was it deleted and how can I can it put back on? Do I have to type it up over again? I don't have an account on this site but if you could answer me it'd be great. Thanks. erm...what? i had made a parody page of a school near where i live...but it seems to have either not come up, or has been deleted, but no one had said anything to me about it...The name of the article is West carteret high school, and was what i had intended to be the begining of a series of parody of local places where i live. Is there something im missing? (17:26 july 31, 2007)--Hydrorunner 21:26, 31 July 2007 (UTC) chhhhh...yea i had forgoten that tidbit...well thanks for the info.
http://uncyclopedia.wikia.com/wiki/User_talk:Algorithm/archive2?direction=prev&oldid=2250081
CC-MAIN-2016-18
refinedweb
4,621
71.44
The obvious thing to do with a formula is to evaluate it. The following proc does just that. It requires a mapping varToVal from the variable name to its value: from math import pow proc evaluate(n: Formula; varToVal: proc (name: string): float): float = case n.kind of fkVar: varToVal(n.name) of fkLit: n.value of fkAdd: evaluate(n.left, varToVal) + evaluate(n.right, varToVal) of fkMul: evaluate(n.left, varToVal) * evaluate(n.right, varToVal) of fkExp: pow(evaluate(n.left, varToVal), evaluate(n.right, varToVal)) Now, to check whether a formula is a polynomial (to see if we can easily differentiate it, for instance), we can use the following code: proc isPolyTerm(n: Formula): bool = n.kind == fkMul and n.left.kind == fkLit and (let e = n.right; e.kind == fkExp and e.left.kind == fkVar and e.right.kind == fkLit) proc isPolynomial(n: Formula): bool = isPolyTerm(n) or (n.kind == fkAdd and isPolynomial(n.left) and isPolynomial(n.right)) isPolyTerm is quite ugly. Pattern matching would be much nicer. While Nimrod does not support elaborate pattern matching beyond case out-of-the-box, it's quite easy to implement it thanks to the sophisticated macro system: For pattern matching, we define a macro =~ that constructs the and expression at compile time. Then the code can look like this: proc isPolyTerm(n: Formula): bool = n =~ fkMul(fkLit, fkExp(fkVar, fkLit)) But this is still not as nice as it could be: The point of Nimrod's macros is that they enable DSLs that make use of Nimrod's lovely infix syntax. In fact, that's a conscious design decision: Macros do not affect Nimrod's syntax they can only affect the semantics. This helps readability. So here is what we really want to support: proc isPolyTerm(n: Formula): bool = n =~ c * x^c Where c matches any literal , x matches any variable, and the operators their corresponding formula kinds: proc pat2kind(pattern: string): FormulaKind = case pattern of "^": fkExp of "*": fkMul of "+": fkAdd of "x": fkVar of "c": fkLit else: fkVar # no error reporting for reasons of simplicity Note that for reasons of simplicity, we don't implement any kind of variable binding, so 1 * x^2 matches c * x^c as c is not bound to the literal 1 in any way. This form of variable binding is called unification. Without unification, the pattern matching support is still quite primitive. However, unification requires a notion of equality, and since many useful but different equality relations exist, pattern matching is not baked into the language. So here's the implementation of the =~ macro in all its glory: import macros proc matchAgainst(n, pattern: PNimrodNode): PNimrodNode {.compileTime.} = template `@`(current, field: expr): expr = newDotExpr(current, newIdentNode(astToStr(field))) template `==@`(n, pattern: expr): expr = newCall("==", n@kind, newIdentNode($pat2kind($pattern.ident))) case pattern.kind of CallNodes: result = newCall("and", n ==@ pattern[0], matchAgainst(n@left, pattern[1])) if pattern.len == 3: result = newCall("and", result.copy, matchAgainst(n@right, pattern[2])) of nnkIdent: result = n ==@ pattern of nnkPar: result = matchAgainst(n, pattern[0]) else: error "invalid pattern" macro `=~` (n: Formula; pattern: expr): bool = result = matchAgainst(n, pattern) In Nimrod, a template is a declarative form of a macro, while a macro is imperative. It constructs the AST with the help of an API that can be found in the macros module, so that's what line 1 imports. The final macro definition is in line 25 and it follows a fairly common approach: It delegates all of its work to a helper proc called matchAgainst, which constructs the resulting AST recursively. PNimrodNode is the type the Nimrod AST consists of. The Nimrod AST is structured quite similar to how we implemented Formula, except that every node can have a variable number of children. n[i] is the ith child. The various function application syntaxes (prefix, infix, command) all map to the same AST structure kind(callee, arg1, arg2, ...), where kind describes the particular syntax. In matchAgainst, we treat every call syntax the same with the help of macros.CallNodes. We allow for a(b) and a(b, c) (line 15) call syntaxes and construct the AST representing an and expression with the help of two @ and ==@ templates. n@field constructs the AST that corresponds to n.field and a ==@ b constructs a.kind == pat2kind(b). Line 18 deals with the case when the pattern only consists of a single identifier ( nnkIdent), and line 20 supports () ( nnkPar) so that grouping in a pattern is allowed. As this example shows, metaprogramming is a good way to transform two lines of long ugly code to a short beautiful one-liner at the cost of 30 lines of ugly code dealing with AST transformations. However, the DSL we created here pays off as soon as there are more patterns to match against. It's also reasonably easy to abstract the =~ pattern-match operator so that it operates on more than just the Formula data type. In fact, a library solution that also supports unification is in development. Conclusion Nimrod is open source software that runs on Windows, Linux, Mac OS, and BSD. In addition to generating C and JavaScript, it can generate C++ or Objective-C. The compiler can optionally enforce all kinds of error checking (bounds checking, overflow, etc.) and it can perform extensive optimizations. It has an extensive standard library and many ported libraries. In addition, it has wrappers for most of the premier C libraries (including OLE, X, Glib/GTK, SQLite, etc.) and C-based languages (Lua and Tcl). If you're searching for a systems programming language that provides higher-level constructs and extensive metaprogramming, but boils down to C, Nimrod might well be what you're looking for. Andreas Rumpf is the creator of Nimrod.
http://www.drdobbs.com/jvm/an-algorithm-for-compressing-space-and-t/jvm/nimrod-a-new-systems-programming-languag/240165321?pgno=2
CC-MAIN-2016-26
refinedweb
964
56.76
Version 0.4.0.428 of DBTestUnit has been released and can be downloaded from SourceForge. This release implements the name change from ‘Database testing framework’ to ‘DBTestUnit’. So what has changed? There has been no change in overall functionality. Basically, a number of components and namespaces have been changed to reflect the new name. These include: - DatabaseTesting.dll renamed to DBTestUnit.dll. The test dll is now found in: …\Projects\DBTemplate\libs\DBTestUnit\ - DatabaseTesting.ExportDBDataAsXML.exe renamed to DBTestUnit.ExportDBDataAsXML.exe. The exe is found in: …\DBTemplate\tools\ExportDBDataAsXML\ - All sample tests have been updated to reference DBTestUnit.dll rather than DatabaseTesting.dll - All namespaces have been updated eg using DatabaseTesting.UnitTestBaseClass.MSSQL; to d using DBTestUnit.UnitTestBaseClass.MSSQL; How does this effect using the framework? If you are a new user – none. Just download and start using. If you are using a previous version – there are a number of relatively minor steps that you will need to carry out if you want to start using the new ‘renamed’ version. 1. Download the latest version – eg 0.4.0.428_DBTestUnit.zip 2. In your database testing solution remove all references to the ‘old’ DatabaseTest.dll. 3. Add a reference to the new test dll – DBTestUnit.dll – found in ….\Projects\DBTemplate\libs\DBTestUnit\. 4. Update any existing namespaces to reflect the new name ie do a ‘find and replace’ changing ‘DatabaseTest’ to ‘DBTestUnit’. eg using DatabaseTesting.InfoSchema; using DatabaseTesting.UnitTestBaseClass.MSSQL; to using DBTestUnit.InfoSchema; using DBTestUnit.UnitTestBaseClass.MSSQL; 5. Next you will need to change the test dll config file. In the sample project provided – which uses AdventureWorks database as an example – then the change would be applied to following config file: …\src\AdventureWorksDatabaseTest\bin\Debug\ AdventureWorks.DatabaseTest.dll.config The following change would be made to reflect the changes in the internal namespaces of the testing framework. <!--************************************--> <add key="AssemblyName" value="DatabaseTesting"></add> <add key="DaoFactoryNamespace" value="DatabaseTesting.InfoSchema.DataAccess.MSSQL"></add> to <!--************************************--> <add key="AssemblyName" value="DBTestUnit"></add> <add key="DaoFactoryNamespace" value="DBTestUnit.InfoSchema.DataAccess.MSSQL"></add> 6. The final part is if you use the XML export tool found in: …\DBTemplate\tools\ExportDBDataAsXML\ For this, it is probably easier to just take a copy of the latest version of this from the download. Make sure that you take a back up of your existing config files as you will need to incorporate them into the ‘vanilla’ config files from the new download. And that’s it. If you have any problems ‘upgrading’ feel free to contact me.
https://dbtestunit.wordpress.com/2011/02/25/version-0-4-0-428-dbtestunit-released/
CC-MAIN-2017-39
refinedweb
421
52.56
atRetAttr, atSetAttr, atAttrName, atAttrValue, atAllAttrs, atFreeAttrs - attribute handling #include <atfs.h> #include <atfstk.h> char*atRetAttr (Af_key *aso, char *attributeName); void atFreeAttr (char *attributeValue); int atSetAttr (Af_key *aso, char *attribute, int mode); int atSetAttrFile (Af_key *aso, char *filename); char*atAttrName (char *attribute); char*atAttrValue (char *attribute); int atMatchAttr (Af_key *aso, char *attribute); The AtFS Toolkit Library extends the AtFS attribute handling. It introduces additional standard attribute names and a list of attribute value types. atRetAttr returns a string representation of the value(s) of the aso attribute named attributeName. If the attribute value is preceded by a value special character (see list below), it will be evaluated accordingly. When the evaluation fails, the original attribute value, including value special character, is returned. When the attribute value is empty, an empty string is returned. When the attribute does not exist or on any other error condition, a null pointer is returned. Attribute citations (like 7.0) in attribute values will always be expanded by atRetAttr. There is no way to disable attribute expansion. If you need the raw, unexpanded attribute value, use af_retattr (manual page af_attrs(3)). The attribute value returned by atRetAttr either resides in static memory (in case of AtFS standard attributes) or in allocated memory. Use atFreeAttr on each attribute value returned by atRetAttr when this not needed any longer. This will recycle allocated memory if possible. atSetAttr sets the attribute attribute for aso. It calls af_setattr (manual page af_attrs(3)) and hence understands the modes AF_ADD, AF_REMOVE, and AF_REPLACE. Alternatively, the mode argument is ignored, when the equal sign between attribute name and value is preceded by either a plus (+) or a minus (-) sign for adding and deleting attribute values. The value special character at (@) will also be evaluated. atSetAttr opens the file and reads its contents. If either the opening or reading fails, the attribute setting is aborted and returns FALSE. On successful execution, atSetAttr returns TRUE, otherwise FALSE. atSetAttrFile evaluates a file containing attributes. If opens the named file (filename) and interprets each line in the file as attribute argument to atSetAttr. atAttrName fills a static memory buffer with the name of the given attribute and returns a pointer to this buffer. Subsequent calls of atAttrName overwrite previous results. atAttrValue returns a pointer to the value part of attribute. atMatchAttr checks if aso has the given attribute. Result values are TRUE or FALSE. If just an attribute name is given, atMatchAttr returns a positive result if the attribute exists in the asos attribute list or, in the case that it is a standard attribute, if it’s value is non null. A full attribute (name and value) must match an attribute in the asos attribute list. The value of the given attribute argument may be a (sh(1)) pattern. Attributes have the general format name=value. Additionally, a plus or minus sign may precede and a value special character may follow the equal sign (<name>[+-]=[^@!*]<value>). plus (+) A plus sign preceding the equal sign indicates, that the value shall be added to existing values (no matter if there are any). minus (-) With the minus sign, the value shall be removed from the list of values, if it exists. The following is the complete list of value special characters recognized by AtFStk. circumflex (^) The value is regarded as a reference to a file or ASO carrying the real attribute value as contents. at (@) An attribute value starting with an at (@) is considered to be a file name from where the real attribute value is to be taken. In contrary to the circumflex notation above, this causes the file read only once, when setting the attribute value. exclam (!) This introduces execution attributes. The attribute value is regarded as command to be passed to a shell process. On access, the command will be executed and its output will be catched as real attribute value. asterisk (*) This denotes a pointer attribute modeling relationships between attributed software objects. When atSetAttr finds an attribute value with an asterisk as first character it interprets the remaining value string as version identifier and tries to bind it using atBindVersion (manual page atbind(3)). On success, the network path (see atNetworkPath(3)) of the identifies ASO will be stored as attribute value together with the leading asterisk. atRetAttr maps pointer attributes to local pathnames using atLocalPath (manual page atnetwork(3)). There are a number of standard attributes defined by the AtFS toolkit library and by AtFS (Attribute Filesystem) for each ASO. For a list of the AtFS standard attributes see the af_attrs(3) manual page. This is a list of all standard attribute names defined in AtFStk. Header A compound attribute consisting of a $Header: Label followed by the bound file name (AF_ATTBOUND), the version date (vtime - see below), the versions author and state and a trailing dollar sign ($). Example: $Header: attrs.c[1.1] Tue Dec 1 17:01:10 1992 andy@cs.tu- berlin.de proposed $ Log The versions modification history from the very beginning. This might be long. note The modification note of the version as set by the author. pred, succ The physical predecessor/successor version as stored in AtFS. If there is none, the string n/a (not available) is returned. vtime The version date. For busy versions, this is the date of last modification, for saved versions, it is the saving date. xpon, xpoff Pseudo attribute turning attribute expansion on and off (see above). Some other names are mapped to the appropriate AtFS standard name: AtFStk name AtFS name (constant definition)AtFStk nameAtFS name (constant definition) atime AF_ATTATIME revision AF_ATTREV author AF_ATTAUTHOR self AF_ATTBOUND ctime AF_ATTCTIME selfpath AF_ATTBOUNDPATH dsize AF_ATTDSIZE size AF_ATTSIZE generation AF_ATTGEN stateAF_ATTSTATE host AF_ATTHOST stime AF_ATTSTIME lock AF_ATTLOCKER syspath AF_ATTSPATH ltime AF_ATTLTIME type AF_ATTTYPE mtime AF_ATTMTIME unixname AF_ATTUNIXNAME name AF_ATTNAME version AF_ATTVERSION owner AF_ATTOWNER atSetAttr may also be used to set standard attributes where possible. Attributes that may be altered are Attribute Mapped to AtFS function author af_chauthor (af_protect(3)) generation, revision, versionaf_svnum (af_version(3)) mode af_chmod (af_protect(3)) owner af_chowner (af_protect(3)) state af_sstate (af_version(3)) note af_snote (af_note(3)) On error, the atError variable is set to a nun null value, and atErrMsg holds a diagnostic message. atRetAttr returns a null pointer on error, or if the desired attribute does not exist. atSetAttr and atSetAttrFile return FALSE on any error condition. atMatchAttr returns FALSE if an error occurred or if the attribute does not match. af_attrs(3), atbind(3), atnetwork(3)
http://huge-man-linux.net/man3/atAttrName.html
CC-MAIN-2017-13
refinedweb
1,078
55.95
Hey everyone, sorry for the weird title but I don't know how to word this without it sounding confusing. Basically I have a left and right arrow on my screen that move the player when physically held on the phone (touch controls). My problem is that when you click and hold the right arrow button, the player walks right (like they should) but will keep walking even if you drag your finger off of that button. I don't mean that once you release the button he keeps walking, cause he doesn't. I mean that if you press and hold the button and slide your finger to anywhere else on the screen (while never taking your finger off the screen) he keeps walking. How do I make it that if the player slides there finger off of the button, the function stops playing? Here is my code for the Right Arrow. Player Script: if (Input.GetKey(KeyCode.RightArrow)) { rb2d.velocity = new Vector2(maxSpeed, rb2d.velocity.y); transform.localScale = new Vector3(1, 1, 1); } if (moveright) { rb2d.velocity = new Vector2(maxSpeed, rb2d.velocity.y); transform.localScale = new Vector3(1, 1, 1); } Touch Script: public void RightArrow() { player.moveright = true; player.moveleft = false; } public void ReleaseRightArrow() { player.moveright = false; } Here is what Unity and Even Triggers look like. Sorry my Right Arrow image is hard to see, I have the Opacity down but it is located on the bottom right of the Scene / Game View. Thanks everyone! :) Answer by Nyro · Jul 28, 2017 at 08:25 PM Use OnPointerExit. Okay cool, how exactly would I use it. would i put it in my Touch Script like: public void OnPointerExit(EventSystem.PointerEventData eventData); Because that says EventSystem namespace could not be found. I even added "using UnityEngine.EventSystems;" to the top of my Script thinking it would identify it. Or would i put "RightArrow" in there somewhere? Inside Unity in the Event Trigger component click on "Add New Event Type", choose Pointer Exit, then make the same thing as you did with Pointer Up You got it! Sorry for the noob question,. How to make basic touch screen controls 1 Answer Touch controls for Pong based video games for Android 0 Answers How to show and hide an image by swiping 0 Answers Possible to get touch area data? 1 Answer Android Touch Screen Button for touch and hold 1 Answer
https://answers.unity.com/questions/1386059/make-touch-buttons-stop-working-after-finger-slide.html
CC-MAIN-2019-47
refinedweb
400
74.29
Related link: Shelley Powers came up with an interesting alternative to trackback: “tagback,” in which she creates a unique tag (in the folksonomy/flickr/del.icio.us/technorati sense of the term “tag,” not the XML sense) for each weblog posting, tags the posting with that tag using Technorati’s syntax, and then encourages anyone commenting on the entry (as I will once this is live) to tag their comment with the same tag. So, for example, her first post has the tag bbintroducingtagback, and the Technorati URL links to her posting and the comments on it. This way, you don’t have to rely on trackback or comment software on her weblog—two processes getting more polluted by spam lately—to create a connection from her entry to yours. Her posting does have a lot of comments, and it’s an interesting exchange. (It’s now closed to comments, but here I am commenting, which shows another strength of her new idea.) Some complain that a single, oddly-spelled unique tag is contrary to the classification that they see these tags playing, but I disagree. Tags are metadata, and much of the fun of metadata is the creation of new kinds of applications around existing metadata or metadata systems, and that’s what Shelly’s done. Others quibble with her plan to prefix her tagback tags with bb (as in burningbird.net) to identify them as part of her tagback namespace. They ask what would happen if boingboing did the same thing—as if a website co-led by Mr. Metacrap himself would add something as useless as metadata to their entries. Shelley replies that, because the tag incorporates the piece’s title (”Introducing Tagback”), another piece with the same tag would probably be related, so this namespace collision wouldn’t necessarily be such a bad thing. And, when creating a new tag, a programmatic check to see if a given tag has already been used is simple enough. This is a nice way to create and use indirect links on the web. While I’m not quite creating a link from her weblog entry to my own, her page’s inclusion of a link to a Technorati query for the page’s unique tag and my ability to tag my entry with the same tag in del.icio.us means that I’m creating something that her page links to, so I get the same effect. A difficulty with implementing indirect links has always been the infrastructure to implement the indirection, and Shelley noticed that Technorati could serve as this infrastructure. (Experiments with backlinking have also used Google for indirect linking infrastructure.) To paraphrase something I said above, it’s nice to see someone create a new linking application around a new class of metadata so quickly after it appears. Technorati, del.icio.us, and tagged entries I forgot that Technorati puts entries tagged from del.icio.us in smaller type in the lower-right. I guess I'd highlight local entries and put remote ones off on the side as well. For now, mine is showing up high in the del.icio.us list for Shelley's tag. Permalink as identifier? Wouldn't it be simpler to use the permalink of the original post, instead of a random tag chosen by the author of that post? Using the permalink avoids potential collisions and also helps finding the origin of the "tag"... Permalink as identifier? A permalink makes it easier for me to link my posting to hers and remain confident that the link will still work a year later. Tagback, like trackback, is mechanism for me to let people who are reading her post find mine more easily. Combined with the use of permalinks, it's a form of two-way linking.
http://www.oreillynet.com/xml/blog/2005/02/folksonomy_tags_for_indirect_l.html
crawl-002
refinedweb
635
60.95
keyctl_get_keyring_ID man page keyctl_get_keyring_ID — get the ID of a special keyring Synopsis #include <keyutils.h> key_serial_t keyctl_get_keyring_ID(key_serial_t key, int create); Description keyctl_get_keyring_ID() maps a special key or keyring ID to the serial number of the key actually representing that feature. The serial number will be returned if that key exists. If the key or keyring does not yet exist, then if create is non-zero, the key or keyring will be created if it is appropriate to do so. The following special key IDs may be specified as key: -_SPEC_REQKEY_AUTH_KEY This specifies the authorisation key created by request_key() and passed to the process it spawns to generate a key. If a valid keyring ID is passed in, then this will simply be returned if the key exists; an error will be issued if it doesn't exist. Return Value On success keyctl_get_keyring_ID() returns the serial number of the key it found. On error, the value -1 will be returned and errno will have been set to an appropriate error. Errors - ENOKEY No matching key was found. - ENOMEM Insufficient memory to create a key. - EDQUOT The key quota for this user would be exceeded by creating this key or linking it to the keyring.).
https://www.mankier.com/3/keyctl_get_keyring_ID
CC-MAIN-2018-47
refinedweb
204
61.46
Table of Content Search clouds Licenses Author's resources The SCaVis contains many high-performance Java packages for linear algebra and matrix operations: For manipulations with vectors, use the following classes with useful static methods: You can also use the Python list as a container to hold and manipulate with 1D data structures. P0I and P0D arrays have been considered in Sect.Data structures. Below we show how to use static methods by mixing Python lists with the static methods of the ArrayMathUtils Java class: from jhplot.math.ArrayMathUtils import * a=[-1,-2,3,4,5,-6,7,10] # make a Python list print a b=invert(a) # invert it print b.tolist() c=scalarMultiply(10, b) # scalar multiply by 10 print c.tolist() print mean(a) print sumSquares(a) # sums the squares This code generates the following output: [-1, -2, 3, 4, 5, -6, 7, 10] [10, 7, -6, 5, 4, 3, -2, -1] [100.0, 70.0, -60.0, 50.0, 40.0, 30.0, -20.0, -10.0] 2.5 240 For matrix calculations, consider the package LinearAlgebra. A simple example below can illustrate how to get started: from jhplot.math.LinearAlgebra import * array = [[1.,2.,3],[4.,5.,6.],[7.,8.,10.]] inverse=inverse(array) # calculate inverse matrix print inverse print trace(array) # calculate trace While working with NxM matrices, consider another important library DoubleArray which helps to manipulate with double arrays. For example, this class has toString() method to print double arrays in a convenient format. Consider this example: from jhplot.math.LinearAlgebra import * from jhplot.math.DoubleArray import * print dir() # list all imported methods array = [[1.,2.,3],[4.,5.,6.],[7.,8.,10.]] inverse=inverse(array) print toString("%7.3f", inverse.tolist()) # print the matrix The above script prints all the methods for matrix manipulation and the inverse matrix itself: -0.667 -1.333 1.000 -0.667 3.667 -2.000 1.000 -2.000 1.000 Below is a simple example of how to call Jama package to create a matrix to perform some manipulations. from Jama import * array = [[1.,2.,3],[4.,5.,6.],[7.,8.,10.]] a = Matrix(array) b = Matrix.random(3,1) x = a.solve(b) Residual = a.times(x).minus(b); rnorm = Residual.normInf(); To print a matrix, one can make a simple function that converts a matrix to a string: from Jama import * def toString(a): s="" for i in range(a.getRowDimension()): for j in range(a.getColumnDimension()): s=s+str(a.get(i,j))+" " s=s+ "\n" return s print toString(a) # print "a" (must be Matrix object) For matrix manipulation, one can also use Apache Math Common Linear Algebra package: Look at the Linear Algebra Java package. Below we show a simple example of how to create and manipulate with matrices: from org.apache.commons.math3.linear import * # Create a real matrix with two rows and three columns matrixData = [[1,2,3], [2,5,3]] m=Array2DRowRealMatrix(matrixData) # One more with three rows, two columns matrixData2 = [[1,2], [2,5], [1, 7]] n=Array2DRowRealMatrix(matrixData2) # Now multiply m by n p = m.multiply(n); print p.getRowDimension() # print 2 print p.getColumnDimension() # print 2 # Invert p, using LU decomposition inverse =LUDecompositionImpl(p).getSolver().getInverse(); la4j package provides a simple API to handle sparse and dense matrices. According to the La4j authors, the package has Linear systems solving (Gaussian, Jacobi, Zeidel, Square Root, Sweep and other), Matrices decomposition (Eigenvalues/Eigenvectors, SVD, QR, LU, Cholesky and other), and useful I/O (CSV and MatrixMarket format). Let us consider how we define such matrices in this package: 1: from org.la4j.matrix.dense import * 2: from org.la4j.matrix.sparse import * 3: from org.la4j.matrix import * 4: from java.io import FileInputStream 5: 6: 7: print "Dense matrix" 8: m=Basic2DMatrix( [[1.0, 2.0, 3.0 ], 9: [4.0, 5.0, 6.0], 10: [7.0, 8.0, 9.0]]) 11: 12: print "Sparse matrix" 13: m=CRSMatrix( [[1.0, 2.0, 3.0 ], 14: [4.0, 5.0, 6.0], 15: [7.0, 8.0, 9.0]]) 16: 17: print "Matrix from CSV" 18: file=open("matrix.csv","w") 19: file.write("1.0, 2.0, 3.0\n") 20: file.write("1.0, 2.0, 3.0\n") 21: file.write("1.0, 2.0, 3.0\n") 22: file.close() 23: 24: a = Basic2DMatrix(Matrices.asSymbolSeparatedSource( 25: FileInputStream("matrix.csv"))) 26: 27: print "We want to take first row of the matrix 'a' as sparse vector 'b'" 28: b = a.toRowVector(); 29: print c.toString() 30: 31: print "We want to take first column of the matrix 'a' as sparse vector 'c'" 32: c = a.toColumnVector(); 33: print c.toString() Let us show how to perform manipulations with such matrices. In the example shown below, we multiply matrices and then perform a transformation of matrices using an arbitrary function: The EJML package provides 2 types of matrices: EJML library provides the following operations: Let us give a simple example using Jython: We create a few matrices and perform some algebra (multiplication, inverse etc). We also computes the eigen value decomposition and will print the answer: You can test various features of a matrix using this MatrixFeatures API. For example, let's check “SkewSymmetric” feature of a given matrix: You can save matrices in CVS files or binary formats. The example below shows how to do this: Finally, you can visualize the matrices. The example below creates a matrix and then shows it's state in a window. Block means an element is zero. Red positive and blue negative. More intense the color larger the element's absolute value is. The above example shows a graphic representation of the matrix defined as: A=DenseMatrix64F(4,4,True,[0,2,3,4,-2,0,2,3,-3,-2,0,2,-4,-3,-2,0]) You can save arrays and matrices in a compressed serialized form, as well as using XML form. Look at the Section (Input and output). Please refer Sect.Linear equations for this topic. Matrix manipulation can be performed on multiple cores taking the advantage of parallel processing supported by Java virtual machine. In this approach, all processing cores of your computer will be used for calculations (or only a certain number of core as you have specified). Below we give a simple example. In the example below we create a large matrix 2000×2000 and calculate various characteristics of such matrix (cardinality, vectorize). We compare single threaded calculations with multithreaded ones (in this case, we set the number of cores to 2, but feel free to set to a large value). To build random 2D matrices use DoubleFactory2D. Here is a short example to create 1000×1000 matrix and fill it with random numbers: from cern.colt.matrix import * from edu.emory.mathcs.utils import ConcurrencyUtils ConcurrencyUtils.setNumberOfThreads(4) # set 4 numbers of threads M=tdouble.DoubleFactory2D.dense.random(1000, 1000) # random matrix Below is a simple example which shows matrix operations on several cores. We set 2 cores, but you should remove the method “ConcurrencyUtils.setNumberOfThreads()” if you what to let the machine to determine the core automatically.
http://www.jwork.org/scavis/wikidoc/doku.php?id=man:numeric:matrix_operations
CC-MAIN-2014-41
refinedweb
1,200
50.43
In my previous article I talked about Unity and Visual Studio: using Visual Studio to edit and maintain your Unity code. In that article I talked about working with source code directly in your Unity project. DLLs are another way to get code into your Unity project. I'm talking about C# code that has been compiled and packaged as a .NET assembly. The simplest way of working is to store source code directly in your Unity project. However, using DLLs gives you an alternative that has its own benefits. To be sure, it adds complication to your process so you must carefully weigh your options before jumping all the way in. In this article I'll explain the benefits that will make you want to use DLLs. Then I'll cover the problems that will make you regret that decision. Finally I'll show you how to get the best of both worlds: using source code where it makes sense and using DLLs where they make sense. I'll show you how to compile a DLL yourself and bring it into your Unity project. Table of Contents generated with DocToc - The Basics - Why use source code instead of DLLs? - Why use DLLs? - DLLs: not so good for prototyping and exploratory-coding - The best of both worlds! - Compiling a DLL to use in Unity - Referencing the Unity DLLs - Conclusion The Basics The starting point for this article, is the understanding that a DLL can simply be dropped into your Unity project, Unity will detect it and you can then start using it. Of course, it's almost never that simple in real scenarios, however I will demonstrate that it can be that simple. In a future article I'll give you some tools to help solve the issues that will inevitably come up. Say you have a compiled DLL. You may have compiled it yourself, got it from a mate, got it from the Asset Store or somewhere else, it doesn't matter. You can copy the DLL into your Unity project and (presuming it actually works with Unity) start using it. Of course, any number of problems can and do happen. Especially if you don't know where the DLL came from, what it does or what its dependencies are. These problems can be very difficult to solve and I'll come back to that in the future. If you have started programming with Unity, the way source code is included in the project and automatically compiled will seem normal. In the traditional .NET programming world in the time before Unity, exes and DLLs (both being .NET assemblies) are the normal method for packaging your code and distributing it as an application. It's only with Unity that the rules have been changed: Unity automatically compiles code that is included in the Unity project. This is the default way of working with Unity, and is what you learn when introduced to programming through Unity. Note however that DLLs do get created, it's just that they are created for you (possibly without you even realizing it). Go and check if you like, take a Unity build and search for DLLs in the data directory. You can see for yourself that Unity automatically compiles your code to DLLs. This leads us to the understanding that we can copy pre-compiled DLLs into the project and Unity will recognize them. Indeed we can make use of DLLs that have been created for us by other developers. You have probably even done this already. Many of the packages for sale on the Unity Asset Store include DLLs rather than source code. Also available to us are many .NET libraries that can be installed through nuget (at least the ones that work with Mono/Unity). More about nuget in my next article. Why use source code instead of DLLs? I'm going to make arguments for and against using DLLs with Unity. In this line of work there are no perfect answers, we must do our best given our understanding and knowledge at the time and aim to make the right tradeoffs at the right times. I've heard it said that there are no right decisions, only less worse decisions. We need to think critically about technology to judge one approach against another.. That's how serious this is.. So my first advice is... don't use DLLs until you are convinced that they will benefit you. I'll attempt to convince you of the benefit of DLLs in the next section, but if you are uncertain or not ready to commit, then your best bet is to stick with source code. This is the way it works by default with Unity. It's simple and works with very low overhead. Visual Studio integrates directly and you can use it easily for editing and debugging your code. Why use DLLs? Ok, I probably just convinced you to use source code over DLLS. Now I must show you there are real benefits to working with DLLs. - DLLs are an efficient, natural and convenient mechanism for packaging code for sharing with other people and other projects. This already happens extensively, you see many DLLs sold via the Unity Asset Store and nuget is literally full of DLLs to download for free. Github is full of code-libraries that you can easily compile to DLLs to include in your project. Even if you aren't selling your code on the asset store, you still might find it useful to package your code in DLLs for easy sharing between your own (or your friend's) projects. You can even share code between Unity and non-Unity projects. This might be important for you, for example, if you are building a stand-alone dedicated server for your game. This will allow you to share code between your game (a Unity application) and your server (a standard .NET application). - Using DLLs hides the source code. This is useful when you are distributing your code but don't want to give away the source. Note that it is nearly impossible to completely protect your code when working with Unity (and generally with .NET). Anyone with sufficient technical skills can decompile your DLLs and recover at least partial source code. Obfuscating your code can help a great deal and make decompilation more difficult, but it's just not possible to completely protect again this. - Using DLLs allows you to bake a code release. You might have tested a code release and signed-off on it. When the code release is baked to a DLL you can be sure that the code can't be modified or tampered with after the DLL is created. This may or may not be important to you. It is very important if you aim to embrace the practices of continuous integration and/or continuous delivery. I'd like to address both of these practices in future articles. - Building your code to DLLs enables you to use any of the commonly available .NET unit testing frameworks (we use xUnit.net). These frameworks were built to work with DLLs. Why? Remember that DLLs (and exes) are the default method of delivering code in the .NET world. I'll talk more about automated testing and TDD for game development in a future article. DLLs: not so good for prototyping and exploratory-coding Here is another pitfall of using DLLs that is worth considering. They will slow down your feedback loop. Compiling a DLL from source code and copying it to your Unity project takes time. Unity must reload the updated DLL, which also takes time. This is slow compared to having source code directly in the Unity project, which takes almost zero time to update and be ready to run. Increased turnaround time in the game dev process can and will cause problems that can derail your project. When developing games we must often do prototyping or exploratory-coding, this is all a part of the iterative process of figuring out the game we are developing and finding the fun. When coding in this mode, we must minimize turnaround-time and reduce the cycle time in our feedback loop. Using DLLs increases turn-around time. The effect is minimized by automated development infrastructure (another thing to talk about in a future article) and having a super-fast PC. The effect can also be reduced by test-driven-development, a technique that is enabled by DLLs and has the potential to drastically reduce your cycle time. TDD however is an advanced skill (despite what some people say) and must be used with care (or it will cause problems for you). So that leaves us in the position that DLLs are not great for rapid evolution of game-play code. DLLs are likely to slow you down and are best used for code that has already stabilized and that has stopped changing regularly. The best of both worlds! Fortunately, should we need to use DLLs, we can get the best of both worlds. Source code and DLLs can be combined in the same project: - Fast-moving and evolving game-play code should be stored as source code in the Unity project. - The rock solid and stable code (eg core-technology code) that we share from project to project can be stored in DLLs. As code modules transition from fast-moving to stable we can move them as necessary from source code to DLLs. Compiling a DLL to use in Unity Now I'll show you how to create a DLL for use in Unity. Getting real-world DLLs to work in Unity can be fraught with problems. In this section I'll demonstrate that you can easily create and use a DLL with no problems, what could possibly go wrong? There are many tutorials and getting started guides for Visual Studio and I don't need to replicate those. I'm only going to cover the basics of making a DLL and then being able to use it in Unity. I'll start with this... making a DLL for Unity in Visual Studio is basically the same as making any DLL in Visual Studio, with just a few small issues that I'll cover here. So any tutorial that shows you how to make DLLs in Visual Studio is going to work... just pay attention to the caveats I mention here, or they could catch you out! I talked about how to download and install Visual Studio in the last article. So I'll assume that you are ready to follow along. We first need to create a project for our DLL. When we start Visual Studio we should be at the start page. Click on New Project... You can also create a project from the File menu. Click New then Project... Now you will see the New Project window. Here you can select the type, name and location of the project. You'll notice the many types of projects. For Unity we want to create a Class Library. Here you could also choose a project type for a stand-alone application. For example Console Application for a command line exe. If you require a GUI application then use Windows Forms Application or WPF Application. Any of these choices might be appropriate for building a stand-alone dedicated server that is independent of Unity. After selecting a name and location for your project now click OK to create the solution and the project (see the previous article for more on solutions and projects). You have now created a project that looks something like this: We now need to edit the project's Properties to ensure that the DLL will run under Unity. In the Solution Explorer right-click on the project and click Properties: You should be looking at the project properties now. You'll see that by Target framework default is set to the latest version of .NET. At the time of writing this is .NET Framework 4.5.2. We need to change Target framework to .NET Framework 3.5 for our DLL to be usable under Unity. Why did we have to change to .NET 3.5? I'm glad you asked. The Unity scripting engine is built from an ancient version of Mono. Mono is the open source equivalent of the .NET Framework. When working with Unity we are limited to .NET 3.5. This might not seem so bad until you realize that .NET 3.5 is 8 years old! We are missing out on all the new features in .NET 4 and 4.5. Unity is keeping us in the digital dark ages! As a side note you may have heard the (not so recent) news that the .NET Framework itself is now open source. This is certainly great news and will hopefully help Unity get it's act together and bring their .NET support up-to-date. Switching to .NET 3.5 causes some of the .NET 4.5 references to go bad. You will have to manually remove these references: Now you are ready to build the DLL. From the Build menu click Build Solution or use the default hotkey Ctrl+Shift+B. At this stage you may get a compile error due to switching .NET frameworks. In new projects, Visual Studio automatically creates a stub Class. The generated file imports the System.Threading.Tasks namespace, and this doesn't exist under .NET 3.5. You must either remove the offending using directive or delete the entire file (if you don't need the stub Class). Build again and you should be able to see the generated DLL in the project's bin/Debug directory. Now we are almost ready to test the DLL under Unity. First though we really need a class to test. For this example I'll add a function to the generated stub class so it looks as follows. I've added the UnityTest function which returns the floating-point value 5.5. Now rebuild the DLL and copy it over to the Unity project. The DLL must go somewhere under the Assets directory. You probably don't want to put everything in the root directory of your project, so please organize it in sub-directory of your choosing. Make sure you copy the pdb file over as well. You'll need this later for debugging (which I'll cover in another article). Now you can use classes and functions from your DLL in your Unity scripts. Unity automatically generates a reference to your DLL when you copy it to the Assets directory. When you attempt to use your new class there will likely be an error because the namespace hasn't been imported. This is easily rectified using Quick Actions (available in Visual Studio 2015). Right-click on the error (where the red squiggly line is). Click Quick Actions.... Alternately you can use the default hotkey Ctrl+. This brings up the Quick Actions menu. Select the menu item that adds the using directive for you, in this example select using MyNewProject;. Visual Studio has added the using directive and fixed the error. The red squiggly line is gone. Now we have created an instance of our class and can call the UnityTest function on it. Typing the name of the object and then . (period) allows intellisense to kick in and list the available options. There is only the UnityTest function here that we currently care about. So select that and Visual Studio will autocomplete the line of code for you. Let's modify the code slightly to print out the value returned by UnityTest. This will allow us to verify that our code is working as expected. Now we should start Unity and test that this works. When we run this we expect the output 5.5 to appear in the Unity Console. It's a good practice, while coding, to predict the result of running your code. This makes the development process slightly more scientific and it helps to improve your skill as a developer. Programming can be complex and it is often hard to predict the outcomes, if anyone disagrees with that statement you should ask them why their code still has bugs? Improving our predictive skills is one of the best ways to gradually improve our general ability to understand what code is doing. To test your code you need a scene with a GameObject that has the test script attached. I've talked about this in the previous article so I won't cover it again here. With the scene setup for testing, click Play in Unity and then take a look at the Console. If working as expected you'll see something like this in the Console: 5.5 UnityEngine.Debug:Log(Object) TestScript:Start() (at Assets/TestScript.cs:10) Referencing the Unity DLLs The first DLL we created is not dependent on Unity. It doesn't reference the Unity DLL at all. I wanted to start this way to demonstrate that this is possible. That we can create a DLL that is independent of Unity and that can be used in other applications. An example of which, as already mentioned, is a stand-alone dedicated server app. Another example might be a command line helper app. The important take-away is that the DLL can be shared between Unity and non-Unity applications. However you will most likely want to create DLLs that are designed to work in Unity and that do reference the Unity DLL. So let's cover that now. We'll upgrade our example DLL to depend on Unity. We'll add a MonoBehavior script to the DLL that we can use in Unity. The first thing we need to do is add a reference to the Unity DLL. To do this you must copy the DLL from your Unity installation to the project that will reference it. To find the DLL go into your local Unity installation (for me that is C:\Program Files\Unity) and navigate to the sub-directory Editor\Data\Managed. Here you will find UnityEngine.dll. Copy the DLL into the folder containing your Visual Studio project. Switch back to the Visual Studio Solution Explorer. Right-click on your project, click Add, then click Reference.... In the Reference Manager window select Browse, then click the Browse... button. Navigate to the Visual Studio project directory and select UnityEngine.dll. Then click Add: Back in the Reference Manager, click OK. You should now see a reference to UnityEngine from your project: Now you can add a MonoBehaviour class to your dll: After you copy the DLL to your Unity project you can attach the new component to GameObjects in the hierarchy. Conclusion In this article I've summarized the benefits and pitfalls of using DLLs with Unity. I've shown how to compile and use your own DLL and how to bring it into your Unity project. You now have an understanding of the the reasons you might want to use DLLs and you have the tools to start using them. If you do a lot of prototyping or exploratory coding, where a tight feedback loop works best, then source code is your best bet and DLLs may not be a good choice. Quick and streamlined prototyping is one of Unity's benefits. Using DLLs, whilst it can be a good practice, can detract from efficient prototyping, something that is so very important in game development. In the end, if DLLs are important to you, you may have to mix and match source code and DLLs. Using DLLs where they make sense, where the benefits outweigh the added complexity. Using source code when you need to move fast. Make your own calls and find a balance that works for your project. As a last word... The reasons why DLLs are difficult to use is mainly down to Unity: the out-of-date .NET version combined with terrible error messages (when things go wrong). We should all put pressure on Unity to get this situation improved! In future articles I'll be talking about nuget and troubleshooting DLL-related issues. Thanks for reading.
https://codecapers.com.au/unity-and-dlls/
CC-MAIN-2021-49
refinedweb
3,377
74.08
Problem Running a simple PyCUDA program, I got this error: $ python hello_cuda.py Traceback (most recent call last): File "hello_cuda.py", line 5, in <module> from pycuda.compiler import SourceModule File "/usr/local/lib/python2.7/dist-packages/pycuda/compiler.py", line 1, in <module> from pytools import memoize File "/usr/local/lib/python2.7/dist-packages/pytools/__init__.py", line 5, in <module> from six.moves import range, zip, intern, input ImportError: cannot import name intern Solution The error had nothing to do with PyCUDA or CUDA, but with the six module which helps with compatibility between Python2 and Python3 code. It needed an upgrade and it worked after this: $ sudo pip install six --upgrade Tried with: Six 1.9.0, PyCUDA 2014.1, CUDA 5.5 and Ubuntu 14.04 4 thoughts on “PyCUDA error: cannot import name intern” doesn’t work .. I tried to use the proposed solution but my Ubuntu 14.04 refuses to uninstall version 1.5.2 of six. $ sudo pip install six –upgrade Downloading/unpacking six from Downloading six-1.9.0-py2.py3-none-any.whl Installing collected packages: six Found existing installation: six 1.5.2 Not uninstalling six at /usr/lib/python2.7/dist-packages, owned by OS Successfully installed six Cleaning up… After this I am left with the original version of six on my system and no way to upgrade six. I even tried the following with no better result: sudo pip install –upgrade –force-reinstall six Any help would be greatly appreciated. Charles As a followup to my own post I consulted with an expert and he proposed the following work-around: in init.py replace (you can find its location in your error mssg): from six.moves import range, zip, intern, input with the following: try: from six.moves import range, zip, intern, input except ImportError: from six.moves import range, zip, input This should work. Note that the same issue is present with version 1.9.0 of six. Happy pythoning. Charles
https://codeyarns.com/2015/02/24/pycuda-error-cannot-import-name-intern/
CC-MAIN-2020-40
refinedweb
337
60.92
What is the best approach for importing a CSV that has a different number of columns for each row using Pandas or the CSV module into a Pandas DataFrame. "H","BBB","D","Ajxxx Dxxxs" "R","1","QH","DTR"," "," ","spxxt rixxls, raxxxd","1" Using this code: import pandas as pd data = pd.read_csv("smallsample.txt",header = None) the following error is generated Error tokenizing data. C error: Expected 4 fields in line 2, saw 8 Best answer Supplying a list of columns names in the read_csv() should do the trick. ex: names=[‘a’, ‘b’, ‘c’, ‘d’, ‘e’] Edit: if you don’t want to supply column names then do what Nicholas suggested
https://pythonquestion.com/post/import-csv-with-different-number-of-columns-per-row-using-pandas/
CC-MAIN-2020-16
refinedweb
111
60.04
Welcome to CS1315. Click on the python to add comments. This page removed for FERPA compliance Midterm Exam 1 Review Fall 2003: What does it mean? (Back to Fall2003 Midterm Review 1 ) Answers? Comments? Comments on answers? def defines and names a function. This statement defines the function as "someREMOVEDnction" with inputs of x and y. It allows the user to input two numbers for starts a loop. In this case it will set "x" to a number that is between 1 and 5. The x varible stores numbers one at a time. print tell the computer that it wants to return something to the user. Print a returns a the value of a variable named "a" FOR loops walk a sequence. Try the explanation again talking about sequences. Mark Guzdial for sends a variable through a sequence(looping). for x in a range(1,5) is actually a sequence of (1,2,3,4). the loop takes each value or variable one at a time and executes everything under the for block lindsay REMOVEDndsay, that definition is getting really close, but you are forgettting one thing: what does the For loop do after "the loop takes each value or variable one at a time and executes everything under the for block" ? Does it just stop there? Brittany Selden When it gets to the last value in the range it goes to the end of the loop and continues with the rest of the program. Rachel Tres bien! Mark Guzdial if u tried inserting a wouldnt it return an error unless u put par. around it? Link to this Page Fall2003 Midterm Review 1 last edited on 10 September 2003 at 4:19 pm by w205d156.lawn.gatech.edu
http://coweb.cc.gatech.edu/cs1315/793
crawl-003
refinedweb
289
74.9
. SQL Server would take each byte of the source binary data, convert it to an integer value, and then uses that integer value as the ASCII value for the destination character data. This behavior applied to the binary, varbinary, and timestamp datatypes. The only workarounds were to use either a stored procedure as described in a Knowledge Base Article: "INFO: Converting Binary Data to Hexadecimal String" ( ) or by writing a CLR function. An ISV I work with doesn’t support CLR and therefore they implemented their own version of a custom convert function in form of a stored procedure. This one was even faster than everything else they found on the Internet. NEW – IN SQL SERVER 2008 the convert function was extended to support binary data – hex string conversion. It looks like a tiny improvement almost not worth mentioning. However, for the ISV it was a big step forward as some critical queries need this functionality. Besides the fact that they no longer have to ship and maintain their own stored procedure, a simple repro showed a tremendous performance improvement. Repro: ===== I transformed the procedure described in the KB article mentioned above into a simple function. The stored procedure below will create a simple test table with one varbinary column and insert some test rows in 10K packages ( e.g. nr_rows = 100 -> 1 million rows in the table ). The repro shows two different test cases: 1. insert 0x0 two million times 2. insert 0x0123456789A12345 two million times Depending on the length of the value the disadvantage of the stored procedure solution will be even bigger. On my test machine the results of the test queries below were: (both tests were done with the same SQL Server 2008 instance - no change of any settings) 1. two million times value 0x0 a, using stored procedure : about 3460 logical reads, no disk IO, ~52 secs elapsed time b, using new convert feature : about 5200 logical reads, no disk IO, < 1 sec elapsed time 2. two million times value 0x0123456789A12345 a, using stored procedure : about 3460 logical reads, no disk IO, ~157 secs elapsed time b, using new convert feature : about 5200 logical reads, no disk IO, < 1 sec elapsed time Repro Script: ======== create function sp_hexadecimal ( @binvalue varbinary(255) ) returns varchar(255) as begin declare @charvalue varchar(255) declare @i int declare @length int declare @hexstring char(16) select @charvalue = '0x' select @i = 1 select @length = datalength(@binvalue) select @hexstring = '0123456789abcdef' while (@i <= @length) begin declare @tempint int declare @firstint int declare @secondint int select @tempint = convert(int, substring(@binvalue,@i,1)) select @firstint = floor(@tempint/16) select @secondint = @tempint - (@firstint*16) select @charvalue = @charvalue + substring(@hexstring, @firstint+1, 1) + substring(@hexstring, @secondint+1, 1) select @i = @i + 1 end return ( @charvalue ) end create procedure cr_conv_test_table ( @value varbinary(16), @nr_rows int ) as begin declare @exist int declare @counter int set NOCOUNT ON set statistics time off set statistics io off set statistics profile off set @exist = ( select count(*) from sys.objects where name = 'conv_test_table' and type = 'U' ) if( @exist = 1 ) drop table conv_test_table set @exist = ( select count(*) from sys.objects where name = 'conv_test_table_temp' and type = 'U' ) if( @exist = 1 ) drop table conv_test_table_temp create table conv_test_table ( varbincol varbinary(16) ) create table conv_test_table_temp ( varbincol varbinary(16) ) set @counter = 10000 while @counter > 0 begin insert into conv_test_table_temp values ( @value ) set @counter = @counter - 1 end set @counter = @nr_rows while @counter > 0 begin insert into conv_test_table select * from conv_test_table_temp set @counter = @counter - 1 end end -- create 2 million test rows execute cr_conv_test_table 0x0, 200 set statistics time on set statistics io on -- compare runtime of stored procedure with new convert feature select count(*) from conv_test_table where dbo.sp_hexadecimal(varbincol) = '0x00' select count(*) from conv_test_table where CONVERT(varchar(255),varbincol,1) = '0x00' -- create 2 million test rows execute cr_conv_test_table 0x0123456789A12345, 200 set statistics time on set statistics io on -- compare runtime of stored procedure with new convert feature select count(*) from conv_test_table where dbo.sp_hexadecimal(varbincol) = '0x0123456789A12345' select count(*) from conv_test_table where CONVERT(varchar(255),varbincol,1) = '0x0123456789A12345'
https://techcommunity.microsoft.com/t5/sql-server-blog/sql-server-2008-new-binary-8211-hex-string-conversion/ba-p/383490
CC-MAIN-2021-49
refinedweb
668
50.7
14.6. Neural Collaborative Filtering for Personalized Ranking¶ This section moves beyond explicit feedback, introducing the neural collaborative filtering (NCF) framework for recommendation with implicit feedback. Implicit feedback is pervasive in recommender systems. Actions such as Clicks, buys, and watches are common implicit feedback which are easy to collect and indicative of users’ preferences. The model we will introduce, titled NeuMF, short for neural matrix factorization, aims to address the personalized ranking task with implicit feedback. This model leverages the flexibility and non-linearity of neural networks to replace dot products of matrix factorization, aiming at enhancing the model expressiveness. In specific, this model is structured with two subnetworks including generalized matrix factorization (GMF) and multilayer perceptron (MLP) and models the interactions from two pathways instead of simple inner products. The outputs of these two networks are concatenated for the final prediction scores calculation. Unlike the rating prediction task in AutoRec, this model generates a ranked recommendation list to each user based on the implicit feedback. We will use the personalized ranking loss introduced in the last section to train this model. 14.6.1. The NeuMF model¶ As aforementioned, NeuMF fuses two subnetworks. The GMF is a generic neural network version of matrix factorization where the input is the elementwise product of user and item latent factors. It consists of two neural layers: where \(\odot\) denotes the Hadamard product of vectors. \(\mathbf{P} \in \mathbb{R}^{m \times k}\) and \(\mathbf{Q} \in \mathbb{R}^{n \times k}\) corespond to user and item latent matrix respectively. \(\mathbf{p}_u \in \mathbb{R}^{ k}\) is the \(u^\mathrm{th}\) row of \(P\) and \(\mathbf{q}_i \in \mathbb{R}^{ k}\) is the \(i^\mathrm{th}\) row of \(Q\). \(\alpha\) and \(h\) denote the activation function and weight of the output layer. \(\hat{y}_{ui}\) is the prediction score of the user \(u\) might give to the item \(i\). Another component of this model is MLP. To enrich model flexibility, the MLP subnetwork does not share user and item embeddings with GMF. It uses the concatenation of user and item embeddings as input. With the complicated connections and nonlinear transformations, it is capable of eastimating the intricate interactions between users and items. More precisely, the MLP subnetwork is defined as: where \(\mathbf{W}^*, \mathbf{b}^*\) and \(\alpha^*\) denote the weight matrix, bias vector, and activation function. \(\phi^*\) denotes the function of the corresponding layer. \(\mathbf{z}^*\) denotes the output of corresponding layer. To fuse the results of GMF and MLP, instead of simple addition, NeuMF concatenates the second last layers of two subnetworks to create a feature vector which can be passed to the further layers. Afterwards, the ouputs are projected with matrix \(\mathbf{h}\) and a sigmoid activation function. The prediction layer is formulated as: The following figure illustrates the model architecture of NeuMF. import d2l from mxnet import autograd, init, gluon, np, npx from mxnet.gluon import nn import mxnet as mx import math import random import sys npx.set_np() 14.6.2. Model Implementation¶ The following code implements the NeuMF model. It consists of a generalized matrix factorization model and a multilayered perceptron with different user and item embedding vectors. The structure of the MLP is controlled with the parameter mlp_layers. ReLu is used as the default activation function. class NeuMF(nn.Block): def __init__(self, num_factors , num_users, num_items, mlp_layers, **kwargs): super(NeuMF, self).__init__(**kwargs) self.P = nn.Embedding(num_users, num_factors) self.Q = nn.Embedding(num_items, num_factors) self.U = nn.Embedding(num_users, num_factors) self.V = nn.Embedding(num_items, num_factors) self.mlp = nn.Sequential() # The MLP layers for i in mlp_layers: self.mlp.add(gluon.nn.Dense(i, activation='relu', use_bias=True)) def forward(self, user_id, item_id): p_mf = self.P(user_id) q_mf = self.Q(item_id) gmf = p_mf * q_mf p_mlp = self.U(user_id) q_mlp = self.V(item_id) mlp = self.mlp(np.concatenate([p_mlp, q_mlp], axis=1)) con_res = np.concatenate([gmf, mlp], axis=1) #Concatenate GMF and MLP. return np.sum(con_res, axis=-1) 14.6.3. Negative Sampling¶ For pairwise ranking loss, an important step is negative sampling. For each user, the items that a user has not interacted with are candidate items (unobserved entries). The following function takes users identity and candidate items as input, and samples negative items randomly for each user from the cadidate set of that user. During the training stage, the model ensures that the items that a user likes to be ranked higher than items she dislikes or has not interacted with. # Saved in the d2l package for later use. def negative_sampler(users, candidates, num_items): sampled_neg_items = [] all_items = set([i for i in range(num_items)]) for u in users: neg_items = list(all_items - set(candidates[int(u)])) indices = random.randint(0, len(neg_items) - 1) sampled_neg_items.append(neg_items[indices]) return np.array(sampled_neg_items) 14.6.4. Evaluator¶ In this section, we adopt the splitting by time strategy to construct the training and test sets. Two evaluation measures including hit rate at given cutting off \(\ell\) (\(\text{Hit}@\ell\)) and area under the ROC curve (AUC) are used to assess the model effectiveness. Hit rate at given position \(\ell\) for each user indicates that whether the recommended item is included in the top \(\ell\) ranked list. The formal definition is as follows: where \(\textbf{1}\) denotes an indicator function that is equal to one if the ground truth item is ranked in the top \(\ell\) list, otherwise it is equal to zero. \(rank_{u, g_u}\) denotes the ranking of the ground truth item \(g_u\) of the user \(u\) in the recommendation list (The ideal ranking is 1). \(m\) is the number of users. \(\mathcal{U}\) is the user set. The definition of AUC is as follows: where \(\mathcal{I}\) is the item set. \(S_u\) is the candidate items of user \(u\). Note that many other evaluation protocals such as precision, recall and normalized discounted cumulative gain (NDCG) can also be used. The following function caculates the hit counts and AUC for each user. # Saved in the d2l package for later use def hit_and_auc(rankedlist, test_matrix, k): hits_k = [(idx, val) for idx, val in enumerate(rankedlist[:k]) if val in set(test_matrix)] hits_all = [(idx, val) for idx, val in enumerate(rankedlist) if val in set(test_matrix)] max = len(rankedlist) - 1 auc = 1.0 * (max - hits_all[0][0]) /max if len(hits_all) > 0 else 0 return len(hits_k) , auc Then, the overall Hit rate and AUC are cacluated as follows. # Saved in the d2l package for later use def evaluate_ranking(net, test_input, seq, candidates, num_users, num_items, ctx): ranked_list, ranked_items, hit_rate, auc = {}, {}, [], [] all_items = set([i for i in range(num_users)]) for u in range(num_users): neg_items = list(all_items - set(candidates[int(u)])) user_ids, item_ids, x, scores = [], [], [], [] [item_ids.append(i) for i in neg_items] [user_ids.append(u) for _ in neg_items] x.extend([np.array(user_ids)]) if seq is not None: x.append(seq[user_ids,:]) x.extend([np.array(item_ids)]) test_data_iter = gluon.data.DataLoader(gluon.data.ArrayDataset(*x), shuffle=False, last_batch="keep", batch_size=1024) for index, values in enumerate(test_data_iter): x = [gluon.utils.split_and_load(v, ctx, even_split=False) for v in values] scores.extend([list(net(*t).asnumpy()) for t in zip(*x)]) scores = [item for sublist in scores for item in sublist] item_scores = list(zip(item_ids, scores)) ranked_list[u] = sorted(item_scores, key=lambda t: t[1], reverse=True) ranked_items[u] = [r[0] for r in ranked_list[u]] temp = hit_and_auc(ranked_items[u], test_input[u], 50) hit_rate.append(temp[0]) auc.append(temp[1]) return np.mean(np.array(hit_rate)), np.mean(np.array(auc)) 14.6.5. Training and Evaluating the Model¶ The training function is defined below. We train the model in the pairwise manner. # Saved in the d2l package for later use def train_ranking(net, train_iter, test_iter, loss, trainer, test_seq_iter, num_users, num_items, num_epochs, ctx_list, evaluator, negative_sampler, candidates, eval_step=1): num_batches, timer, hit_rate, auc = len(train_iter), d2l.Timer(), 0, 0 animator = d2l.Animator(xlabel='epoch', xlim=[1, num_epochs], ylim=[0, 1], legend=['test hit rate', 'test AUC']) for epoch in range(num_epochs): metric, l = d2l.Accumulator(3), 0. for i, values in enumerate(train_iter): input_data = [] for v in values: input_data.append(gluon.utils.split_and_load(v, ctx_list)) neg_items = negative_sampler(values[0], candidates, num_items) neg_items = gluon.utils.split_and_load(neg_items, ctx_list) with autograd.record(): p_pos = [net(*t) for t in zip(*input_data)] p_neg = [net(*t) for t in zip(*input_data[0:-1], neg_items)] ls = [loss(p, n) for p, n in zip(p_pos, p_neg)] [l.backward(retain_graph=False) for l in ls] l += sum([l.asnumpy() for l in ls]).mean()/len(ctx_list) trainer.step(values[0].shape[0]) metric.add(l, values[0].shape[0], values[0].size) timer.stop() with autograd.predict_mode(): if (epoch + 1) % eval_step == 0: hit_rate, auc = evaluator(net, test_iter, test_seq_iter, candidates, num_users, num_items, ctx_list) train_l = l / (i + 1) animator.add(epoch + 1, ( hit_rate, auc)) print('train loss %.3f, test hit rate %.3f, test AUC %.3f' % (metric[0] / metric[1], hit_rate, auc)) print('%.1f examples/sec on %s' % (metric[2] * num_epochs / timer.sum(), ctx_list)) Now, we can load the MovieLens 100k dataset and train the model. Since there are only ratings in the MovieLens dataset, with some losses of accuracy, we binarize these ratings to zeros and ones. If a user rated an item, we consider the implicit feedback as one, otherwise as zero. The action of rating an item can be treated as a form of providing implicit feedback. Here, we split the dataset in the seq-aware mode where users’ latest interacted items are left out for test. batch_size =") num_workers = 0 if sys.platform.startswith("win") else 4 train_iter = gluon.data.DataLoader(gluon.data.ArrayDataset( np.array(users_train), np.array(items_train)), batch_size, True, last_batch="rollover", num_workers=num_workers) We then create and initialize the model. we use a three-layer MLP with constant hidden size 10. ctx = d2l.try_all_gpus() net = NeuMF(10, num_users, num_items, mlp_layers=[10, 10, 10]) net.initialize(ctx=ctx, force_reinit=True, init=mx.init.Normal(0.01)) The following code trains the model. lr, num_epochs, wd, optimizer = 0.01, 10, 1e-5, 'adam' loss = d2l.BPRLoss() trainer = gluon.Trainer(net.collect_params(), optimizer, {"learning_rate": lr, 'wd': wd}) train_ranking(net, train_iter, test_iter, loss, trainer, None, num_users, num_items, num_epochs, ctx, evaluate_ranking, negative_sampler, candidates) train loss 3.981, test hit rate 0.336, test AUC 0.737 9.9 examples/sec on [gpu(0), gpu(1)] 14.6.6. Summary¶ Adding nonlinearity to matrix factorization model is benificial for improving the model capability and effectivess. NeuMF is a combination of matrix factorization and Multilayer perceptron. The multilayer perceptron takes the concatenation of user and item embeddings as the input. 14.6.7. Exercises¶ Vary the size of latent factors. How the size of latent factors impact the model performance? Vary the architectures (e.g., number of layers, number of neurons of each layer) of the MLP to check the its impact on the performance. Try different optimizers, learning rate and weight decay rate. Try to use hinge loss defined in the last section to optimize this model.
https://www.d2l.ai/chapter_recommender-systems/neumf.html
CC-MAIN-2019-47
refinedweb
1,829
50.33
I Hi Andy, When you create the new user using the Membership API, you can actually mark the account as not approved directly via the Membership API. What this means is that you don't need to use the Profile API for this. You'd then send them a dynamic random URL to click to activate the account. You could to this using the System.Net.Mail namespace -- which you can learn more about here: One the page that they link back to from, you'd then write code like this to activate the account Dim userdetails as MembershipUser userdetails = Membership.GetUser(username) userdetails.IsAppoved = true Membership.UpdateUser(userdetails) Hi Firoz, There are a couple of ways you could do this. One would be to take the built-in ASP.NET Membership Provider and customize/extend it to support another column for a second password, and then just have the ValidateUser method check both passwords. If you use this approach then you don't need to re-implement any Login controls -- since they'll just call into your provider method (whose signature would stay the same). This website has tons of information on how to build providers, and also includes the source-code to the built-in ASP.NET ones: Hi Scott, Thanks for the above info. I discovered the "convert to template" option in the login control, and added my extra password field. I didn't know you could do this, hence I thought that I would have to write a new login control, etc. Thanks very much again Firoz Hi SixSide, The default Profile provider uses an XML blob structure to store the data in a database. This avoids you having to explictly create columns in a table. If you want to map directly against columns in a table (or against a set of SPROCs), you can alternatively use the Profile Table Provider here: Hi Alvaro, If you want to store this information in custom database tables yourself, you might find this article particularly useful: Hi Iron, Yep -- you can definitely log people in manually. This page has a bunch of links on security that you might find useful: This link then talks about how to programmatically login: Great stuff Scott, any update on VB samples? Hi 11HerbsAndSpirces, This more recent blog post (and the article I link to) should help with the VB version: Hi Aaron, Where are you declaring this code? Is it within your page? Thanks, Hi Ray, I believe you could use Profiles to-do this. Alternatively, you could also store this registration data directly within a database. This article has information on how you could do this and links to some other great articles you might want to check out: Hi JCFromFrance, That is a good question, and to be honest with you I haven't ever profiled SQL to see if there is any substantial difference between using a GUID and an INT for a column key. I suspect a INT would be faster -- but don't really know how much for sure. Scott Hi Hayden, In general for adding additional fields that are user specific, I'd recommend using the "Profile" feature above. It is designed to allow you to store extra columns of information about your users. This article helps explain how you can map the Profile provider to a regular SQL table if you want to map these properties to an underlying SQL table or set of SPROCs: Hi Andre, Yep -- the SQL Profile Table Provider shipped back in January. You can learn more and download it here: Hi Max, That is odd -- do you have viewstate turned off for the page? I'm wondering if that is causing the value not to be persisted across the multiple steps. thanks, Hi Carlos, You could build a Profile page that allows users to customize their profile later. This tutorial shows one example of how to-do this: Can you send me an email with a code example I could look at? Thx! Hi Simon, That seems pretty odd. Can you double check to make sure there is a form element in the master page and there aren't any html errors? What you are describing sounds a little like it might be that. Do other form elements (normal textboxs and buttons) work on the page? Hi Death, You can use the Profile in a VS 2005 Web Application Project. However the Profile proxy class isn't automatically generated for you. Instead you should download and use this add-in to generate it: That is pretty weird. If you want to send me a simple .zip file that repros the problem I can take a look for you. Hi Richard, Are you using a Web Site Project or a Web Application Project? Hi Scott Great article, We are creating an ASP.NET 2.0 application with Oracle database. We are planning to recreate the membership, roles and profiles (aspnetdb) tables, views and SP's in Oracle Before that I would like to know is there any other way other than recreating everything manually. I mean some tools or db script ready to download? Hi Marty, You can definitly create your own "Provider" for the Profile API to call into your existing library object. This blog post will help show an example of how to-do this: Hi San Jun, This article will probably help with what you are after: Hi Mark, Here are two articles you might want to check out to learn how to create pages to manage roles: and Hi Petr, Good question -- I'm not actually sure. Any object saved in the profile needs to be serializable - it could be that BitMap picture objects aren't. Hi John, There is an "IsApproved" property on the MembershipUser class that you can set to lock/unlock users from a system. You can retrieve this object like so: MembershipUser user = Membership.GetUser("scottgu"); user.IsApproved = false; Membership.UpdateUser(user); This is regarding using Membership API to login a user. I have following code but it redirects page to Default.aspx and iwant membership API to redirect it to the page I choose. How could I change following code if ( Membership.ValidateUser( TextBox1.Text, TextBox2.Text ) ) FormsAuthentication.RedirectFromLoginPage( TextBox1.Text, false ); All the examples I have seen so far has a Login.aspx and a Default.aspx Thanks in advance Hi Salil, There is an attribute you can set within the <forms> section of your <authentication> section within the web.config file of your application. This allows you to configure the default redirect page of the application. I have a table users in my sql server 2005 database creamworks how can i link this table to asp.net profile. Thanks the project is good but i would like to know how could i connect to oracle as a backend .as it is showing me that unicode not supported. can u plz help me. i have also created the membership provider. wting 4 ur rply. Hi Paul, One suggestion would be to handle it within your Global.asax file's Application_Start event. I demonstrate a similar technique for doing this with roles in my blog post here: Thanks Scott: This appears to be exactly what I was looking for. Hey Scott: Did a modification of your Global.asax idea noted above. Rather than create the roles (and users) in the Global.asax file, I simply called a function createSec() from global.asax's Application_Start event, and did the additions there (in a *.vb file). This way, the stuff is compiled in the application and not visually available. Seems to work. Thanks again for getting me pointed in the right direct. Cheers! Ok this profiling provider object is all so useful but if it was made for making profiles why is there no inbuilt capability to search it? Without going through every profile in the system and creating a collection? Hi Scott, I was hoping you could point me in the right direction. I am trying to set up navigation (sitemap) for users that belong to a set of GROUPS and a set of CATALOGS (i.e. userA is in grp1,grp2 AND also in catX,catY,catZ). I would like to filter the menus based on both these criteria but am unable to see how using just the single set of defined ROLES i can define. So for a menu item i can define grp1 and catX and only users w/ both those securities can see it. I have looked at creating custom IDENTITY's and PRINCIPALS but am not sure how to combine all the concepts to arrive at a solution. Any help would be greatly appreciated. Thanks in advance, Gust.. I read in your posts that you where considering doing something similar for VB.NET. I was would be greatful if you could point to the correct link if you have already posted it. I dont get this one to work!...I use exactly the sam code as you, but I can for example not choose more than one role in the listbox (even if I have SelectionMode="Multiple" for the listbox.... All the other things works fine....Usernamn, password are saved correctly in aspnet_membership and the profiles are saved in aspnet_profile.... Its the role-part that not work for me...For the first can I only select one role..and for the second are nothing saved in aspnet_UsersInRoles...Which it should...Or should it? Maybe someone know what the problem can be? Hi Marcus, What code are you using to retrieve the roles and update them in the Role Provider? I have Admins and Operators roles. I want to have the Operators accounts created only by the admins. This means that initial information for an operator will be provided by an admin and an email with the username and password will be sent to the operator. The operator will then change his/her password. But I also need to have the Security Question and Security Answer set for an operator. How do you think I should implement this? Dako I'm planning to have a web site (I've already created the custom providers) and 2 kinds of users - admins and operators (2 roles). The admins are creating the accounts for all the users. This means that they are creating all the passwords (or these will be autogenerated) and the secret questions and answers. I want a user to be able to change his/her password and secret question/answer. Should I send an email with a link for activating the account and at that moment also ask for changing these? Also, if an user will forget the password and the secret answer, I want the admins to be able to reset them. What do you think is the best solution for this? Dako. Hi Dako, It probably makes sense to implement role-based security, and create two roles: operators and admins. You can then lock down the create-user page to only be accessible by admins. You can then build another page that is available to operators that allows them to change the password. If you use the CreateUserWizard control there is a property that you can set that will prevent the user being created from automatically being logged in (you'll want to set this so that when an admin creates the role they don't get logged out). Thanks a lot for your time. I still have some questions :) Is it ok to allow the operators to change also the secret question/answer in the same page they are changing the password? If so, can I somehow extend the ChangePassword control? Does it make any sense to do this? Also, what should I do if an operator forgets the password but also the secret answer? I'm using hashed passwords and I'm having EnablePasswordReset set to true and EnablePasswordRetrieval set to false. I guess the operator should be able to send an email and an admin should reset the values for the operator. How does it sound? Thanks & best regards, Scott, I don't understand what's wrong with my code. The day I wrote it, it was working fine. Today when I'm trying to run it again, the line: Dim userprofile As ProfileCommon = CType(ProfileCommon.Create(CreateUserWizard1.UserName, True), ProfileCommon) gives me the following error: "Type 'ProfileCommon' is not defined." Please help! Hi hrmalik, This might be caused by a configuration error in your web.config file. Can you open it and check to make sure everything there is right? I think you could allow the operators to change the secret question/answer if you wanted to. You can use the Membership API to control this. If someone forgets their password, then you can use the MembershipUser.ResetPassword() method to reset a new one for them. Thanks for the great article. I've implemented your example to a web application of mine, and I'd say has 85% operational functionality. I do have an issue, with regards to the Profile handler, what are the changes done to the database tables once I introduce these new profiles (which include the new fields to my registration page), apparently I did everything mentioned on your page, but for some strange reason, the page takes alot of time, and comes to) " I check the database, and the member is added, but minus the profile information. Which provider is the error message talking about? amazingly good, I was expecting same thing from profile system Hi Mohamed, I suspect you have an error with how you've registered your provider, and so it can't connect to the database. How did you register the connectionString and provider in your web.config file? I don't get it. Even if we change the code, what about the stored procedures and the database fields ?? can you explain ? I am trying to work on it but something is just not clicking to me :S hope you are fine. i was just wondering how can i store the values in database using "add a Login, Roles and Profile system to an ASP.NET 2.0 app in only 24 lines of code". i tried this thing but its not saving in the database....although i created the fields myself. any suggestions ? hey Scott, I created the pages according to what you said and added my own fields in the database. its not giving any error but its not entering the values in the database. it is adding the username and stuff....but not my fields which are <properties> <add name="FirstName" type="string"/> <add name="LastName" type="string"/> <add name="ClientID" type="string"/> <add name="Phone" type="string"></add> </properties> </profile> which are in a system.web tag. and in the database i made fields for these...it did not work....then i deleted the fields...still not working... any suggestions ?? hi Scott Do i have to create a database with SQL 2000/2005 in order to use the Createuser wizard to build a membership website that someone can create their own account Tommy We currently have our own customer/login tables in our database linked with CustomerID columns of both tables. Some of the requirements of the system are 1) When I validate users, I need to pass more parameters than just username/password, for example customers are created for different clients, so I need to pass our ClientID with username/password together. We build one portal to support multiple clients. 2) When a customer is validated, I need to get CustomerID instead of username from Context.User.Identity. By implementing MembershipProvider class, I don't think I can achieve any of requirement mentioned above. Any idea? Hi Hardy, If you create a custom MembershipProvider you should be able to implement the above requirements. Note that you can also always call Membership.Provider.CustomMethod() to invoke custom methods on your provider implementation - so that would be another way to surface extra functionalit yif you want. How would you go about creating a user with additional user information in one step using the create user wizard and dropping the data in the existing table(s) with the other data? I thought I could just override the membership.createuser method and specify the additional parameters (which seemed logical), but this apparently cannot be done. Thanks! Hi Derek, This blog post has more information that I think will help you: This article in particular talks about how to store the information from CreateUser directly into a database: hi scott when will you release the vb.net version?i have already create the database what if i tried to implement your codes, will the user registered will be affected? Thanx hi scott, i got the code up an running and it worked fine, but If i were to integrate it with my local database, should i create a new table for country, age and gender? or they were automatically created ...... hello there i've been following your tutorial on custom membership.i'm adding a new field in the wizard control. i'm getting Object reference not set to an instance of an object error. Can i email you the details of what i wrote or i can just spill it here...... been doing this for a very long time... Hi Saigei, Sure - feel free to email me what you built and I can try and help. First off, thanks for the great posts. You definitely know what your talking about. Second, I have a problem that, at first, I thought the profile feature could solve but now I'm not so sure. Your advice would be greatly appreciated. I'm using the profile to store some extra user data (like a companyId, fullname). I want to be able to, from admin pages or whatever, filter all the memberships by these extra pieces of data. For instance, if I wanted an admin to only be able to manage the memberships for companyA from code I should easily be able to get a list of the members who's profile has companyA for thier companyID. From what it seems if I wanted to build a wrapper to give me a list of usernames who's profile fit the criteria I'd have to hit the db for each user by building their profileCommon via profil.getProfile. Doesn't seem like a good idea. The profileManager seems like it only helps me report on profile stats and do maintenance, not view all the user's profile data. Is there a way to do this with the profile API? I'd rather not use the ProfileTableProvider to let me build these queries because it would tie me to a particular datastore and kinda defeats the whole datastore neutral idea. Am I missing something? Am I using the profileAPI incorrectly? Any advice on a direction I should take? Thanks in advance, Dane hi i'm wondering how do i get these codes embeded in my web.config. all these while i had to do them manually. and what does the do? > I'd like to add a new user profile as an Employer might add an Employee....assign roles, email address...etc How do I reference profile data to accomplish this ? Thanks, Daren I would like to know how to use visio as a dynamic web page in visio. Hello, I have checked it for Oracle. But the cod eis not working, Could you please tell, what I have to do to connect it with Oracle? Ceema
http://weblogs.asp.net/scottgu/archive/2005/11/01/427754.aspx
CC-MAIN-2014-15
refinedweb
3,276
72.66
Analyze API message content using custom analytics Introduction This topic presents a tour demonstrating how to use policies to extract statistical data from a request and feed that data to the Edge Analytics system. Then, it shows how to create a custom analytics report based on that data, which appears as custom dimensions. In addition, this topic explains how to extract statitstical data by using the Edge Solution Builder tool. API combined with policies to analyze data that is unique to your report that enables you to collect statistics on the number of requests received for weather reports for different locations. Once you have gathered the statistical data, you can use the Edge management UI or API to retrieve and filter statistics that the Edge collects. Parsing response payloads using policies The Yahoo Weather API returns an XML-formatted response. call the API by using an Edge API proxy. The API proxy inspects the request messages to and response messages from the Yahoo API. You are provided with a pre-configured API proxy in the test environment of your organization. The API proxy is called weatherapi. You can invoke that API proxy to obtain a response from the the Yahoo Weather API. If you don't have an account on Apigee Edge, see Creating an Apigee Edge account. If you want to create your own API proxy for the Yahoo Weather API, see Create your first API proxy. You can invoke the API proxy for the Yahoo Weather API by using the following command. Using the Extract Variables policy to extract data from the response The weather response contains potentially valuable information. However, Apigee Edge doesn't yet 'know' how to feed this message content into Analytics Services for processing. To enable data extraction, Edge provides the Extract Variables policy, which can parse message payloads with JSONPath or XPath expressions. See Extract Variables policy for more. To extract the information of interest from the weather report, use an XPath expression. For example, to extract the value of the city, the XPath expression is: /rss/channel/yweather:location/@city Note how this XPath expression reflects the structure of the XML nodes returned from the Yahoo Weather API. Also, note the prefix yweather is defined by a namespace: xmlns:yweather=" To enable the XML message to be parsed properly, you use both the XPath and the namespace definition in the policy. There are many tools available online that you can use to construct XPath expressions for your XML documents. There also many tools available for JSONPath. After the XPath has been evaluated, the Extract Variables policy needs a place to store the value that results from the evaluation. For this storage, the policy uses variables. You can create custom variables whenever you need them by defining a variable prefix and variable name in the Extract Variables policy. In this example, you define four custom variables: weather.location weather.condition weather.forecast_today weather.forecast_tomorrow For these variables, weather is the prefix, and location, condition, forecast_today, and forecast_tomorrow are each variable names. The following naming restrictions apply to custom analytics variables: - Names cannot be multiple-word (no spaces). - No quotation marks, hyphens, or periods. - No special characters. - In addition to the above, follow the column naming restrictions here: The Extract Variables policy below shows how to exrtract data fromt he XML response and write it to custom variables. The <VariablePrefix> tag specifies that the variable names are prefixed by weather. Each <Variable> tag uses the name attribute that specifies the name of the custom variables and the associated XPath expression. Add this policy to your API proxy in the Edge UI or, if you are building the API proxy in XML, add a file under /apiproxy/policies named> Using the Statistics Collector policy to write data to the Analytics Service The next step is to create another policy that reads the custom variables created by the Extract Variables policy and writes them to the Analytics Services for processing. The Statistics Collector policy is used for this operation. See Statistics Collector policy for more. In the Statistics Collector policy, the ref attribute of the <Statistics> tag specifies the name of the variable for which you want to collect statistics. The name attribute specifies the name of the collection of statistical data for that variable stored by the Analytics Server, and the type atribute specifies the data type of the recorded data. You can then query that collection to view the collected statistics about the corresponding variable. Optionally provide a default value for a custom variable, which will be forwarded to Analytics Services if the variables cannot be resolved or the variable is undefined. In the example below, the default values are Earth, Sunny, Rainy, and Balmy. Add this policy to your API proxy in the Edge UI or, if you are building the API proxy in XML, add a file under /apiproxy/policies named> Only one Statistics Collector policy should be attached to a single API proxy. If there are multiple Statistics Collector policies in a proxy, then the last one to execute determines the data written to the analytics server.> Deploying the API proxy After you have made these changes, you need to deploy the API proxy that you have configured. report of statistics Now that you have sent some statistical data to the Analytics Server, you can use the Edge management UI or API to view the collected statistics in the same way that you use the API to get statistics on the out-of-the-box dimensions. Access the recorded data as either a Dimension or as a Metric of a custom report. When you create a Statistics Collector policy, you specify the data type of the collected data. In the example above, you specifed the data type as string for all four variables. For data of type string, reference the statistical data as a Dimension in a custom report. For numerical data types (integer/float/long/double), reference the statistical date in a custom report as a Metric. See Create custom reports for more. Generating a custom report using the Edge UI After you create new collections of analytics data of type string, those collections appear in the Dimensions menu of the Custom Report builder: - From the Custom part of the Analytics menu, select Reports. - In the Custom Reports page, click +Custom Report. - Specify a Report Name. - Select a Metric, such as Traffic, and an Aggregate Function, such as Sum. - Select the +Dimensions button to add a new dimnsion to the report. - Click the Select... dropdown to view the collections that you specified in the Statistics Collector policy. For example, if you specified the name of the collection as location, then location appears in the dropdown. - Select Save to view the report. See Create custom reports for more. Generating a custom report using the Edge API You can use the Edge management API exposed by the Analytics Services to get statistics on your new custom dimensions, in the same way that you use the API to get statistics on the out-of-the-box dimensions. The timeRange parameter must be modified to include the time interval when data was collected. Data older than six months from the current date is not accessible by default. If you want to access data older than six months, contact Apigee Support. In the example request below, the custom dimension is called location. This request builds a custom report for locations based on the sum of message counts submitted for each location. Substitute your organization name for the variable {org_name}, and substitute the email and password for your account on Apigee Edge for $ curl{org_name}/environments/test/stats/location?"select=sum(message_count)&timeRange=11/19/2015%2000:00~11/21/2015%2000:00&timeUnit=day" -u email:password You should see a response inthe form: { /2015%2000:00~11/21/2015%2000:00&timeUnit=day&sortby=sum(message_count)&topk=2" \ -u email:password Results can also be filtered by specifying the values of the dimensions of interest. In the example below, the report is filtered by results for London and Shanghai : $ curl{org_name}/environments/test/stats/location?"select=sum(message_count)&timeRange=11/19/2015%2000:00~11/21/2015%2000:00&timeUnit=day&filter=(location%20in%20'London','Shanghai')" \ -u email:password You should see a response inthe form: { . Creating custom analytics variables with the Solution Builder The Solution Builder lets you create custom analytics variables through an easy-to-use management UI dialog. You may wish to read the previous section "Parsing payloads using policies", which explains how the Extract Variables and Statistics Collector Variables and Statistics Collector policies and gives them unique names. The Solution Builder does not let you go back and change these policies once they are created in a given proxy revision. If you wish to make changes, you can edit the generated policies directly the policy editor. - Go to the Overview page for your proxy in the Edge UI. - Click Develop. - On the Develop page, select Custom Analytics Collection from the Tools menu. The Solution Builder dialog appears. - In the Solution Builder dialog, you first configure two policies: Extract Variables and Statistics Collector. Then, you configure where to attach those policies. - Specify the data you wish to extract: - Location Type: Select the type of data you wish to collect and where to collect it from. You can select data from the request or response side. For example, Request: Query Parameter or Response: XML Body. - Location Source: Identify the data you wish to collect. For example, the name of the query parameter or the XPath for XML data in the response body. - Specify a variable name (and type) that the Statistics Collector policy will use to identify the extracted data. You can use any name. If you omit the name, the system selects a default for you. Note: The name you pick will appear in the dropdown menu for Dimensions or Metrics in the Custom Report builder UI. - Pick where in the API proxy flow you wish to attach the generated policies Extract Variables and Statistics Collector. For guidance, see "Attaching policies to the ProxyEndpoint response Flow". To make things work properly, policies must be attached to the API proxy Flow in the appropriate location. You need to attach the polices at a stage in the flow where the variables you are trapping are in scope (populated). - Click +Collector to add more custom variables. - When you're done, click Build Solution. - Save and deploy the proxy. You can now generate a custom report for the data as described above. Help or comments? - If something's not working: Ask the Apigee Community or see Apigee Support. - If something's wrong with the docs: Send Docs Feedback (Incorrect? Unclear? Broken link? Typo?)
http://docs.apigee.com/analytics-services/content/analyze-api-message-content-using-custom-analytics
CC-MAIN-2016-22
refinedweb
1,799
54.32
Simple CRON for MicroPython. Project description Simple CRON for MicroPython SimpleCRON is a time-based task scheduling program inspired by the well-known CRON program for Unix systems. The software was tested under micropython 1.10 (esp32, esp8266) and python 3.5. What you can do with this library: Run any task at precisely defined intervals Delete and add tasks while the program is running. Run the task a certain number of times and many more. Requirements: The board on which the micropython is installed(v1.10) The board must have support for hardware timers. Install You can install using the upip: import upip upip.install("micropython-scron") or micropython -m upip install -p modules micropython-scron You can also clone this repository, and install it manually: git clone cd ./micropython-scron ./flash-src.sh ESP8266 The library on this processor must be compiled into binary code. The MicroPython cross compiler is needed for this. If you already have the mpy-cross command available, then run the bash script: ./compile.sh and then upload the library to the device, e.g. using the following script: ./flash-byte.sh Simple examples Simple code to run every second: from scron.week import simple_cron # Depending on the device, you need to add a task that # will be started at intervals shorter than the longest # time the timer can count. # esp8266 about 5 minutes # esp32 - for processor ESP32D0WDQ6, the problem did not occur simple_cron.add('null', lambda *a, **k: None, seconds=0, minutes=range(0,59,5), removable=False) simple_cron.add('helloID', lambda *a,**k: print('hello')) simple_cron.run() Code, which is activated once a Sunday at 12:00.00: simple_cron.add( 'Sunday12.00', lambda *a,**k: print('wake-up call'), weekdays=6, hours=12, minutes=0, seconds=0 ) Every second minute: simple_cron.add( 'Every second minute', lambda *a,**k: print('second call'), minutes=range(0, 59, 2), seconds=0 ) Other usage samples can be found in the ‘examples’ directory. How to use it Somewhere in your code you have to add the following code, and from then on SimpleCRON is ready to use. from scron.week import simple_cron simple_cron.run() # You have to run it once. This initiates the SimpleCRON action, # and reserve one timmer. To add a task you are using: simple_cron.add(<callback_id_string>, <callback>, ...) Callbacks Example of a callback: def some_counter(scorn_instance, callback_name, pointer, memory): if 'counter' in memory: memory['counter'] += 1 else: memory['counter'] = 1 where: scorn_instance- SimpleCRON instance, in this case scron.weekend.simple_cron callback_name- Callback ID pointer- This is an indicator of the time in which the task was to be run. Example: (6, 13, 5, 10). This is ( Sunday , 1 p.m. , minutes 5 , seconds 10 ) memory- Shared memory for this particular callback, between all calls. By default this is a dictionary. Important notes: If a task takes a very long time, it blocks the execution of other tasks! If there are several functions to run at a given time, then they are started without a specific order. If the time has been changed (time synchronization with the network, own time change), run the simple_cron.sync_time()function, which will set a specific point in time. Without this setting, it may happen that some callbacks will not be started. What has not been tested: SimpleCRON operation during sleep How to test First install the following things: git clone cd micropython-scron/ micropython -m upip install -p modules micropython-unittest micropython -m upip install -p modules micropython-time Then run the tests: ./run_tests.sh Support and license If you have found a mistake or other problem, write in the issues. If you need a different license for this library (e.g. commercial), please contact me: fizista+scron@gmail.com. Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/micropython-scron/0.5.4/
CC-MAIN-2022-40
refinedweb
649
58.38
So it looks increasingly like .NET Core is going to be an important technology in the near future, in part because Microsoft is developing much of it in the open (in a significant break from their past approach to software), in part because some popular projects support it (Dapper, AutoMapper, Json.NET) and in part because of excitement from blog posts such as ASP.NET Core – 2300% More Requests Served Per Second. All I really knew about it was that it was a cut-down version of the .NET framework which should be able to run on platforms other than Windows, which might be faster in some cases and which may still undergo some important changes in the near future (such as moving away from the new "project.json" project files and back to something more traditional in terms of Visual Studio projects - see The Future of project.json in ASP.NET Core). To try to find out more, I've taken a codebase that I wrote years ago and have migrated it to .NET Core. It's not enormous but it spans multiple projects, has a (small-but-better-than-nothing) test suite and supports serialising search indexes to and from disk for caching purposes. My hope was that I would be able to probe some of the limitations of .NET Core with this non-trivial project but that it wouldn't be such a large task that it take more than a few sessions spaced over a few days to complete. Would I be able to migrate one project at a time within the solution to .NET Core and still have the whole project building successfully (while some of the other projects were still targeting the "full fat" .NET framework)? Yes, but some hacks are required. Would it be easy (or even possible) to create a NuGet package that would work on both .NET Core and .NET 4.5? Yes. Would the functionality that is no longer present in .NET Core cause me problems? Largely, no. The restricted reflection capabilities, no - if I pull in an extra dependency. The restricted serialisation facilities, yes (but I'm fairly happy with the solution and compromises that I ended up with). Essentially, .NET Core is a new cross-platform .NET Product .. [and] is composed of the following parts: A .NET runtime .. A set of framework libraries .. A set of SDK tools and language compilers .. The 'dotnet' app host, which is used to launch .NET Core apps (from Scott Hanselman's ".NET Core 1.0 is now released!.NET Core 1.0 is now released!" post) What .NET Core means in the context of this migration is that there are new project types in Visual Studio to use that target new .NET frameworks. Instead of .NET 4.6.1, for example, there is "netstandard1.6" for class libraries and "netcoreapp1.0" for console applications. The new Visual Studio project types become available after you install the Visual Studio Tooling - alternatively, the "dotnet" command line tool makes things very easy so you can could create projects using nothing more than notepad and "dotnet" if you want to! Since I was just getting started, I chose to stick in my Visual Studio comfort zone. The Full Text Indexer code that I'm migrating was something that I wrote a few years ago while I was working with a Lucene integration ("this full text indexing lark.. how hard could it really be!"). It's a set of class libraries; "Common" (which has no dependencies other than the .NET framework), "Core" (which depends upon Common), "Helpers" (which depends upon both Common and Core), and "Querier" (which also depends upon Common and Core). Then there is a "UnitTests" project and a "Tester" console application, which loads some data from a Sqlite database file, constructs an index and then performs a search or two (just to demonstrate how it works end-to-end). My plan was to try migrating one project at a time over to .NET Core, to move in baby steps so that I could be confident that everything would remain in a working state for most of the time. The first thing I did was delete the "Common" project entirely (deleted it from Visual Studio and then manually deleted all of the files) and then created a brand new .NET Core class library called "Common". I then used my source control client to revert the deletions of the class files so that they appeared within the new project's folder structure. I expected to then have to "Show All Files" and explicitly include these files in the project but it turns out that .NET Core project files don't specify files to include, it's presumed that all files in the folder will be included. Makes sense! It wouldn't compile, though, because some of the classes have the [Serializable] attribute on them and this doesn't exist in .NET Core. As I understand it, that's because the framework's serialisation mechanisms have been stripped right back with the intention of the framework being able to specialise at framework-y core competencies and for there to be an increased reliance on external libraries for other functionality. This attribute is used through my library because there is an IndexDataSerialiser that allows an index to be persisted to disk for caching purposes. It uses the BinaryFormatter to do this, which requires that the types that you need to be serialised be decorated with the [Serializable] attribute or they implement the ISerializable interface. Neither the BinaryFormatter nor the ISerializable interface are available within .NET Core. I will need to decide what to do about this later - ideally, I'd like to continue to be able to support reading and writing to the same format as I have done before (if only to see if it's possible when migrating to Core). For now, though, I'll just remove the [Serializable] attributes and worry about it later. So, with very little work, the Common project was compiling for the "netstandard1.6" target framework. Unfortunately, the projects that rely on Common weren't compiling because their references to it were removed when I removed the project from the VS solution. And, if I try to add references to the new Common project I'm greeted with this: A reference to 'Common' could not be added. An assembly must have a 'dll' or 'exe' extension in order to be referenced. The problem is that Common is being built for "netstandard1.6" but only that framework. I also want it to support a "full fat" .NET framework, like 4.5.2 - in order to do this I need to edit the project.json file so that the build process creates multiple versions of the project, one .NET 4.5.2 as well as the one for netstandard. That means changing it from this: { "version": "1.0.0-*", "dependencies": { "NETStandard.Library": "1.6.0" }, "frameworks": { "netstandard1.6": { "imports": "dnxcore50" } } } to this: { "version": "1.0.0-*", "dependencies": {}, "frameworks": { "netstandard1.6": { "imports": "dnxcore50", "dependencies": { "NETStandard.Library": "1.6.0" } }, "net452": {} } } Two things have happened - an additional entry has been added to the "frameworks" section ("net452" joins "netstandard1.6") and the "NETStandard.Library" dependency has moved from being something that is always required by the project to something that is only required by the project when it's being built for netstandard. Now, Common may be added as a reference to the other projects. However.. they won't compile. Visual Studio will be full of errors that required classes do not exist in the current context. Although the project.json configuration does mean that two version of the Common library are being produced (looking in the bin/Debug folder, there are two sub-folders "net452" and "netstandard1.6" and each have their own binaries in), it seems that the "Add Reference" functionality in Visual Studio doesn't (currently) support adding references. There is an issue on GitHub about this; Allow "Add Reference" to .NET Core class library that uses .NET Framework from a traditional class library but it seems like the conclusion is that this will be fixed in the future, when the changes have been completed that move away from .NET Core projects having a project.json file and towards a new kind of ".csproj" file. There is a workaround, though. Instead of selecting the project from the Add Reference dialog, you click "Browse" and then select that file in the "Common/bin/Debug/net452" folder. Then the project will build. This isn't a perfect solution, though, since it will always reference the Debug build. When building in Release configuration, you also want the referenced binaries from other projects to be built in Release configuration. To do that, I had to open each ".csproj" file notepad and change <Reference Include="Common"> <HintPath>..\Common\bin\Debug\net452\Common.dll</HintPath> </Reference> to <Reference Include="Common"> <HintPath>..\Common\bin\$(Configuration)\net452\Common.dll</HintPath> </Reference> A little bit annoying but not the end of the world (credit for this fix, btw, goes to this Stack Overflow answer to Attach unit tests to ASP.NET Core project). What makes it even more annoying is the link from the referencing project (say, the Core project) to the referenced project (the Common project) is not as tightly integrated as when a project reference is normally added through Visual Studio. For example, while you can set breakpoints on the Common project and they will be hit when the Core project calls into that code, using "Go To Definition" to navigate from code in the Core project into code in the referenced Common project doesn't work (it takes you to a "from metadata" view rather than taking you to the actual file). On top of this, the referencing project doesn't know that it needs to be rebuilt if the referenced project is rebuilt - so, if the Common library is changed and rebuilt then the Core library may continue to work against an old version of the Common binary unless you explicitly rebuild Core as well. These are frustrations that I would not want to live with long term. However, the plan here is to migrate all of the projects over to .NET Core and so I think that I can put up with these limitations so long as they only affect me as I migrate the projects over one-by-one. I repeated the procedure for second project; "Core". This also contained files with types marked as [Serializable] (which I just removed for now) and there was the IndexDataSerialiser class that used the BinaryFormatter to allow data to be persisted to disk - this also had to go, since there was no support for it in .NET Core (I'll talk about what I did with serialisation later on). I needed to add a reference to the Common project - thankfully adding a reference to a .NET Core project from a .NET Core project works perfectly, so the workaround that I had to apply before (when the Core project was still .NET 4.5.2) wasn't necessary. However, it still didn't compile. In the "Core" project lives the EnglishPluralityStringNormaliser class, which is used to adjust tokens (ie. words) so that the singular and plural versions of the same word are considered equivalent (eg. "cat" and "cats", "category" and "categories"). Internally, it generates a compiled LINQ expression to try to perform its work as efficiently as possible and it requires reflection to do that. Calling "GetMethod" and "GetProperty" on a Type is not supported in netstandard, though, and an additional dependency is required. So the Core project.json file needed to be changed to look like this: { "version": "1.0.0-*", "dependencies": { "Common": "1.0.0-*" }, "frameworks": { "netstandard1.6": { "imports": "dnxcore50", "dependencies": { "NETStandard.Library": "1.6.0" "System.Reflection.TypeExtensions": "4.1.0" } }, "net452": {} } } The Common project is a dependency regardless of what the target framework is during the build process but the "System.Reflection.TypeExtensions" package is also required when building for netstandard (but not .NET 4.5.2), as this includes extensions methods for Type such as "GetMethod" and "GetProperty". Note: Since these are extension methods in netstandard, a "using System.Reflection;" statement is required at the top of the class - this is not required when building for .NET 4.5.2 because "GetMethod" and "GetProperty" are instance methods on Type. There was one other dependency that was required for Core to build - "System.Globalization.Extensions". This was because the DefaultStringNormaliser class includes the line var normalisedValue = value.Normalize(NormalizationForm.FormKD); which resulted in the error 'string' does not contain a definition for 'Normalize' and no extension method 'Normalize' accepting a first argument of type 'string' could be found (are you missing a using directive or an assembly reference?) This is another case of functionality that is in .NET 4.5.2 but that is an optional package for .NET Core. Thankfully, it's easy to find out what additional package needs to be included - the "lightbulb" code fix options will try to look for a package to resolve the problem and it correctly identifies that "System.Globalization.Extensions" contains a relevant extension method (as illustrated below). Note: Selecting the "Add package System.Globalization.Extensions 4.0.1" option will add the package as a dependecy for netstandard in the project.json file and it will add the required "using System.Globalization;" statement to the class - which is very helpful of it! All that remained now was to use the workaround from before to add the .NET Core version of the "Core" project as a reference to the projects that required it. The process for the "Helpers" and "Querier" class libraries was simple. Neither required anything that wasn't included in netstandard1.6 and so it was just a case of going through the motions. At this point, all of the projects that constituted the actual "Full Text Indexer" were building for both the netstandard1.6 framework and .NET 4.5.2 - so I could have stopped here, really (aside from the serialisation issues I had been putting off). But I thought I might as well go all the way and see if there were any interesting differences in migrating Console Applications and xUnit test suite projects. For the Tester project; no, not much was different. It has an end-to-end example integration where it loads data from a Sqlite database file of Blog posts using Dapper and then creates a search index. The posts contain markdown content and so three NuGet packages were required - Dapper, System.Data.Sqlite and MarkdownSharp. Dapper supports .NET Core and so that was no problem but the other two did not. Thankfully, though, there were alternatives that did support netstandard - Microsoft.Data.Sqlite and Markdown. Using Microsoft.Data.Sqlite required some (very minor) code changes while Markdown exposed exactly the same interface as MarkdownSharp. The "UnitTests" project didn't require anything very different but there are a few gotchas to watch out for.. The first is that you need to create a "Console Application (.NET Core)" project since xUnit works with the "netcoreapp1.0" framework (which console applications target) and not "netstandard1.6" (which is what class libraries target). The second is that, presuming you want the Visual Studio test runner integration (which, surely, you do!) you need to not only add the "xunit" NuGet package but also the "dotnet-test-xunit" package. Thirdly, you need to enable the "Include prerelease" option in the NuGet Package Manager to locate versions of these packages that work with .NET Core (this will, of course, change with time - but as of November 2016 these packages are only available as "prereleases"). Fourthly, you need to add a line "testRunner": "xunit", to the project.json file. Having done all of this, the project should compile and the tests should appear in the Test Explorer window. Note: I wanted to fully understand each step required to create an xUnit test project but you could also just follow the instructions at Getting started with xUnit.net (.NET Core / ASP.NET Core) which provides you a complete project.json to paste in - one of the nice things about .NET Core projects is that changing (and saving) the project.json is all it takes to change from being a class library (and targeting netstandard) to being a console application (and targeting netcoreapp). Similarly, references to other projects and to NuGet packges are all specified there and saving changes to that project file results in those reference immediately being resolved and any specified packages being downloaded. In the class library projects, I made them all target both netstandard and net452. With the test suite project, if the project.json file is changed to target both .NET Core ("netcoreapp1.0", since it's a console app) and full fat .NET ("net452") then two different versions of the suite will be built. The clever thing about this is that if you use the command line to run the tests - dotnet test .. then it will run the tests in both versions. Since there are going to be some differences between the different frameworks and, quite feasibly, between different versions of dependencies then it's a very handy tool to be able to run the tests against all of the versions of .NET that your libraries target. There is a "but" here, though. While the command line test process will target both frameworks, the Visual Studio Test Explorer doesn't. I think that it only targets the first framework that is specified in the project.json file but I'm not completely sure. I just know that it doesn't run them both. Which is a pity. On the bright side, I do like that .NET Core is putting the command line first - not only because I'm a command line junkie but also because it makes it very easy to integrate into build servers and continuous integration processes. I do hope that one day (soon) that the VS integration will be as thorough as the command line tester, though. So, now, there are no errors and everything is building for .NET Core and for "classic"* .NET. * I'm still not sure what the accepted terminology is for non-.NET-Core projects, I don't really think that "full fat framework" is the official designation :) There are no nasty workarounds required for the references (like when the not-yet-migrated .NET 4.5.2 projects were referencing the .NET Core projects). It's worth mentioning that that workaround was only required when the .NET 4.5.2 project wanted to reference a .NET Core project from within the same solution - if the project that targeted both "netstandard1.6" and "net452" was built into a NuGet package then that package could be added to a .NET Core project or to a .NET 4.5.2 project without any workarounds. Which makes me think that now is a good time to talk about building NuGet packages from .NET Core projects.. The project.json file has enough information that the "dotnet" command line can create a NuGet package from it. So, if you run the following command (you need to be in the root of the project that you're interested in to do this) - dotnet pack .. then you will get a NuGet package built, ready to distribute! This is very handy, it makes things very simple. And if the project.json targets both netstandard1.6 and net452 then you will get a NuGet package that may be added to either a .NET Core project or a .NET 4.5.2 (or later) project. I hadn't created the Full Text Indexer as a NuGet package before now, so this seemed like a good time to think about how exactly I wanted to do it. There were a few things that I wanted to change with what "dotnet pack" gave me at this point - For points one and two, the "project.json reference" documentation has a lot of useful information. It describes the "name" attribute - The name of the project, used for the assembly name as well as the name of the package. The top level folder name is used if this property is not specified. So, it sounds like I could add a line to the Common project - "name": "FullTextIndexer.Common", .. which would result in the NuGet package for "Common" having the ID "FullTextIndexer.Common". And it does! However, there is a problem with doing this. The "Common" project is going to be built into a NuGet package called "FullTextIndexer.Common" so the projects that depend upon it will need updating - their project.json files need to change the dependency from "Common" to "FullTextIndexer.Common". If the Core project, for example, wasn't updated to state "FullTextIndexer.Common" as a dependency then the "Core" NuGet package would have a dependency on a package called "Common", which wouldn't exist (because I want to publish it as "FullTextIndexer.Common"). The issue is that if Core's project.json is updated to specify "FullTextIndexer.Common" as a dependency then the following errors are reported: NuGet Package Restore failed for one or more packages. See details in the Output window. The dependency FullTextIndexer.Common >= 1.0.0-* could not be resolved. The given key was not present in the dictionary. To cut a long story short, after some trial and error experimenting (and having been unable to find any documentation about this or reports of anyone having the same problem) it seems that the problem is that .NET Core dependencies within a solution depend upon the project folders having the same name as the package name - so my problem was that I had a project folder called "Common" that was building a NuGet package called "FullTextIndexer.Common". Renaming the "Common" folder to "FullTextIndexer.Common" fixed it. It probably makes sense to keep the project name, package name and folder name consistent in general, I just wish that the error messages had been more helpful because the process of discovering this was very frustrating! Note: Since I renamed the project folder to "FullTextIndexer.Common", I didn't need the "name" option in the project.json file and so I removed it (the default behaviour of using the top level folder name is fine). The project.json reference made the second task simple, though, by documenting the "packOptions" section. To cut to the chase, I changed the Common's project.json to the following: { "version": "1.0.0-*", "packOptions": { "iconUrl": "", "projectUrl": "", "tags": [ "C#", "full text index", "search" ] }, "authors": [ "ProductiveRage" ], "copyright": "Copyright 2016 ProductiveRage", "dependencies": {}, "frameworks": { "netstandard1.6": { "imports": "dnxcore50", "dependencies": { "NETStandard.Library": "1.6.0" } }, "net452": {} } } I updated the other class library projects similarly and updated the dependency names on all of the projects in the solution so that everything was consistent and compiling. Finally, I created an additional project named simply "FullTextIndexer" whose only role in life is to generate a NuGet package that includes all of the others (it doesn't have any code of its own). Its project.json file looks like this: { "version": "1.0.0-*", "packOptions": { "summary": "A project to try implementing a full text index service from scratch in C# and .NET Core", "iconUrl": "", "projectUrl": "", "tags": [ "C#", "full text index", "search" ] }, "authors": [ "ProductiveRage" ], "copyright": "Copyright 2016 ProductiveRage", "dependencies": { "FullTextIndexer.Common": "1.0.0-*", "FullTextIndexer.Core": "1.0.0-*", "FullTextIndexer.Helpers": "1.0.0-*", "FullTextIndexer.Querier": "1.0.0-*" }, "frameworks": { "netstandard1.6": { "imports": "dnxcore50", "dependencies": { "NETStandard.Library": "1.6.0" } }, "net452": {} } } One final note about NuGet packages before I move on - the default behaviour of "dotnet pack" is to build the project in Debug configuration. If you want to build in release mode then you can use the following: dotnet pack --configuration Release Serialisation in .NET Core seems to a bone of contention - the Microsoft Team are sticking to their guns in terms of not supporting it and, instead, promoting other solutions: Binary Serialization Justification. as it allows to serialize object graphs including private state. Replacement. Choose the serialization technology that fits your goals for formatting and footprint. Popular choices include data contract serialization, XML serialization, JSON.NET, and protobuf-net. (from "Porting to .NET Core") Meanwhile, people have voiced their disagreement in GitHub issues such as "Question: Serialization support going forward from .Net Core 1.0". The problem with recommendations such as Json.NET) and protobuf-net is that they require changes to code that previously worked with BinaryFormatter - there is no simple switchover. Another consideration is that I wanted to see if it was possible to migrate my code over to supporting .NET Core while still making it compatible with any existing installation, such that it could still deserialise any disk-cached data that had been persisted in the past (the chances of this being a realistic use case are exceedingly slim - I doubt that anyone but me has used the Full Text Indexer - I just wanted to see if it seemed feasible). For the sake of this post, I'm going to cheat a little. While I have come up with a way to serialise index data that works with netstandard, it would probably best be covered another day (and it isn't compatible with historical data, unfortunately). A good-enough-for-now approach was to use "conditional directives", which are basically a way to say "if you're building in this configuration then include this code (and if you're not, then don't)". This allowed me the restore all of the [Serializable] attributes that I removed earlier - but only if building for .NET 4.5.2 (and not for .NET Core). For example: #if NET452 [Serializable] #endif public class Whatever { The [Serializable] attribute will be included in the binaries for .NET 4.5.2 and not for .NET Core. You need to be careful with precisely what conditions you specify, though. When I first tried this, I used the line "if #net452" (where the string "net452" is consistent with the framework target string in the project.json files) but the attribute wasn't being included in the .NET 4.5.2 builds. There was no error reported, it just wasn't getting included. I had to look up the supported values to see if I'd made a silly mistake and it was the casing - it needs to be "NET452" and not "net452". I used the same approach to restore the ISerializable implementations that some classes had and I used it to conditionally compile the entirety of the IndexDataSerialiser (which I got back out of my source control history, having deleted it earlier). This meant that if the "FullTextIndexer" package is added to a project building for the "classic" .NET framework then all of the serialisation options that were previously available still will be - any disk-cached data may be read back using the IndexDataSerialiser. It wouldn't be possible if the package is added to a .NET Core project but this compromise felt much better than nothing. The migration is almost complete at this point. There's one minor thing I've noticed while experimenting with .NET Core projects; if a new solution is created whose first project is a .NET Core class library or console application, the project files aren't put into the root of the solution - instead, they are in a "src" folder. Also, there is a "global.json" file in the solution root that enables.. magic special things. If I'm being honest, I haven't quite wrapped my head around all of the potential benefits of global.json (though there is an explanation of one of the benefits here; The power of the global.json). What I'm getting around to saying is that I want my now-.NET-Core solution to look like a "native" .NET Core solution, so I tweaked the folder structure and the .sln file to be consistent with a solution that had been .NET Core from the start. I'm a fan of consistency and I think it makes sense to have my .NET Core solution follow the same arrangement as everyone else's .NET Core solutions. Having gone through this whole process, I think that there's an important question to answer: Will I now switch to defaulting to supporting .NET Core for all new projects? .. and the answer is, today, if I'm being honest.. no. There are just a few too many rough edges and question marks. The biggest one is the change that's going to happen away from "project.json" and to a variation of the ".csproj" format. I'm sure that there will be some sort of simple migration tool but I'd rather know for sure what the implications are going to be around this change before I commit too much to .NET Core. I'm also a bit annoyed that the Productivity Power Tools remove-and-sort-usings-on-save doesn't work with .NET Core projects (there's an issue on GitHub about this but it hasn't bee responded to since August 2016, so I'm not sure if it will get fixed). Finally, I'm sure I read an issue around analysers being included in NuGet packages for .NET Core - that they weren't getting loaded correctly. I can't find the issue now so I've done some tests to try to confirm or deny the rumour.. I've got a very simple project that includes an analyser and whose package targets both .NET 4.5 and netstandard1.6 and the analyser does seem to install correctly and be included in the build process (see ProductiveRage.SealedClassVerification) but I still have a few concerns; in .csproj files, analyser are all explicitly referenced (and may be enabled or disabled in the Solution Explorer by going into References/Analyzers) but I can't see how they're referenced in .NET Core projects (and they don't appear in the Solution Explorer). Another (minor) thing is that, while the analyser does get executed and any warnings displayed in the Error List in Visual Studio, there are no squigglies underlining the offending code. I don't know why that is and it makes me worry that the integration is perhaps a bit flakey. I'm a big fan of analysers and so I want to be sure that they are fully supported*. Maybe this will get tidied up when the new project format comes about.. whenever that will be. * (Update: Having since added a code fix to the "SealedClassVerification" analyser, I've realised that the no-squigglies-in-editor problem is worse than I first thought - it means that the lightbulb for the code fix does not appear in the editor and so the code fix can not be used. I also found the GitHub issue that I mentioned: "Analyzers fail on .NET Core projects", it says that improvements are on the way "in .NET Core 1.1" which should be released sometime this year.. maybe then will improve things) I think that things are close (and I like that Microsoft is making this all available early on and accepting feedback on it) but I don't think that it's quite ready enough for me to switch to it full time yet. Finally, should you be curious at all about the Full Text Indexer project that I've been talking about, the source code is available here: bitbucket.org/DanRoberts/full-text-indexer and there are a range of old posts that I wrote about how it works (see "The Full Text Indexer Post Round-up"). Posted at 13:38 Dan is a big geek who likes making stuff with computers! He can be quite outspoken so clearly needs a blog :) In the last few minutes he seems to have taken to referring to himself in the third person. He's quite enjoying it.
http://www.productiverage.com/migrating-my-full-text-indexer-to-net-core-supporting-multitarget-nuget-packages
CC-MAIN-2017-13
refinedweb
5,333
64.91
Hi, I'm having some very hard time with my header files when working with linked lists. I'm not too sure what I need to define in them except for the prototype functions. The way to create strucs within the header files confuse me. This first file is the header for the file that works with the linked lists. // // linked.h // // // Created by on 12-03-06. // Copyright (c) 2012 __MyCompanyName__. All rights reserved. // #ifndef _linked_h #define _linked_h #define BOOLEAN int struct NODE { char username[50]; char password[50]; char usertype[50]; struct NODE *next; }*head = 0; void createList(); BOOLEAN add(struct NODE *p); BOOLEAN valid (char username[50], char password[50]); a #endif As you can see, I choose to declare the struct in the header file. Is there anything wrong with this? #include <stdio.h> #include <stdlib.h> #include <string.h> #include "linked.h" #include "cipher.h" #define MAXLENGTH 50 #define ENCRYPT 50 void createList(){ static const char filename[] = "password.csv"; FILE *file; file = fopen(filename, "r+"); if (file == NULL){ printf("NO PASSWORD FILE"); exit(0); } size_t i=0, size = 1000; char line[MAXLENGTH]; while(i<size && fgest(line,sizeof(line),file)){ NODE *input; input = (NODE*) malloc(sizeof(NODE)); sscanf(line, "%[^','],%[^','],%s", input->username, input->password, input->usertype); decrypt(input->username,ENCRYPT); decrypt(input->password,ENCRYPT); decrypt(input->usertype,ENCRYPT); if ( add(input) == 0){ printf("Error: was unable to initialize password validation!!"); exit(0); } ++i; } BOOLEAN add(struct NODE *p){ struct NODE *temp1; temp1 = (NODE*) malloc(sizeof(NODE)); temp1 = head; while(temp1->next !=NULL){ temp1 = temp1->next; } struct NODE *temp2; temp2 = (NODE*)malloc(sizeof(node)); temp2->username =p->username; //fill in later temp2->password =p->password; //fill in later temp2->usertype =p->usertype; //fill in later temp2->next = NULL; temp1->next = temp2; } BOOLEAN valid(char username[50], char password[50]){ struct NODE *temp1; temp1 = head; while (temp1 != 0){ if((strcmp(temp1->username, username) == 0) && (strcmp(temp1->password, password) == 0)){ return 1; } temp1 = temp1->next; } return 0; } I also constantly get errors on how NODE isn't declared. Also when i try to say temp2->username =p->username; temp2->password =p->password; temp2->usertype =p->usertype; I've been stuck on this for hours. :( Any help is appreciated. Thanks First: temp2->username =p->username; temp2->password =p->password; temp2->usertype =p->usertype; will never work. You cannot just copy char arrays like that; what you are doing is attempting to assign memory addresses. To copy C-strings, you will need to use the strcpy function. Second: what is a doing here? Is it a typo? BOOLEAN valid (char username[50], char password[50]); a Third: This statement while(i<size && fgest(line,sizeof(line),file)){ NODE *input; will not work, because you need to define NODE as struct NODE. I would highly recommend using a typedef (if you have learned about that) which will help reduce the amount of typing (that way you will only have to write NODE and not struct NODE each time it is required). Thanks! I have a slight problem after fixing those problems. It's starting to complain how my *head is declared in the header file. I include this header into 2 more .c files. The solution that I tried for this was to take the *head = NULL out and replace it with extern struct NODE *head; and initialized it within the .c file. Now its giving me a error I've never seen before. Undefined symbols for architecture x86_64: "_head", referenced from: _add in ccAyvsLn.o _valid in ccAyvsLn.o ld: symbol(s) not found for architecture x86_64 collect2: ld returned 1 exit status What is the reason for declaring *head in your header file? I'm asking, because it causes complications: since you defined it in the header file, and protected it using #ifdef/#endif, it will not be visible in your other source (C file). Now a question out of curiosity: what kind of compiler / IDE are you using? The reason I declared it there was because I wanted it to be available to all the functions within the .c file I posted. The functions here are going to be used in another file that contains the main function. I'm not sure exactly which compiler I'm using but its on Mac OSX and came with XCode. Are you compiling two different programs, or are both sources linked to each other? Take a look at this: Thanks for the link! It compiles perfectly until I try the gcc login.o ...etc. step It gives me the same error as before. I'm not too sure what you mean by compiling two different programs, or both sources linked to each other. In the linked.h file, I have changed the head into extern struct NODE *head; and have initialized it the way the link told me to but inside another file, login.c, in the main function. head = NULL; The reason I want to initialize head as a global is because I want to avoid passing it to the linked list functions every time they need to run.
http://www.daniweb.com/software-development/c/threads/416091/help-with-header-files-linked-lists
CC-MAIN-2014-10
refinedweb
849
64.91
. Test: Tests can be run automatically on a MeeGo image by adding package eat-selftest to the image. eat-selftest package pulls testrunner-lite and test-definition as dependencies to the image. The package itself installs an init script that finds all installed test packages and calls testrunner-lite to execute them. Test packages need to. After installing you need to configure the host machine to use usb networking with the device under test. With Fedora you can do this by right-clicking the network managet icon and selecting edit connections. Edit the "Auto usb0" -connections IPv4 Settings and set Method to Manual and define the following Address to it: Address: 192.168.2.14 Netmask: 255.255.255.0 Gateway: 0.0.0.0 or if you don't use network manager to manage your networking do the following # ifconfig usb0 192.168.2.14 up After this your host machine should be ready to act as a test control pc. Have fun testing with it. If you're using Ubuntu 10.04 as your host you can get the host side configuration packages and tools from Tools:Testing debian repository by doing the following. 1. Add the following to your sources.list deb / 2. Add the repository key wget sudo apt-key add Release.key rm Release.key 3. Update your PC's package information sudo apt-get update 4.. To set up Fedora 13 as the host,_server. If the device under test uses IP 192.168.2.15 and the test control PC uses IP 192.168.2.14 then you can make the set up by using the eat-device and eat-host packages. The default IP addresses need to be those to support OTS. If you need to use some other IP addresses you have to do some manual work to get host based test execution working. To enable host based test execution do the following: Add the test control PC's public ssh-key to the devices authorized_keys by cat ~/.ssh/id_rsa.pub | ssh root@[devices's IP address] "mkdir -p ~/.ssh ; cat >> ~/.ssh/authorized_keys" you may also need to increase the device's sshd startups by echo "MaxStartUps 1024" >> /etc/ssh/sshd_config after this you should be able to run testrunner-lite from the host machine. If you need to get the device's syslog sent to the test control PC you have to edit the device's /etc/rsyslog.conf or /etc/syslog.conf depending if the image is using rsyslog or sysklogd. Newer MeeGo images are using sysklogd. For sysklogd do the following cp /etc/syslog.conf /etc/syslog.conf.back echo "*.*;auth,authpriv.none @[control PC's IP]"\ >> /etc/syslog.conf cp /etc/sysconfig/sysklogd /etc/sysconfig/sysklogd.back sed -e 's/SYSLOGD_OPTIONS.*/SYSLOGD_OPTIONS=\"-m 0 -r\"/' \ /etc/sysconfig/sysklogd.back \ > /etc/sysconfig/sysklogd After that setup the test control PC to receive the syslogs cp /etc/sysconfig/sysklogd /etc/sysconfig/sysklogd.back cp /etc/syslog.conf /etc/syslog.conf.back echo "SYSLOGD_OPTIONS=\"-m 0 -r\"" >> /etc/sysconfig/sysklogd echo ":fromhost-ip, isequal, "[device's IP]" /var/log/eat.log"\ >> /etc/syslog.conf MeeGo images can be installed to the N900 automatically by using autoinstaller-n900.sh script. The script is included in File:Autoinstaller-n900.tar.gz. The package also contains a custom initrd image and kernel that are used in the installation process. The package however does not include the Nokia flasher utility required by the process. The utility is available for N900 owners from Nokia Autoinstaller for N900 images is available here: File:Autoinstaller-n900.tar.gz. The automated install of netbooks is still work in progress. Current idea is that the test control PC acts as a PXE boot server for the SUT (=netbook). We boot into initrd image over ethernet and from there on do pretty much the same as with the N900. The following has been tested with Samsung NC 10 laptop. Autoinstaller for netbooks: File:Autoinstaller-netbook.tar.gz Pre-requirements Usage Starting netcat for initial connection check cd /path/to/directory/containing/script/ sudo ./autoinstaller-netbook.sh /path/to/raw-image No solution for other targets currently available sudo kill -USR1 `pidof dd` ifconfig bring the interface up by ifconfig usb0 192.168.2.14 up The installation tries to bring the interface up automatically if it can't get a response from the device by doing the above command Note that USB hubs, etc. can slow down the installation significantly. Some users have a use case where they want to start a server process in a test step to be used by later test steps. If you fork a new process it is possible that the ssh session running freezes during logout until given timeout or when the process terminates. This blocks further test steps. Reason for this is that each test step is currently in its own ssh session and the forked process holds on to its standard pipes and working directory. This is discussed in more technical detail in bug 5718. One solution to this issue is daemonizing the new process. Then it redirects it's input, output and error pipes to "/dev/null" and changes the working directory to root. Here's a short code snippet how to daemonize a C program. #include <stdlib.h> int main( int argc, char *argv[] ) { daemon(0, 0); /* actual code */ return 0; } The server process must be terminated by a later test step so that it won't stay running in the device after test case.
http://wiki.meego.com/index.php?title=Quality/QA-tools/Autotest-guide&diff=prev&oldid=20842
CC-MAIN-2013-20
refinedweb
925
58.69
. Designed by Dave Winer only a week after he formalized RSS 2.0, the BlogChannel module allows the inclusion of data used by weblogging applications and, specifically, the newer generation of aggregating and filtering systems. It consists of three optional elements, all of which are subelements of channel, and has the following namespace declaration: xmlns:blogChannel="" The elements are: Contains a literal string that is the URL of an OPML file containing the blogroll for the site. A blogroll is the list of blogs that the blog author habitually reads. Contains a literal string that is the URL of a site that the blog author recommends the reader visits. Contains a literal string that is the URL of the OPML file containing the URLs of the RSS feeds to which the blog author is subscribed in her desktop reader. Example 8-2 shows the beginning of an RSS 2.0 feed using the BlogChannel module. <?xml version="1.0"?> <rss version="2.0" xmlns: <channel> <title>RSS2.0Example</title> <link></link> <description>This is an example RSS 2.0 feed</description> <blogChannel:blogRoll></blogChannel:blogRoll> <blogChannel:blink></blogChannel:blink> <blogChannel:mySubscriptions></blogChannel:mySubscriptions> ... We will discuss OPML, blogrolls, and subscription lists in Chapter 10. In the meantime, let's look at producing RSS 2.0 feeds.
http://etutorials.org/Misc/rss/Chapter+8.+RSS+2.0+Simply+Extensible/8.2+Module+Support+Within+RSS+2.0/
CC-MAIN-2017-04
refinedweb
216
56.66
> Hi guys, I'm getting this error: error CS0246: The type or namespace name `CharacterMovement' could not be found. Are you missing a using directive or an assembly reference? The script is trying to reference the script "CharacterMovement" on the same gameObject level as itself, but it's coming up with this namespace error. I'm using the following line of code to try to access "CharacterMovement": GetComponent<CharacterMovement>().enabled = true; I have noted that with some of the SampleAsset Scripts, I can say, using UnitySampleAssets.Utility; or something similar, but this is a custom script. Do you have any idea how to fix this? ViolinRobot Yes it is. And is CharacterMovement written in the same language (c#/UnityScript) as this script? Is both the name of the script and the name of the class CharacterMovement? Yup, that is true. Thanks so much guys. Another related issue is that I can't enable a script in one of the children of my gameObject, even with GetComponentInChildren<>() Do you have any idea how I could reference it? Answer by fares90 · Aug 15, 2015 at 10:24 AM see this: i think this is the solution for you :) Answer by Tom01098 · Aug 15, 2015 at 01:05 PM Are you sure the script is called CharacterMovement? It could be something as simple as a misspelt word. No, I checked for that. Answer by moriggi · Aug 15, 2015 at 03:05 PM Hi, first of all your script CharacterMovement must be a public class. Make a reference in the other script that is trying to access it: CharacterMovement charMov; and in the Start method write charMov = GetComponent(); structure, finding scripts vis GetComponent and related errors 1 Answer error CS0246: The type or namespace name `rect' could not be found. 1 Answer error CS0246: The type or namespace name `List' could not be found. 1 Answer using UnityEditor outside of Editor folder when compiling build? 4 Answers CS0246: The type or namespace name `targetPosition' could not be found. Are you missing a using directive or an assembly reference? 1 Answer
https://answers.unity.com/questions/1029223/error-cs0246-typenamespace-error.html
CC-MAIN-2019-22
refinedweb
346
64.51
Related Topics: Java IoT Java IoT: Article Java Gotchas: Instance Variables Hiding If methods with the same signatures or member variables with the same name exist in ancestor and descendant classes, the Java keyword super allows access members of the ancestor. But what if you do not use the keyword super in the descendant class? In case of methods, this is called method overriding and only the code of the descendant's method will execute. But when both classes have a member variable with the same name, it may cause a confusion and create hard to find bugs. Recently in one of the Java online forums, a user with id cityart posted a question about a "strange behavior" of his program, and I decided to do some research on this subject. Let's take a look at the Java program that declares a variable greeting in both super and subclasses (class A and class B). The subclass B also overrides the Object's method toString(). Please note, that the variable obj has a type of the superclass (A), but it points at the instance of the subclass (B), which is perfectly legal. class A { public String greeting ="Hello"; } class B extends A { public String greeting="Good Bye"; public String toString(){ return greeting; } } public class VariableOverridingTest { public static void main(String[] args) { A obj = new B(); obj.greeting="How are you"; System.out.println(obj.greeting); System.out.println(obj.toString()); } } If you compile and run this program, it'll print the following: How are youGood Bye How come? Aren't we printing a member variable greeting of the same instance of the class B? The answer is no. If you run this program in IDE through a debugger, you'll see that there are two separate variables greeting. For example, Eclipse IDE shows these variables as greeting(A) and greeting(B). The first print statement deals with the member variable of the class A since obj has a type A, and the second print uses a method of the instance B that uses its own variable greeting. Now, change the declaration of the variable obj to B obj = new B(); Run the program, and it'll print "How are you" twice. But since you wanted the variable obj to have the type of the superclass A, you need to find a different solution. In the code below, we prohibit direct access to the variable greeting by making it private and introducing public setter and getter methods in both super and subclasses. Please note that in the following example, we override the setter and getter in the class B. This gives us a better control of which variable greeting to use. class A { private String greeting ="Hello"; public void setGreeting(String greet){greeting = greet;} public String getGreeting(){return greeting;} } class B extends A { private String greeting="Good Bye"; public String toString(){ return greeting; } public void setGreeting(String greet){greeting = greet;} public String getGreeting(){return greeting;} } public class VariableOverridingTest2 { public static void main(String[] args) { A obj = new B(); obj.setGreeting("How are you"); System.out.println(obj.getGreeting()); System.out.println(obj.toString()); } }(). In Sun's Java tutorial, I found only a brief mentioning of member variables inheritance over here: Basically, you can hide a variable but override a method of a superclass. Java Language Specification describes hiding of instance variables over here: second_edition/html/classes.doc.html#229119 One more term to be aware of is shadowing. Here's another Sun's article that discusses hiding and shadowing: What do you think of the following quote from this article: "First an important point needs to be made: just because the Java programming language allows you to do something, it doesn't always mean that it's a desirable thing to do." Well, if a feature is not desirable, why keep it in the language? Most likely, creators of the language decided to keep a separate copy of the superclass' instance variable to give developers a freedom to define their own subclasses without worrying of overriding by accident some internal members of the superclasses. But in my opinion it should be a responsibility of the superclasses to protect their members. I'd love to see some practical examples, which would show when this feature of the Java language could be useful. Published September 13, 2004 Reads 76,7 guess the author clearly states what the problem is and how the encapsulation helps in avoiding the potential errors. I think the following line from the text of the article would be enough to red flag this for any rational developer. Or, this may make it more noticeable :-) LOOK AT THE LINE BELOW. POTENTIAL BUG!!! ()." And in most scenarios of real software development, we never have the luxury of time to track the "patient zero" who coded this sort of bug and treat him. :-) As for the existence of debates such as this, THEY SHOULD (I mean MUST) exist for the sake of posterity. The correct solutions to these problems to be aware of such problems. If one thinks that articles such as this are "encouraging" the such "malpractices" (without reading the complete articles) then they are wrong and should be advised to use good commonsense in adopting coding techniques. The problem of hiding variables will never arise if you apply good practices of Oriented Object programming and NEVER make use of anything else than "private" as a modifier for class members. And yse accessors when needed. Now as for method overriding, well...that is *exactly* what OO design is for. Making sure your objects are correctly polymorphic, behave properly and offer the proper services and proper extensibility through their exposed methods. This whole debate should not exist in the first place, and to me, the article comes from software malpractice, and is not depicted as such, which is not so good IMHO. Correct OO design is the issue that should be addressed here, not the effects of it. Good pitfall. This is quite common and very had to find if you have more than two classes in the inheritance tree and the instance members are "protected" which is very common thing that we see. If we go by Bertrand Myer's Object Oriented software construction, which enforces strict encapsulation by saying no to protected variables, we will *not* run in to these sort of problems. But then again, we may be tempted to write "Train Wreck" code -- obj.getThis().getThat().getSomething() There is another pitfall that has the disguise of this "overridding." Guess what is printed by the following code? public class A { public static void getInstance(){ System.out.println("class A"); } } public class B extends A { public static void getInstance(){ System.out.println("class B"); } } public class Tester { public staic void main(Strnig[] args) { A obj = new B(); obj.getInstance(); } } This would print "class A" because, the static methods go by the name "class methods" in Java. In C++, similar code would print "class B". Java language says, static/class methods are not inherited and cannot be overriden. But allowing to define classes with the same name, though legal when looked from the namespace perspective, would result in these pitfalls. The irony with these "false" static methods AKA class methods is that, it makes your brain hurt when you look at code like the one below... Keeps you guessing why it does not throw NullPointerException. A obj = null; obj.getInstance(); These are the many good reasons why one should enforce, with the help of IDEs like Eclipse, the practice of qualifying instance members with "this", "super" and static members with the "type name". Both David Hibbs and J.R. Titko are right that there is no problem if the design is right - in particular if you have proper encapsulation (i.e. keep the data member private and use getter and setter functions). But I still can't imagine a single legitimate usage. The closest I can come is that it means that you don't need to worry what (private) data members base classes might have, and can reuse their names for your own purposes. I think this is what David means when he says "... the capability to do this is required or else behaviours of parent classes are not encapsulated ...". I'm not convinced - a compile error at this point might save a lot of grief later. Mr. Tyrrell commented that "...it is the type of the object, not the type of the pointer, that determines the behaviour. To me, that makes it a fault rather than a feature!" In some regards, yes. The key word here though is "behaviour". Behaviour as in, what happens when a method is invoked? Direct access of fields (IMHO) is not a "behaviour" of an object. Allowing access to member fields like this is poor style and design in any OO language. Proper encapsulation helps this problem. This is not to say that encapsulation is a cure-all; indeed, generating getters and setters for the field in the child class (in effect, overriding them and shadowing them at the same time!) can create a whole new set of hard-to-find bugs. The bottom line: proper design, planning, and review will avoid the pitfall, while the capability to do this is required or else behaviors of parent classes are not encapsulated -- and subject to breakage by children. I have been using this example when teaching for a couple years to show what to avoid in coding Java. It is a situation set up at compile time by the compiler making the substitution of the literal for the variable. I agree its a problem in the language, but can easily be avoided by always using getters and setters to revtrieve instance level variables. It seems to me from the examples that there is no way of utilising this feature without breaking the Liskov substitution principle that it is the type of the object, not the type of the pointer, that determines the behaviour. To me, that makes it a fault rather than a feature! The article is good but can be very briefly eneded by saying that When a method is called by a reference object, it takes into consideration the Object it is referencing and not the type of referencing object. While, when a memeber variable is accessed by a reference object, the type of the referencing object is taken into consideration and not the object it is referencing. This is another example how important it is, that every Developer has easy to use access to Software Audits in it's IDE, so that suspicious constructs like this don't survive until the check-in... i.e. the Audits provided by Borland Together in JBuilder and Eclipse-based IDE's It was interesting. It is the behavior of an object that is defined by its type (whose instance it is). I think the attributes of an object are defined by the handle used, since in Java methods are only bound at runtime. Simple typecasting with the superclass/subclass can have obtained the desired result, as long as the typecast is valid. Perhaps a better way to demonstrate is to print obj.greeting before setting, e.g., A obj = new B(); System.out.println(obj.greeting); obj.greeting="How are you"; System.out.println(obj.greeting); System.out.println(obj.to.
http://news.sys-con.com/node/46344
CC-MAIN-2018-17
refinedweb
1,896
61.16
> Very nice summary! Just suggesting some changes here, in some cases > because the docs you started from are out of synch with the current > incarnation of the language: Thanks! (Say, what's the deal with the overlap in the ref manual and the lib ref manual? It seems like the lib ref manual provides more up-to-date and exact details on the things it does cover, though i may have that impression because i went through the lib ref after i'd digested the regular ref...) > > operators: 'not', 'and', 'or'; binary ones shortstop I was referring to the fact that successive conditions are evaluated depending on the values of preceeding ones - is that short-circuiting? I think i've heard it referred to both ways, but that may just be me hallucinating again...-) > While the "Global ns" column is accurate, it boils down to just this: > Today, every piece of code executes in some module, and the global ns for > the code is simply the module's namespace; the "global ns of cb/caller" > clauses always trace a chain back up to "Module", "Script" or > "Interactive cmd". The only exception is when the global ns is > explicitly overridden via argument to exec/eval/execfile. That makes sense - do you think it ought to be explicitly articulated in the cheat sheet? Do you have suggestions how i might fit it in? > Nit: (-sys.maxint-1)/-1 doesn't raise OverflowError, but should. I may want to put this in my 'useful hints and idioms' section. > > . } Dictionaries (mutable): {key1: 'val1', key2: 'val2', ...} > > keys can be of any immutable types > > Subtlety: Keys actually need to be hashable; immutable is neither > necessary nor sufficient. E.g., a tuple with a list component can't be > used as a key (so "top-level" immutability isn't sufficient), while > instances of any user-defined class that defines __hash__ and __cmp__ > methods can be used as keys (so immutable isn't necessary). This is a crucial point. I think i got much more clear about it from reading the lib ref manual, and have tried to express it clearly (but concisely). How does this look? the ref man)
https://legacy.python.org/search/hypermail/python-1994q2/0971.html
CC-MAIN-2021-43
refinedweb
362
61.67
Question Acme Sales has two store locations. Store A has fixed costs of $125,000 per month and a variable cost ratio of 60%. Store B has fixed costs of $200,000 per month and a variable cost ratio of 30%. At what sales volume would the two stores have equal profits? Answer to relevant QuestionsA leading broker has advertised money multiplier certificates that will triple your money in nine years; that is, if you buy one for $333.33 today, it will pay you $1,000 at the end of nine years. What rate of return will ..."Early in 2005, Herndon Industries was formed with authorization to issue 200,000 shares of $10 .par value common stock and 30,000 shares of $100 par value cumulative preferred stock. During 2005, all the preferred stock was ...The Ramirez Company's last dividend was $1.75. Its dividend growth rate is expected to be constant at 25% for 2 years, after which dividends are expected to grow at a rate of 6% forever. Its required return (rs) is 12%. What ...FDE Manufacturing Company has a normal plant capacity of 37,500 units per month. Because of an extra-large quantity of inventory on hand, it expects to produce only 30,000 units in May. Monthly fixed costs and expenses are ...How do you calculate % of sales in elasticity?Elasticity of demand = .4512housing income 75000expected to fall 2.5% sales will fall by _____? Post your question
http://www.solutioninn.com/acme-sales-has-two-store-locations-store-a-has-fixed
CC-MAIN-2017-13
refinedweb
245
77.23
Trying to load a .mp4 Video fails Hey guys, I've installed ffmpeg and opencv on my linux server, like that: git clone <ffmpeg_git_repositiory> cd FFmpeg ./configure --enable-shared make sudo make install git clone <opencv_git_repositiory> cmake <path to the OpenCV source directory> mkdir release cd release cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local .. make sudo make install I wrote a little Python script: import cv2 cap = cv2.VideoCapture('video.mp4') total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) But I'm not able to load that video and total_frames are always 0. I did some research why this doesn't work and found tons of answers tried so many ways to install it, but it didn't worked for me. What did I do wrong? Am I missing some detail? I'm really annoyed and from installing it
https://answers.opencv.org/question/128583/trying-to-load-a-mp4-video-fails/
CC-MAIN-2020-24
refinedweb
138
67.55
On Mon, 2004-04-19 at 23:08, Trond Myklebust wrote:> On Mon, 2004-04-19 at 16:49, Fabian Frederick wrote:> > Trond,> > > > Here is a patch to have nfs to sysctl although Maxreadahead is tunable> > under nfs init only.Do you have an idea and do you think it's acceptable> > to make it applicable directly i.e. would it be readahead reduction> > tolerant ?> > > > btw, is this inode.c an issue for V4 ?> > > > The lockd module has already registered the name /proc/sys/fs/nfs, so> your scheme will end up corrupting the sysctl list. Sorting out the> /proc namespace issue is the main reason why this hasn't been done> before.> Personally, I'd prefer renaming the lockd module into> /proc/sys/fs/lockd, but it will have to be up to Andrew to decide> whether he wants to allow that during a stable kernel cycle.AFAICS one of the first sysctl concepts is to be redundancy tolerant.If you take fs for instance, it's being declared here and there.It means "if it doens't exist yet, we create it with mode (555) else go cycle to the sub-branch (fs-nfs....) btw, we have a register(fs) sono problem for me.Lockd has to do with nfs so it should be preserved atthe same place IMHO.> > Also note that putting initializers into a ".h" file is horrible style.> ".h" files should be for forward declarations only.I used both ntfs, coda fs scheme having in mind independant sysctlregistering in forthcoming releases.But hey ! "I'm an absolute beginner" :) Maybe you and Andrew can tell mewhat to do with this ugly patch ;) e.g. no sysctl.h -> include stuff ininode.c ...Regards,FabianF-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
https://lkml.org/lkml/2004/4/19/253
CC-MAIN-2016-44
refinedweb
315
66.64
About a year ago we checked the Linux core. It was one of the most discussed articles at that time. We also got quite a number of requests to check FreeBSD, so finally we decided to take the time to do it. About the project FreeBSD is a contemporary operating system for servers, desktops and embedded computer platforms. Its code has gone through more than thirty years of continuous development, improvement and optimization. It has proven itself as a system for building intranet, Internet networks, and servers. It provides reliable network services and efficient memory management. Despite the fact that FreeBSD is regularly checked by Coverity, we had a great time checking this project because a lot of suspicious fragments were found. In this article we’ll provide about 40 fragments, but the developers of this project may have a look at a full list, which contains around 1000 analyzer warnings of high severity. To my humble opinion, a lot of those warnings issued by the analyzer are real bugs, but it’s hard for me to determine how critical they are, as I am not the developer of the system. I suppose it could be a good ground for a discussion with the authors of the project. The source code was taken from GitHub branch – ‘master’. The repository contains ~23000 files and two dozens of assembly configurations for different platforms, but I checked the kernel only, which I compiled in this way: # make buildkernel KERNCONF=MYKERNEL Methodology We used static code analyzer PVS-Studio, version 6.01. For convenience, I set a PC-BSD and wrote a small utility in C++, which keeps the working environment of the compilers’ runs when building the kernel. The acquired information was used to get the preprocessed files and their analysis, done by PVS-Studio. This method allowed me to quickly check a project without having to study an unfamiliar build system to integrate the analyzer. On top of it, analysis of preprocessed files allows you to do a more in-depth analysis of the code and find more sophisticated and interesting, errors, in macros for instance. This article will provide several examples of such a kind. Linux kernel was analyzed in the same way; this mode is also available for Windows users in the Standalone utility, which is a part of PVS-Studio distribution kit. Usually PVS-Studio seamlessly integrates into the projects. There is a number of ways to integrate the analyzer, described in the documentation. Monitoring utilities have a big advantage of trying the analyzer if the project has an unusual building system. Surprising luck The first possible error was found before I ran the analyzer on the project, and even before I built the kernel; the build was interrupted by a linking error. Having addressed the file, specified in the error, I saw the following: Pay attention to the highlighted fragment: a tab character is used for the formatting of the indentations; two statements are moved under the condition. But the last statement does not actually refer to a condition and will be always executed. Perhaps, curly braces were forgotten here. Once we got a comment that we just copy the analyzer warnings, but it is not so. Before the analysis of the project we have to make sure that it gets compiled correctly; when the report is done, the warnings must be sorted/examined and commented. The same work is done by our customer support team, when they answer the incoming mails. There are also cases when the customers send examples of false positives (in their opinion) which turn out to be real bugs. Capy-poste and typos PVS-Studio analyzer is a powerful tool for static code analysis that finds bugs of various levels of severity. The first diagnostics were very simple and were created to detect most common bugs, related to typos and copy-paste programming. After the analysis review, I sort them according to the error code. So in this article we’ll start with this type of diagnostic rules. V501 There are identical sub-expressions ‘(uintptr_t) b->handler’ to the left and to the right of the ‘>’ operator. ip_fw_sockopt.c 2893); } Here is a vivid example of a bad practice – giving the variables short and uninformative names. Now because of the typo in the letter ‘b’, the a part of the condition will never be return 1. Thus, the function returns a zero status not always correctly. V501 There are identical sub-expressions to the left and to the right of the ‘!=’ operator: m->m_pkthdr.len != m->m_pkthdr.len key.c 7208 int key_parse(struct mbuf *m, struct socket *so) { .... if ((m->m_flags & M_PKTHDR) == 0 || m->m_pkthdr.len != m->m_pkthdr.len) { // <= .... goto senderror; } .... } One of the fields of the structure is compared with itself; therefore, the result of the logical operation will always be False. V501 There are identical sub-expressions to the left and to the right of the ‘|’ operator: PIM_NOBUSRESET | PIM_NOBUSRESET sbp_targ.c 1327 typedef enum { PIM_EXTLUNS = 0x100, PIM_SCANHILO = 0x80, PIM_NOREMOVE = 0x40, PIM_NOINITIATOR = 0x20, PIM_NOBUSRESET = 0x10, // <= PIM_NO_6_BYTE = 0x08, PIM_SEQSCAN = 0x04, PIM_UNMAPPED = 0x02, PIM_NOSCAN = 0x01 } pi_miscflag; static void sbp_targ_action1(struct cam_sim *sim, union ccb *ccb) { .... struct ccb_pathinq *cpi = &ccb->cpi; cpi->version_num = 1; /* XXX??? */ cpi->hba_inquiry = PI_TAG_ABLE; cpi->target_sprt = PIT_PROCESSOR | PIT_DISCONNECT | PIT_TERM_IO; cpi->transport = XPORT_SPI; cpi->hba_misc = PIM_NOBUSRESET | PIM_NOBUSRESET; // <= .... } In this example we see that the same variable “PIM_NOBUSRESET” is used in the bitwise operation, which doesn’t affect the result in any way. Most likely a constant with a different value was meant to be used here, but the variable was left unchanged. V523 The ‘then’ statement is equivalent to the ‘else’ statement. saint.c 2023 GLOBAL void siSMPRespRcvd(....) { .... if (agNULL == frameHandle) { /* indirect mode */ /* call back with success */ (*(ossaSMPCompletedCB_t)(pRequest->completionCB))(agRoot, pRequest->pIORequestContext, OSSA_IO_SUCCESS, payloadSize, frameHandle); } else { /* direct mode */ /* call back with success */ (*(ossaSMPCompletedCB_t)(pRequest->completionCB))(agRoot, pRequest->pIORequestContext, OSSA_IO_SUCCESS, payloadSize, frameHandle); } .... } Two condition branches are commented differently: /* indirect mode */ and /* direct mode */, but they are implemented similarly, which is very suspicious. V523 The ‘then’ statement is equivalent to the ‘else’ statement. smsat.c 2848 osGLOBAL void smsatInquiryPage89(....) { .... if (oneDeviceData->satDeviceType == SATA_ATA_DEVICE) { */ } else { */ } .... } This example is even more suspicious than the previous one. A big code fragment was copied, but later no changes were made. V547 Expression is always true. Probably the ‘&&’ operator should be used here. qla_hw.c 799 static int qla_tx_tso(qla_host_t *ha, struct mbuf *mp, ....) { .... if ((*tcp_opt != 0x01) || (*(tcp_opt + 1) != 0x01) || (*(tcp_opt + 2) != 0x08) || (*(tcp_opt + 2) != 10)) { // <= return -1; } .... } Here the analyzer detected that the condition “(*(tcp_opt + 2) != 0x08) || (*(tcp_opt + 2) != 10)” is always true and it is really so, if you build a truth table. But most likely the ‘&&’ is not needed here, it is just a typo in the address offset. Perhaps the function code should be like this: static int qla_tx_tso(qla_host_t *ha, struct mbuf *mp, ....) { .... if ((*tcp_opt != 0x01) || (*(tcp_opt + 1) != 0x01) || (*(tcp_opt + 2) != 0x08) || (*(tcp_opt + 3) != 10)) { return -1; } .... } V571 Recurring check. This condition was already verified in line 1946. sahw.c 1949 GLOBAL bit32 siHDAMode_V(....) { .... if( saRoot->memoryAllocated.agMemory[i].totalLength > biggest) { if(biggest < saRoot->memoryAllocated.agMemory[i].totalLength) { save = i; biggest = saRoot->memoryAllocated.agMemory[i].totalLength; } } .... } This code is really strange, if we simplify it, we’ll see the following: if( A > B ) { if (B < A) { .... } } The same condition is checked twice. Most likely, something else was supposed to be written here. A similar fragment: - V571 Recurring check. This condition was already verified in line 1940. if_rl.c 1941 Dangerous macros V523 The ‘then’ statement is equivalent to the ‘else’ statement. agtiapi.c 829 if (osti_strncmp(buffer, "0x", 2) == 0) { maxTargets = osti_strtoul (buffer, &pLastUsedChar, 0); AGTIAPI_PRINTK( ".... maxTargets = osti_strtoul 0 \n" ); } else { maxTargets = osti_strtoul (buffer, &pLastUsedChar, 10); AGTIAPI_PRINTK( ".... maxTargets = osti_strtoul 10\n" ); } First, I skipped this analyzer warning, thinking that it is a false positive. But warnings of low severity should also be reviewed after the project check (to improve the analyzer). So I came across such a macro: #define osti_strtoul(nptr, endptr, base) \ strtoul((char *)nptr, (char **)endptr, 0) The ‘base’ parameter isn’t used at all, and the ‘0’ value is always passed to the “strtoul” function as the last parameter, although values ‘0’ and ’10’ are passed to the macro. In the preprocessed files all macros got expanded and the code became similar. This macro is used in this way several dozen times. The entire list of such fragments was sent to the developers. V733 It is possible that macro expansion resulted in incorrect evaluation order. Check expression: chan – 1 * 20. isp.c 2301 static void isp_fibre_init_2400(ispsoftc_t *isp) .... if (ISP_CAP_VP0(isp)) off += ICB2400_VPINFO_PORT_OFF(chan); else off += ICB2400_VPINFO_PORT_OFF(chan - 1); // <= .... } At first glance, there is nothing strange in this code fragment. We see that sometimes the ‘chan’ value is used, sometimes less by one ‘chan – 1’, but let’s have look at the macro definition: )) // <= When passing the binary expression to the macro, the calculation logic changes dramatically. The expression “(chan – 1) * 20” turns into “chan – 1 *20”, i.e. into “chan – 20”, and the incorrectly calculated size gets used further in the program. About the priorities of operations In this section, I will discuss how important it is to know the priorities of operations, use extra parentheses, if you are not sure and sometimes test yourself by building truth tables of logical expressions. V502 Perhaps the ‘?:’ operator works in a different way than it was expected. The ‘?:’ operator has a lower priority than the ‘|’ operator. ata-serverworks.c 166 ata_serverworks_chipinit(device_t dev) { .... pci_write_config(dev, 0x5a, (pci_read_config(dev, 0x5a, 1) & ~0x40) | (ctlr->chip->cfg1 == SWKS_100) ? 0x03 : 0x02, 1); } .... } The priority of ‘?:’ operator is lower than of the bitwise OR ‘|’. As a result, in the bit operations, in addition to the numeric constants, the expression result “(ctlr-> chip > cfg1 = SWKS_100)” gets used, which suddenly changes the calculation/computation logic. Perhaps this error wasn’t noticed so far because the result seemed so close to the truth. V502 Perhaps the ‘?:’ operator works in a different way than it was expected. The ‘?:’ operator has a lower priority than the ‘|’ operator. in6.c 1318 void in6_purgeaddr(struct ifaddr *ifa) { .... error = rtinit(&(ia->ia_ifa), RTM_DELETE, ia->ia_flags | (ia->ia_dstaddr.sin6_family == AF_INET6) ? RTF_HOST : 0); .... } A different file also had a fragment with a similar error with a ternary operator. V547 Expression ‘cdb[0] != 0x28 || cdb[0] != 0x2A’ is always true. Probably the ‘&&’ operator should be used here. mfi_tbolt.c 1110 int mfi_tbolt_send_frame(struct mfi_softc *sc, struct mfi_command *cm) { .... if (cdb[0] != 0x28 || cdb[0] != 0x2A) { // <=' if ((req_desc = mfi_tbolt_build_mpt_cmd(sc, cm)) == NULL) { device_printf(sc->mfi_dev, "Mapping from MFI " "to MPT Failed \n"); return 1; } } else device_printf(sc->mfi_dev, "DJA NA XXX SYSPDIO\n"); .... } The first conditional expression is always true, that’s why the ‘else’ branch never gets control. I will provide the truth table in case of controversial logical expressions in this and the following examples. An example for this case: ); .... } The problem with this fragment is that the conditional expression doesn’t depend on the result “error == 0”. Perhaps, something is wrong here. Three more cases: - V590 Consider inspecting the ‘error == 0 || error != 35’ expression. The expression is excessive or contains a misprint. if_ipw.c 1855 - V590 Consider inspecting the ‘error == 0 || error != 27’ expression. The expression is excessive or contains a misprint. if_vmx.c 2747 - V547 Expression is always true. Probably the ‘&&’ operator should be used here. igmp.c 1939 V590 Consider inspecting this expression. The expression is excessive or contains a misprint. sig_verify.c 94 enum uni_ieact { UNI_IEACT_CLEAR = 0x00, /* clear call */ .... } void uni_mandate_epref(struct uni *uni, struct uni_ie_epref *epref) { .... maxact = -1; FOREACH_ERR(e, uni) { if (e->ie == UNI_IE_EPREF) continue; if (e->act == UNI_IEACT_CLEAR) maxact = UNI_IEACT_CLEAR; else if (e->act == UNI_IEACT_MSG_REPORT) { if (maxact == -1 && maxact != UNI_IEACT_CLEAR) // <= maxact = UNI_IEACT_MSG_REPORT; } else if (e->act == UNI_IEACT_MSG_IGNORE) { if (maxact == -1) maxact = UNI_IEACT_MSG_IGNORE; } } .... } The result of the whole conditional expression doesn’r depend on the calculation of the value “maxact != UNI_IEACT_CLEAR”. Here’s how it looks in the table: In this section I give three ways of how to make an error in seemingly simple formulas. Just think of it… V593 Consider reviewing the expression of the ‘A = B != C’ kind. The expression is calculated as following: ‘A = (B != C)’. aacraid.c 2854 #define EINVAL 22 /* Invalid argument */ #define EFAULT 14 /* Bad address */ #define EPERM 1 /* Operation not permitted */ static int aac_ioctl_send_raw_srb(struct aac_softc *sc, caddr_t arg) { .... int error, transfer_data = 0; .... if ((error = copyin((void *)&user_srb->data_len, &fibsize, sizeof (u_int32_t)) != 0)) goto out; if (fibsize > (sc->aac_max_fib_size-sizeof(....))) { error = EINVAL; goto out; } if ((error = copyin((void *)user_srb, srbcmd, fibsize) != 0)) goto out; .... out: .... return(error); } In this function the error code gets corrupted, when the assignment is executed in the ‘if’ operator. I.e. in the expression “error = copyin(…) != 0” the “copyin(…) != 0” is evaluated first, and then the result (0 or 1) is written to the variable ‘error’. The documentation for the function ‘copyin’ states that in case of an error, it returns EFAULT (value 14), and after such a check, the result of a logical operation ‘1’ gets stored in the error code. It is actually EPERM, a completely different error status. Unfortunately, there is quite a number of such fragments. - V593 Consider reviewing the expression of the ‘A = B != C’ kind. The expression is calculated as following: ‘A = (B != C)’. aacraid.c 2861 - V593 Consider reviewing the expression of the ‘A = B != C’ kind. The expression is calculated as following: ‘A = (B != C)’. if_age.c 591 - V593 Consider reviewing the expression of the ‘A = B != C’ kind. The expression is calculated as following: ‘A = (B != C)’. if_alc.c 1535 - V593 Consider reviewing the expression of the ‘A = B != C’ kind. The expression is calculated as following: ‘A = (B != C)’. if_ale.c 606 - V593 Consider reviewing the expression of the ‘A = B != C’ kind. The expression is calculated as following: ‘A = (B != C)’. if_jme.c 807 - V593 Consider reviewing the expression of the ‘A = B != C’ kind. The expression is calculated as following: ‘A = (B != C)’. if_msk.c 1626 - V593 Consider reviewing the expression of the ‘A = B != C’ kind. The expression is calculated as following: ‘A = (B != C)’. if_stge.c 511 - V593 Consider reviewing the expression of the ‘A = B != C’ kind. The expression is calculated as following: ‘A = (B != C)’. hunt_filter.c 973 - V593 Consider reviewing the expression of the ‘A = B != C’ kind. The expression is calculated as following: ‘A = (B != C)’. if_smsc.c 1365 - V593 Consider reviewing the expression of the ‘A = B != C’ kind. The expression is calculated as following: ‘A = (B != C)’. if_vte.c 431 - V593 Consider reviewing the expression of the ‘A = B != C’ kind. The expression is calculated as following: ‘A = (B != C)’. zfs_vfsops.c 498 Strings V541 It is dangerous to print the string ‘buffer’ into itself. ata-highpoint.c 102 static int ata_highpoint_probe(device_t dev) { .... char buffer[64]; .... strcpy(buffer, "HighPoint "); strcat(buffer, idx->text); if (idx->cfg1 == HPT_374) { if (pci_get_function(dev) == 0) strcat(buffer, " (channel 0+1)"); if (pci_get_function(dev) == 1) strcat(buffer, " (channel 2+3)"); } sprintf(buffer, "%s %s controller", buffer, ata_mode2str(idx->max_dma)); .... } Some string is formed in the buffer. Then the programmer wants to get a new string, saving the previous string value, and add two more words. It seems really simple. To explain why unexpected result will be recieved here, I will quote a simple and clear example from the documentation for this diagnostic: char s[100] = "test"; sprintf(s, "N = %d, S = %s", 123, s); As a result of the work we would want to get the following string: N = 123, S = test But in practice it will be like this: N = 123, S = N = 123, S = In other situations, the same code can lead not only to the incorrect text, but also to the program abortion. The code can be fixed if you use a new buffer to store the result . The correct version: char s1[100] = "test"; char s2[100]; sprintf(s2, "N = %d, S = %s", 123, s1); V512 A call of the ‘strcpy’ function will lead to overflow of the buffer ‘p->vendor’. aacraid_cam.c 571 #define SID_VENDOR_SIZE 8 char vendor[SID_VENDOR_SIZE]; #define SID_PRODUCT_SIZE 16 char product[SID_PRODUCT_SIZE]; #define SID_REVISION_SIZE 4 char revision[SID_REVISION_SIZE]; static void aac_container_special_command(struct cam_sim *sim, union ccb *ccb, u_int8_t *cmdp) { .... /* OEM Vendor defines */ strcpy(p->vendor,"Adaptec "); // <= strcpy(p->product,"Array "); // <= strcpy(p->revision,"V1.0"); // <= .... } All three strings here are filled incorrectly. There is no space for null-terminal symbol in the arrays, which may cause serious problems with such strings in the future. One space can be removed in “p->vendor” and “p->product”. Then there will be room for null terminal, that the strcpy() function adds to the end of the string. But there is no free space at all for the end of line characters for the “p->revision”; that’s why the value SID_REVISION_SIZE should be increased at least by one. Of course, it is rather hard for me to judge about the code. It’s possible, that the terminal null is not needed at all and everything is designed for a specific buffer size. Then the strcpy() function is chosen incorrectly. In this case the code should be written like this: memcpy(p->vendor, "Adaptec ", SID_VENDOR_SIZE); memcpy(p->product, "Array ", SID_PRODUCT_SIZE); memcpy(p->revision, "V1.0", SID_REVISION_SIZE); V583 The ‘?:’ operator, regardless of its conditional expression, always returns one and the same value: td->td_name. subr_turnstile.c 1029 static void print_thread(struct thread *td, const char *prefix) { db_printf("%s%p (tid %d, pid %d, ....", prefix, td, td->td_tid, td->td_proc->p_pid, td->td_name[0] != '\0' ? td->td_name : td->td_name); } Suspicious fragment. Despite the “td->td_name[0] != ‘\0′” check, this string is still printed. Here are such fragments: - V583 The ‘?:’ operator, regardless of its conditional expression, always returns one and the same value: td->td_name. subr_turnstile.c 1112 - V583 The ‘?:’ operator, regardless of its conditional expression, always returns one and the same value: td->td_name. subr_turnstile.c 1196 Operations with memory In this section I will tell about incorrect usage of the following functions: void bzero(void *b, size_t len); int copyout(const void *kaddr, void *uaddr, size_t len); V579 The bzero function receives the pointer and its size as arguments. It is possibly a mistake. Inspect the second argument. osapi.c 316 /* Autosense storage */ struct scsi_sense_data sense_data; void ostiInitiatorIOCompleted(....) { .... bzero(&csio->sense_data, sizeof(&csio->sense_data)); .... } To zero the structure, we should pass the structure pointer and the size of the memory to be zeroed in bytes to the bzero() function; but here the pointer size is passed to the function, not the structure size. The correct code should be like this: bzero(&csio->sense_data, sizeof(csio->sense_data)); V579 The bzero function receives the pointer and its size as arguments. It is possibly a mistake. Inspect the second argument. acpi_package.c 83 int acpi_PkgStr(...., void *dst, ....) { .... bzero(dst, sizeof(dst)); .... } In this example we see a similar situation: the size of the pointer, not the object gets passed to the ‘bzero’ function. Correct version: bzero(dst, sizeof(*dst)); V579 The copyout function receives the pointer and its size as arguments. It is possibly a mistake. Inspect the third argument. if_nxge.c 1498 int xge_ioctl_stats(xge_lldev_t *lldev, struct ifreq *ifreqp) { .... *data = (*data == XGE_SET_BUFFER_MODE_1) ? 'Y':'N'; if(copyout(data, ifreqp->ifr_data, sizeof(data)) == 0) // <= retValue = 0; break; .... } In this example the memory is copied from ‘data’ to ‘ifreqp->ifr_data’, at the same time the size of the memory to be copied is sizeof(data), i.e. 4 or 8 bytes depending on the bitness of the architecture. Pointers V557 Array overrun is possible. The ‘2’ index is pointing beyond array bound. if_spppsubr.c 4348 #define AUTHKEYLEN 16 struct sauth { u_short proto; /* authentication protocol to use */ u_short flags; #define AUTHFLAG_NOCALLOUT 1 /* callouts */ #define AUTHFLAG_NORECHALLENGE 2 /* do not re-challenge CHAP */ u_char name[AUTHNAMELEN]; /* system identification name */ u_char secret[AUTHKEYLEN]; /* secret password */ u_char challenge[AUTHKEYLEN]; /* random challenge */ }; static void sppp_chap_scr(struct sppp *sp) { u_long *ch, seed; u_char clen; /* Compute random challenge. */ ch = (u_long *)sp->myauth.challenge; read_random(&seed, sizeof seed); ch[0] = seed ^ random(); ch[1] = seed ^ random(); ch[2] = seed ^ random(); // <= ch[3] = seed ^ random(); // <= clen = AUTHKEYLEN; .... } The size of ‘u_char’ type is 1 byte in the 32 and 64 – bit applications; but the size of the ‘u_long’ type is 4 bytes in the 32-bit applications and 8 byte in the 64-bit application. So in the 32-bit application during the execution of the operation “u_long* ch = (u_long *)sp->myauth.challenge”, the array ‘ch’ will consist of 4 elements, 4 bytes each. And in the 64-bit application the array ‘ch’ will consist of 2 elements, that have 8 bytes each. Therefore, if we compile the 64-bit kernel, then when accessing ch[2] and ch[3] we’ll have array index out of bounds. V503 This is a nonsensical comparison: pointer >= 0. geom_vinum_plex.c 173 gv_plex_offset(...., int *sdno, int growing) { .... *sdno = stripeno % sdcount; .... KASSERT(sdno >= 0, ("gv_plex_offset: sdno < 0")); .... } We managed to detect a very interesting fragment with the help of diagnostic 503. There is no point in checking that the pointer is greater than or equal to 0. Most likely, the pointer “sdno” was not dereferenced in order to compare the stored value. There are two more comparisons with null. - V503 This is a nonsensical comparison: pointer >= 0. geom_vinum_raid5.c 602 - V503 This is a nonsensical comparison: pointer >= 0. geom_vinum_raid5.c 610 V522 Dereferencing of the null pointer ‘sc’ might take place. mrsas.c 4027 void mrsas_aen_handler(struct mrsas_softc *sc) { .... if (!sc) { device_printf(sc->mrsas_dev, "invalid instance!\n"); return; } if (sc->evt_detail_mem) { .... } If the pointer “sc” is a null one, then the function will exit. However, it’s not quite clear, why the programmer tried to dereference the “sc->mrsas_dev” pointer. A list of strange fragments: - V522 Dereferencing of the null pointer ‘sc’ might take place. mrsas.c 1279 - V522 Dereferencing of the null pointer ‘sc’ might take place. tws_cam.c 1066 - V522 Dereferencing of the null pointer ‘sc’ might take place. blkfront.c 677 - V522 Dereferencing of the null pointer ‘dev_priv’ might take place. radeon_cs.c 153 - V522 Dereferencing of the null pointer ‘ha’ might take place. ql_isr.c 728 V713 The pointer m was utilized in the logical expression before it was verified against nullptr in the same logical expression. ip_fastfwd.c 245 struct mbuf * ip_tryforward(struct mbuf *m) { .... if (pfil_run_hooks( &V_inet_pfil_hook, &m, m->m_pkthdr.rcvif, PFIL_IN, NULL) || m == NULL) goto drop; .... } The check “m == NULL” is placed incorrectly. First we need to check the pointer, and only then call the pfil_run_hooks() function. Loops V621 Consider inspecting the ‘for’ operator. It’s possible that the loop will be executed incorrectly or won’t be executed at all. if_ae.c 1663 #define AE_IDLE_TIMEOUT 100 static void ae_stop_rxmac(ae_softc_t *sc) { int i; .... /* * Wait for IDLE state. */ for (i = 0; i < AE_IDLE_TIMEOUT; i--) { // <= val = AE_READ_4(sc, AE_IDLE_REG); if ((val & (AE_IDLE_RXMAC | AE_IDLE_DMAWRITE)) == 0) break; DELAY(100); } .... } In the source code of FreeBSD we found such an interesting and incorrect loop. For some reason, there is a decrement of a loop counter instead of an increment. It turns out that the loop can execute more times than the value of AE_IDLE_TIMEOUT, until the ‘break’ operator executes. If the loop is not stopped, then we’ll have the overflow of a signed variable ‘i’. Signed variable overflow is nothing but a undefined behavior. And it’s not some abstract theoretical danger, it is very real. Recently, my colleague wrote an article on this topic: Undefined behavior is closer than you think One more interesting moment. We detected the same error in the code of Haiku operating system (see the section “Warnings #17, #18”) No idea, who borrowed the “if_ae.c” file, but this error appears after Copy-Paste. V535 The variable ‘i’ is being used for this loop and for the outer loop. Check lines: 182, 183. mfi_tbolt.c 183 mfi_tbolt_adp_reset(struct mfi_softc *sc) { .... for (i=0; i < 10; i++) { for (i = 0; i < 10000; i++); } .... } Probably, this small piece of code is used for creating the delay, but in sum total only 10000 operations are executed, not 10*10000; why then 2 loops are needed here? I specifically cited this example because it is the most vivid to show that the usage of the same variable in the external and nested loops leads to unexpected results. V535 The variable ‘i’ is being used for this loop and for the outer loop. Check lines: 197, 208. linux_vdso.c 208 void __elfN(linux_vdso_reloc)(struct sysentvec *sv, long vdso_adjust) { .... for(i = 0; i < ehdr->e_shnum; i++) { // <= if (!(shdr[i].sh_flags & SHF_ALLOC)) continue; shdr[i].sh_addr += vdso_adjust; if (shdr[i].sh_type != SHT_SYMTAB && shdr[i].sh_type != SHT_DYNSYM) continue; sym = (Elf_Sym *)((caddr_t)ehdr + shdr[i].sh_offset); symcnt = shdr[i].sh_size / sizeof(*sym); for(i = 0; i < symcnt; i++, sym++) { // <= if (sym->st_shndx == SHN_UNDEF || sym->st_shndx == SHN_ABS) continue; sym->st_value += vdso_adjust; } } .... } This is probably a too complicated example to understand if the code executes correctly. But looking at the previous example we can draw a conclusion that a wrong number of iterations is executed here as well. V547 Expression ‘j >= 0’ is always true. Unsigned type value is always >= 0. safe.c 1596; } dptr = mtod(dstm, caddr_t) + j; dlen = dstm->m_len - j; .... } There are two dangerous loops in this function. As the ‘j’ variable (loop counters) has an unsigned type, then the “j >= 0” check is always true and these loops are “infinity”. Another problem is that some value is constantly subtracted from this counter; therefore if there is an attempt to access beyond the zero value, then the ‘j’ variable will get the maximum value of its type. V711 It is dangerous to create a local variable within a loop with a same name as a variable controlling this loop. powernow.c 73 static int pn_decode_pst(device_t dev) { .... struct pst_header *pst; // <= .... p = ((uint8_t *) psb) + sizeof(struct psb_header); pst = (struct pst_header*) p; maxpst = 200; do { struct pst_header *pst = (struct pst_header*) p; // <= .... p += sizeof(struct pst_header) + (2 * pst->numpstates); } while (cpuid_is_k7(pst->cpuid) && maxpst--); // <= .... } In the body of the loop we detected variable declaration was that matches the variable used for the loop control. I suspect that the value of the external pointer with the ‘pst’ name doesn’t change because a local pointer with the same ‘pst’ is created. Perhaps the same “pst->cupid” value is always checked in the loop condition do….while(). The developers should review this fragment and give the variables different names. Miscelleneous V569 Truncation of constant value -96. The value range of unsigned char type: [0, 255]. if_rsu.c 1516 struct ieee80211_rx_stats { .... uint8_t nf; /* global NF */ uint8_t rssi; /* global RSSI */ .... }; static void rsu_event_survey(struct rsu_softc *sc, uint8_t *buf, int len) { .... rxs.rssi = le32toh(bss->rssi) / 2; rxs.nf = -96; .... } It’s very strange that an unsigned variable “rxs.nf” is assigned with a negative value ‘-96’ As a result, the variable will have the value ‘160’. V729 Function body contains the ‘done’ label that is not used by any ‘goto’ statements. zfs_acl.c 2023 int zfs_setacl(znode_t *zp, vsecattr_t *vsecp, ....) { .... top: mutex_enter(&zp->z_acl_lock); mutex_enter(&zp->z_lock); .... if (error == ERESTART) { dmu_tx_wait(tx); dmu_tx_abort(tx); goto top; } .... done: // <= mutex_exit(&zp->z_lock); mutex_exit(&zp->z_acl_lock); return (error); } In this code there are functions containing labels, but at the same time, the call of the ‘goto’ statement is missing for these labels. For example, we see that the ‘top’ label is used in this fragment, but ‘done’ isn’t used anywhere. Perhaps the programmer forgot to add a jump to the label, or it was removed over the time, while the label was left in the code. V646 Consider inspecting the application’s logic. It’s possible that ‘else’ keyword is missing. mac_process.c 352; .... } Finally, I want to tell you about suspicious formatting, which I already came across in the very beginning of the project check. Here the code is aligned in such a way that the absence of the keyword ‘else’ looks strange. V705 It is possible that ‘else’ block was forgotten or commented out, thus altering the program’s operation logics. scsi_da.c 3231 static void dadone(struct cam_periph *periph, union ccb *done_ccb) { .... /* * If we tried READ CAPACITY(16) and failed, * fallback to READ CAPACITY(10). */ if ((state == DA_CCB_PROBE_RC16) && .... } else // <= /* * Attach to anything that claims to be a * direct access or optical disk device, * as long as it doesn't return a "Logical * unit not supported" (0x25) error. */ if ((have_sense) && (asc != 0x25) // <= .... } else { .... } .... } This code has no error now, but it will definitely show up one day. By leaving such a big commentary before ‘else’ you may accidentally forget that this keyword was somewhere in the code and make some erroneous edits. Conclusion The FreeBSD project was tested by special version of PVS-Studio, which showed a great result! The whole material is impossible to fit in one article. Nevertheless, the development team of FreeBSD got the full list of the analyzer warnings that should be examined. I suggest everyone to try PVS-Studio on your projects. The analyzer works in Windows environment. We don’t have a public version for using the analyzer in the development of the projects for Linux/FreeBSD. We could also discuss possible variants of customization PVS-Studio for your projects and specific tasks. By Svyatoslav Razmyslov One thought on “FreeBSD kernel analysis” Pingback: Logical Expressions in C/C++. Mistakes Made by Professionals | How Not To Code
https://hownot2code.com/2016/05/23/pvs-studio-delved-into-the-freebsd-kernel/
CC-MAIN-2021-39
refinedweb
4,899
58.18
am wrapping an internal set of libraries written in C++ to provide access to an API in Java and I haven't run into any issues until now. I have a struct that is wrapped into a Java proxy class with its associated getters and setters. The generated code actually does work for some time. However, after enough calls to the getters in Java a segmentation fault occurs and the JVM to crashes. I am calling the getters in a for-each loop. For example: for( NativeProxyClass t : ContainerOfNativeProxyClasses ) { if( t.getSomeField() == 1 ) /// Segfault occurs in the native code corresponding with this getter only sometimes. { /// Do something with t. } } I know this may be vague, but I cannot post the exact code. Like I said this is a strange issue because it does not always occur after a fixed amount of time, sometimes it takes a few seconds, sometimes it happens instantly. I don't believe the object is being deleted because I've added print statements to the finalizer\delete function of the proxy class. Your help is appreciated. I am a c++ programmer and support a multithreaded messaging api. My swig project uses SWIG_JAVA_ATTACH_CURRENT_THREAD_AS_DAEMON preprocessor definition. I noticed that the message rate coming into the java application is very slow compared to what we get when directly using the c++ interface. I also noticed that each update Was coming into java on a new thread id while my c++ api only has 4 threads. I also presumed that the message rate is so slow because with each update a new thread has to be spawned. So I removed the SWIG_JAVA_ATTACH_CURRENT_THREAD_AS_DAEMON definition and now the updates come on 4 threads only and the message rate has quadrupled. However, the application Now hangs on shutdown. How do I go about fixing this? I am ok with forcing the application to shutdown if I can detect when the Java app has exited but I to figure out how. I don't want the user to have To write any extra java code, I would like to figure this out on the c++ side. Any help would be greatly appreciated. -+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+- -+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ To be short, I have two modules, foo (foo.i) and bar (bar.i): foo.i: <code> %module "foo"; %inline %{ class A { public: template<typename T> void xyz(T) { } }; %} //%extend A { %template(xyz) xyz<int>; } </code> bar.i: <code> %module "bar"; %import <foo.i> %extend A { %template(xyz) xyz<int>; } </code> foo.i defines class A, and I would instantiate its template method 'xyz' in module bar. The above example does not let me achieve this goal (compiling for python). I test it with the following python script: <code> import foo import bar print "is xyz in foo.A()?: %r" % ('xyz' in dir(foo.A())) print "is xyz in bar.foo.A()?: %r" % ('xyz' in dir(bar.foo.A())) </code> and the scrip yields: is xyz in foo.A()?: False is xyz in bar.foo.A()?: False If I uncomment the %extend{ .. } line in foo, the class gets extended (method gets instantiated): is xyz in foo.A()?: True is xyz in bar.foo.A()?: True Is there any way to achieve my goal, that is to instantiate class template methods in third modules? I attach a project for convenience. Best Regards! -- Paweł Tomulik, tel. (22) 234 7374 Instytut Techniki Lotniczej i Mechaniki Stosowanej Politechnika Warszawska I have some libraries that use this form: #define MY_ID ((Uint32)-1) SWIG seems to be omitting this from the generation. (Tested both Lua and JavaScript.) I'm worried SWIG may be missing other ones. I noticed that #defines to functions are ignored. Is there a documented list of which ones cause problems? Thanks, Eric -- Beginning iPhone Games Development I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details
https://sourceforge.net/p/swig/mailman/swig-user/?viewmonth=201405&viewday=28
CC-MAIN-2017-26
refinedweb
665
67.45
yamldirs - create directories and files (incl. contents) from yaml spec. yamldirs Create directories and files (including content) from yaml spec. This module was created to rapidly create, and clean up, directory trees for testing purposes. Installation: pip install yamldirs Usage The YAML record syntax is: fieldname: content fieldname2: | multi line content nested: record: content yamldirs interprets a (possibly nested) yaml record structure and creates on-disk file structures that mirrors the yaml structure. The most common usage scenario for testing will typically look like this: from yamldirs import create_files def test_relative_imports(): files = """ foodir: - __init__.py - a.py: | from . import b - b.py: | from . import c - c.py """ with create_files(files) as workdir: # workdir is now created inside the os's temp folder, containing # 4 files, of which two are empty and two contain import # statements. Current directory is workdir. # `workdir` is automatically removed after the with statement. If you don’t want the workdir to disappear (typically the case if a test fails and you want to inspect the directory tree) you’ll need to change the with-statement to: with create_files(files, cleanup=False) as workdir: ... yamldirs can of course be used outside of testing scenarios too: from yamldirs import Filemaker Filemaker('path/to/parent/directory', """ foo.txt: | hello bar.txt: | world """) Syntax The yaml syntax to create a single file: foo.txt Files with contents uses the YAML record (associative array) syntax with the field name (left of colon+space) is the file name, and the value is the file contents. Eg. a single file containing the text hello world: foo.txt: hello world for more text it is better to use a continuation line (| to keep line breaks and > to convert single newlines to spaces): foo.txt: | Lorem ipsum dolor sit amet, vis no altera doctus sanctus, oratio euismod suscipiantur ne vix, no duo inimicus adversarium. Et amet errem vis. Aeterno accusamus sed ei, id eos inermis epicurei. Quo enim sonet iudico ea, usu et possit euismod. To create empty files you can do: foo.txt: "" bar.txt: "" but as a convenience you can also use yaml list syntax: - foo.txt - bar.txt For even more convenience, files with content can be created using lists of records with only one field each: - foo.txt: | hello - bar.txt: | world Note This is equivalent to this json: [{"foo.txt": "hello"}, {"bar.txt": "world"}] This is especially useful when you have a mix of empty and non-empty filess: mymodule: - __init__.py - mymodule.py: | print "hello world" directory with two (empty) files (YAML record field with list value): foo: - bar - baz an empty directory must use YAML’s inline list syntax: foo: [] nested directories with files: foo: - a.txt: | contents of the file named a.txt - bar: - b.txt: | contents of the file named b.txt Note (Json) YAML is a superset of json, so you can also use json syntax if that is more convenient. Extending yamldirs To extend yamldirs to work with other storage backends, you’ll need to inherit from yamldirs.filemaker.FilemakerBase and override the following methods: class Filemaker(FilemakerBase): def goto_directory(self, dirname): os.chdir(dirname) def makedir(self, dirname, content): cwd = os.getcwd() os.mkdir(dirname) os.chdir(dirname) self.make_list(content) os.chdir(cwd) def make_file(self, filename, content): with open(filename, 'w') as fp: fp.write(content) def make_empty_file(self, fname): open(fname, 'w').close() Download Files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/yamldirs/
CC-MAIN-2017-43
refinedweb
584
67.04
Rafael Z. Frantz wrote: Hi folks, another question: I have an object that uses a PriorityBlockingQueue. This object is shared among several threads that invoke add( object ) and pool() on this shared object. here is the code: The PriorityBlockingQueue is used like this: public class MyClass { PriorityBlockingQueue queue = new PriorityBlockingQueue<Message>(); public synchronized void add(Message msg) { queue.add(msg); } public synchronized Message pool() { return queue.pool(); } } Will I have problems if one thread calls MyClass.pool() at the same time another thread calls MyClass.add() or the structure of PriorityBlockingQueue takes care of it? Thanks a lot! Steve Luke wrote: What does queue.pool() do? I don't see it in the PriorityBlockingQueue's API. Rafael Z. Frantz wrote:Hi Steve, I was a mistyped, I mean poll() method "Retrieves and removes the head of this queue, or returns null if this queue is empty." Regards, BlockingQueue implementations are thread-safe. All queuing methods achieve their effects atomically using internal locks or other forms of concurrency control. However, the bulk Collection operations addAll, containsAll, retainAll and removeAll are not necessarily performed atomically... Will I have problems if one thread calls MyClass.pool() at the same time another thread calls MyClass.add() more from paul wheaton's glorious empire of web junk: cast iron skillet diatomaceous earth rocket mass heater sepp holzer raised garden beds raising chickens lawn care CFL flea control missoula heat permaculture
http://www.coderanch.com/t/487349/threads/java/PriorityBlockingQueue-concurrent-access
crawl-003
refinedweb
235
58.38
ccTalk - Part 2 - Coin acceptor handling Published on 19 August 2013 Coin acceptor A coin acceptor is a device that can recognize various types of coins based on its weight, shape, size and more (all depends on the device and the manufacturer). Usually, all the different coins it can recognize are separated in what are called validation channels, which means one coin type is assigned an ID. Coin acceptors usually can recognize up to 16 different coin values, and each of them is assigned one of those validation channels. These devices also have at least two sorter paths, which define the path the coin will take based on the fact that the coin has been recognized or not. The two main paths are the "good coin" path (generally inside the machine) and the "error" path, which normally gives the coin back to the customer. Data request To actually get the data from the coin acceptor, the controller must issue a request with header 229 - Read buffered credit or error codes. The coin acceptor will respond with eleven bytes, containing the following information : [Counter ] [Result1A] [Result1B] [Result2A] [Result2B] [Result3A] [Result3B] [Result4A] [Result4B] [Result5A] [Result5B] - Counter is an event counter. Each event will increase this value. - Result is stored in two bytes. Usually, the first byte contains the validation channel and the second contains the error code (OK, bad coin, mechanical error, ...) I said usually, because some acceptors invert the two result bytes, resulting in many errors when trying to figure out what happens on those devices. When in doubt, refer to the product documentation. Since the coin acceptor will return the validation channel value, the controller will need to know which coin type is associated with each ID. It is then mandatory to know this association to be able to process different coins properly. Interfacing with a coin acceptor In the previous article, we saw how to create ccTalk packets and requests. Now, let's interface a coin acceptor, handle its returned data and process it ! Initialization First of all, You have to make sure that the coin acceptor is online by sending a sample poll packet and get the response. <cctalk src=1 dst=2 length=0 header=254> : 020001feff <cctalk src=2 dst=1 length=0 header=0> : 01000200fd Coin acceptor setup The coin acceptors need to be initialized prior accepting coins. The two main things to do are to set the inhibit status, which define which validation channels are enabled on the coin acceptor. This can be done using the header 231 - Modify inhibit status with two bytes of data. Each bit in the bytes represent a validation channel (little-endian). <cctalk src=1 dst=2 length=2 header=231 data=ffff> : 020201e7ffff16 <cctalk src=2 dst=1 length=0 header=0> : 01000200fd The second thing is to enable the coin acceptor, which is often not done by default. To do this, we need to send a request with header 228 - Modify master inhibit status and 0x01 as the data byte - which means enabled. <cctalk src=1 dst=2 length=1 header=228 data=01> : 020101e40117 <cctalk src=2 dst=1 length=0 header=0> : 01000200fd Data request and processing Now that the coin acceptor is ready, it needs to be polled regularly (specs say every 200 milliseconds, but it can be more) and the returned data needs to be parsed and processed. The counter starts at zero on initialization, and increments up to 255. It will then loop from 1 to 255 and so on. Processing the data is quite simple once you know which coin is what : - Send a request with header 229 - Get the response data - - Check the counter value. - If the value is greater than the previous one, check the corresponding number of results - Depending on the error code returned, process the coin ID that has been accepted - GOTO 1 Example code The following Python code uses the ccTalk library presented in last post It shows a basic coin acceptor initialization and polling, and can be enhanced to support other functionalities : import serial import time from ccTalk import * ser=serial.Serial('/dev/ttyUSB0', 9600, timeout=1) def sendMessage(header, data='', source=1, destination=2): request = ccTalkMessage(header=header, payload=data, source=source, destination=destination) request.setPayload(header, data) ser.write(request) data = ser.read(50) messages = parseMessages(data) for message in messages: print message init = ccTalkMessage() init.setPayload(254) ok = False #Wait for device to be initiated while ok!=True: ser.write(init) data = ser.read(50) try: messages = parseMessages(data) response = messages[-1] print response except: continue if response.payload.header==0: ok = True else: print response.payload.header #Set inhibit status to allow all sendMessage(231,'\xff\xff') #Set master inhibit status to enable device sendMessage(228,'\x01') #Read buffered credit or error codes event = 0 while True: try: request = ccTalkMessage() request.setPayload(229) ser.write(request) data = ser.read(50) messages = parseMessages(data) for message in messages: if message.payload.header==0: data = message.payload.data if ord(data[0])>event: event = ord(data[0]) print "Counter : " + str(ord(data[0])) print "Credit 1 : " + str(ord(data[1])), print "Error 1 : " + str(ord(data[2])) print "Credit 2 : " + str(ord(data[3])), print "Error 2 : " + str(ord(data[4])) print "Credit 3 : " + str(ord(data[5])), print "Error 3 : " + str(ord(data[6])) print "Credit 4 : " + str(ord(data[7])), print "Error 4 : " + str(ord(data[8])) print "Credit 5 : " + str(ord(data[9])), print "Error 5 : " + str(ord(data[10])) time.sleep(0.2) except KeyboardInterrupt, e: print "Quitting..." break ser.close() Teensy implementation I also created a simple ccTalk controller that can be used on an Arduino or a Teensy device. The code available here polls a coin acceptor and will send the corresponding amount of keystrokes to the host computer. The purpose of this was to add a coin acceptor on my MAMEcab to add a more realistic feeling when playing. Here is a (crappy) demo of it in action. You can actually see the credits change when I insert a new coin : In the next post, we'll start messing with a ccTalk bus by injecting data and see what can be done once you have a physical access to the bus.
http://www.balda.ch/posts/2013/Aug/19/cctalk-part2/
CC-MAIN-2015-35
refinedweb
1,045
51.58
CSS-in-JS is something I've been unable to stop using on both personal projects and work. CSS has been introducing more and more features, making SCSS less of an obvious choice. At the same time, CSS-in-JS libraries entered the scene. They add some interesting features: Server-Side-Rendering, code splitting as well as better testing. For the purpose of this article, I will be using EmotionJS and React. EmotionJS features TypeScript support, easy setup, and testing integration. Advantages of CSS-in-JS Being JavaScript, it offers all the features modern front-end development relies on. Server-Side Rendering and code split with Emotion Server-Side Rendering (SSR) with Emotion and React is simple. If you have React SSR enabled then congratulations! You have enabled it for Emotion as well. Code splitting is pretty much the same. Emotion is JavaScript so it will code split just like the rest of the application. Sharing props between React and Emotion Building styles based on classes can become quite complicated for big codebases. In most cases, having each prop become a class can increase the verbosity of the code. Having props determine styles without classes would cut a lot of unnecessary code. const classes = `${className} ${theme || "off-white"} ${size || "medium"} ${border !== false ? "with-border" : ""} ${inverted ? "inverted" : ""} ${disabled ? "disabled" : ""}`; The example above shows how convoluted a template literal can become. This can be avoided by leveraging Emotion. import { css } from "@emotion/core"; import styled from "@emotion/styled"; const themes = { red: css` color: pink; background: red; border-color: pink; `, blue: css` color: light-blue; background: blue; border-color: light-blue; `, }; const sizes = { small: '8px', medium: '12px', large: '16px' } const disabledCss = css` color: grey; border-color: grey; `; /* Defining the button with the conditional styles from props */ const StyledButton = styled.button` ${(props) => themes[props.theme]}; font-size: ${(props) => sizes[props.size]}; border: ${(props) => props.border ? '1px solid' : 'none'}; ${(props) => props.disabled && disabledCss}; `; /* And finally how to use it */ <StyledButton theme="red" size="medium" border={true} disabled={false} > Hello </StyledButton> There are no classes to depend on. The styles are applied to the components, removing the classes layer. New styles are easily added and even more easily removed, JavaScript handles variables far better than we handle classes. These atomic styles are easy to share across the codebase. Being variables, they can be imported and exported to other files. Testing Emotion and React Style regression and changes have always been up to the developer to check manually. CSS and SCSS do not allow to test this in any meaningful way. Jest allows to snapshot React components to see diffs in HTML, making sure changes are safe. In the same way, Emotion styles can be snapshotted. Snapshotting CSS removes the need to have to check manually if the styles break when making new changes. This can be a huge time saver for both developers and testers who can ship code with more confidence. Achieving all this in Emotion is rather fast. Add this to your Jest setup file import * as emotion from 'emotion' import { createSerializer } from 'jest-emotion' expect.addSnapshotSerializer(createSerializer(emotion)) And it's done. When creating a snapshot, the EmotionJS output will be included in the snapshot. Closing thoughts CSS-in-JS has drastically changed the way to write CSS. Leveraging the most used programming language gives CSS new features to improve the way styles can be written. Performance, maintainability, and testing are the core of a good application. CSS-in-JS offers improvements over older standards to all these issues. originally posted on decodenatura Discussion (0)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/kornil/about-css-in-js-and-react-5ajg
CC-MAIN-2021-17
refinedweb
592
58.48
Mapbox is a mapping platform that makes it easy to integrate location into any mobile and online application. We are pleased to showcase the Mapbox Qt SDK as a target platform for our open source vector maps rendering engine. Our Qt SDK is a key component in Mapbox Drive, the first lane guidance map designed for car companies to control the in-car experience. The Qt SDK also brings high quality, OpenGL accelerated and customizable maps to Qt native and QtQuick. The combination of Qt and Yocto is perfect for bringing our maps to a whole series of embedded devices, ranging from professional NVIDIA and i.MX6 based boards to the popular Raspberry Pi 3. As part of our Mapbox Qt SDK, we expose Mapbox GL to Qt in two separate APIs: - QMapboxGL – implements a C++03x-conformant API that has been tested from Qt 4.7 onwards (Travis CI currently builds it using both Qt 4 and Qt 5). - QQuickMapboxGL – implements a Qt Quick (QML) item that can be added to a scene. Because QQuickFramebufferObjecthas been added in Qt version 5.2, we support this API from this version onwards. The QML item interface matches the Qt Map QML type almost entirely, making it easy to exchange from the upstream solution. QMapboxGL and QQuickMapboxGL solve different problems. The former is backwards-compatible with previous versions of Qt and is easily integrated into pure C++ environments. The latter takes advantage of Qt Quick’s modern user interface technology, and is the perfect tool for adding navigation maps on embedded platforms. So far we have been testing our code on Linux and macOS desktops, as well as on Linux based embedded devices. Mapbox is on a joint effort with the Qt Company to make the Mapbox Qt SDK also available through the official Qt Location module – we are aligning APIs to make sure Mapbox-specific features like runtime styles are available. QQuickMapboxGL API matches Qt’s Map QML Type, as you can see from the example below: import QtPositioning 5.0 import QtQuick 2.0 import QtQuick.Controls 1.0 import QQuickMapboxGL 1.0 ApplicationWindow { width: 640 height: 480 visible: true QQuickMapboxGL { anchors.fill: parent parameters: [ MapParameter { property var type: "style" property var url: "mapbox://styles/mapbox/streets-v9" }, ] center: QtPositioning.coordinate(60.170448, 24.942046) // Helsinki zoomLevel: 14 } } Mapbox Qt SDK is currently in beta stage. We’re continuously adding new features and improving documentation is one of our immediate goals. Your patches and ideas are always welcome! We also invite you to join us next month at Qt World Summit 2016 and contribute to Mapbox on GitHub. Wow, that looks really great! Are there also plans to bring that to mobile platforms (Android in particular)? Thanks, Bernhard Really cool! See the GitHub link at the bottom for a similar Android (and iOS) SDK. Looks like they’ve been around for a while. Thank you! As Mike said, we have our own native SDKs for Android and iOS – though nothing really stops you from compiling and running the Mapbox Qt SDK in both Android and iOS. Vector maps for Qt, finally! And looking really good… I am also wondering if it can be used for mobile apps. It would be also great to have access to the internal OpenGL API when using QQuickMapboxGL in order to be able to create map items (and of course example would be great!). And even better: do you have plans to integrate it with QtLocation? Hi Benoit! As previously replied, currently mobile apps are officially supported via our native SDKs. Though access to the internal OpenGL API is not yet possible, we expose an API in QMapboxGL for custom layers that allows foreign OpenGL code sharing the same context. Integration with QtLocation is on its way – we plan to provide our rendering engine as part of the Qt framework, and provide the vector tiles implementation as a plugin for the Map QML Type. Hi Bruno, is there already a time frame when it will be integrated into QtLocation? Really looking forward to use it 😉 Thanks for the answer. Sounds great, a plugin for QtLocation would be the ideal solution! A couple of things: 1) The Demo(s) you have picture are not part of the Qt Examples provided by the git repository (I am using the master branch). Can you tell me where to find them. 2) The SDK compiles with Qt 5.6, but does not compile for me with Qt 5.7 on the Mac anyway. The first error being: In file included from ../../../src/mbgl/actor/mailbox.cpp:2: ../../../src/mbgl/actor/message.hpp:30:22: error: no type named ‘index_sequence’ in namespace ‘std’ void invoke(std::index_sequence) { Must be some flag changed in the qmakespec. Any ideas from the Qt trolls. And could you please explain if it is possible to cache maps locally, if so how does one do this. Thanks for making this great SDK available. -David Integration with QtLocation and support of HighDpi and working inside a QtQuickControls2 App for Android and iOS would be cool 😉 Do you plan on renaming the types exposed to QML to better align with how Qt has named QML counterparts to C++ types? For example, QQuickMapboxGL would be MapboxGL. Thanks for the suggestion. We renamed it already, our Map item is called MapboxMap: What is the source of the data? Google maps? Bojan, the bulk of the data making up standard Mapbox maps is a highly curated extract from OpenStreetMap, updated every few minutes. Other data sources make up some components of the Streets layer as well as Terrain and Satellite layers. Custom maps can be created from any data source you have rights to use, however (). – Jeremy, Mapbox
http://blog.qt.io/blog/2016/10/04/customizable-vector-maps-with-the-mapbox-qt-sdk/
CC-MAIN-2017-13
refinedweb
958
64.41
Querying a Database with LINQ to SQL We will be creating a Windows Forms Application that allows as to you to query and view records from a particular table using LINQ to SQL classes, SQL Server 2008, and the Northwind sample database. You will learn how to use the Object Relational Designer to generate LINQ to SQL Classes and how to use them in your code. Creating LINQ to SQL Classes Create a new Windows Forms Application and name it LinqToSqlDemo. Once a project is created, we need to add a LINQ to SQL file. Click the Add New Item button in the toolbar and find LINQ to SQL Classes from the list of templates. Name it Northwind and click the Add button. Once you click the Add button, you will land on the Object Relational Designer containing nothing as off now. The Toolbox now also contains components used for creating classes and adding relationships. But since we will generate a class from an existing table in a database, we wont be using the components in the Toolbox. A DBML file (Database Markup Language) with extension .dbml will also be created and shown in the Solutions Explorer. Expanding that node will show two more files representing codes for the layout and the actual classes that will be generated. Double clicking the DBML file will also bring you to the Object Relational Desinger. We need to use the Database Explorer window in Visual C# Express. If you are using the full version of Visual Studio, you need to open the Server Explorer window instead. If it is not visible, go to Views > Other Windows > Database Explorer. Open the Database Explorer window and click the Connect to Database icon. You will be presented with the Choose Data Source Dialog which asks which type data source to use for the connection. Choose SQL Server Database File. Checking the check box allows you to always choose the specified type of data source when you want to add another one. You will be presented by another window asking for the type of data source and the location of the database files. You can also specify which SQL Server account to use but if you are using an administrator windows user account, then you can simply leave the default option. You can also click the Advanced button to edit more advanced settings about the connection. Click the Browse button and browse for the Northwind.mdf file. If you have installed it already, it will be located at C:SQL Server 2000 Sample Databases. Choose the file and click Open. Be sure that the file is not used by other programs. We then need to test the connection. Click Test Connection button and if everything is working properly, you will receive the following message. The Northwind.mdf will now appear as a child node of the Data Connections in the Database Explorer window. Expand the Northwind.mdf node to be presented with folders representing the different components of the database. Expand the Tables folder to see the different Tables of the Northwind database. We need to drag tables from the Database Explorer window to the Object Relational Designer’s surface. For this lesson, drag the Employees table to the Object Relational Designer. Visual Studio will prompt you whether to copy the Northwind.mdf database file since it will detect that it is located outside your project folder. Clicking Yes will copy the Northwind.mdf file from the origincal location to your project folder. Also note that everytime you run your program, the database file will also be copied to the output directory. You will learn later how to modify this behavior. After clicking Yes, The Object Relational Designer will now show a class diagram representing a generated class that will hold values of each row in the Employees table. The name of the class is a singularized version of the Table’s name. A property with an appropriate type is created for every column in the dragged table. You will see these properties in the Object Relational Designer. If a property conflicts with the name of the class, then it will be numbered. For example, if the class’ name is Employee and it has a column named Employeeas well, then the column’s corresponding property will be named Employee1. As soon as you drag a table to the Object Relational Designer, the DataContext class for the coresponding database will be created. Since we used the Northwind database, the generated DataContext class will be named NorthwindDataContext. Clicking a blank space in the Object Relational Designer will allow you to edit the properties of the DataContext class using the Properties Window. You can also change the properties of the created row class and properties of its members. But leaving the default names and settings for the classes is recommended. If you are curious about the generated classes and wan’t to take a look at its implementation, go to Solution Explorer and expand the node for the created DBML file. You will be presented with two files. Double click the one with .designer.cs extension. You will then see how the classes for your tables and DataContext was defined. You should always save the DBML file before using it in your application. Using LINQ to SQL Classes Once the required LINQ to SQL classes have been successfully generated, we can now use them in our application. For our GUI, we will be using a DataGridView control to display the queried records. Head back to the Windows Forms Designer. Drag a DataGridView control from the Toolbox’s Data cetegory to the form. Set the DataGridView‘s Dock property to Fill so it will take up all the space of the form. Then resize the form to a larger size so it will properly show all the records that we will query. We will be using the following code: using System; using System.Linq; using System.Windows.Forms; namespace LinqToSqlDemo { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void Form1_Load(object sender, EventArgs e) { NorthwindDataContext database = new NorthwindDataContext(); var employees = from employee in database.Employees select new { employee.EmployeeID, employee.FirstName, employee.LastName, employee.BirthDate, employee.Address, employee.Country }; dataGridView1.DataSource = employees; } } } Double click the Form’s title bar in the Designer to generate a handler for the form’s Load event. Add the codes in lines 16-29. The code at line 16 creates a new NorthwindDataContext object. This will be used to access the tables of the database and the rows each table contain. Lines 18-27 uses a LINQ query which access the NorthwindDataContext‘s Employees property containing each record for employee. The select clause of the query only selects some of the properties of every employee. Line 29 uses the DataGridView‘s DataSource property and assigns the result of the query as it’s data source. When you run the program, you will see all the records from the Employees table. You can provide controls, for example, a combo box containing different countries, and modify the query based on the selected country in the combo box. For more LINQ techniques, you can review the lessons for the basics of LINQ querying.
https://compitionpoint.com/querying-a-database-with-linq-to-sql/
CC-MAIN-2021-31
refinedweb
1,205
56.05
.. image:: .. image:: :align: center :alt: Pipeline UI screenshot Pipelines is a simple tool with a web UI to manage running tasks. It supports running tasks manually through a Web UI or automatically via webhooks. Pipelines is composed of three components: Pipelines is primarily developed to run on Linux / MacOS. Windows support is not available at the moment. Requirements: .. code-block:: bash pip install pipelines Or get the latest dev version from Github <>_ and run pip install . from within the cloned repo. Or run pip directly from git pip install git+git://github.com/Wiredcraft/[email protected]. Pipelines runs solely on files. No database is currently required. All the pipelines, the logs of each run and various temporary files are stored under the workspace folder. Workspace is a folder that needs to be specified when running pipelines. .. code-block:: bash mkdir ~/pipelines_workspace Drop your pipelines files (see format below) directly at the root of this folder. Start the API with the following: .. code-block:: bash pipelines server --workspace ~/pipelines_workspace --username admin --password admin You may want to specify a different binding IP address (default: 127.0.0.1) or different port (defaut: 8888). Refer to the pipelines --help for additional parameters. You can now access pipelines at Create a dedicated user to run pipelines .. code-block:: bash #. .. code-block:: bash # Ubuntu / Debian apt-get install supervisor # CentOS / RedHat (to confirm) yum install supervisord Copy and adapt de config file from etc/supervisor/pipelines.conf to /etc/supervisor. .. code-block:: bash # Update and reload supervisord supervisorctl reread supervisorctl update supervisorctl start pipelines Access the web interface at Additionaly you may want to use nginx as reverse proxy as well. See sample config from etc/nginx. Static authentication You can define a static admin user by specifying the following options when running pipelines: .. code-block:: bash --username ADMIN_USER --password ADMIN_PASS Github Oauth ```````````` **This is an experimental feature** You can add ``oauth`` support from Github to allow **teams** to access pipelines. You will need to set it by using environment variables for the Oauth Apps, and the ``--github-auth`` to limit teams access. To get your OAUTH Key and Secret: - Register new application in Github: - Only field on that form that is important is the `Authorization callback URL`. This should point to your pipelines, for example if you run it locally it should be ``. The last part (`/ghauth`) always stays the same. - Copy the `Client ID` and `Client Secret` from that page. To start the pipelines server with Github OAuth enabled. .. code-block:: bash GH_OAUTH_KEY=my_oauth_app_key \ GH_OAUTH_SECRET=my_super_secret \ pipelines server [--options] --github-auth=MY_ORG/MY_TEAM[,MY_ORG/ANOTHER_TEAM] **Note**: If you use Github Oauth, you will **not** be able to use static authentication.. .. code-block:: yaml --- # }}"' - name: 'My custom name step' type: bash cmd: "echo 'less compact way to define actions'" - 'ls -la /tmp' Vars ---- The ``vars`` section of the pipeline definition defines variables that will then be available in any of the actions. .. code-block:: yaml vars: my_var: something actions: - echo {{ my_var }} You can then use the variables as seen above. **Note**: - You may have to quote `"` your vars to respect the YAML format. Prompts ------- You can prompt users to manually input fields when they run the pipeline through the web-UI. To do this add a ``prompt`` section to your pipeline definition. The ``prompt`` fields will **override** the variables from the ``vars`` section. You can alternatively provide a list of acceptable values; the prompt will then appear as a select field and let you choose from the available values .. code-block:: yaml vars: # This is the default value when triggered and no prompt is filled (e.g. via webhook) my_var: default_no_prompt prompt: # This is the default value when triggered via the web UI my_var: default_with_prompt # This will appear as a select field my_var_from_select: type: select options: - value1 - value2 actions: # This will display: # "default_no_prompt" when call via webhook # "default_with_prompt" when call via UI but keeping the default # "other" when call via UI and "other" is inputted by the user - echo {{ my_var }} # Depending on the selected value, will display value1 or value2 - echo {{ my_var_from_select }}. .. code-block:: yaml). .. code-block:: yaml. .. code-block:: yaml triggers: - type: webhook If you open the web-UI you can see the webhook URL that was generated for this pipeline in the "Webhook" tab. You can for example `configure GitHub repository <>`_ to call this url after every commit. You can access the content of the webhook content in the actions in the ``webhook_content`` variable; e.g. ``echo {{ webhook_content.commit_id }}`` **Note**: - You need to send the message via POST as ``application/json`` Content-Type. - Documentation is coming to explain how to use the content of the data sent through the hook. Advanced Templates ================== Pipelines uses `Jinja2 <>`_ to do variables replacement. You can use the whole set of builtin features from the Jinja2 engine to perform advanced operations. .. code-block:: yaml prompt: stuff: type: select options: - good - bad actions: - name: Print something type: bash cmd: | {% if stuff == 'good' %} echo "Do good stuff..." {% else %} echo "Do not so good stuff..." {% endif %} - name: Use builtin filters type: bash # Will display 'goose' or 'base' cmd: echo {{ stuff | replace('d', 'se') }}. .. code-block:: bash.
https://awesomeopensource.com/project/Wiredcraft/pipelines
CC-MAIN-2021-04
refinedweb
860
63.09
Description: Door Lock System using Arduino and GSM– In this Tutorial, you will learn how to control an electronic door lock through an SMS using GSM Module and Arduino Uno or Mega. This is a wireless remote controlled electronic lock project, with the help of this project the Door Lock can be controlled from anywhere around the world with the help of a text message consisting of a command to open the Electronic Door Lock. This tutorial covers - What is an Electronic Lock “E Lock” - GSM SIM900A Module Pinout and other details - Complete Circuit Diagram explanation - Door Lock Arduino Programming and finally - Testing Without any further delay, let’s get started!!! The components and tools used in this project can be purchased from Amazon, the components Purchase links are given below: 12v Electronic Door Lock / Elock / Solenoid Lock: One-Channel Relay Module: Other Tools and Components: Super Starter kit for Beginners PCB small portable drill machines DISCLAIMER: Please Note: these are affiliate links. I may make a commission if you buy the components through these links. I would appreciate your support in this way! About the Electronic Lock: Electronic Locks come in different shapes and sizes. But the working principle of all the Electronic Locks is exactly the same. The difference can only be in the voltage and current it requires to energize the coil of the electronic lock. Inside the electronic lock is the coil which creates the magnetic field when the desired voltage is applied. The Electronic Locks usually has two wires, the voltage wire, and the GND wire. The Electronic lock that we will be using in this project has two wires. The electronic lock can be checked by applying the voltage across the two wires of the electronic lock. As this project is based on the GSM, in order to control this Electronic Lock automatically we will need a driver circuit for this Electronic Lock. This electronic Lock can be controlled automatically using a relay, an NPN transistor or a MOSFET. As in this project our only aim to open and close the electronic lock, and as we are not using any PWM or fast switching, so in this project relay is the best choice. Using relay will also provide Isolation. SIM900A GSM Module: This is the GSM sim900A. lm317t adjustable variable voltage regulator, I have a very detailed tutorial on lm317t explaining everything. As you can see clearly in the picture above this module has so many pins which and GSM based Door Lock System Circuit Diagram: The circuit diagram as you can see is very simple. The electronic lock is controlled using an SPDT “Single Pole Double Throw” type relay, which is a 12v relay. As you can see in the circuit diagram the 12v wire from the power supply is connected with the electronic lock, it really doesn’t matter which wire you connect it to. The other wire of the electronic lock is connected with the normally open contact of the relay. While the ground of the power supply is connected with the common contact of the relay. This relay is controlled using the relay driver. The relay driver simply consists of the 2n2222 NPN transistor and a 10k resistor. The selection of the transistor depends on the coil current of the electronic lock. To find the current, first you will need to find the resistance of the electronic lock coil, the voltage is already known which is 12v, then using the Ohm’s law V = IR The current can be calculated. Now depending on the current value select any NPN transistor whose collector current is greater than the calculated value. For the best performance select a transistor whose collector current is 3 times greater than the calculated value. As you can see in the circuit diagram above, this relay is controlled using the digital pin 13 of the Arduino, and make sure you connect the ground of the Arduino with the emitter of the 2n2222 NPN transistor. On the right side you can see a GSM SIM900A module. You can also use SIM900D module. GSM SIM900A module supports Serial communication and that’s why this module is provided with TX and RX pins. As you know very well in Arduino Uno we have only one Serial Port which is on pin number 0 and pin number 1. As I always say never use the Arduino’s default Serial Port for the communication with other devices. The Arduino’s default Serial Port should only be used for debugging purposes. Using the Software Serial library we can define multiple Serial Ports which I will discuss in the Programming. So Pin number 7 and Pin number 8 of the Arduino will be used as the serial port, that’s why the GSM SIM900A module is connected with the Arduino’s pin number 7 and pin number 8. Pin number 7 is the RX while pin number 8 is the TX. As discussed earlier the recommended voltage of this module is 4.7 to 5 volts. So that’s all about the Circuit Diagram now let’s have a look at the Arduino’s Programming. Door Lock System Programming: Door Lock System Program Explanation: /* Commands b = Open the door “opens the electronic lock” c = close the door */ To open or close the door you will send b or c in a text message. If you send b in a text messages the door will open and if you only send c in the text message it will close the door. I started off by including the SoftwareSerial.h header file. This library is used to define multiple serial ports. Using this I defined a serial port on pin number 7 and pin number 8 of the Arduino. #include <SoftwareSerial.h> char inchar; // Will hold the incoming character from the GSM shield SoftwareSerial SIM900(7, 8); // gsm module connected here. int load1 = 13; //electronic lock is controlled using pin number 13 of the Arduino int flag1 = 0; void setup() { Serial.begin(9600); // activate the serial communication while 9600 is the baud rate pinMode(load1 , OUTPUT); // load1 which is the electronic lock is set as output digitalWrite(load1, LOW); // we keep to close by default. SIM900.begin(9600); // original 19200 randomSeed(analogRead(0)); SIM900.print(“AT+CMGF=1\r”); // set SMS mode to text delay(1000); SIM900.print(“AT+CNMI=2,2,0,0,0\r”); // blurt out contents of new SMS upon receipt to the GSM shield’s serial out delay(1000); SIM900.println(“AT+CMGD=1,4”); // delete all SMS delay(5000); Serial.println(“Ready…”); } void loop() { if(SIM900.available() >0) // if the gsm module has received a message then { inchar=SIM900.read(); // read the message and also print the received character. Serial.println(inchar); delay(20); using the following if conditions we check whether the GSM module has received the character b or c and then accordingly control the Electronic Lock. // for LOAD1 if ( (inchar == ‘b’) && ( flag1 == 0)) { digitalWrite(load1, HIGH); flag1 = 1; } if ( (inchar == ‘c’) && (flag1 == 1)) { digitalWrite(load1, LOW); flag1 = 0; } } } 3 Comments Sir what if I use SIM800L gsm module? Thank you very much >>> You save my final project as technicien in EI >> I have a question if I want to make the programe verify the phone number to accept the sms what should I do >? I would recommend to use programmable SMS like Twilio… you can create a Studio Flow to filter incoming SMSs
https://www.electroniclinic.com/door-lock-system-using-arduino-and-gsm-wireless-electronic-lock/
CC-MAIN-2021-39
refinedweb
1,244
60.85
Recent new feature: As well as tweeting user requested images NixieBot will send out a daily movie tweet about "how my day went". This movie is composed of one frame taken every 15 minutes throughout the day so you can see how lighting changes and weather affect the images the camera produces. The word to display is either picked from the last user requested word from the previous 7.5 minutes or else, if there was no request made during that time period, the most popular word (of four or more letters in length) used in the random tweet feed is displayed. Certain very common words: boringWords=["this","that","with","from","have","what","your","like","when","just"]are filtered out to make it more interesting. The movie attempts to summarize the twitter 'ZeitGeist' for the day. How it works: The time interval between frames is kept in the timeLapseInterval variable, every time round the loop in the main runClock() function this happens: if int(t.minute) % timeLapseInterval == 0 : doTimeLapse() #either choose a frame from recent first frames or, if none available, take one from random stats #if it's the appointed hour, generate and tweet the time lapse movie. else : lapseDone = FalseThe minutes value of the time variable t (set at the top of the loop) is checked to see if it's a multiple of the required interval, if so the doTimelapse() function is called. The lapseDone variable acts as a flag to make sure that doTimeLapse only gets called once per interval. Without this, if the timelapse process takes less than a minute to run, it would be called multiple times. So what does doTimelapse do then? here it is: def doTimeLapse() : global cam global lapseDone global makeMovie global effx global effxspeed if lapseDone : return print("doTimeLapse called") #delete all lapse*.jpg older than (lapseTime / 2) #pick youngest lapse*.jpg file and copy to lapseFrames directory>>>>>>>>>>>> Uploading Timelapse Movie ", datetime.datetime.now().strftime('%H:%M:%S.%f')) response = twitter.upload_media(media=pic ) print(">>>>>>>>>>>>> Updating status ", datetime.datetime.now().strftime('%H:%M:%S.%f')) twitter.update_status(>>>>>>>>>>>> Done ", datetime.datetime.now().strftime('%H:%M:%S.%f')) uploadRetries = 200 except BaseException as e: print("!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Tweeting movie exception!" + str(e)) uploadRetries += 1 move(basePath+"Tlapse.gif", basePath+"lapseFrames/Tlapse"+time.strftime("%Y%m%d-%H%M%S")+ ".gif") for f in lapseFrames : os.remove(f) lapseDone = True return()In between calls to doTimeLapse the word displaying and picture taking routines save the image as a file with name composed of the string "lapse" then a timestamp. doTimeLapse() first iterates through all files named lapse*.jpg, it discards any that were created in the first half of the current lapse period and keeps track of the youngest file it finds that was created in the second half of the lapse period. If this process finds a youngest file it will move it into a subdirectory where all frames for the day's movie are kept. If no file is found then it retrieves a list of all words used in current buffer of random tweets by invoking the allWords() method of the randstream object (this TwythonStreamer object is in charge of receiving random tweets and keeps a circular buffer of the last 1000 tweets received, this buffer is a deque ). This word list is first iterated through to remove words that are too small or in the boringWords list. The resulting pruned word list is then fed into another of python's many handy collection types , the Counter. A counter accepts values and compiles a dictionary of unique values against a count of how many times that value occurs. Counters also have a handy most_common() method which is used in this case, to extract the most used word (actually for reasons lost in the mist of debugging time it extracts the top twenty words then picks the number one from those ... that should probably get neatened up one day). Having found the most popular word it then displays it, takes a photo, then stores the image in the directory where the other timelapse frames are kept. Next job is to see if there are a full day's worth of frames yet (and, in explaining all this to you I have found a potential bug, there's a hard coded value for number of frames per day when it should be calculated from the timeLapseInterval variable ... explaining your code to someone is a great technique for optimising and debugging! ) If a full day's worth of frames have been recorded then they are assembled into an animated gif with the "gm convert " command and that is posted to twitter (here there is substantial code duplication as there are other places where movies are posted to twitter... one day it might get refactored out into a separate function but this was a quick and dirty feature addition ) Finally the lapseDone flag is set so that the routine doesn't get called again next time round the main clock loop. So there you go ... an insight into what happens when I start coding and keep adding things without a refactor ... lots of global variables serving mysterious purposes and code duplication, it still works though! Discussions Become a Hackaday.io Member Create an account to leave a comment. Already have an account? Log In.
https://hackaday.io/project/9155-nixiebot/log/47480-daily-time-lapse-movies-from-nixiebot
CC-MAIN-2021-39
refinedweb
888
60.65
Here is an other (simpler) way to do it ! 14th October, 2015 Mehdi_Souihed left a reply on How Do You Unit Test File Uploads? • 3 years ago Here is an other (simpler) way to do it ! Mehdi_Souihed left a reply on File Upload Unkown Error With Phpunit • 3 years ago Hi Guys, I came across a similar problem while trying to test uploading a file on a form. Here is what I have done to make it work : // ... Your test file ... // array $file = 'public/uploads/file.pdf'; $this->assertFileExists($file); $pdf = new UploadedFile ($file, null, 'application/pdf', null, null, true); $form = $this->fakeFormData(); $form['file'] = $file; $this->visit('signup')->submitForm('Submit', $form); However if you typically have the following rule in your validator: //... 'file' => 'mimes:pdf', //... You will get an error in these terms : ** file must be of one these types: pdf** This is because the above rule checks against the mimeType sent by the browser (which is not a terribly good practice) and the PHPUnit crawler always sends a mime type of 'application/octet-stream' and then laravel winges. So to make the server check itself the mimeType and not rely on the browser, we have to define a custom validator rule: Validator::extend('is_pdf', function($attribute, $value, $parameters, $validator) { $mime = \Request::file($attribute)->getMimeType(); return $mime == 'application/pdf'; }); You should also define a custom error message, it is explained in the Laravel Doc (link above). 2nd April, 2015 Mehdi_Souihed left a reply on Eloquent Eager Loading Modify 'Where ... IN ' Clause • 3 years ago @constb For records this is what my query looks like now. It kind of simulates eager loading with the ability to choose which records go in the WHERE .. IN(...) clause automatically generated by Eloquent $users = User::whereHas('settings', function ($q) { $q->where('settingsid', 16)->where('value', 'LIKE', '%o%'); })->with(['settings' => function ($q) { $q->whereIn('settingsid', $this->settingsColumns); // Get only some columns in the settings table $q->addSelect(['value', 'id', 'userid', 'settingsid']); }])->get(); Thanks Mehdi_Souihed left a reply on Eloquent Eager Loading Modify 'Where ... IN ' Clause • 3 years ago Mehdi_Souihed left a reply on Eloquent Eager Loading Modify 'Where ... IN ' Clause • 3 years ago Mehdi_Souihed left a reply on Eloquent Eager Loading Modify 'Where ... IN ' Clause • 3 years ago Thanks for your replies. Right, I have a user_settings table which is an Entity Attribute Value Model, I know ... that's not great. It basically looks like this : settingsid, userid, value 15 , 5, Billing 15 , 6, Operations 16 , 5, London 16 , 6, Edinburgh ... What I already have is all settings for each user lined up in rows : User, Location, Dept. 5 , London, Billing 6 , Edinburgh, Operations I am looking to be able to filter per Location or Dept or any other setting. I hope that is clear enough, thank you Mehdi_Souihed started a new conversation Eloquent Eager Loading Modify 'Where ... IN ' Clause • 3 years ago Hi, I am using eager loading with one model and the query looks like this select `value`, `id`, `userid`, `settingsid` from `user_settings` where `user_settings`.`userid` in ('1', '4', '5', '8', '9', '11', '13', '14', '15', '16', '28') I have a User model and a User settings models containing a foreign key to User. What I would like to do is have control over the User ids used by the eager loading. Is that possible 23rd March, 2015 Mehdi_Souihed started a new conversation Codeception Dump Page • 3 years ago Hi all, Does anyone know how to dump the html page Codeception uses for the tests ? I would like to be able to see what Codeception sees because I have a failing test that doesn't make sense to me. Thank you 3rd March, 2015 Mehdi_Souihed left a reply on Class Not Found Exception • 3 years ago 23rd February, 2015 Mehdi_Souihed left a reply on [L5] Role And Permission • 3 years ago @nazar1987 You need to create the 'models' folder yourself and then put the different files in it (Role.php, Permission.php) Also don't forget to update the entrust config file config/entrust.php with the correct namespaces for Role and Permission models because they are defaulted to the [base namespace]:(). 18th February, 2015 Mehdi_Souihed left a reply on Eloquent. Infinite Children Into Usable Array And Render It To Nested Ul • 3 years ago @AlnourAltegani @pmall I see, this is at the database level ;-) I came across that problem for a hierarchy of users. I have used Baum to manage the hierarchies for me. For your case I guess you'll need to keep querying the database recursively. Mehdi_Souihed left a reply on Eloquent. Infinite Children Into Usable Array And Render It To Nested Ul • 3 years ago @pmall From his question @AlnourAltegani is only looking for the values of his elements : I have just tested the following code (I have normalised a bit the array which contained) $categories = array( 'name' => 'cat1', 'children' => array ( 'name' => 'cat1', 'children' => array ( 'name'=>'subcat1', 'children'=> array () ) ) ); function test_print($item, $key) { echo "Category name : $item\n"; } array_walk_recursive($categories, 'test_print'); Which outputed : Category name : cat1 Category name : cat1 Category name : subcat1 Mehdi_Souihed left a reply on Eloquent. Infinite Children Into Usable Array And Render It To Nested Ul • 3 years ago The most straightforward however maybe not fastest way is to use a array_walk_recursive function to traverse your array (which is called a tree datastructure) Here is the snippet from the PHP doc adapted to your case : function test_print($item, $key) { echo "Category name : $item\n"; } array_walk_recursive($categories, 'test_print'); Another method would be to use the Recursive Iterator however I think in your case it is overkill. Then to print it to an unordered list you have a Laravel HTML helper, if you use Laravel 5 you will need to install illuminate Pass your array to your view and print it with : {{ HTML::ul($categories) }} 17th February, 2015 Mehdi_Souihed left a reply on Redirect In AfterFilter • 3 years ago @dante_dd you need to return the result as follows : if ($something) { return $this->afterFilter(function($route, $request, $response) { return Redirect::to('', 301); }); } Note the return on line 2. Mehdi_Souihed left a reply on Laravel 4 TokenMismatchException Error When Login Or Signup • 3 years ago Are you sure that Input::get('_token') is not empty ? If it is not I would suggest clearing all your caches. Mehdi_Souihed left a reply on Access Errors In Form Macro • 3 years ago You can probably access your $errors variable using : Session::get('errors'); Mehdi_Souihed left a reply on Eloquent DB Questions.... • 3 years ago If what you mean is how to resolve your foreign key and get joined data from your three tables, you could do the following : VendorDetails::with('vendor')->with('client')->where('client_id', =, Session::get('client_id')) ; Want to change your profile photo? We pull from gravatar.com.
https://laracasts.com/@Mehdi_Souihed
CC-MAIN-2018-43
refinedweb
1,126
55.58
I am not seeing anyone else have this problem, and it just seems to have cropped up for me. So how do I determine if this is a local install problem or (and I highly doubt this) an issue with the library? Using requests import requests s = requests.Session() s.auth('username', 'pass') --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-3-1e7d22bf85ad> in <module>() ----> 1 s.auth('user', 'pass') TypeError: 'NoneType' object is not callable requests s.auth is an attribute and not a method, so you cannot call it. s.auth = ('username', 'pass') is the assignment you want to use.
https://codedump.io/share/wSqY7h0AVOUD/1/baffling-inability-to-auth-with-requests-nonetype-error
CC-MAIN-2017-04
refinedweb
102
75.1
Hey, Scripting Guy! In Windows XP, how can I use a script to automatically queue up files that I want to burn to a CD?-- BD Hey, BD. Well, unfortunately, we have some bad news for you: there’s currently no way to burn a CD using a script. You see, the problem is that - oh, wait a second: you just want to queue up the files that will be burned to the CD, don’t you? Well, that’s very different. And, for we Scripting Guys, a huge relief; after all, people ask us every other day about using scripts to burn CDs, and the answer is always the same: you can’t. That’s why we started this column off by telling you can’t burn a CD using a script; we’ve had to tell so many people that you can’t burn CDs using a script that this has become a reflexive reaction. But simply collecting all the files and getting them ready to be burnt to a CD? Well, that we can help you with. Note. And for those of you who want to use scripts to burn CDs, well, we have at least some good news: this capability is coming in Windows Vista. In fact, in Windows Vista you’ll be able to use scripts to burn DVDs as well as CDs. Granted, this doesn’t help you too much today, but good things come to those who wait, right? You just need to wait a little bit longer and you’ll be able to burn all the CDs and DVDs you want. As for queuing up files that will be burned to a CD, well, this little script should do the trick: Const HKEY_CURRENT_USER = &H80000001 strComputer = "." Set objRegistry=GetObject("winmgmts:\\" & strComputer & "\root\default:StdRegProv") strKeyPath = "Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders" strValueName = "CD Burning" objRegistry.GetStringValue HKEY_CURRENT_USER,strKeyPath,strValueName,strBurnFolder strFolder = "C:\Test" Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2") Set colFileList = objWMIService.ExecQuery _ ("ASSOCIATORS OF {Win32_Directory.Name='" & strFolder & "'} Where " _ & "ResultClass = CIM_DataFile") For Each objFile In colFileList strCopy = strBurnFolder & "\" & objFile.FileName & "." & objFile.Extension objFile.Copy(strCopy) There’s really no secret to queuing up files for burning: all you have to do is copy those files to the CD Burning folder (which will typically have a path like C:\Documents and Settings\kenmyer\Local Settings\Application Data\Microsoft\CD Burning). After you’ve determined the location of the CD Burning folder (something you can do by reading the registry) the rest is simply a matter of copying files to this folder. With that in mind, let’s start with the first part of our task, determining the location of the CD Burning folder. In order to create a script that can queue up files on remote computers as well as on the local computer, we decided to use WMI to determine the folder location. To do that, we start out by defining a constant named HKEY_CURRENT_USER and setting the value to &H80000001; this tells the script which registry hive we want to work with. We then bind to the WMI service on the local computer (although, again, we could just as easily run this script against a remote machine), and connect to the System Registry provider. That’s what this code does: Set objRegistry=GetObject("winmgmts:\\" & strComputer & "\root\default:StdRegProv") Next we need to assign values to a pair of variables. The variable strKeyPath is assigned the value Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders; that just happens to be the path (within HKEY_CURRENT_USER) to the registry value we need to read. Meanwhile, the variable strValueName is assigned the value CD Burning; that just happens to be - oh, that’s right, that’s the name of the registry value we want to read. Man, here we are trying to show off how much we know about scripting, only it turns out you guys know every bit as much as we do. Dang! As you doubtless know then, we can now use the GetStringValue method to read the registry value and determine the path to the CD Burning folder: objRegistry.GetStringValue HKEY_CURRENT_USER,strKeyPath,strValueName,strBurnFolder GetStringValue takes four parameters: the constant we defined at the beginning of our script, our two variables (strKeyPath and strValueName), and an “out” parameter named strBurnFolder. Being an out parameter we don’t assign a value to strBurnFolder; instead, the GetStringValue method will automatically assign the location of the CD Burning folder (as read from the registry) to that variable. But you already knew that, didn’t you? After we know the location of the CD Burning folder the next step is to copy files to that folder. For the sake of simplicity we’re going to assume that we want to copy all the files found in the folder C:\Test. To do that we first assign the path C:\Test to a variable named strFolder, then make a second connection to the WMI service, this time binding to the root\cimv2 namespace. After making that connection we can then use this query to return a collection of all the files found in the folder C:\Test: Set colFileList = objWMIService.ExecQuery _ ("ASSOCIATORS OF {Win32_Directory.Name='" & strFolder & "'} Where " _ & "ResultClass = CIM_DataFile") At this point it gets just a tiny bit tricky. Why? Because copying files using WMI requires you to specify the complete path name of the copied file. For example, suppose we want to copy the file C:\Test\File1.txt. The only way we can copy this file is to provide WMI with the full path to the new file: C:\Documents and Settings\kenmyer\Local Settings\Application Data\Microsoft\CD Burning\File1.txt. Like we said, that problem is a tiny bit tricky, but it’s one we can solve easily enough. And here’s how. To begin with, we set up a For Each loop to loop through all the files in the folder C:\Test. Inside that loop we start off by using this line of code to construct the path for the copied file: strCopy = strBurnFolder & "\" & objFile.FileName & "." & objFile.Extension You can see what we’re doing here. We start off by taking the location of the CD Burning folder (represented by the variable strBurnFolder) and then add a trailing \ to the folder name. We then tack on the value of the FileName property (for example, File1) followed by a dot (.) and the value of the Extension property (for example, txt). After all that the variable strCopy will contain a value similar to this: C:\Documents and Settings\kenmyer\Local Settings\Application Data\Microsoft\CD Burning\File1.txt Which, by remarkable coincidence, just happens to be the path to the copied file. Once we have that path we can then pass the value to the Copy method and copy the file to the CD Burning folder: objFile.Copy(strCopy) From there we loop around and repeat this process for the next file in the collection. After we’ve looped through the entire collection all the files in C:\Test will have been copied to the CD Burning folder, and you’ll be ready to burn those files to a CD. Or wait until Windows Vista is released and then use a script to burn those files to a CD. We’ll leave that up to you. And, yes, you can bet that we’ll fill you in on how to use scripts to burn CDs and DVDs in Windows Vista; trust us, after years of delivering people the same bad news (“No, sorry, you can’t use a script to burn a CD”) we can’t wait to give people a better answer. At long last we’ll have an answer for every question you guys can throw at us. What’s that? You want to know how manage a DHCP server using a script? OK, maybe not every question ….
http://blogs.technet.com/b/heyscriptingguy/archive/2006/05/02/how-can-i-use-a-script-to-automatically-queue-up-files-that-i-want-to-burn-to-a-cd.aspx
CC-MAIN-2013-48
refinedweb
1,321
59.64
Sensor node can't find gateway Hello, I'm trying to setup a mysensors serial gateway following the instructions. The only modifications for both (gateway and node) I added '#define MY_RF24_PA_LEVEL RF24_PA_LOW' Gateway is on Arduino Uno (chinese clone) on COM7 Node is on Arduino Uno (official version) on COM5 Distance between the 2 in debug : 0.5m. I also tried 3m between when the gateway was connected to RPI3+ (with domoticz) I already tried adding a capacitor on the NRF24L01+ (chinese clones) Didn't find any marks which is pin 1, but supposed that these are the same Connnected to the 3.3V of the Arduino Uno (on both) I don't know what capabilities difference (marked bold) in the log means, maybe hint to problem. Below snippets from the IDE-monitor Gateway: 0;255;3;0;9;0 MCO:BGN:INIT GW,CP=RNNGA---,VER=2.3.3.0 0;255;3;0;9;28 MCO:BGN:STP 0;255;3;0;9;34 MCO:BGN:INIT OK,TSP=1 Node: ASCII-art MySensors 16 MCO:BGN:INIT NODE,CP=RNNNA---,VER=2.3.0 25 TSM:INIT 26 TSF:WUR:MS=0 33 TSM:INIT:TSP OK 35 TSM:FPAR 1635 TSF:MSG:SEND,255-255-255-255,s=255,c=3,t=7,pt=0,l=0,sg=0,ft=0,st=OK: 3643 !TSM:FPAR:NO REPLY 3645 TSM:FPAR 5244 TSF:MSG:SEND,255-255-255-255,s=255,c=3,t=7,pt=0,l=0,sg=0,ft=0,st=OK: 7254 !TSM:FPAR:NO REPLY 7256 TSM:FPAR 8855 TSF:MSG:SEND,255-255-255-255,s=255,c=3,t=7,pt=0,l=0,sg=0,ft=0,st=OK: 10863 !TSM:FPAR:NO REPLY 10865 TSM:FPAR 12465 TSF:MSG:SEND,255-255 You forgot the MY_NODE_ID in the node sketch I added the value MY_NODE_ID but did not change anything in the communication. While searching how debug further I stumble upon this article. I looked up my order @ wish : My radios are NRF24L01 not NRF24L01+ Are these also useable or not? If you see 1635 TSF:MSG:SEND,255-255-255-255,s=255,c=3,t=7,pt=0,l=0,sg=0,ft=0,st=OK: It is broadcasting searching for gateway to assign an ID. Can you post your sketch? Except the node-id (1) , nothing changed in the logfile 16 MCO:BGN:INIT NODE,CP=RNNNA---,VER=2.3.0 25 TSM:INIT 26 TSF:WUR:MS=0 33 TSM:INIT:TSP OK 35 TSM:INIT:STATID=1 37 TSF:SID:OK,ID=1 39 TSM:FPAR 1639 TSF:MSG:SEND,1-1-255-255,s=255,c=3,t=7,pt=0,l=0,sg=0,ft=0,st=OK: 3646 !TSM:FPAR:NO REPLY 3648 TSM:FPAR 5248 TSF:MSG:SEND**,1-1**-255-255,s=255,c=3,t=7,pt=0,l=0,sg=0,ft=0,st=OK: 7255 !TSM:FPAR:NO REPLY 7257 TSM:FPAR 8856 TSF:MSG:SEND,1-1-255-255,s=255,c=3,t=7,pt=0,l=0,sg=0,ft=0,st=OK: 10863 !TSM:FPAR:NO REPLY 10865 TSM:FPAR 12465 TSF:MSG:SEND,1-1 This distance sensor using HC-SR04 - */ // Enable debug prints #define MY_DEBUG // Enable and select radio type attached #define MY_RADIO_NRF24 #define MY_RF24_PA_LEVEL RF24_PA_LOW #define MY_RF24_DATARATE RF24_1MBPS //#define MY_RADIO_RFM69 #define MY_NODE_ID 1 #include <SPI.h> #include <MySensors.h> #include <NewPing.h> #define CHILD_ID 1 ; bool metric = true; void setup() { metric = getController); } All solved with the correct modules. NRF24L01+ instead of NRF24L
https://forum.mysensors.org/topic/9743/sensor-node-can-t-find-gateway/6
CC-MAIN-2019-04
refinedweb
605
56.25
Hello, I've searched beforehand, and nothing applies to my specific situation, and I need to have moving platforms work, however, I'm using a rigidbody because I need it to be pushed around, and because of issues with scaling and being parented but just frozen in midair, I can't use parenting to do this. I'm moving the player by directly controlling velocity, the platform is ticked as kinematic (it has a rigidbody too), gravity on it is disabled, the way the platform moves is like this: public class Platform : MonoBehaviour { public float speed = -1f; public Vector3 pointB; IEnumerator Start() { var pointA = transform.position; while (true) { yield return StartCoroutine(MoveObject(transform, pointA, pointB, speed)); yield return StartCoroutine(MoveObject(transform, pointB, pointA, speed)); } } IEnumerator MoveObject(Transform thisTransform, Vector3 startPos, Vector3 endPos, float time) { var i = 0.0f; var rate = 1.0f/time; while (i < 1.0f) { i += Time.deltaTime * rate; thisTransform.position = Vector3.Lerp(startPos, endPos, i); yield return null; } } } What can I do to get the player to stick to the platform when needed?, and only when he's standing on the top of it, a scenario with the player being stuck to the side of the moving platform wouldn't be very good. Answer by HarshadK · Apr 28, 2014 at 02:34 PM I know it is not the best way but you can use Fixed Joint for this purpose. As you want to have your player character to just stand on the platform you can use Fixed Joint for this. All you have to do is check when the player collides with the platform and create a joint between the platform and player. Also you can just destroy the joint when not needed i.e. you want to make player move away from the platform. Hmm, it looks hopeful, I'll try it out. Alright, I've sorta got it working, only thing is that I can't seem to set the break force or break torque via script, how do I do this?, I'm only given the option to change the hingeJoint break force. There are two ways: 1) you have to apply force greater than your break force on your object. To use this you have to use lower amount of break force. 2) or you can Destroy the fixed joint component from your object. And each time create and add the fixed joint component programatically as required. Sorry if I didn't make it clear, that isn't my problem, I can't figure out how to assign the break force, which means I can't lower it, I need to know how to use the scripts to decide what the break force is. Does this make sense?, by default the break force is infinity, I can't find any way to use scripting to adjust this to a lower number, I found an option to adjust the break force for hinge joints, but not fixed joints. I did this with math, no parenting or joint. It does not use a rigidbody so I guess it is not what you want. Answer by Mr.Hal · Apr 29, 2014 at 11:01 AM Make the object on the platform share the velocity of the platform its on. So on the callback OnCollisionStay(). Get the rigidbody and assign the velocity on it to be the same as the platform. However for this to work you need to find out, make or use Vector3.SmoothDamp to find out the velocity of the platform. Because looking at your script... You are manually moving it via the lerp function so there is no true way to know the velocity of the platform. You could take the normalized direction of where the platform is going and multiply it by the speed. You could also find out if the player is on the platform by using a trigger instead or find out the direction the player is relative to the platform. var Direction = (transform.position - player.transform.position).normalized; const float SafeZone = 0.5;//Adjust me a little! //Player is on the platform if(Direction.y > Safe. movement inside a train problem 1 Answer Player slips off moving platform if Animator Component enabled 0 Answers Velocity calculation for non-rigidbodies. 0 Answers Getting the player to move with platform: Velocity, DistanceJoint, or...? 2 Answers rigidbody.velocity.normalized application 1 Answer
https://answers.unity.com/questions/696505/velocity-powered-rigidbody-on-a-moving-platform-wi.html
CC-MAIN-2019-09
refinedweb
731
72.56
Hello there! Here are two things that occurred to me while I played around with the TSRM module (I am using 4.0.4pl1): The first is: when compiling the TSRM without TSRM_DEBUG defined, but compiling the PHP with -enable-debug option, the file zend_alloc.c uses the function tsrm_error() but the function is not compiled in TSRM.c (#if'ed out) and you get an undefined symbol error in linkage. Maybe the tsrm_error function should be available even if TSRM_DEBUG is not defined. The other thing in TSRM.c is this: The macro TSRM_ERROR is defined as either "trsm_error" or as nothing. Here's the thing: when TSRM_ERROR is defined as nothing, then: TSRM_ERROR(something here, another here, etc, etc, etc ); is compiled as (something here, another here, etc, etc, etc ); Which is a legal C statement, and it is compiled, but that is not quite what one would hope for (i.e. an empty line), since all the parameters are still eval'ed. The solution might be: #ifdef TSRM_DEBUG #define TSRM_ERROR(x) tsrm_error x #else #define TSRM_ERROR(x) #endif TSRM_ERROR((something here, another here, etc, etc, etc )); Best Regards, Eetay Natan, R&D Manager Zend Technologies T: +972-3-6139665 mailto:[EMAIL PROTECTED] Come and visit the Zend Booth # 605 at Apachecon 2001, April 4-6, California. -----Original Message----- From: Joe Brown [mailto:[EMAIL PROTECTED]] Sent: Sunday, April 01, 2001 11:06 AM To: [EMAIL PROTECTED] Subject: [PHP-DEV] what kind of insect is this? Had fun tracking this bugger down, now that I found it, I don't know what to do with it. Maybe you can help me? Summary: WinNT, Apache 1.3.19, fairly recent snap, php_oci8.dll (maybe MySql also) Resource not releasted after casting to an integer. Don't know if this the the culprit to my Apache server on windows constantly crashing, but suspect it may be playing a role in it. <?php $connection = OCIPLogon ("prn30","prn30","shmengie"); echo "<br>First round:<br>"; $statement=OCIParse ($connection, "select user from dual"); //$result_value=intval($statement); //CULPABLE. echo var_dump($statement)."<br>Freed<br>"; @OCIFreeStatement($statement); echo var_dump($statement)."<br>"; echo "<br>Second round:<br>"; $statement=OCIParse ($connection, "select user from dual"); $result_value=intval($statement); //CULPRIT echo var_dump($statement)."<br>Freed<br>"; @OCIFreeStatement($statement); echo var_dump($statement)."<br>"; ?> Outputs: First round: resource(2) of type (oci8 statement) Freed resource(2) of type (Unknown) Second round: resource(3) of type (oci8 statement) Freed resource(3) of type (oci8 statement) In the second round, the resource is not freed. Background info: Thought it would be a good idea to use Manuel Lemos' Metabase. Works great on Linux. That's where I developed this app, but now I need to migrate to a Windows platform and it's crashing like crazy. Metabase's classes use an intval($resource) as a place holder, in an array of statement_info so that you can run mutiple queries. Metabase can keeps track of its queries this way (for OCI at least). The crashes experienced catestrophic, but threads aren't closing up shop properly. Bug #9857 came about because a constant would be still be defined the next page refresh. Haven't been able to reproduce this in a short code segment w/out starting up metabase classes. After using Metabase on windows however, all kinds of weirdness ensues. Apache performs illegal instructions after every other refresh w/oci8 then. -Joe "Shmengie" --] what kind of insect is this? Joe Brown - Eetay Natan
https://www.mail-archive.com/php-dev@lists.php.net/msg06840.html
CC-MAIN-2018-39
refinedweb
578
64.61
Closed Bug 705755 Opened 10 years ago Closed 4 years ago Reintroduce handling of SSL short write after SSL thread removal Categories (Core :: Security: PSM, defect, P1) Tracking () mozilla57 People (Reporter: briansmith, Assigned: mayhemer) References (Blocks 1 open bug) Details (Keywords: regression, Whiteboard: [psm-assigned]) Attachments (1 file, 3 obsolete files) +++ This bug was initially created as a clone of Bug #674147 +++ (In reply to Honza Bambas (:mayhemer) from Bug 674147 Comment 37) > You are regressing bug 378629: > > > > PSMSend must watch for libssl sending just request - 1 amount of data (= a > "short write"). In that case, the simplest solution is IMO that we have to > limit next data send to just 1 byte, simply explained in a pseudo code: > > PSMSend(buffer, count) > { > static flag = false; > if (flag) > count = 1; > flag = false; > > written = lower->send(buffer, count) > if (written == count - 1) > flag = true; > } > > We can do this in a separate bug, but ASAP. The final solution is up to > you, but I'd like to review and test it. This is marked blocking bug 674147 because that is how we show "this bug is caused by that bug" but I still intend to land the patch in bug 674147 first. I renamed the bug to be more descriptive and also added refs to other bugs that gets affected by bug 674147. I agree to land that bug first, this one is not a critical issue. -> me Stealing this one. Assignee: nobody → honzab.moz Status: NEW → ASSIGNED - implements what the SSL thread had been implementing before its removal - adds also check for 16383 bytes of written data ; this often happens even for larger buffers like 32768, w/o it we jitter again with 2 bytes being later written separately - the 16383 const is disputable Added Nelson for feedback to get more info on how all ssl lib indicates the short write. To have what we had before SSL thread removal we may just remove the 16383 const checking and we are done. Attachment #582708 - Flags: review?(kaie) Aren't we short on time for reviews on these for ff11 ? I agree with the patch in general, just some small comments: +/* Secure write sometimes returns this amount of bytes written even for larger + sizes of the output buffer, consider this as indication of a short write */ +static const PRInt32 kShortWrite16k = 16383; (a) Please explain (in the comment) where this number comes from. I had to research myself. I believe this is based on libssl's constant #define MAX_FRAGMENT_LENGTH 16384 Maybe add this comment: // Value is based on libssl's maximum buffer size (in a private header): // MAX_FRAGMENT_LENGTH - 1 // TODO: Query this value once at runtime (changes in NSS required) (b) Nit, optional: Given you only use this constant once, why not move it closer to the place where you use it? Makes the explanation easier to find when reading the code. (c) + // Value of the last byte pending from the SSL short write that needs + // to be passed to subsequent calls to send to perform the flush. + char mShortWritePendingByte; Please change that to "unsigned char", because that's what you use everywhere else. (d) + if (socketInfo->IsShortWritePending()) { + // We got "SSL short write" last time, try to flush the pending byte. + buf = socketInfo->GetShortWritePendingByteRef(); + amount = 1; + ### + PR_LOG(gPIPNSSLog, PR_LOG_DEBUG, ("[%p] pushing 1 byte after SSL short write", + fd)); + } + If I understand correctly, if we reach position ###, the following must be true: - the function was called with: amount >= 1 - this expression is true: *(const unsigned char*) pending-ref == *buf-given-to-function[0] I propose to add an assertion, that way you don't need to change the "buf" pointer. if (socketInfo->IsShortWritePending()) { // We got "SSL short write" last time, try to flush the pending byte. NS_ASSERTION(amount >= 1 && buf[0] == *(const unsigned char*) socketInfo->GetShortWritePendingByteRef(), "unexpected buffer after short write"); amount = 1; PR_LOG(gPIPNSSLog, PR_LOG_DEBUG, ("[%p] pushing 1 byte after SSL short write", fd)); } (e) + // SSL Short Write handling. + // + //. + // + // Value of the last byte pending from the SSL short write that needs + // to be passed to subsequent calls to send to perform the flush. + char mShortWritePendingByte; + // Original amount of data the upper layer has requested to write to + // return after the successful flush. + PRInt32 mShortWriteOriginalAmount; In this block it's difficult to see which variables you refer to when saying "These are only valid iff mIsShortWritePending is true." I'd align the block clearer, maybe like this: // True when SSL layer has indicated an "SSL short write", i.e. need // to call on send one or more times to push all pending data to write. bool mIsShortWritePending; // Value of the last byte pending from the SSL short write that needs // to be passed to subsequent calls to send to perform the flush. // Only valid if mIsShortWritePending unsigned char mShortWritePendingByte; // Original amount of data the upper layer has requested to write to // return after the successful flush. // Only valid if mIsShortWritePending PRInt32 mShortWriteOriginalAmount; Please run some quick local testing with a debug build, after you do (d), and confirm you don't see the assertion. If you address my proposals, r=kaie No second round of review necessary, but please attach the new patch for reference. Comment on attachment 582708 [details] [diff] [review] v1 r- If you address my small comments, you may mark r=kaie on your new patch. Attachment #582708 - Flags: review?(kaie) → review- > r- because of the char/unsigned char inconsistence. We may want to check this is actually still needed. Status: ASSIGNED → NEW David, can you or someone from your team please verify this is still an issue and should be fixed? Assignee: honzab.moz → nobody Flags: needinfo?(dkeeler) Sure, I can investigate this. Flags: needinfo?(dkeeler) Whiteboard: [psm-backlog] Priority: -- → P3 "perf" key word? I addressed all the Kai's 5 years old comments where I believed it was necessary. I can't trigger the code path tho, so hard to say if this breaks something or actually fixes something. Attachment #582708 - Attachment is obsolete: true Attachment #8900871 - Flags: review?(dkeeler) Whiteboard: [psm-backlog] → [psm-active] Target Milestone: mozilla11 → --- Comment on attachment 8900871 [details] [diff] [review] v1 (merged) Review of attachment 8900871 [details] [diff] [review]: ----------------------------------------------------------------- Cool. This looks good. r=me with comments addressed (particularly the assertion failure and the documentation). I managed to reproduce the short write situation by running a server locally and using tc to limit the bandwidth and set a delay to it. It looks like Firefox can get into an inefficient state where it will send a large packet followed by a small packet followed by a large packet, etc.. This patch addresses the issue and makes the packet size much more uniform (and large). Unfortunately, it seems this would be hard to write a test for. Maybe we could add this as a QA smoketest task? Or I think we have some talos tests that simulate network effects that should be able to get Firefox in this state... Anyway, I think we should elaborate on the documentation in this patch. Maybe something like this: NSS indicates that it can't write all requested data (due to network congestion, for example) by returning either one less than the amount of data requested or 16383, if the requested amount is greater than 16384. We refer to this as a "short write". If we simply returned the amount that NSS did write, the layer above us would then call PSMSend with a very small amount of data (often 1). This is inefficient and can lead to alternating between sending large packets and very small packets. To prevent this, we alert the layer calling us that the operation would block and that it should be retried later, with the same data. When it does, we tell NSS to write the remaining byte it didn't write in the previous call. We then return the total number of bytes written, which is the number that caused the short write plus the additional byte we just wrote out. ::: security/manager/ssl/nsNSSIOLayer.cpp @@ +1483,5 @@ > #endif > > + if (socketInfo->IsShortWritePending() && amount > 0) { > + // We got "SSL short write" last time, try to flush the pending byte. > + MOZ_DIAGNOSTIC_ASSERT(*static_cast<unsigned char const*>(buf) == *socketInfo->GetShortWritePendingByteRef(), This assertion fails for me. If I'm understanding correctly, the higher layer is retrying the original write, right? If so, buf wouldn't be the byte(s) that didn't get written - buf would be the original data, in which case buf[mShortWriteOriginalAmount - 1] should be mShortWritePendingByte, right? In fact, if we wanted to give ourselves more confidence that the right thing was happening, maybe we could save the original buf and compare it to the new buf (probably in debug-only builds, though). @@ +1497,4 @@ > int32_t bytesWritten = fd->lower->methods->send(fd->lower, buf, amount, > flags, timeout); > > + /* Secure write sometimes returns this amount of bytes written even for larger nit: //-style comments, please @@ +1501,5 @@ > + sizes of the output buffer, consider this as indication of a short write. > + > + Value is based on libssl's maximum buffer size (in a private header): > + MAX_FRAGMENT_LENGTH - 1 > + TODO: Query this value once at runtime (changes in NSS required) */ Let's file a bug in NSS :: Libraries to make this API available and reference it here. ::: security/manager/ssl/nsNSSIOLayer.h @@ +175,5 @@ > + //. nit: "only valid if and only if" is a bit redundant :) Attachment #8900871 - Flags: review?(dkeeler) → review+ Priority: P3 → P1 Whiteboard: [psm-active] → [psm-assigned] Attachment #8900871 - Attachment is obsolete: true Attachment #8902272 - Flags: review+ (updated ci message) Attachment #8902272 - Attachment is obsolete: true Attachment #8902311 - Flags: review+ Pushed by ryanvm@gmail.com: Handle SSL short-write correctly to save CPU looping. r=keeler Status: ASSIGNED → RESOLVED Closed: 4 years ago status-firefox57: --- → fixed Resolution: --- → FIXED Target Milestone: --- → mozilla57 status-firefox55: --- → wontfix status-firefox56: --- → wontfix status-firefox-esr52: --- → wontfix
https://bugzilla.mozilla.org/show_bug.cgi?id=705755
CC-MAIN-2021-39
refinedweb
1,631
57.61
Commons Lang 2.4 is out, and the obvious question is: "So what? What's changed?" . This article aims to briefly cover the changes and save you from having to dig through each JIRA issue to see what went on in the year of development between Lang 2.3 and 2.4. First, let us start with a couple of deprecations. As you can see in the release notes, we chose to deprecate the ObjectUtils.appendIdentityToString(StringBuffer, Object) method as its null handling did not match its design (see LANG-360 for more details. Instead users should use ObjectUtils.identityToString(StringBuffer, Object). We also deprecated DateUtils.add(java.util.Date, int, int) . It should have been private from the beginning; please let us know if you actually use it. Before we move on, a quick note on the build: we built 2.4 using Maven 2 and Java 1.4. We also tested that the Ant build passed the tests successfully under Java 1.3, and that the classes compiled under Java 1.2. As it's been so long, we stopped building a Java 1.1-compatible jar. Most importantly, it should be a drop in replacement for Lang 2.3, but we recommend testing first, of course. Also, for those of you who work within an OSGi framework, the jar should be ready for OSGi. Now... time to move on. Three new classes were added, so let's cover those next. Firstly, we added an IEEE754rUtils class to the org.apache.commons.lang.math package. This candidate for ugly name of the month was needed to add IEEE-754r semantics for some of the NumberUtils methods. The relevant part of that IEEE specification in this case is the NaN handling for min and max methods, and you can read more about it in LANG-381 . Second and third on our newcomers list are the ExtendedMessageFormat class and its peer FormatFactory interface, both found in the org.apache.commons.lang.text package. Together they allow you to take the java.text.MessageFormat class further and insert your own formatting elements. By way of an example, imagine that we have a need for custom formatting of a employee identification number or EIN. Perhaps, simplistically, our EIN is composed of a two-character department code followed by a four-digit number, and that it is customary within our organization to render the EIN with a hyphen following the department identifier. Here we'll represent the EIN as a simple String (of course in real life we would likely create a class composed of department and number). We can create a custom Format class: Our custom EIN format is made available forOur custom EIN format is made available for public class EINFormat extends Format { private char[] idMask; public EINFormat() { } public EINFormat(char maskChar) { idMask = new char[4]; Arrays.fill(idMask, maskChar); } public StringBuffer format(Object obj, StringBuffer toAppendTo, FieldPosition pos) { String ein = (String) obj; //assume or assert length >= 2 if (idMask == null) { return new StringBuffer(ein).insert(2, '-').toString(); } return new StringBuffer(ein.substring(0, 2)).append('-').append(idMask).toString(); } public Object parseObject(String source, ParsePosition pos) { int idx = pos.getIndex(); int endIdx = idx + 7; if (source == null || source.length() < endIdx) { pos.setErrorIndex(idx); return null; } if (source.charAt(idx + 2) != '-') { pos.setErrorIndex(idx); return null; } pos.setIndex(endIdx); return source.substring(idx, endIdx).deleteCharAt(2); } } MessageFormat-style processing by a FormatFactoryimplementation: Now you simply provide aNow you simply provide a public class EINFormatFactory implements FormatFactory { public static final String EIN_FORMAT = "ein"; public Format getFormat(String name, String arguments, Locale locale) { if (EIN_FORMAT.equals(name)) { if (arguments == null || "".equals(arguments)) { return new EINFormat(); } return new EINFormat(arguments.charAt(0)); } return null; } } java.util.Map<String, FormatFactory>registry (keyed by format type) to ExtendedMessageFormat: As expected, this will render a String EIN "AA9999" as:As expected, this will render a String EIN "AA9999" as: new ExtendedMessageFormat("EIN: {0,ein}", Collections.singletonMap(EINFormatFactory.EIN_FORMAT, new EINFormatFactory())); "EIN: AA-9999". This should render "AA9999" as:This should render "AA9999" as: new ExtendedMessageFormat("EIN: {0,ein,#}", Collections.singletonMap(EINFormatFactory.EIN_FORMAT, new EINFormatFactory())); "EIN: AA-####". ExtendedMessageFormatto override any or all of the built-in formats supported by java.text.MessageFormat. Finally, note that because ExtendedMessageFormatextends MessageFormatit should work in most cases as a true drop-in replacement. There were 58 new methods added to existing Commons Lang classes. Going through each one, one at a time would be dull, and fortunately there are some nice groupings that we can discuss instead: CharSet getInstance(String[]) adds an additional builder method by which you can build a CharSet from multiple sets of characters at the same time. If you weren't aware of the CharSet class, it holds a set of characters created by a simple pattern language allowing constructs such as "a-z" and "^a" (everything but 'a'). It's most used by the CharSetUtils class, and came out of CharSetUtils.translate, a simple variant of the UNIX tr command. ClassUtils canonical name methods are akin to the non ' Canonical' methods, except they work with the more human readable int[] type names rather than the JVM versions of [I. This makes them useful for parsing input from developer's configuration files. ClassUtils toClass(String[]) is very easy to explain - it calls toClass on each Object in the array and returns an array of Class objects. ClassUtils wrapper->primitive conversions are the reflection of the pre-existing primitiveToWrapper methods. Again easy to explain, they turn an array of Integer into an array of int[]. ObjectUtils identityToString(StringBuffer, Object) is the StringBuffer variant of the pre-existing identityToString method. In case you've not met that before, it produces the toString that would have been produced by an Object if it hadn't been overridden. StringEscapeUtils CSV methods are a new addition to our range of simple parser/printers. These, quite as expected, parse and unparse CSV text as per RFC-4180 . StringUtils has a host of new methods, as always, and we'll leave these for later. WordUtils abbreviate finds the first space after the lower limit and abbreviates the text. math.IntRange /LongRange.toArray turn the range into an array of primitive int/ longs contained in the range. text.StrMatch.isMatch(char[], int) is a helper method for checking whether there was a match with the StrMatcher objects. time.DateFormatUtils format(Calendar, ...) provide Calendar variants for the pre-existing format methods. If these are new to you, they are helper methods to formatting a date. time.DateUtils getFragment* methods are used to splice the time element out of Date. If you have 2008/12/13 14:57, then these could, for example, pull out the 13. time.DateUtils setXxx methods round off our walk through the methods - the setXxx variant of the existing addXxx helper methods. The StringUtils class is a little large, isn't it? Sorry, but it's gotten bigger. Hopefully they are in many cases self-describing. Rather than spend a lot of time describing them, we'll let you read the Javadoc of the ones that interest you. In addition to new things, there are the bugfixes. As you can tell from the release notes, there are a good few - 24 in fact according to JIRA. Here are some of the interesting ones: Hopefully that was all of interest. Don't forget to download Lang 2.4 , or, for the Maven repository users, upgrade your <version> tag to 2.4. Please feel free to raise any questions you might have on the mailing lists , and report bugs or enhancements in the issue tracker .
http://commons.apache.org/lang/article2_4.html
crawl-001
refinedweb
1,269
57.37
Hello and welcome to this tutorial series of Azure Data Services. Through the course of this tutorial, I will take you through what Azure has to offer in Data Services. I will start with the basics of relational databases, dig into the goodness of non-relational offerings and end with what is new and what is exciting in Azure Data Services Part 1: An Introduction to Azure Data Services Azure Data Services: SQL in the Cloud (Part 2) Yes, you read right. In this ‘blog’ post I am going to discuss ‘blobs’ (love the ring to it). And no, it is not the X-Men character and neither is it an alien. What I am referring to is a Binary Large Object, in short, BLOB. Blob Storage is one of the three (other two being table and queue) non-relational, persistent, PaaS data storage service from Azure. What are Blobs and why should you care ? Well, to begin with it is highly scalable(500 TB per storage account. A storage account can have a combination of blob, table and queue storage), highly available (99.9% uptime as per the SLA), persistent, redundant (triply replicated either locally or geographically) storage service from Azure primarily used to store files such as images, PDFs, videos, etc . This Blob Storage can be easily accessed using REST endpoints and programmatically (.NET, Java, Ruby, etc). The stored files can be made accessible to the public or secure it by allowing access to only authorized users or provide temporary access through tokens. If that is not enough to convince you of the resilience of Blob storage, Microsoft OneDrive uses Blob storage. In fact if you are building apps that plays around with files as described, you should consider using Blob Storage. There are essentially two kinds of Blobs: Page Blobs and Block Blobs. Page Blobs are 512 byte pages which are optimized for random access and are tuned for high performance and reliability. A Page Blob can be no larger than 1TB. Block Blobs on the other hand are comprised of a series of blocks which is optimized for multimedia streaming scenarios. Block Blobs can be no larger than 200GB. How would you use Blob? First of all, we need to create a storage account on Azure. A storage account will contain one or many containers. Each container can contain one or more blobs. Each blob can be thought of as a 2 level hierarchical structure: (Note: The snapshots below are from the new preview portal:) Step 1: Create the storage container Once you login into the new preview portal for Azure, you can select New in the bottom left, since Storage does not appear on the first blade, you will have to click on Everything when the ‘Marketplace’ opens, choose storage and then click on Storage. Now, after clicking on create a Storage account, there are a few parameters which need mention. It definitely needs a unique storage account name (step 2). The pricing tier (step 3) is interesting as the pricing varies based on the redundancy that you want your storage to incorporate. There are essentially 3 forms of redundancy: Locally Redundant (LRS), Geographically Redundant (GRS) and Read-Access GRS (RA-GRS). Locally implies within the same data center or facility where as Geographic implies triply replicated across different regions but in the same country. You can define a new Resource Group (step 4) or use an existing one. A resource group can be thought of as a collection of resources for a particular project. For example, you have created a website which will use a blob storage. In this case, you should select the same resource group to ensure that all your logical entities of your project belong to the same resource group. Finally, you can select the location where your primary storage should reside (step 5) and then you could click on Create (Step 6). Step 2: Get details to access the blob storage and write client app to manage blob Once you have created your storage account, you can get in the storage account, click on settings, properties and keys to see the relevant details .As you can see if you click properties, you get to see the 3 different end points per account storage that you create. An endpoint to a blob, a queue and a table. Using these endpoints you can access these storage containers. If you click Keys, you get the primary/secondary access keys that are used to programmatically access the storage. Now let us create a client application that accesses and uses this storage. Lets start with a blank console app, install the Windows Azure Storage Nuget Packages: 1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.Threading.Tasks; 6: 7: using Microsoft.WindowsAzure.Storage; 8: using Microsoft.WindowsAzure.Storage.Auth; 9: 10: namespace BlobAccessApp 11: { 12: class Program 13: { 14: static void Main(string[] args) 15: { 16: } 17: } 18: } Now lets connect with the Blob Storage created using constructs such as CloudStorageAccount and using the key retrieved from the keys section in the Azure Portal as shown above. Note that in this example we use the Primary Access Key, however if we were to use CloudStorageAccount.Parse then we would need to use the Primary Connection String: 1: CloudStorageAccount account = new CloudStorageAccount( 2: new StorageCredentials("myblobdemo", 3: "*****************************************************************************"), 4: true 5: ); After having connected to the storage account, lets upload an image in the account. In the following code snippet, the explanation of each line is included in comments: 1: //Create a blob container within the storage account that is created 2: var blobclient = account.CreateCloudBlobClient(); 3: 4: //Get hte container reference which is created in order to store the image in the container 5: var container = blobclient.GetContainerReference("images"); 6: //This creates the blob if it does not already exist. 7: container.CreateIfNotExists(); 8: 9: //Get the blob reference and then upload the image to the continer using the blob reference 10: var blob = container.GetBlockBlobReference("diagrams.png"); 11: blob.UploadFromFile(@"C:\Users\addatta\Desktop\blob.jpeg", FileMode.Open); 12: Console.Read(); Now that we have written and understood the steps of uploading a file in the blob storage through a simple console app, let us get back to the portal and see where was this container created and access the image: As you can see above, the container tile shows one container and the Container Blade shows the container “images” created and the corresponding URL. You can see the same thing from your Visual Studio Server Explorer too in Azure Storage. Step 3: Access Control The next part is to ensure access to the uploaded files. One way is to have a key using which you can directly read the blob at any time. However, there are 3 settings for “Public Read Access” which otherwise determine whether you can read the blob or not. As you can see, there are 3 values for Public Read Access: Off, Blob, Container. If it is set to off, you can not access unless you have the key (which is true for all the other instances). If the setting is at Blob, then you can read the blob and if it is set at Container, you can enumerate Container blobs and read the blob. Either this value can be set in Visual Studio or it can be set on the portal as shown below: If we set the permissions to Blob and then try to access the URL, we are able to access the uploaded image. That is how we access a file in blob storage. However, there are other access methods such as Vallet Key Pattern using SAS tokens and Stored Access Policies which are more efficient and could be considered to define the access to your blob. Summary That will definitely get you started on Blobs. We started with discussing what a storage account in Azure is and the 3 different kinds of data services that can be availed. We then dived into what a Blob storage is and how scalable and available it is. We discussed how to create a blob in a storage account through a console app and finally the access control mechanisms providing secure access. In the next blog post, I will be discussing Tables in Storage account and some use cases where tables are preferred. Stay tuned and connect with me @AdarshaDatta and do share your experience using Azure with me. In the meantime I will leave you with one thought.
https://blogs.msdn.microsoft.com/cdndevs/2015/01/22/azure-data-services-blobbing-did-you-say-what-is-that-part-3/
CC-MAIN-2019-39
refinedweb
1,429
61.16
I saw many beautiful blogs around this topic, some among the others are: - Create and use custom tile type - Creating an SAP Smart Business like tile without HANA on your Fiori Launchpad - Customised Tile Types in Fiori Launchpad (although this one is not really a “custom” tile type – more like a customized standard one) Anyway, my goal is to put all together and have a Custom Tile type that is also able to show a Microchart (like the ones on SAP Smart Business). Moreover, it should also be connected to an oData service to give some “life” to the tile itself. In this example we will create a Tile dedicated to Managers: it will have an embedded ComparisonMicroChart that shows the average of Employees Overtimes over the last 3 months. So let’s start! 1. Create the Tile Technically speaking, as stated in the standard help guide here, a Custom Tile can be set up by creating an SAPUI5 View, register it as a CSR CHIP on the server and add it to the Tile Catalog configuration. What it’s not really stated is that if you need a Dynamic Tile (or even a Static one, but with navigation capabilities) this is not completely straightforward. In order to have it “live”, many rules have to be followed as the new Tile must be developed in such a way to be compatible with all the frameworks that are around the Launchpad itself, like Configuration, Navigation, Semantic Objects assignments, and so on. Luckily, almost the entire job has been already done by our folks at SAP, so we can start from a working template rather than from scratch. 1.1 Enter sap-ui-debug Open up /UI2/FLPD_CUST and attach the parameter sap-ui-debug=true to the URL, in order to have something like this: Load it and…be patient, libraries in debug mode take a while to get processed. As soon as you have access to your launchpad, open Chrome Developer tools and navigate to “Network” tab (if you don’t see anything, you might need to reload the page). Enter “Dynamic” in the filter field and check the loaded files: We need the contents of DynamicTile-dbg.controller.js but we can skip DynamicTile-dbg.view.js as we will create our own UI later on. At this point you shall also download the contents of applauncher_dynamic.chip.xml. Now, leaving Developer Tools open, go back to the Launchpad Customization page and click on any of the Dynamic Tiles to get to its configuration page. Then, go back to the Network tab in the Developer Tools and this time search for “Configuration”: We need the contents of both Configuration.view.xml and Configuration-dbg.controller.js. Pay attention to the url of these files: they must belong to the sap/ushell/components/tiles/applauncherdynamic repository. With these sources, we are now ready to create our tile. 1.2 Create the Project For the purpose of this tutorial, I created a simple project using Eclipse – the reason why I did not use the SapWebIDE is because I do not have a correctly set environment for the deployment from the Cloud (I am lazy and the Eclipse Team Provider is faster than a manual Download/Upload 🙂 ). The structure of the project is straightforward: Configuration.controller.js and Configuration.view.xml are copies of the original sources download in the previous chapter, slightly changed only to rewrite original component namespaces. KpiTile.controller.js is a changed copy of the original DynamicTile.controller.js source. KpiTile.view.xml is the new Tile layout file. kpilauncher_dynamic.chip.xml is a changed copy of the original applauncher_dynamic.chip.xml source. 1.2.1 View Layout Open up KpiTile.view.xml and create your own tile. You can use any tile type (StandardTile, CustomTile, GenericTile), but remember that the only supported frame type is OneByOne – I wasn’t able to have it working using the TwoByOne frame, but maybe someone else will be 🙂 Also, GenericTile is the one that works best with contents like Microcharts. Here is my example: <?xml version="1.0" encoding="UTF-8"?> <core:View <GenericTile id="kpiTile" press="onPress" header="{/config/display_title_text}" subheader="{/config/display_subtitle_text}" frameType="OneByOne"> <tileContent> <TileContent footer="{/config/display_info_text}"> <content> <ui:ComparisonMicroChart <ui:data> <ui:ComparisonMicroChartData <ui:ComparisonMicroChartData <ui:ComparisonMicroChartData </ui:data> </ui:ComparisonMicroChart> </content> </TileContent> </tileContent> </GenericTile> </core:View> Let’s have a look at the bindings. The Tile shall display a static title and a static subtitle customized in the configuration properties of the Tile itself. On the other hand, the embedded Microchart shall display live data, coming from an oData service. The “config” property of the configuration model of every Dynamic Tile has these properties: - display_title_text - display_subtitle_text - display_icon_url - display_info_text - display_number_unit The “data” property of the configuration model has the dynamic properties incoming from any configured oData service Url that can drive data on a DynamicTile: - icon - info - infoState - number - numberDigits - numberFactor - numberState - numberUnit - stateArrow - subtitle - targetParams - title In addition to these properties, remember that the oData service can return as many properties as you want, so – with a little trick – the Tile will not be limited to display only these. Moreover, do not change the “onPress” event handler registered to the Tile press event: this is already managed in the original Controller file and it works fine. If you want to have your Tile doing something else rather than navigate you to some content, than change the event handler and implement your specific behavior in the Controller object. 1.2.2 Changing the Controller Now open the KpiTile.controller.js file as we need to do some changes to the code copied from the standard. Important: the standard code declares the sap.ushell.components.tiles.applauncherdynamic.DynamicTile.controller. Remember to change this in the controller declaration and all other references! The namespace and repository structure must be consistent! Update: After checking in different systems, I noticed that in some cases the code within the controller requires the load of a library named sap.ushell.components.tiles.utilsRT, whereas on others only the sap.ushell.components.tiles.utils library is loaded. I didn’t investigate (maybe some Mentor has an answer here), but in the system where this project has been deployed the utilsRT library is not existing so I went through the controller code and changed all the references to utilsRT simply renaming them to utils (and therefore pointing all the references to sap.ushell.components.tiles.utils) – a “Find and Replace” in Eclipse or WebIDE is enough. Scroll down the code util you reach the successHandleFn function: this is the Success callback for each oData call that the Tile makes to the registered Service Url and here we will do a quick enhancement to take into account additional incoming data. successHandleFn: function (oResult) { var oConfig = this.getView().getModel().getProperty("/config"); this.oDataRequest = undefined; var oData = oResult, oDataToDisplay; if (typeof oResult === "object") { var uriParamInlinecount = jQuery.sap.getUriParameters(oConfig.service_url).get("$inlinecount"); if (uriParamInlinecount && uriParamInlinecount === "allpages") { oData = {number: oResult.__count}; } else { oData = this.extractData(oData); } } else if (typeof oResult === "string") { oData = {number: oResult}; } oDataToDisplay = sap.ushell.components.tiles.utils.getDataToDisplay(oConfig, oData); // Begin Change ---------------------------> // Use "emp" property to store original data var aKeys = [ // Additional data for our KPI Tile // "month1", "month2", "month3", "value1", "value2", "value3", "color1", "color2", "color3" // End additional data // ]; var sName; // Prepare emp object: oResult.results = {}; for ( var i = 0; i < aKeys.length; i++){ sName = aKeys[i]; oResult.results[sName] = oResult[sName]; } // Store the additional results back to emp oDataToDisplay.emp = oResult.results; // End Change <--------------------------- // set data to display this.getView().getModel().setProperty("/data", oDataToDisplay); // rewrite target URL this.getView().getModel().setProperty("/nav/navigation_target_url", sap.ushell.components.tiles.utils.addParamsToUrl( this.navigationTargetUrl, oDataToDisplay )); }, The core of the change is in the middle, right after the “Begin Change” mark: the oData linked to our tile returns a set of additional properties used to load the Microchart. We store these properties in an array named aKeys and for each one of them, we loop the oData structure oResult moving any additional property to its result property. As a last step, we fill the emp property of oDataToDisplay: the emp property is the one binded into the View object. In this example, additional properties are statically defined in the code: with some more advanced JavaScript, everything can be completely transformed to dynamic. 1.2.3 Change the CHIP definition file The standard documentation for custom Tiles creation refers to the term CSR CHIP, which stands for Client-Side Rendered CHIP. Basically, a CHIP within the system is described by a metadata XML file which contains some very important properties: these properties are interpreted at runtime by the Launchpad engine to understand if a registered CHIP can be used as a Tile or not. At this page is explained the schema of an XML CHIP definition file. For our example, the change is pretty straightforward and everything it’s done by simply changing the path to our SAPUI5 view in the <sapui5> node of the original applauncher_dynamic.chip.xml file. So make a copy of it and change it as follows: <?xml version="1.0" encoding="UTF-8"?> <chip xmlns=""> <implementation> <sapui5> <viewName>customtilechips.KpiTile.view.xml</viewName> </sapui5> </implementation> <appearance> <title>Dynamic Applauncher with Microchart</title> <description>Dynamic KPI Applauncher Chip</description> </appearance> <contracts> <consume id="configuration"> <parameters> <parameter name="tileConfiguration"></parameter> </parameters> </consume> <consume id="preview" /> <consume id="writeConfiguration" /> <consume id="configurationUi" /> <consume id="search" /> <consume id="refresh" /> <consume id="visible" /> <consume id="bag" /> <consume id="url" /> <consume id="actions" /> <consume id="types"> <parameters> <parameter name="supportedTypes">tile</parameter> </parameters> </consume> </contracts> </chip> You can also change the title and description of the Tile (it will be shown in the “Add new tile” screen of the Launchpad configuration) and customize the supportedTypes property. In my case, I only need a Tile so I’ve limited the property value to “tile”. Another suitable value is “link”, which must then be supported at runtime by additional coding (see the original DynamicTile-dbg.controller.js file for reference). As previously said, the most important property here is the <viewName> under <sapui5>. Remember that this property must contain the full name of the view – including the namespace – relatively to CHIP definition XML file. So if your XML file is under: Than the view name is relative to this file, so: customtilechips.KpiTile.view.xml 2. Deploy the tile Let’s do a quick review of what we have done so far. - Download necessary template files directly from your system - DynamicTile-dbg.controller.js - Configuration-dbg.controller.js - Configuration.view.xml - applauncher_dynamic.chip.xml - Create a new SAPUI5 project and import the downloaded? Good, so let’s deploy the project to the ABAP SAPUI5 Repository. After the deployment, you should see a newly created BSP Application with a structure that mirrors your project: 2.1 Register the CHIP This is by far the most simple step but at the same time the most important one. Without registering the CHIP, the Tile will never be visible in the Catalog so it will not be possible to use it in any Launchpad. Open transaction /UI2/CHIP and create a new Chip definition. Name it as you want, I named it exactly as the BSP Page for clarity: Use the absolute – without hostname – path to your chip definition file as the URL. While saving, the system validates all the entries so you’ll be sure that the file can actually be found. Save this newly created CHIP and go to SE80. At this point, we need to enhance the /UI2/CHIP_CATALOG_CONFIG component by adding our CHIP definition to the component configuration /UI2/FLPD_CATALOG. This configuration is taken into account by the Launchpad application configuration component: Tiles defined as CHIPS in this configuration are instantiated and can then be used in Launchpads. In the browser window, set parameters as follows then hit “Continue in Change Mode”. If the configuration ID is not existing, go on and create it. In the next window, we are going to add our CHIP as a configurable tile in Launchpads. From the Configuration Context table on the left, open node “parameterMultiVal”, select its line and click “New -> values” from the Toolbar. On the right, in the input field, write: X-SAP-UI2-CHIP: <your_chip_name> Where the chip name is the same used in transaction /UI2/CHIP Save and close the browser. 2.2 Try to load it Now that our Tile is deployed and registered as CHIP, let’s try to add it to one of our Launchpads. First of all, I noticed (as explained in this blog) that cleaning up the ICM cache helps in this cases, so if you have proper authorization open SMICM transaction and head to “Go-To -> HTTP Plug-In -> Server Cache -> Invalidate locally” (or globally, depending on your installation). Open transaction /UI2/FLPD_CUST and select one existing catalog. You might need to clean the browser cache, so it loads fresh files from the server. As soon as it loaded, hit the “+” sign on the page to add a new Tile and – if everything worked as expected – you shall be able to see your shiny new Custom Tile: Select the tile and you shall be presented with the Configuration page, in which you can set base parameters (remember the ones stored under the /config/ path of the Tile model). At the moment, even if we add the tile as it is, nothing is shown on our Microchart because we miss a proper oData service. Remember that in our case, the service must return a specific set of properties: - month1 - value1 - color1 - month2 - value2 - color2 - month3 - color3 - value3 Keeping this is mind, go on and create a very simple oData service that returns a value for each one of these properties. Go to SEGW transaction on your Gateway system, create a new service with at least one EntityType and EntitySet. The EntityType must expose properties necessary to be consumed by the Tile (refer to paragraph 1.2.1) plus our custom properties. With the oData service URL ready, configure it in the Tile options and go back to the catalog. At this point, your Launchpad Customization page should show you something similar to this: 3 Use the Tile Using a Username linked to the catalog you have changed, log on to the Fiori Launchpad or execute transaction /UI2/FLP. If you don’t see the new Tile, it could be because it is not added to the personalized catalog yet: click on the Pencil icon on the lower right of the screen and you’ll be presented with the customization page. From here, look for the custom tile and add it by clicking the “+” sign. Our custom tile should now be there, ready to receive the first update from the underlying oData service you created. Here’s the expected result: If you have defined a Semantic Object navigation, you should also be able to click on the Tile and get navigated to the application you have configured. 4 Closure Following this rather simplicistic approach (in which – to be honest – the most of the work has been already done by SAP guys creating the DynamicTile definitions) a complete new set of custom tiles can be created on the OnPremise Launchpad as well. Charts, Images, almost any control that can fit the “content” aggregation of the GenericTile component can be used – just pay attention on how you want to feed it and change the Tile controller accordingly to your needs. I hope that this can bring some help to anyone that needs to deal to a similar situation, and also that new ideas can be presented to the community. A really good trick would be to have a the option to select the kind of Microchart on the Tile, and have it completely dynamic and configurable. If you need a copy of the original coding, drop me a comment and I will place it on a GitHub repository for reference. Update: here is the repository URL on GitHub Enjoy your new Tile and all the best! Roberto. Hi Roberto, Thank you for sharing. Would be great if you start a GitHub repository. Best regards Gregor Hello Gregor, Thanks for your comment. I’ve updated the content with the repository url. Best regards, Roberto. Hi Roberto, thanks for sharing the topic! I prepared all steps you described (finally I used your coude) but I was not able to get the tile running. I see the Tail in my launchpad with a message “Cannot load tile”. In the console I have an Error: Call to success handler failed: sap.ushell.components.tiles.utils.getConfiguration is not a function – TypeError: sap.ushell.components.tiles.utils.getConfiguration is not a function at constructor.onInit (…./customtilechips/DumpTile.controller.js:18:49). Do you have an Idea, what went wrong? Thank you Denis Hi Denis, it seems to be related to the standard library sap.ushell.components.tiles.utils. Can you double check in your DumpTile.controller.js, at the very beginning, if you can find this line: And if the corresponding module can actually be loaded? Search for it using Chrome Developer tools -> Network tab Also, check around line 118: The last parameter (here “customtilechips.Configuration”) must match your “namespace.viewname”. Cheers, R. Hi Roberto, thanks for the response and hints. The error was caused by the action described in ‘1.2.2 Changing the Controller’ utilsRT replacement. In my IDE is unknown but the library seems to be mandatory in the system. I reversed the ‘utilsRT’ replacement and added the both libraries and this solved my issue. Thanks Regards Denis Hello Denis, good to know,glad it worked! This actually means that depending on the release the utilsRT library is indeed needed: as you noticed, my system did not use them,that’s why I suggested the replacement. I will try to investigate more 🙂 Cheers, Roberto. Hi Roberto Pagni, Nice blog Roberto Pagni . I am tying to create chips in my system . I have followed all the step upto 2.1 Registration CHIP. But I am unable to proceed further .Please help me. Thanks, Manoj Hello Manoj, Can you please be a little bit more specific? Which kind of issue you have? Regards, Roberto. Hi Roberto Pagni, I am able to resolve the issue. I have a small doubt . How to open my application or details page when user clicks on the Static CHIPS tile. Thanks, Manoj Hello, You need to setup the Semantic Object navigation just like with any other standard tile. Regards, Roberto. Hi Roberto, Thank you for sharing. I have already try it by myself and I want to share this for problem TwoByOne frame type. By add parameter name “col” as 2 as below you can able to use tile with frame type TwoByOne, but in my system tile would support only 2×1 or 1×1 size. (I’m not sure in other system is possible to use other size) Don’t forget to use your tile frameType to TwoByOne in tile view. . Hi there, That’s great! Thank you for sharing the solution! Greetings, R. Hi Roberto Pagni, Thanks for the great blog.I have followed the blog and created and registered the chip.But i am unable to see the tile template.I can see the default(dynamic,static and news) templates onlyPlease advice me on the steps to resolve the issue. Thanks and Regards, Ramees Hi Roberto Pagni, I have resolved the issue regarding the tile template.But i am not able to open the configuration view when i click on my custom tile.Please find the error log Hope you can provide me a solution Regards Rameez Hello Rameez, it seems an issue with the CHIP definition XML file. It clearly says that the “types” contract is not supported, which should definitely be. Please double check the kpilauncher_dynamic.chip.xml file and the original Configuration-dbg.controller.js file. You can also try to debug the Configuration-dbg.controller.js to understand why and where it fails. Regards, R. Hi, thanks for that turorial! we are using fiori 2.0 and your version didn´t work for me. The problem was, that the view.xml didn´t provided the needed function “getTileControl”. this is how i got it running: my Project: but different than the tutorial above i copied the source from: /sap/bc/ui5_ui5/ui2/ushell/resources/sap/ushell/components/tiles/applauncherdynamic/DynamicTile-dbg.view.js and then changed it to: customtile.CustomRadialChartTile.view.js and afterwards i changed the viewName at CustomRadialChart.chip.xml the following: regards Philipp Hello Roberto, This was an excellent blog post! Thank you so much for sharing. While working through the steps you provided I am experiencing the same issue as ramees rahath. After I load everything into my gateway system per your instruction I am unable to open configuration for the custom tile and I receive errors in the console: “sap.ui2.srvc.Chip({sChipUrl:”/sap/bc/ui5_ui5/sap/Y_FLP_KPITILE1/kpilauncher_dynamic.chip.xml”}): Contract ‘types’ is not supported – sap.ui2.srvc.Chip” Upon further investigation I found that when calling oView.getViewData() in the KpiTile controller it is returning undefined, so oViewData.chip isn’t being retrieved. I spent some time debugging the original from the launchpad customization page and it has no trouble retrieving oViewData.chip. Any suggestions as to why getViewData is coming back undefined?
https://blogs.sap.com/2017/01/28/how-to-create-custom-tile-types-for-onpremise-fiori-launchpad/
CC-MAIN-2017-22
refinedweb
3,581
53.81
I gave up on Boost. Never could get it to build no matter what I did. Thanks anyway, but I'm looking for a pyCXX solution. Stefan Seefeld wrote: > apocalypznow wrote: > >>Hi, >> >>Can anyone post a simple pyCXX example of a callback into python? My > > > I don't know what 'pyCXX' is, so let's assume you mean boost.python. > > >>python function should allow a string passed as a parameter, and returns >>a string, ie: >> >>def somepythonCB(s_in): >> s_out = dosomethingtoinputstringandreturnoutputstring(s_in) >> return s_out > > > The answer is very similar to one I gave in a previous question: > > namespace bpl = boost::python; > bpl::dict global; > bpl::exec_file("your_file.py", global, global); // load the definition > bpl::object func = global["somepythonCP"]; // extract python function > bpl::object retn = func(input); // call it with input string > std::string value = bpl::extract<std::string>(retn); // extract return value > > The magic here is that boost.python already has builtin support for > conversion between std::string and python's str. > Please note that the 'exec_file' function is only in CVS, not in the last > official release. See my previous posts for more on that. > > HTH, > Stefan >
http://mail.python.org/pipermail/cplusplus-sig/2006-November/011294.html
CC-MAIN-2013-20
refinedweb
187
54.83
>> variables and values. [ID:1333] (2/2) in series: Python Terminology - introduction - video tutorial by gasto, added 03/09 Name: [3253] gasto Member:. In this tutorial, I aim to teach absolute beginners the meaning of variables and values, how an expression can output a value in immediate mode of the interpreter without even using the print built-in function. The print built-in function ignores any quotation marks from a string argument. Got any questions? Get answers in the ShowMeDo Learners Google Group. Video statistics: - Video's rank shown in the most popular listing - Video plays: 489 << Even though I already know what values and variables are; I learned another way to access them. thank you thank you Yes, I would be happy to have your gc tutorial add nuance and texture to this simple introduction, and it's fine to lapse into that old way of thinking now and then, just with noobs (newbies) we wanna be clear on the mental models. Long threads about that, on variable names especially, on Python's edu-sig, here's my most recent, referring back to this video: anonymous, you would be impressed if I told you I am working on a garbage collection tutorial that talks about that. idea that values are "stored into" a variable is less the metaphor in Python, which is more about assigning or binding names to objects with the assignment operator (=). The problem with "store into" is then its hard to picture many names for the same object, yet that's easy when you think of a balloon with many strings. If you reffer to : a=1 then a has 1 stored to it until the program exits. Of course a variable name may or may not be visible to the interpreter depending on the namespace, if the namespace is imported to the working file then it is visible. Hello, I just saw this video on variables but I have a rather silly question as I am a newbie. I am working with GIS and with help got a variable to store a substring , which then can print so I know its working. My question is how can you set that variable to a record instead of screen printing it ? Thanks for the video, definitely easy to understand. just the right gradient for someone like me. thx for taking the time to record this. thank for your time. Basic but informative. cheers. Very basic (but useful) information for those with little programming experience. Very basic (but useful) information for those with little programming experience. Hi mystylplx, humans have an experimental approach to learning by excellence. The purpose of these videos is that people start experimenting on the Python interpreter and figure things out by themselves. If I cover every minimal detail of 'value' it'd span 2 hours at least. I think it is clear that a string called 'value' is an example of a string and not the unique way of creating values for strings. Still, you are right that beginners might speculate strange ideas about what's being said. That is when experimentation comes handy. The student thinks " 'value' is a string, so if I write on immediate mode 'sequence of characters', would it still be a string? " So he goes ahead and execute that line of code and realizes it is a string. Again, other doubts might come, as a beginner is prone to suffer from the impatience disorder, and run away from what is being learnt. But patience is handy when dealing with studying new subjects. So, it is very important the study methods a student has. Be it self-taught or formal education student. I might create a screencast out of this. Thanks for the feedback. A little confusing the way you use the string 'value' to illustrate a string value. Maybe you could make it a more clear that the value of a string can be any string... any collection of one or more characters, and that 'value' is only one possible example of the value of a string... if you see what I mean.
http://showmedo.com/videotutorials/video?name=6950010
CC-MAIN-2015-32
refinedweb
685
71.85
When we talk about writing asynchronous JavaScript we often use timer functions or promises as an example. Whereas the majority of asynchronous code written in modern JavaScript web apps is focused on events caused either by a user interacting with the UI (addEventListener) or some native API (IndexedDB, WebSocket, ServiceWorker). With modern front-end frameworks and the way we pass event handlers it is easy to end up with leaky abstraction. When you build your application from multiple small components it is a common good practice to move application state to components which are higher in the tree (parent components). Following this pattern, we have developed the concept of so called "container" components. This technique makes it much easier to provide synced state to multiple children components on different levels in the tree. One of the downsides is that we sometimes have to provide a callback to handle events along with some additional parameters which are supposed to be applied to that callback function. Here is an example: const User = ({ id, name, remove }) => ( <li> {name} <button onClick={() => remove(id)}>Remove</button> </li> ); class App extends Component { state = { users: [{ id: "1", name: "Foo" }, { id: "2", name: "Bar" }], }; remove = (id) => { this.setState(({ users }) => ({ users: users.filter(u => u.id !== id) })); }; render() { return ( <ul> {this.state.users.map(({ id, ...user }) => <User key={id} id={id} {...user} remove={this.remove} />)} </ul> ); } } Although User component is not using users id property for presentation, id is required because remove expects to be called for a specific user. As far as I am concerned this is a leaky abstraction. Moreover, if we decide to change id property to uuid we have to revisit User and correct it as well to preserve consistent naming. This might not be the biggest of your concerns when it comes to "id" property but I hope it makes sense and you can see an imperfection here. The cleaner way to do it would be applying id to remove function before it is passed to the User component. <User key={id} {...user} remove={() => this.remove(id)} /> Unfortunately, this technique has performance implications. On each App render, remove function passed to the User would be a newly created function. Brand new function effectively kills React props check optimizations which are relying on reference equality check and it is a bad practice. There is a third option (and probably a fourth and a fifth but bear with me). We can combine memoization and currying to create partially applied event handlers without adding too much complexity. A lot of smart words but it is simple: import { memoize, curry } from "lodash/fp"; const User = ({ name, remove }) => ( <li> {name} <button onClick={remove}>Remove</button> </li> ); class App extends Component { state = { users: [{ id: "1", name: "Foo" }, { id: "2", name: "Bar" }], }; remove = memoize(curry((id, _ev) => { this.setState(({ users }) => ({ users: users.filter(u => u.id !== id) })); })); render() { return ( <ul> {this.state.users.map(({ id, ...user }) => <User key={id} {...user} remove={this.remove(id)} />)} </ul> ); } } This is a sweet spot. Each user has its own remove function with id already applied. We don't have to pass, irrelevant for presentation, property to the User and thanks to memoization we did not penalize performance. Each time remove is called with a given id the same function is returned. this.remove("1") === this.remove("1") // => true Decorator Are you into decorators? If you are not up for importing memoize and curry functions, then wrapping your handlers in each container you may would like to go with a property decorator: // TODO: Reconsider this name, maybe something with "ninja" function rockstarEventHandler(target, key, descriptor) { return { get() { const memoized = memoize(curry(descriptor.value.bind(this))); Object.defineProperty(this, key, { configurable: true, writable: true, enumerable: false, value: memoized, }); return memoized; }, }; } This implementation does not cover all edge cases but it sells the idea. In production you probably want to combine two decorators and leave binding execution context to autobind decorator from jayphelps/core-decorators. Usage: @rockstarEventHandler remove(id, _ev) { this.setState(({ users }) => ({ users: users.filter(u => u.id !== id) })); } Conclusion I cannot say whether that is an optimal solution for you and your team but I am more than happy with this approach writing simpler tests and effectively less glue code. The downside of this approach is that you will probably have to explain few coworkers the rationale behind all that momoize(curry(() => {})) thing. Throwing a sentence like "it's just memoized and partially applied function" probably will not be enough. The bright side is you can always point them here! Photo by Marc-Olivier Jodoin on Unsplash.
https://michalzalecki.com/react-memoized-event-handlers/
CC-MAIN-2019-26
refinedweb
761
54.12
Products and Services Downloads Store Support Education Partners About Oracle Technology Network Name: dbT83986 Date: 05/13/99 When trying to load an ImageIcon from an application jar file that resides on an unmapped network directory, the image will not load. If you map the path to a drive instead of using the UNC path, the image is able to be loaded. Here is an example. I've included several different ways of trying to display the image. import javax.swing.*; import java.awt.*; import java.io.*; import javax.swing.border.*; public class BugDemo extends JFrame { public BugDemo() { } public Image getImageFromJAR(String fileName) { if( fileName == null ) return null; Image image = null; byte[] tn = null; Toolkit toolkit = Toolkit.getDefaultToolkit(); InputStream in = getClass().getResourceAsStream(fileName); try{ int length = in.available(); tn = new byte[length]; in.read( tn); image = toolkit.createImage( tn); } catch(Exception exc){ System.out.println( exc +" getting resource " +fileName ); return null; } return image; } void method1() { JLabel label = new JLabel(new ImageIcon("images/Splash.GIF")); label.setBorder(new BevelBorder(BevelBorder.LOWERED)); JOptionPane.showMessageDialog(this, label, "About", JOptionPane.INFORMATION_MESSAGE); } void method2() { ImageIcon image = new ImageIcon(ClassLoader.getSystemResource("images/Splash.GIF")); JLabel label = new JLabel(image); label.setBorder(new BevelBorder(BevelBorder.LOWERED)); JOptionPane.showMessageDialog(this, label, "About2", JOptionPane.INFORMATION_MESSAGE); } void method3() { Image image = getImageFromJAR("images/Splash.GIF"); JLabel label = new JLabel(new ImageIcon(image)); label.setBorder(new BevelBorder(BevelBorder.LOWERED)); JOptionPane.showMessageDialog(this, label, "About2", JOptionPane.INFORMATION_MESSAGE); } public static void main(String[] args) { BugDemo frame = new BugDemo(); frame.method1(); frame.method2(); frame.method3(); System.exit(0); } } method1 and method two do not return an image, while method three results in a null pointer exception, when the application is run from an unmapped directory. (Review ID: 63144) ====================================================================== CONVERTED DATA BugTraq+ Release Management Values COMMIT TO FIX: mantis mantis-b02 FIXED IN: mantis mantis-b02 INTEGRATED IN: mantis mantis-b02 EVALUATION I am unable to further evaluate this bug without additional information. Using the provided test case, I have been able to reproduce the described problem in jdk1.2.2; however, I have been unable to reproduce the problem in any jdk since that time. The last version tested was jdk1.4.1-rc-b16. Can this problem be reproduced using the latest jdk under development? Does this test still reproduce the problem and fail as described? On what type of system is the network directory (e.g. Solaris, winNT, etc.)? Is the network directory viewable via the UNC path outside of the java application? Please provide the complete command-line used to launch the application/test. Windows may cache network authentication to a network directory which has been previously mounted. To be absolutely certain that you do not retain authorization to that directory after it has been unmounted, you should log off and re-log in. -- iag@sfbay 2002-07-14 I've reproduced the problem. The customer has provided another test and additional details which have allowed me to easily display the error. FILE: URLTest.java public class URLTest { public static void main(String[] args) { try { java.net.URL[] u = new java.net.URL[1]; u[0] = new java.net.URL("\\\\doze\\iris\\i.jar"); System.out.println(u[0]); java.io.InputStream st1 = u[0].openStream(); System.out.println("opened stream with " + st1.available() + " bytes available"); String path = u[0].getFile().replace('/', java.io.File.separatorChar); java.io.File file = new java.io.File(path); if (!file.exists()) { throw new java.io.FileNotFoundException(path); } java.net.URLClassLoader scl = new java.net.URLClassLoader(u); java.io.InputStream st = scl.getResourceAsStream("images/Splash.GIF"); System.out.println("opened stream with " + st.available() + " bytes available"); } catch(Exception e){ e.printStackTrace(); } } } URLTest.class resides on a machine running windows. i.jar contains a single file images/SPLASH.gif. The .jar file is on another windows machine and is accessible via the UNC path "\\doze\iris\i.jar". java.lang.NullPointerException at URLTest.main(URLTest.java:29) The NPE occurs because the call to ClassLoader.getResourceAsStream has masked an IOException and returned null. If we modify the JDK, we get the following stack trace: C:\tmp\iris>d:/tmp/iris/tl/build/windows-i586/bin/java URLTest opened stream with 8225 bytes available java.util.zip.ZipException: The system cannot find the path specified at java.util.zip.ZipFile.open(Native Method) at java.util.zip.ZipFile.<init>(ZipFile.java:112) at java.util.jar.JarFile.<init>(JarFile.java:117) at java.util.jar.JarFile.<init>(JarFile.java:55) at sun.net..<init>(URLJarFile.java:55) at sun.net. at sun.net. at sun.net. at sun.net. at java.net.URL.openStream(URL.java:961) at java.lang.ClassLoader.getResourceAsStream(ClassLoader.java:943) at URLTest.main(URLTest.java:24) java.lang.NullPointerException at URLTest.main(URLTest.java:25) ZipFile.open is being called with "\doze\iris\i.jar" which is interpreted as an absolute path on the local machine's current drive, not the expected UNC path. This is derived from the url "file:/doze/iris/i.jar" which represents an absolute path on the local machine (see rfc 2398). It appears that the the location of the jar file is stored in a private field in java.net.JarURLConnection.jarFileURL. The jarFileURL field is set by the constructor. The JarURLConnnection constructor's single parameter is a jar url which represents the resource we wish to locate. In our case the url is: "jar:" The constructor calls the method parseSpecs which takes this url, determines the location of the jar file, and stores the location in the jarFileURL field using the following code fragment: private void parseSpecs(URL url) throws MalformedURLException { String spec = url.getFile(); int separator = spec.indexOf('!'); /* * REMIND: we don't handle nested JAR URLs */ if (separator == -1) { throw new MalformedURLException("no ! found in url spec:" + spec); } jarFileURL = new URL(spec.substring(0, separator++)); The single argument URL(String) constructor is called with "" thus, jarFileURL is set to "file:/doze/iris/i.jar". As mentioned above, a technically valid url, but certainly not the expected one. Note that it is actually surprising that a url of "" is recognized as a UNC path. This behaviour was introduced in jdk1.3 as a result of the fix for 4180841. There are a couple of different solutions we should consider. The first question is why does URL("\\\\doze\\iris\\i.jar") return ""? Though I have been unable to find conclusive evidence to support this, ###@###.### believes that the leading backslashes should probably have never been removed. If this is the case, then this would be the ideal fix. Unfortunately, a change of this magnitude would have a huge impact on the stability of UNC support. It is highly unlikely that all other parts of the JDK will properly identify this new format as a UNC path. The most obvious solution is to take advantage of the existing fix for 4180841. This may be accomplished by using a different URL constructor in JarURLConnection.parseSpecs. If we instead use URL(String protocol, String host, String file), the test passes. Though this does solve the reported problem, there are a couple of issues. First, this fix will apply to all platforms, not simply those which support UNC paths. This problem is easily solved. Next, we may be unable to conclusively distinguish when a url path should be interpreted as a UNC path verses an absolute path on the current disk. One possible solution for this problem is to attempt to find the jar file locally. If that fails, we could attempt to interpret the url as a UNC path. The correct path interpretation order is unclear. Though more conservative than the previous suggestion, I believe that this fix is also potentially unstable. I strongly recommend that this bug _not_ be addressed in hopper. A more appropriate release would be mantis, when we can further investigate other possible solutions and their potential impact. -- iag@sfbay 2002-07-17 ------ It's important to note that URLs that have UNC paths in the URL path are working as expected. For example the following work as expected with 1.4.1 and previous releases :- file://\\\\server\\dir\\images.jar Both cases correspond to \\server\dir\images.jar. URLConnection can open a connection to this file, and URLClassLoader can obtain a resource from the jar file. We do have a problem if the NetBIOS peer name is put in the host component, eg: - URLClassPath will treat this as \dir\images.jar as the underlying JarLoader in sun.misc.URLClassPath is not UNC aware. As it happens URL's file protocol handler is broken in this case aswell as the UNC code broken in 1.4.0. This was noticed too later for 1.4.1 but will be fixed in mantis (1.4.2) - see bug 4671171. Going further, the URL that the customer is using is similiar to the following :-\\\\doze\\iris\\i.jar Although technically not a valid URI (as the \ characters should be escaped) it is parsed by URL and with escaping removed we are working with :-\\doze\iris\i.jar Essenitally this means we are trying to open the file \\\doze\iris\i.jar which isn't a valid UNC path. As regards why java.io.File handles this and URLClassLoader's getResourcesAsStream doesn't - this stems from the normalization done by java.io.File. It normalizes the path to \\doze\iris\i.jar and thus opens the file. ###@###.### 2002-07-24 We will fix this in mantis (in conjunction with 4671171) so that UNCs are handled in file: URLs where the UNC is entirely contained in the URL path or alternatively, the standard form where the hostname from the UNC goes into the authority field of the URL. However, we will not support urls like\\doze\iris\i.jar because they evaluate to invalid UNCs (in this case \\\doze\iris\i.jar). The nearest correct equivalent for this url would be file://\\doze\iris\i.jar. Strictly, the \ character should not be used unescaped in URLs (and is not supported by the URI class). However, all of the browsers support them on Windows and we should also continue to do so. ###@###.### 2002-07-29 WORK AROUND Name: dbT83986 Date: 05/13/99 extracting the images from the jar or mapping the directory to a drive letter seems to get around the problem, but complicates distribution of applications ====================================================================== Specify the UNC path with foward slashes in the form: If we modify the test in the evalation, the test will properly locate the resource. e.g. change: u[0] = new java.net.URL("\\\\doze\\iris\\i.jar"); to: u[0] = new java.net.URL(""); opened stream with 7833 bytes available Note that u[0] = new java.net.URL("\\\\doze\\iris\\i.jar"); also works. -- iag@sfbay 2002-07-17 SUGGESTED FIX This fix is not ideal. See the evaluation section for more details. *** /tmp/geta15064 Wed Jul 17 17:29:07 2002 --- JarURLConnection.java Wed Jul 17 17:29:06 2002 *************** *** 154,160 **** throw new MalformedURLException("no ! found in url spec:" + spec); } ! jarFileURL = new URL(spec.substring(0, separator++)); entryName = null; /* if ! is the last letter of the innerURL, entryName is null */ --- 154,163 ---- throw new MalformedURLException("no ! found in url spec:" + spec); } ! int csep = spec.indexOf(':'); ! jarFileURL = new URL(spec.substring(0, csep), ! url.getHost(), ! spec.substring(csep + 1, separator++)); entryName = null; /* if ! is the last letter of the innerURL, entryName is null */ -- iag@sfbay 2002-07-17
http://bugs.java.com/bugdatabase/view_bug.do?bug_id=4238086
CC-MAIN-2017-04
refinedweb
1,906
51.95
Today was a regular day. All thanks to closures I was only able to learn just one new concept. Higher-Order Components in React Those are basically nothing but Higher-order functions. So Higher-Order Components basically takes one component as input do something with it and return a new component and components are basically functions returning JSX markup (type of return value). But get this, that is a new component after all even though it inherits the logic of original component. const EnhancedComponent = higherOrderComponent(ComponentToBeWrapped) And here is the code that shows beautiful use of closures. const Speakers = ({ speakers }) => { return ( <div> {speakers.map(({ imageSrc, name }) => { return ( <img src={`/images/${imageSrc}.png`} alt={name} key={imageSrc}></img> ); })} </div> ); }; function withData(maxSpeakersToShow) { return function(Component) { const speakers = [ { imageSrc: 'speaker-component-1124', name: 'Douglas Crockford' }, { imageSrc: 'speaker-component-1530', name: 'Tamara Baker' }, { imageSrc: 'speaker-component-10803', name: 'Eugene Chuvyrov' } ]; return function() { // This is the place where magic happens // How can this function access Components if its parent function have already executed? return <Component speakers={speakers}></Component>; }; }; } export default withData(Speakers); /* Speakers are nothing but just the people who are supposed to give a presentation on the given day, like a regular TED talk. */ And my beautiful friends, I present Mr. Closure in front of you. Returned child function can access environment variables of its parent and hence it can get the job done. Little update from the comments. My view ? Seperation of concerns demand seperation of UI logic (logic that makes UI visible as it is) and the application logic. So we can use higher order components to do that. Pass in our component with UI logic and let the HOC add data to it as in the example. Hope this might have helped in any way. I'd love to read your point of view about HOC. Thanks for reading.😀😀 Have a beautiful day.🌼 Discussion (2) Hi, I think that your code example is good but not your explanation.. Whatever, thank you for your posts. I find it really funny and useful. 😄 Thanks . I guess I was so busy admiring closures that I forgot about the main part. I loved your defination. Glad you liked it.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/icecoffee/6-of-100daysofcode-47ge
CC-MAIN-2021-21
refinedweb
365
50.12
The GIMP is designed to provide an intuitive graphical interface to a variety of image editing operations. Here is a list of the GIMP's major features:. To enable I18N extensions, execute "gimp.setfont" before you use GIMP. WWW: NOTE: FreshPorts displays only information on required and default dependencies. Optional dependencies are not covered. No installation instructions: this port has been deleted. The package name of this deleted port was: gimp gimp No options to configure Number of commits found: 66 Retire print/gimp-print, graphics/gimp1, and graphics/gimp-pmosaic - Set for removal in one month -. Add USE_GETTEXT to appease portlint. Remove USE_REINPLACE for categories starting with a G Conversion to a single libtool environment. Approved by: portmgr (kris) Add CONFLICTS for gimpshop. - Fixed MASTER_SITES to fix fetch problem on mpeg_lib. - Removed unfetchable sites to download gimp tarball and replaced by new site. PR: 90753 - Define NO_LATEST_LINK in favour of graphics/gimp Spotted by: krion - Mark DEPRECATED and point users to 2.0 release in graphics/gimp port I haven't set an expiration date because there are still some ports that require old GIMP, most notably xsane. After GIMP 2.0 release both versions can not be longer installed at the same) Correct some obsolete MASTER_SITES PR: 57557 Submitted by: Mark Linimon <linimon@lonesome.com> Update to 1.2.5. Make the print plug-in conditional on whether or not WITHOUT_PRINT is specified. PR: 53071 Update to 1.2.4. Remove USE_GNOMENG. Clear moonlight beckons. Requiem mors pacem pkg-comment, And be calm ports tree. E Nomini Patri, E Fili, E Spiritu Sancti. Roll mpeg-lib into the two gimp ports. mpeg-lib is now abandoned ware, and will be removed from the ports tree soon. mpeg-lib will now be built solely for the purpose of gimp, and no libmpeg.* will be installed on the system. This will avoid a namespace conflict with KDE which also installs a libmpeg.so. These two libraries are incompatible. Submitted by: alane and myself Not reviewed by: alane Inspired by: mail/evolution Use USE_GNOMENG. - Use USE_LIBTOOL properly; - add dozen missed files into pkg-plist; - bump PORTREVISION. pass maintainerbit to gnome@ Approved by: sobomax Fix pkg-plist set PORTEPOCH back. PR: Christopher Masto <chris@netmonger.net> Upgrade to 1.2.3. Fully-qualify WWW: URL add missing entries to pkg-plist. Bump png major Resurrect PORTEPOCH which has been removed in the last commit by mistake. Upgrade to 1.2.2 Undo 1.2.2 upgrade. Update to 1.2.2. Correct pkg-comment & pkg_desc. Add MASTER_SITE_RINGSERVER to MASTER_SITES. Gimp is 1.2 now, not 1.1. Dont strip script when use --install-admin-bin, It will broken p5-Gimp installation. Upgrade to 1.2.1. Correct plist. Upgrade to 1.2.0, add a patch for print plug-ins (by mistral@imasy.or.jp (Yoshihiko SARUMARU). Upgrade to 1.1.30. Upgrade to 1.1.29, split gimp-perl to p5-Gimp (coming soon). Remove helpbrowser again -- it appears to be optional to gnome (look one line above). Upgrade to 1.1.27. Change PKGDIR from pkg/ to . Also fix places where ${PKGDIR} is spelled out (many of which are ${PKGDIR}/MESSAGE -> ${PKGMESSAGE} type fixes that shouldn't have been necessary) and the string "/pkg/" appear. Convert category graphics to new layout. Implement WANT_GNOME. Support WITH_PERL properly on -current. Update to 1.1.26. Update to 1.1.25. . Update 1.1.24.. Updates for new shared library versions in GNOME 1.2 Correct PLIST.perl and update Makefile to reflect recent WITHOUT_PERL --> WITH_PERL thansition. Update to 1.1.23. Also remove mandatory libintl dependency (should be inherited from gtk++). Added missed catalog files. Servers and bandwidth provided byNew York Internet, SuperNews, and RootBSD 10 vulnerabilities affecting 30 ports have been reported in the past 14 days * - modified, not new All vulnerabilities
http://www.freshports.org/graphics/gimp1/
CC-MAIN-2015-18
refinedweb
641
62.75
OpenGL Discussion and Help Forums > OpenGL Developers Forum > OpenGL coding: advanced > import .x file PDA View Full Version : import .x file appdeveloper 10-15-2009, 06:43 AM Is it possible to import a 3d studio max file, probably exported as .x (like in some game engines) to be rendered in OpenGL? How? My thanks in advanced Jan 10-15-2009, 07:00 AM Of course! By writing an importer. appdeveloper 10-15-2009, 07:15 AM Can you give me any reference or link so that i can read as an example? Jan 10-15-2009, 07:45 AM Sure. Having coded an x-importer, i found this page most informative: First of all, google for the "x file format". There are several articles that explain how the file format works in general. When you then want to know the details, to implement it, that msdn-page will be very valuable. The format is actually quite straight-forward to understand, but implementing an importer is a bit of work. Jan. DWilson 10-15-2009, 08:59 AM This article should help Loading and displaying .X files without DirectX ().. Its a bit old, but it still renders X files in OpenGL. It basically goes into the file format in more detail, and example code to load a model. appdeveloper 10-16-2009, 04:21 AM Thanks for all your answers. I see this is not an easy task. Is there any other way, maybe a most common way to add 3d Studio Max or Maya objects, to OpenGL and control them? Jan 10-16-2009, 05:45 AM Nope. OpenGL does not provide any built-in mechanisms. It allows you to render stuff. Where you get the data from, and how that is stored etc. is not a concern of OpenGL. So no matter what format you are trying to use, be it for models, textures, animations, whatever, you always need to write your own importers (or use other peoples importers, if there are any). Part of the reason is, that there is no "standard" way to render your stuff. Someone might use VBOs, someone else uses display lists, the layout of the data can change, depending on your needs etc., so having ONE importer is usually not possible, because everyone has different needs how to render things. Jan. DarkShadow44 10-17-2009, 02:54 PM You could also write your own exporter (3d modeller) and importer (openGL)... I've my own exporter for blender (with python), that exports ertices,UVs and animations... I think thats the "easiest" ^^ Alfonse Reinheart 10-17-2009, 03:25 PM I think thats the "easiest" Technically, what would be "easiest" is if Blender had a decent exporter to Collada, which has a specific and well-documented format. Then you can write an offline tool to convert this into whatever format you want. Powered by vBulletin® Version 4.2.5 Copyright © 2018 vBulletin Solutions Inc. All rights reserved.
https://www.opengl.org/discussion_boards/archive/index.php/t-168905.html
CC-MAIN-2018-30
refinedweb
493
75.1
0 Okay I don't get this problem at all: import random def chance(number): number = input for each in xrange(number): n = random.randrange(100)+1 return n def extreme(x): z = x + 5 y = z - 3 return z, y #main repeat = 0 while repeat != 2: x = input("Enter a value for x: ") z, y = extreme(x) print x," + 5 = ",z print z," - 2 = ",y n = chance(number) input = raw_input("How many random numbers do you want?: ") print n repeat=input("Do you want to go again? Yes [1] or no [1]: ") raw_input("Press Enter to exit!") This is copied from the homework website my teacher made (as this is an online Python Course): Make a course dev worth at least 20 marks which includes: a function with two positional parameters that returns two values (both integers or real numbers) a function which generates random numbers and returns them (any amonnt you want) a function with one parameter, which it gets from a random number generator, that returns two values (strings are returned) asks if you want to do it again (y or n) I have no idea how to do the third option, and I had it working before but I must've changed something and I don't know what.
https://www.daniweb.com/programming/software-development/threads/179539/loops-random-number-and-dictionaries
CC-MAIN-2017-34
refinedweb
213
55.1
In the last lesson we derived the nearest neighbor formula. Nearest neighbors is a powerful algorithm because it allows us to predict other attributes about people using their proximity data. For example, those who live in a particular neighborhood may be more likely to be a certain age or have similar interests. Using proximity, we might even be able to determine whether their likelihood to purchase a product approximates that of their neighbors. In this lesson we'll see how the nearest neighbors algorithm allows us to make predictions with data. We will also look at the workflow for machine learning in general and see some of the common struggles that we experience when applying a machine learning algorithm. Once again, here are the locations of Bob and our customers. This time let's add a fourth column for the number of purchases per year. | Name | Avenue #| Block # | No. Purchases | |------|------| ------ | | Bob | 4 | 8 | 52 | Suzie | 1 | 11 | 70 | Fred | 5 | 8 | 60 | Edgar | 6 | 13 | 20 | Steven | 3 | 6 | 32 | Natalie| 5 | 4 | 45 We represent these individuals along with their yearly purchases in Python with the following: neighbors = [{'name': 'Bob', 'x': 4, 'y': 8, 'purchases': 52}, {'name': 'Suzie', 'x': 1, 'y': 11, 'purchases': 70}, {'name': 'Fred', 'x': 5, 'y': 8, 'purchases': 60}, {'name': 'Edgar', 'x': 6, 'y': 13, 'purchases': 20}, {'name': 'Steven', 'x': 3, 'y': 6, 'purchases': 32}, {'name': 'Natalie', 'x': 5, 'y': 4, 'purchases': 45}] bob = neighbors[0] suzie = neighbors[1] import plotly plotly.offline.init_notebook_mode(connected=True) trace0 = dict(x=list(map(lambda neighbor: neighbor['x'],neighbors)), y=list(map(lambda neighbor: neighbor['y'],neighbors)), text=list(map(lambda neighbor: neighbor['name'] + ': ' + str(neighbor['purchases']),neighbors)), mode='markers') plotly.offline.iplot(dict(data=[trace0], layout={'xaxis': {'dtick': 1}, 'yaxis': {'dtick': 1}})) Just by looking at this data, aside from Suzie, it seems that the proximity of customers provides a good indication of the number of cupcake purchases per customer. Assume that a new customer just purchased his first cupcake, and we want to develop some expectation for how many cupcakes he may purchase from us in the following year. His location may help us determine the ingredients we need to buy to satisfy his demand. Let's see what the nearest neighbors algorithm tells us. Here is the nearest neighbors algorithm once again. The code below reflects the following steps: map) Ok once again, here is the code. def distance_all(selected_individual, neighbors): remaining_neighbors = filter(lambda neighbor: neighbor != selected_individual, neighbors) return list(map(lambda neighbor: distance_between_neighbors(selected_individual, neighbor), remaining_neighbors)) def nearest_neighbors(selected_individual, neighbors, number = None): number = number or len(neighbors) neighbor_distances = distance_all(selected_individual, neighbors) sorted_neighbors = sorted(neighbor_distances, key=lambda neighbor: neighbor['distance']) return sorted_neighbors[:number] bob = neighbors[0] nearest_neighbor_to_bob = nearest_neighbors(bob, neighbors, 1) nearest_neighbor_to_bob We try our nearest_neighbors function on a known piece of data, bob. When we ask our function to return only the closest neighbor, it returns Fred and tells us his number of purchases. Perhaps we can expect Bob's number of purchases to be similar to Fred's. We also can apply the function to a customer at new location to predict this customer's number of purchases. nearest_neighbor_to_new = nearest_neighbors({'x': 4, 'y': 3}, neighbors, 1) nearest_neighbor_to_new However, simply choosing the nearest neighbor seems like an arbitrary way to estimate number of purchases. Our estimate is determined by just one individual's purchases. We ought to expand the number of neighbors and take the average of their purchases to produce a better estimate for purchases by someone at this new location. nearest_three_neighbors = nearest_neighbors({'x': 4, 'y': 3}, neighbors, 3) nearest_three_neighbors purchases = list(map(lambda neighbor: neighbor['purchases'],nearest_three_neighbors)) average = sum(purchases)/len(purchases) average # 43.0 In the above section, we use the nearest neighbors formula to make a prediction. It's telling us that someone who lives on 4th street and 3rd Avenue is expected to purchase 45 cupcakes. This approach is highly flawed since our algorithm's predictions change dramatically depending upon the number of neighbors we include in our formula. The number of neighbors that we choose in the nearest neighbors algorithm is represented by k. Choosing the correct number of neighbors to consider touches upon a number of themes in data science. We'll introduce a few of these issues here, so we are aware of them as we visit other machine learning problems. Underfitting occurs when our formula does not pick up on the relevant signals from the data. For example, if the number of neighbors we have is too large, our algorithm would improperly predict the purchases of our known customers, as it would fail to respond to differences in location. How do we determine the correct number for k, the number of neighbors to consider? One way is to see how well our algorithm predicts against our existing data, then make the necessary changes. For instance, when we look at Bob's closest neighbor by setting k = 1, the nearest neighbor algorithm expects Bob to make 60 purchases. We already know that Bob actually purchased 52 cupcakes, so our formula is off by 8. This number, our actual value minus the expected value, is called the error. We can optimize the algorithm by adding all of the errors across all of the neighbors and selecting that k which minimizes this aggregate error for all of our data. This approach of looking at our existing dataset to optimize for some metric, like the lowest error, is called training. In this example, we train our algorithm by choosing numbers of k such that our algorithm optimizes for predicting the number of purchases in our dataset. However, when training our algorithm to match our data, overfitting could become a problem. Overfitting occurs when we overgeneralize from the data. If we are served a bad meal at a chain restaurant, we could improperly conclude that all meals at the chain are bad. The same thing can happen with our algorithm. Our algorithm can be optimized for and perform well with our existing data, but not do well with new data. Imagine that we have one hundred cupcake customers and choosing a k of 2 best minimizes the error in predicting the number of purchases. We could find later that, as we get new customers, our model does not predict their purchases. The algorithm could pick up on things particular to our existing data set, but fails to generalize to new data. To see whether the algorithm fits new data, we should test it with new data. Data scientists cannot waste time waiting for new data to arrive, so they split their data in two: roughly 80 percent of the data for testing and 20 percent for training. The training dataset is used for tweaking the algorithm, as we just saw, so that it minimizes the error or some other metric. Once the algorithm is optimized, they study how well their algorithm performs on something it is not molded to, called test data. If the algorithm performs well on this test data, it is ready for use and can make new predictions. So these four concepts are all related. Underfitting occurs when our algorithm is not responsive enough to our data, and therefore we can optimize our algorithm to better predict our existing data. Changing our algorithm so it responds to our data is called training. Overfitting is the risk of training the algorithm to an existing data set to the extent that it picks up on the quirks of the data and fails to generalize to new data. To prevent against overfitting, data scientists set aside a portion of the data for testing to determine if the algorithm properly can predict on this portion of the data. In this lesson, we reviewed how to collect and explore data by implementing the Pythagorean Theorem and the sorting method to build our nearest neighbors algorithm. We then learned how we could train the algorithm so that it can produce predictions about incoming data. As you can see, there is a very structured approach, and a lot of thought that goes into simply choosing the correct k size. At this point, we need not be so formal when choosing our k value. We'll learn in the next section that by choosing a correct k, we still can derive a nearest neighbors algorithm that is fairly predictive of our data.
https://learn.co/lessons/applying-nearest-neighbors
CC-MAIN-2019-09
refinedweb
1,396
50.77
Paul Prescod writes: >Without any xmllib-specific optimization, pyexpat runs almost as fast as >sgmlop: >raw sgmlop: 13222 items; 0.426 seconds; 1281.29 kbytes per second >fast xmllib: 13222 items; 1.445 seconds; 378.03 kbytes per second >slow xmllib: 13222 items; 6.651 seconds; 82.11 kbytes per second >pyexpat: 13210 items; 1.527 seconds; 357.68 kbytes per second >I can think of several optimizations that could speed it up quite a bit. 21K/sec difference, or around 6% slower; very good. Let's discuss these optimizations at IPC8; I'd like to get a version of this into the CVS tree ASAP. >Also if you compare it to the xmllib in the standard distribution, we >are talking night and day so if we bundle expat we're only improving >things for them. Note that the xmllib in 1.5.2 and xml.parsers.xmllib are different; namespace support has been added to the 1.5.2 version. This is a divergence that's needed fixing for a while, and now seems like a good opportunity.. Is Expat becoming a fairly common component of Linux and *BSD distributions? I still dislike the idea of adding Expat to the Python distribution, because of possible collisions with updated versions of Expat. -- A.M. Kuchling And at times the fact of her absence will hit you like a blow to the chest, and you will weep. But this will happen less and less as time goes on. -- From SANDMAN: "The Song of Orpheus"
https://mail.python.org/pipermail/xml-sig/2000-January/001843.html
CC-MAIN-2017-43
refinedweb
253
68.77
How To Write Your First Airflow Pipeline SeattleDataGuy ・6 min read This tutorial discusses the basic steps it takes to create your first workflow using Apache Airflow. Before getting started, you need to set up an Airflow environment to be able to follow along with each of the steps discussed in this article. If you haven't already done that, we find this article to be one of our personal favorites. Why Airflow? You might be asking why use Airflow anyway? Airflow helps solve a lot of issues by automating workflows and managing boring and redundant manual tasks. By definition, Apache Airflow is a platform to programmatically author, schedule, and monitor workflows, also known as DAGs (see Github). You can use Airflow to write ETLs, machine learning pipelines, and general job scheduling (e.g., Cron), including database backups and scheduling of code/config deployment. We discussed some of the benefits of using Airflow in our comparison of Airflow and Luigi. Understanding Airflow Pipelines An Airflow pipeline is essentially a set of parameters written in Python that define an Airflow Directed Acyclic Graph (DAG) object. Various tasks within a workflow form a graph, which is Directed because the tasks are ordered. To avoid getting stuck in an infinite loop, this graph does not have any cycles, hence Acyclic.. Having a clear structure for how those tasks are run and what the order is is important. With that basic explanation out of the way, let's create your first DAG. If you followed the link above for setting up Airflow, then you should have set up a directory that points the AIRFLOW_HOME variable to a folder. By default, this should be a folder called airflow.In that folder, you will need to create a DAGs folder. You want to create your first DAG in the DAGs folder, as below. airflow # airflow root directory. ├── dags # the dag root folder │ ├── first_dag.py # where you put your first task Set default_args Breaking this down, we will need to set up a Python dictionary containing all the arguments applied to all the tasks in your workflow. If you take a look in the code below, there are some basic arguments, including owner (basically just the name of the DAG owner), and start_date of the task (determines the execution day of the first DAG task instant) Airflow is built to handle both incremental and historical runs. Sometimes you just don't want to schedule the workflow and just run the task for today. You may also want to start running tasks from a specific day in the past (e.g., one day ago,) which is what is set up in the first code snippet below.), } In this case, the start_date is one day ago. Your first DAG will run yesterday's data, then any day after that. Here are some other key parameters. end_datein the code will determine the last execution date. Specifying an end date limits Airflow from going beyond the date. If you don't put in this end date, then Airflow will just keep running forever. depends_on_pastis a Boolean value. If you set it to true, the current running test instance will rely on the previous task's status. For example, suppose you set this argument to true, in this case, a daily workflow. If yesterday's task run failed, then a two-day task will not be triggered because it depends on the status of the previous date. email on failureis used to define whether you want to receive the notification if a failure happens. email on retryis used to define whether you want to receive an email every time a retry happens. retriesdictates the number of times Airflow will attempt to retry a failed task retry-delayis the duration between consecutive retries. In the example, Airflow will retry once every five minutes. A quality workflow should be able to alert/report on failures, and this is one of the key things we aim to achieve in this step. Airflow is specially designed to simplify coding in this area. This is where emailing on failures can be helpful. Configure DAG Schedule This step is about instantiating a DAG by giving it a name and passing in the default argument to your DAG here: default_args=default_args. Then set the schedule interval to specify how often DAG should be triggered and executed. In this case, it is just once per day. Below is one way you can set up your DAG. dag = DAG( 'basic_dag_1', default_args=default_args, schedule_interval=timedelta(days=1), ) If you want to run your schedule daily, then use the following code parameters: schedule_interval='@daily'. Or you can use cron instead, like this: schedule_interval='0 0 * * *'. Lay Out All the Tasks In the example below, we have three tasks using the PythonOperator, DummyOperator, and BashOperator. from airflow import DAG from airflow.operators.dummy_operator import DummyOperator from airflow.operators.python_operator import PythonOperator from datetime import datetime def my_func(): print('Hello from my_func') bashtask = BashOperator( task_id='print_date', bash_command='date', dag=dag, ) dummy_task = DummyOperator(task_id='dummy_task', retries=3) python_task = PythonOperator(task_id='python_task', python_callable=my_func) These tasks are all pretty straightforward. What you will notice is that each has a different function and requires different parameters. The DummyOperator is just a blank operator you can use to create a step that doesn't really do anything except signify the pipeline is done. The PythonOperator allows you to call a Python function and even pass it parameters. The BashOperator allows you to call bash commands. Below we will just be writing the tasks. This will not operate until you add all the pieces together. Using these basic tasks, you can now start to define dependencies, the orders in which the tasks should be executed. Define Dependencies There are two ways to define the dependencies among the tasks. The first way is to use set_downstream and set_upstream. In this case, you can use set_upstream to make the python_task depend on the BASH task or do the same with the downstream version. # This means that will depend on bashtask # running successfully to run. bashtask.set_upstream(python_task) # similar to above where dummy_task will depend on bashtask dummy_task.set_downstream(bashtask) Using this basic set up, if the BASH task is successful, then the Python task will run. Similarly, the dummy_task is dependent on the BASH task finishing. The second way you can define a dependency is by using the bit shift operator. For those unfamiliar with the bit shift operator, it looks like >> or <<. For example, if you would like to reference the Python task being dependent on the BASH task, you could write it as bashtask >> python_task. Now what if you have a few tasks dependent on one? Then you can put them as a list. In this case, the Python task and dummy_task both depend on the BASH task and are executed in parallel following the completion of the BASH task. You can use either the set_downstream method or the bit shift operator. bashtask.set_downstream([python_task, dummy_task]) Your First Airflow Pipeline Now that we have gone over each of the different pieces, we can put it all together. Below is your first basic Airflow pipeline. from airflow import DAG from airflow.operators.bash_operator import BashOperator from airflow.operators.dummy_operator import DummyOperator from airflow.operators.python_operator import PythonOperator from datetime import datetime, timedelta), } dag = DAG( 'basic_dag_1', default_args=default_args, schedule_interval=timedelta(days=1), ) def my_func(): print('Hello from my_func') bashtask = BashOperator( task_id='print_date', bash_command='date', dag=dag, ) dummy_task = DummyOperator(task_id='dummy_task', retries=3) python_task = PythonOperator(task_id='python_task', python_callable=my_func) dummy_task.set_downstream(bashtask) python_task.set_downstream(bashtask) Adding the DAG Airflow Scheduler Assuming you already have initialized your Airflow database, then you can use the webserver to add in your new DAG. Using the following commands, you can add in your pipeline. airflow webserver airflow scheduler The end result will appear on your Airflow dashboard as below. To Code, or Not to Code on Vacation: That is My Question I coded on vacation, and now I feel guilty about it. What tips do you have?
https://practicaldev-herokuapp-com.global.ssl.fastly.net/seattledataguy/how-to-write-your-first-airflow-pipeline-2j43
CC-MAIN-2020-05
refinedweb
1,340
56.05
This. This. Memory Allocation Functions There are three standard C functions that you can use to allocate memory — malloc() , calloc() , and realloc() . Their prototypes, as defined in <stdlib.h> , are: void* calloc (size_t nmemb, size_t size); void* malloc (size_t size); void* realloc (void *ptr, size_t size); malloc() is the simplest of the allocators. It simply takes an argument specifying the size of the memory block that you wish to allocate and returns a pointer to a block of memory of that size. calloc() takes in two arguments, a number of elements ( nmemb ), and a size for each element ( size ). The total size allocated by a call to calloc() will be (nmemb * size) bytes. Also, calloc() actually sets all the memory it returns to NULL . In contrast, there are no guarantees regarding the values stored in the memory returned my malloc() . realloc() is used to reallocate a section of memory that has been previously allocated. Notice that it takes in a pointer ( *ptr ) as its first argument. The second argument ( size ) indicates how much space the pointer should now contain. Sometimes realloc() has to move memory around to find the space for the new chunk. As a result, the pointer returned by realloc() may be different than the pointer you pass to it. The values stored in the memory passed in will be copied to the newly allocated block. You may also use realloc() to cut down the size of a block of memory by passing it a size that is smaller than the original allocated size of the block. All three of these allocation functions return 0 on failure. Therefore, it is imperative that you check the return value of these functions before you use the pointers. Take a look at the code in the following example: int* ptr = malloc (sizeof (int)); *ptr = 7; It looks simple enough; it allocates enough memory for an integer and then places 7 into that memory. But what if malloc() returns NULL? The next statement will de-reference that NULL value, causing a segmentation fault. The example above might not seem like such a big deal, but take a look at the code in Listing One (taken from the file include/linux/coda_linux.h in the 2.2 version of the Linux kernel). This code demonstrates a real memory allocation bug, proving that Linux is not actually a perfect operating system. Listing One: A Memory Allocation Bug in Linux if (size < 3000) { ptr = (cast)kmalloc((unsigned long) size, GFP_KERNEL); CDEBUG(D_MALLOC, “kmalloced: %lx at %p.n”, (long)size, ptr); } else { ptr = (cast)vmalloc((unsigned long) size); CDEBUG(D_MALLOC, “vmalloced: %lx at %p .n”, (long)size, ptr); } if (ptr == 0) { printk(”kernel malloc returns 0 at %s:%dn”, __FILE__, __LINE__); } memset( ptr, 0, size ); Note that the second if statement actually does go on to check the value of ptr , which in this case represents the return value of either kmalloc() , the kernel’s internal version of malloc() in the first if statement, or vmalloc in the else statement. If ptr ’s value is NULL , a warning is printed, printk (”kernel malloc returns 0 at %s:%d\n”, __FILE__, __LINE__); . However, even if the return value of kmalloc() were NULL , the program would be allowed to continue, and that NULL value would be passed on to the memset() function, which would attempt to assign a value to the non-existent memory, causing an immediate segmentation fault. The important thing to remember here is that if you forget to check the return values of your memory allocations, your program has the potential to crash, even if it’s something as important as the kernel. Naturally, one of the numerous Linux kernel hackers fixed this problem for Linux 2.4. The free() function which has a prototype of: void free (void* ptr); is used to deallocate memory that was previously allocated by one of the three allocators mentioned above. You simply pass it a pointer you used to reference the memory previously allocated. A block of memory may also be freed by passing it to realloc() with a size of 0. After a block of memory has been freed, it must no longer be used for any purpose. If you do use the memory again, it’s likely that your program will not crash right away, but at some later time when the effects of the wrongful use cause something to break. These types of bugs are difficult to find, so it’s crucial that you not use memory after its been freed. Listing Two contains a simple program that implements a linked list of integers using dynamic memory allocation. The functions defined demonstrate the use of malloc() and free() in a program. Although this program is only a toy, it’s a good example of how to correctly allocate and deallocate memory. Listing Two: Linked List Example - Part I #include <stdio.h> #include <stdlib.h> struct linked_list { int item; struct linked_list* next; }; void insert (struct linked_list** head, int new_value) { struct linked_list* new_item; new_item = malloc (sizeof (struct linked_list)); /* malloc failed. We must exit. */ if (new_item == NULL) { fprintf (stderr, “Out of memory. Exiting!n”); exit (1); } /* At this point, we know the pointer is ok, so we can use it. */ new_item->item = new_value; new_item->next = *head; /* Set the head of the list to be the new element. This inserts the element at the beginning of the list. */ *head = new_item; } void delete (struct linked_list** head, int to_delete) { struct linked_list* current_place, *previous_place; current_place = *head; previous_place = NULL; while (current_place != NULL) { /* If we’ve found the item, break out of the loop. */ if (current_place->item == to_delete) break; previous_place = current_place; current_place = current_place->next; } /* Did not find the item to delete. Just return. */ if (current_place == NULL) return; if (previous_place != NULL) /* Set the previous item in the list to skip over the item we are about to delete. */ previous_place->next = current_place->next; else /* The item to delete was the head of the list. Handle this differently by setting the head to be the second item since we are about to delete the first. */ *head = current_place->next; /* Deallocate the memory associated with the item to be deleted only after we have made sure that there are no other references to the memory. */ free (current_place); } void print_list (struct linked_list* head) { struct linked_list* current_place = head; while (current_place != NULL) { printf (”%d “, current_place->item); current_place = current_place->next; } printf (”n”); } int main () { /* A null value for the head represents an empty list */ struct linked_list* my_head = NULL; /* Add some items to the list. */ insert (&my_head, 1); insert (&my_head, 4); insert (&my_head, 2); /* Print the list and then delete the items. */ print_list (my_head); delete (&my_head, 2); print_list (my_head); delete (&my_head, 1); print_list (my_head); delete (&my_head, 4); print_list (my_head); } Under the Hood Now that we know what memory allocation and deallocation functions do, let’s go under the hood and examine exactly how these functions carry out their duties. Many different types of memory allocators exist. A full explanation of all of them is beyond the scope of this article. However, we will briefly discuss the three major techniques used and then dive deeply into the one used by the implementation of malloc() for Linux. Memory allocators are categorized mainly by how they keep track of the free blocks that they can use to parcel out memory to applications. Imagine a new program that has just started. The memory allocator has a single, large block of memory that it can use for allocations. However, after many allocations and deallocations, holes can start to appear in this block. (The technical term for this phenomenon is fragmentation.) Free blocks show up in many different sizes, and the allocator must use these free blocks in an efficient way to keep total memory use to a minimum. The first category of allocators uses a technique called “sequential-fit” to keep track of free blocks. These allocators keep all the free blocks in one doubly-linked list. When a request comes in, an allocator of this type starts looking through the list until it finds a block that it deems appropriate to satisfy the request and returns it. There are three main types of sequential-fit allocators that are worth mentioning: first-fit, next-fit, and best-fit. In a first-fit allocator, searching always starts from the beginning of the list of free blocks (i.e., the free list), and as soon as a big enough block is found, it is split up to the size required and returned to the user. Next-fit differs from first-fit in that it starts each search where the last search ended. It does not start at the beginning of the list each time, because it is believed that starting at the same place for each search can increase fragmentation. Best-fit allocators differ from the other two approaches in that they attempt to find the smallest block that has enough room to satisfy the request. Therefore, if you request four bytes, and two blocks of sizes 16 and 32 exist, the best-fit algorithm will choose the 16 byte block over the 32 byte block regardless of the order of the blocks in the free-list. However, best-fit allocators may suffer a performance hit if they are forced to search through the entire free-list on every allocation. This will be the case any time an exact fit is not found somewhere in the list. All three of the sequential-fit allocators coalesce free blocks that are adjacent to form one larger block in the free-list. “Buddy-system” allocators make up the second category of algorithms. Their focus is on how free blocks are combined together. The idea is that every time a block must be split, it is split in half, and the two halves may only be coalesced once both are completely free (i.e., the two halves are buddies). As a concrete example, imagine that there exists a free block of size 4096 bytes (that’s 212). If an application requests a block of size 750 bytes, the original block will be split into two halves of 2048 bytes, then one of those blocks will be split into two halves of 1024 bytes, one of which will be returned to the application. (It cannot split again since the resulting blocks would be too small to fulfill the request.) The remaining blocks on the free-list will be one of size 1024 and one of size 2048. This method has been known to suffer from fragmentation. Notice that in our example, 1024 bytes were allocated for a block of only 750 bytes, wasting over 250 bytes of space. The final category of allocators (and the type of allocator the standard C library in Linux uses) is those that use segregated “free-lists” or “bins.” The idea here is to keep multiple free-lists. Each free-list contains blocks for a particular range of sizes. For example, the first free-list might contain blocks with sizes between eight bytes and 64 bytes. Another free-list might contain blocks with sizes between 1024 bytes and 2048 bytes, and so on. When a request comes in, the allocator simply starts the search in the smallest bin that can definitely fill the request and returns the first block it finds. If no blocks exist in that bin, the next larger bin is searched, and so on. This method has the benefit of getting a close approximation of the best-fit method without requiring a search through all the free blocks on each request. However, it does incur the overhead of keeping track of all the different free-lists. The Linux Memory Allocator The Linux allocator was originally written by Doug Lea and is sometimes referred to as dlmalloc() . The source code can be found in the glibc source within the malloc subdirectory. The main allocation is done from a file called malloc.c. So now let’s take a closer look at this memory allocator, which uses bins to determine how to allocate free blocks of memory. The allocator keeps 128 bins of different sizes. The first half of those bins keeps exactly one size in them. These are for the smaller sizes, 16 bytes through 512 bytes (equally spaced 8 bytes apart). The other half of the bins keeps all the rest of the sizes (spaced exponentially) between 576 bytes and 231 bytes. Unlike the first half, these bins do not require exact sizes; the sizes found in a given bin will be anywhere between the previous bin size and that bin size. As of the most recent version of this allocator, all bins that contain different size blocks are sorted from smallest to largest, making the algorithm always return the best-fit for any given size. When a block is freed, it is coalesced with either or both of its free neighbors, and the resulting block is placed in the appropriate bin. The allocator keeps track of the location of a block’s neighbors (as well as how much memory is being deallocated) by making use of a “size tag.” The size tag is four bytes of memory that is allocated along with the block requested by the application. It resides right before the beginning of the user block of memory. This size is also repeated at the end of the block so that it is easy to determine the size of the block right behind any given block as well as the block ahead of any given block. The allocator uses these size tags to keep track of exactly how much memory is allocated where. This is why you do not need to specify a size when you free memory — the allocator can simply look back 4 bytes from the given address to determine the size of the block. The size tag at the beginning of the block also keeps a bit reserved to indicate whether the block is free or in use. See Figure One for a picture of an allocated block and its size tags. As you might have realized, the use of these two size tags adds an additional eight bytes to the size of any allocated block. The allocator must also somehow keep the pointers for the list of free blocks for each bin. In the Linux memory allocator, each bin is a doubly-linked list of free blocks in the given size range. The allocator cleverly stores the previous and next pointers for each block in the actual user space of the block that was allocated. Since the block has been freed, that space is no longer needed and can be used for another purpose! Figure Two shows how the memory is laid out in a block once it has been freed. Of course, by storing two pointers in the freed blocks, the allocator imposes a minimum size of 8 bytes on any allocated user block. Therefore, no matter how few bytes a program allocates, a block of at least 16 bytes will be allocated. Listing Three: A Simple Example of the Overhead of the Linux Memory Allocator #include <stdlib.h> #include <stdio.h> int main () { char *a, *b, *c; a = malloc(1); b = malloc(1); c = malloc(1); printf (”%d %d %dn”, (int) (a-a), (int) (b-a), (int) (c-a)); } As a simple example of this, look at the code in Listing Three. Notice that we only allocate 1 byte of memory for each of the three variables. However, when we print out the difference between the pointers (i.e., how far apart they are in memory), we would get the following output: ~/> ./a.out 0 16 32 ~/> The three different blocks of memory are 16 bytes apart, demonstrating the overhead incurred by the use of size tags and free block pointers. Free to Go… You’ve probably used dynamic memory allocation in many of your programs without ever having thought about what was actually going on each time you called malloc() and free() . Hopefully, this article has given you a new perspective on memory allocation. Next month, we’ll continue with this topic by discussing exactly how the operating system manages the memory it gives to processes and the system call that memory allocators use to get more memory from the kernel. Until then, happy allocating! Benjamin Chelf is an author and engineer at CodeSourcery, LLC. He can be reached at chelf@codesourcery
http://www.linux-mag.com/id/806
crawl-002
refinedweb
2,734
60.85
In the final part of this tutorial, we will demonstrate how to construct large scale ASP.NET websites. In the previous tutorials of this series, we saw how to build single ASP.NET pages where all the code for a page was written on the page itself. This approach can quickly get tedious when you have code that is common across several pages. Thus, one of the most important elements in sites with a large number of pages is the ability to share code. Hence, we first describe ways in which to write code that can be used by multiple pages without the need for repeating the code on each page. Then, we will outline an efficient way to improve the performance of serving ASP.NET pages, a task that would be quite cumbersome in classic ASP. Like the prior two parts, our emphasis is using our ASP knowledge to develop a conceptual understanding of ASP.NET. As I had mentioned earlier, we are not looking for a line-by-line conversion method from classic ASP to ASP.NET as much as to leverage our existing classic ASP knowledge to learn ASP.NET. In classic ASP, you can write code in a file and then "import" it in on another page by using <!-- #INCLUDE virtual = "Name_of_file_that_has_common_code.asp" -->. This statement has the effect of "pasting" the common code in the place of the <!-- #INCLUDE ... -->. In ASP.NET, on the other hand, there is an equivalent way to INCLUDE code, but the implementation details are quite different. <!-- #INCLUDE virtual = "Name_of_file_that_has_common_code.asp" --> <!-- #INCLUDE ... --> INCLUDE We saw, in the first part of this tutorial, that ASP.NET has "intelligent" tags or controls which "know" how certain data has to be rendered on the browser. (The <asp:DataGrid> is an example of a web control which "knows" that data associated with it is rendered within an HTML table.) In addition, ASP.NET allows you to define your own markup tags or custom controls. The presentation markup "instructions" for these custom controls is specified in another file (with an .ascx extension with a c instead of the p. You can think of the c in the .ascx extension to stand for custom control.) Only the tag for the custom control is written on the .aspx page just like any other control or presentation tag. But, when the web page with the .aspx page is sent to the browser, the custom control is replaced by the markup information specified in the separate file. Thus, you can use this ability of ASP.NET to define your own controls to "bring in" code from another file. <asp:DataGrid> .ascx c p .aspx On the .aspx page, you need to specify which .ascx page the control is associated with along with the name you want to call this custom control in your .aspx page. The reference to the .ascx page is made using the <%@ Register ... %> directive. The tag names and prefixes used to "call" the custom control are specified as attributes of this directive. Finally, the page with an .ascx extension, on the other hand, has a <%@ Control ... %> directive instead of the <%@ Page ... %> directive on .aspx pages. Apart from this difference, a page with an .ascx extension is very similar in structure to a page with an .aspx extension. <%@ Control ... %> <%@ Page ... %> Below is an example of an .ascx page and how it's referenced in a page with an .aspx extension. SimpleCustomControl_vb.ascx <%@ Control Language="VB" %> <script runat="server"> Public Color As String </script> <font color="<%=Color%>">This text will replace the custom control tag on the .aspx page.</font> SimpleCustomControl_vb.aspx <%@ Page Language="vb" %> <%@ Register TagPrefix="My" TagName="NewTag" Src="SimpleCustomControl_vb.ascx" %> <html> <head> </head> <body> <form runat="server"> <p> <My:NewTag </p> </form> </body> </html> Note how the Color attribute on the My:NewTag control in the .aspx can set the color of the text in the custom control. Color My:NewTag While the example shown is quite basic, custom controls can be as advanced as you like. We can even use web controls and programmatically access and modify their attributes. The important thing to bear in mind, though, is that, similar to the classic ASP INCLUDE file, you only have to make changes to the markup code in the .ascx file and not on every page that references the custom control. Page navigation elements are, thus, often defined as custom controls. For more details and an excellent and quick introduction to custom controls, see this asp101 lesson. Thus far, we have talked about the structure of an ASP.NET page and how we can achieve separation of server side code from presentation markup details. We have seen that ASP.NET web controls can be powerfully effective in reducing the amount of HTML and JavaScript that we have to explicitly write. We have also seen how to define custom controls, which is similar to the classic the ASP server side includes. Moreover, ASP.NET has another way of pulling out common code to a separate page. The page with the pulled out code is called the "code-behind" page, which simply means the "server side code behind the ASP.NET page." Pulling code out and placing it in a separate file has the advantage of further separating presentation from code. Graphically, what we want to accomplish is shown below: Figure 1. Code to pull out. Related Reading ASP.NET in a Nutshell By G. Andrew Duthie, Matthew MacDonald While pulling the script side code into a separate code-behind file, let us make a slight change. We can certainly continue to use Visual Basic as our server side script language. Let us take this opportunity, however, to switch languages to C# (pronounced c-sharp). The code-behind file takes the extension of the scripting language used. Since we are using C#, the extension of the code-behind file is .cs. If we were using Visual Basic, the extension of the code-behind file would be .vb. .cs .vb In classic ASP, when moving code to an Include file, we literally "cut-and-pasted" from the "mother" .asp page to a separate INCLUDE file. In ASP.NET, the code that gets moved to the code-behind page is also pretty much the same as in the "mother" .aspx page but is structured a little differently. .asp When pulling the script-side code to a separate file, the Page directive on the original .aspx file does not migrate over--it stays on the .aspx file itself and forms the glue between the .aspx file and the code-behind .cs file. The src attribute on the Page directive specifies the name of the code-behind file. Page src Code-behind files always have a namespace and at least one class specification within it. Thus, the Page directive also has an attribute that specifies the namespace and class from the code-behind page that gets inherited on the .aspx page. These specifications make the code within the referenced class "available" to the .aspx page. Below is an example of a Page directive which references a code-behind file: <%@ Page Language="C#" Src="DataGrid_Codebehind.aspx.cs" Inherits="MyNamespace.MyClass" %> The structure of the code-behind behind is as follows: namespace MyNamespace { using System; public class MyClass { // Script code goes here } } Note how the Inherits attribute on the Page directive references the namespace and class on the code-behind page. Once a class has been defined, you can now place the code from the original .aspx page. Inherits namespace Since the code-behind file is a totally separate file from the original .aspx file, any web controls that were being referenced in the original script-side block cannot be accessed without explicitly importing the System.Web.UI.Controls class. (In C#, the import directive is replaced with the using namespace; statement.) The class that uses these controls must inherit the SystemWeb.UI.Page class. Finally, you must declare the controls that will be accessed in your code. While declaring the controls, you make them protected, which means that they are accessible only within the code-behind code and the presentation page that inherits from it. Class inheritance will then make these controls available in the code-behind code so that you can access the controls and associate records to them, etc., just as you would on the .aspx page itself. System.Web.UI.Controls import using namespace; SystemWeb.UI.Page protected inherits using System; using System.Web.UI.WebControls; // import web controls namespace MyNamespace { public class MyClass : System.Web.UI.Page { protected DataGrid DataGrid1; // Declare controls void Page_Load() { // Script code goes here } // end of Page_Load function } // end of MyClass } // end of MyNamespace At this point, you can now "cut-and-paste" the code from the original .aspx page into the code-behind page. Below is the complete code-behind code for assigning records from a database table to a DataGrid. You can see the similarity between the original code in the .aspx file and how it's specified in the code-behind file. DataGrid DataGrid_Codebehind.aspx.cs using System.Web.UI.WebControls; // import web controls using System.Data; // import for DataSet class using System.Data.SqlClient; // import for SQL Server classes namespace MyNamespace { public class MyClass : System.Web.UI.Page { // Controls from the HTML page protected DataGrid DataGrid1; void Page_Load() { // Set up the SQL Server database connection string ConnStr = "server='(local)'; trusted_connection=true; database='Demo'"; DataSet RecordSet = new DataSet(); // Now, pull the records from the database string SQL = "Select * from Books"; SqlDataAdapter RecordSetAdpt = new SqlDataAdapter(SQL,ConnStr); RecordSetAdpt.Fill(RecordSet); // Set the data grid's source of data to this recordset and bind DataGrid1.DataSource = RecordSet.Tables[0].DefaultView; // Finally, bind this data to the control DataGrid1.DataBind(); } // end of Page_Load() } // end of MyClass } // end of MyNamespace The code in the .aspx file is: <%@ Page Language="C#" Src="DataGrid_Codebehind.aspx.cs" Inherits="MyNamespace.MyClass" %> <html> <head> </head> <body> <form runat="server"> <asp:DataGrid</asp:DataGrid> </form> </body> </html> As you can see from comparing the code above to the DataGrid code in Visual Basic described in Part 1, switching to C# is not a major undertaking. In fact, you should have no trouble following the C# code; in this case, the C# code is almost the same as the Visual Basic code. The reason for this close similarity is that we are essentially just using properties and methods of imported objects. In general, however, you will see greater differences. But, in a very loose sense, C# is a lot like JavaScript. Thus, instead of the if ( ) then ... endif construct in Visual Basic, you have if ( ) { ... } construct in C#. A few other salient points of C# that you will find useful as you transition from Visual Basic to C#: if ( ) then ... endif if ( ) { ... } // /* ... */ Session["YOUR_SESSION_VARIABLE"] [ ... ] ( ... ) string MY_STRING_VAR = "initial value" Dim MY_STRING_VAR as String = "initial value" AND NOT OR && ! || Record["FIELD_NAME"] Record DataRow DataRow Record (This is only a "cheat-sheet" overview of C# and does not do justice to the full range of programming constructs available, but it should get you started.) Finally, a code-behind file can be referenced by several .aspx pages as the following graphic shows: Figure 2. Multiple pages using code-behind. Hence, the code-behind files serve a similar purpose, albeit implemented differently, as the Server Side Includes in classic ASP, in that common code is not duplicated. If you have certain classes in your code-behind code which are being used on the majority of your web pages, then it may be more efficient to create what are known as assemblies and then import these assemblies like the other namespaces. Creating and importing your own custom assemblies will reduce the amount of code in your code-behind page. Building assemblies, though, is beyond the scope of this article. For more information on how to create and register assemblies, please refer to the book ASP.NET in a Nutshell. assemblies Finally, with this introduction to code-behinds, I'd like to end this section by suggesting why the term "forms" is so prevalent in ASP.NET. "Forms" are reminiscent of programming in Visual Studio where you would build the graphical user interface (GUI) by dragging and dropping controls onto a "work-area" called a "Form." The code that drives the form is placed on a separate window which you access by right-clicking on the "Form" in the Visual Studio Project pane and selecting "View Code." Visual Studio managed the relevant associations internally, so you didn't need Page directives. Thus, the terminology from Visual Studio is carried over into ASP.NET. Hopefully, this explanation should help clear some of confusion between ASP.NET forms and standard HTML forms..
http://archive.oreilly.com/pub/a/dotnet/2004/12/06/asp2aspnet_pt3.html
CC-MAIN-2015-32
refinedweb
2,119
65.73
This chapter documents the class contained in package oracle.xml.sql.dml, which handles data manipulation and modification for XML SQL Utility for Java (XSU). XSU is part of the Oracle XDK for Java. XML SQL Utility for Java generates and stores XML data to and from the database from SQL queries or result sets or tables. It achieves data transformation by mapping canonically any SQL query result to XML, and vice versa. This chapter contains these sections: Package oracle.xml.sql.dml implements data manipulation and modification functions for Oracle XDK for Java. (DML is for Data Manipulation/Modification Language.) The methods for DML operations are provided in the OracleXMLSave class contained in this package. The OracleXMLSave class supports canonical mapping from XML to object-relational tables or views. OracleXMLSave class supports canonical mapping from XML to object-relational tables or views. It supports inserts, updates and deletes. The user first creates the class by passing in the table name on which these DML operations need to be done. After that, the user is free to use the insert/update/delete on this table. Many useful functions are provided in this class to help in identifying the key columns for update or delete and to restrict the columns being updated. public class OracleXMLSave extends java.lang.Object java.lang.Object | +--oracle.xml.sql.dml.OracleXMLSave The public constructor for the OracleXMLSave class. public OracleXMLSave(java.sql.Connection oconn, java.lang.String tabName; Closes/deallocates all the context associated with this object. public void close(); Deletes the rows in the table based on the XML document. Returns the number of XML ROW elements processed. This may or may not be equal to the number of database rows deleted based on whether the rows selected through the XML document uniquely identified the rows in the table. By default, the delete processing matches all the element values with the corresponding column names. Each ROW element in the input document is taken as a separate delete statement on the table. By using the setKeyColumnList(), the list of columns that must be matched to identify the row to be deleted is set, and other elements are ignored. This is an efficient method for deleting more than one row in the table if matching is employed (since the delete statement is cached). Otherwise, a new delete statement has to be created for each ROW element in the input document. The syntax options are described in the table here. Returns a URL object identifying the target entity, given a file name or a URL. If the argument passed is not in a valid URL format, such as "http://..." or "file://...", then this method attempts to correct the argument by pre-pending "file://". If a NULL or an empty string are passed to it, NULL is returned. public static java.net.URL getURL( java.lang.String target); Inserts an XML document into the specified table. Returns the number of rows inserted. NULLvalue for all elements that are missing in the input document. By using the setUpdateColumnList(), no NULLvalues would be inserted for the rest of the columns; instead, default values would be used. The options are described in the following table. Removes the value of a top-level stylesheet parameter. If no stylesheet is registered, this method is a no op. public void removeXSLTParam( java.lang.String name); Changes the batch size used during DML operations. When performing inserts, updates or deletes, it is recommended to batch the operations to minimize I/O cycles; however, this requires more cache for storing the bind values while the operations are executing. When batching is used, the commits occur only in terms of batches. If a single statement inside a batch fails, the entire batch is rolled back. If this behavior is undesirable, set batch size to 1. The default batch size is DEFAULT_BATCH_SIZE. public void setBatchSize(int size); Sets the commit batch size, which refers to the number of records inserted after which a commit must follow. If size < 1, or the session is in "auto-commit" mode, the XSU does not make any explicit commits. Default commit-batch size is 0. public void setCommitBatch( int size); Describes to the XSU the format of the dates in the XML document. By default, OracleXMLSave assumes that the date is in format 'MM/dd/yyyy HH:mm:ss'. You can override this default format by calling this function. The syntax of the date format pattern (i.e. the date mask), should conform to the requirements of the java.text.SimpleDateFormat class. Setting the mask to NULL or an empty string, causes the use of the default mask -- OracleXMLSave.DATE_FORMAT. public void setDateFormat( java.lang.String mask); The XSU performs mapping of XML elements to database columns or attributes based on the element names (XML tags). This function instructs the XSU to perform a case-insensitive match. This may affect the metadata caching performed when creating the Save object. public void setIgnoreCase(boolean ignore); Sets the list of columns to be used for identifying a particular row in the database table during update or delete. This call is ignored for the insert case. The key columns must be set before updates can be done. It is optional for deletes. When this key columns is set, then the values from these tags in the XML document is used to identify the database row for update or delete. Currently, there is no way to update the values of the key columns themselves, since there is no way in the XML document to specify that case. public void setKeyColumnList( java.lang.String[] keyColNames); Instructs the XSU whether to preserve whitespaces. public void setPreserveWhitespace( boolean flag); Names the tag used in the XML doc so to enclose the XML elements corresponding to each row value. Setting the value of this to NULL implies that there is no row tag present, and the top level elements of the document correspond to the rows themselves. public void setRowTag( java.lang.String rowTag); This turns on or off escaping of XML tags when the SQL object name, which is mapped to a XML identifier, is not a valid XML identifier. public void setSQLToXMLNameEscaping( boolean flag); Set the column values to be updated. Applies to inserts and updates, not deletes. public void setUpdateColumnList(java.lang.String[] updColNames); Registers a XSL transform to be applied to generated XML. If a stylesheet was already registered, it gets replaced by the new one. To un-register the stylesheet pass in a NULL for the stylesheet argument. The options are described in the following table. Sets the value of a top-level stylesheet parameter. The parameter value is expected to be a valid XPath expression (note that string literal values would therefore have to be explicitly quoted). If no stylesheet is registered, this method is a no op. public void setXSLTParam(java.lang.String name, java.lang.String value); Updates the table given the XML document. Returns the number of XML elements processed. This may or may not be equal to the number of database rows modified, depending on whether the rows selected through the XML document uniquely identify the rows in the table. The options are described in the following table.
http://docs.oracle.com/cd/B10501_01/appdev.920/a96609/arj_xmlsqldml.htm
CC-MAIN-2015-27
refinedweb
1,208
58.38
I need to use requests module in my python script to scraping HTML data with XPath. I am getting ModuleNotFound error when import requests module. Could anyone please tell me how to add python module in test complete? Solved! Go to Solution. You should put your external libraries in python lib in testcomplete eg C:\Program Files (x86)\SmartBear\TestComplete 12\Bin\Extensions\Python\Python36\Lib If you have other python installations in your machine, just installing (eg via pip command) will not effect to python inside test complete. View solution in original post Have you followed the documentation on importing 3rd party Python packages into TestComplete automation? Can anyone elobrate in more detail? from os import syssys.path.insert(0, '%PATH_TO_PYTHON_DIRECTORY%\Lib\site-packages') import requests But the TC tool showing an error 'No module named 'requests'. So how to import packages, please let me know with an example or else syntax in detail. Documentation linked further up as well as explanation of where you need to put your "requests" module for TestComplete to be able to read it.
https://community.smartbear.com/t5/TestComplete-General-Discussions/How-to-add-python-module-in-test-complete/m-p/162627
CC-MAIN-2021-17
refinedweb
180
56.15
Hello, World: Getting Started with IE8 Visual Search ★★★★★★★★★★★★★★★ ieblogSeptember 18, 200854 Share 0 0 PingBack from how about the ability to save the actual image resolution of the thumbnail using accelerators it’s much faster that way. Is this possible to do? how about the ability to save the actual image dimension or size of thumbnail image using accelerators for example if user mouse over a thumbnail image which is not the actual size of the image the accelerators would allow me to save image with the right dimension. Will this 64 bit really work on my system? Will it inhance the 3D graphics? I cannot wait to see them. Sincerely, Mike Vallad Thanks! I’ll have to give this a try for my site =) Unfortunetly however visual search doesn’t seem to be working for me? I tried ie8 and it didn’t work… soon after I reinstalled Vista for some other reason and later downloaded the beta again, and it still didn’t work. I have no idea why 🙁 Its not just the visual search, its all the instant search results… i just get "no results" popping up. Если было ло бы так нужно можно было бы объявить тендер на перевод этого текста но видимо тендер не кто не обьявить и текст останется не переведен What’s with the half a mega uncompressed bitmap in this post? Even paint can do png’s! What’s to stop every site I visit calling the AddServiceProvider function without me clicking a link that I want to add their service? Wouldn’t I end up with loads and loads of these things adding up in my accelerator? This is neat and all, but i’m still waiting for the blog posts about increased standards support. Things like SVG, <audio>, <video>, XHR2 (instead of XDR), addEventListener. Closing the gap between: if (/* check for standard objects */) { /* do it the right way */ } else { /* ie way */ } is always appreciated. @Joshua [quote]Unfortunetly however visual search doesn’t seem to be working for me?[/quote] Try setting your first language to [en-US] in tools=>options=>general tab=>languages and remove en readd your search providers (you might have to temporary switch defaults in order to remove them) Will it be possible to send back different results to different users based on a session cookie? The instance I am thinking of is an intranet app that requires login. @Joe: AddSearchProvider must be called after a user-initiated action (like a mouse click) and you are prompted for permission before install. @George Jones: You will get better results if you use HTTP authentication (specifically NTLM or Kerberos) instead of a session cookie. @Fowl: Sorry about the bitmap; we’ll take a look. @Mike Vallad: No, using a 64bit browser will not improve the graphics in any way. The primary downside to 64-bit IE is that most ActiveX controls and browser addons are only compiled to run in 32bit IE. Why is <Query/> sibling to <Section/>? There are reasons that a single static XML might be used for several common query strings related to a page, so putting <Section/> under <Query/> would allow a single XML file to support different query terms. Make scroll wheels functional in smart address bar. @Mike: No, Accelerators do not allow you to do this–accelerators help you send selected text to third-party providers. For what you are trying to do, which is local to your machine, you have to right click, then "Save image as", and the image has to already be a full-scale image. @Joshua: As hAl said, the language change might work–this is when some services haven’t yet implemented responses other than for the English language. @Fowl: Thanks for letting us know–this was mistakenly saved as a bitmap with a PNG extension instead. This is now fixed, sorry. @George Jones: As EricLaw mentioned, you might want to use HTTP authentication. However, session cookies (and cookies in general) should work, yes–just like they do when you browse. @Heath Stewart: Thanks for the feedback, we’ll look into it. @Brez: Thanks for catching this. This seems to work after you expand a second section in the smart address bar, provided the first one displayed the scrollbars. I filed a bug internally. Hello: I know this is the wrong place to ask, but will IE8 have an FTPS (FTP over SSL) client built into it? IIS7 supports FTPS. Thank You! @FTPS user– no, there’s no FTPS client. If there was, they would have put it in beta-2. It’s clear that FTP/FTPS isn’t an important scenario for the IE team. There are better FTP clients that can be freely downloaded. @John: Go elsewhere, troll. Next feature would be a smart favorite center that automatically organize the same website address in groups. Today i just organize my favorite website and put all the same website together(not in new folders but next to each others)now when i browse my favorite center all i see is website with same favicons together its so clean to look at. In IE 8 beta 2 do we still need to restart the whole browser when deleting browser history to refresh and completely delete everything. If not then next features would be deleting browsing history without having to close the whole browser is nice one as well. Windows explorer/IE 8 beta 2 smart address bar scroll bar have animation effect when hovering over it but the scroll bar on the main IE 8 beta window has no animation at all. Can this small thing be fix? IE 7 has this too. I notice FF3 does have the animation scrollbar working. When I tried this example, it worked, but the images don’t show up, instead I am given the box with a red cross in it. I’ve tried the URL of the face, and it works fine, so any known reason why it won’t work in the search window? This following applied to IE7, where is it in IE8? "I want to close Internet Explorer but I have a lot of tabs open. Is there anything I can do to make them re-open the next time I start Internet Explorer? Yes. When you close Internet Explorer, you will be asked whether you want to close all tabs. When the prompt appears, click Show Options, select Open these the next time I use Internet Explorer, and then click Close Tabs. When you reopen Internet Explorer the tabs will be restored." @Victoria: On the new tabs page (URL "about:tabs") click the "Reopen last browsing session" link. Is it possible to highlight a sentence/word click the accelarator button to paste the sentence/word directly to any input box. Let say I’m typing a comment in IEblog input box and found a word in the current tab or the other tabs that i want to put in the input box. I like the Search in Smart address bar feature but I just hate typing "? xbox 360" in the smart address bar. Why? because i have to press Shift + ? in keyboard. I rather press button . or , or / those keyboard button requires only one push button. Well it make sense to add ? but i rather have one push button. What about a Internet Explorer Gallery for other languages? I would like one for The Netherlands. This is not the first time I have to re-post… Am I being moderated, or is there something wrong in the comments system? There is a bug in the developer tools, debugger window: when you check the ‘show all properties’ checkbox and then change the object being inspected, the checkbox stays checked but not all properties are visible; you need to manually uncheck the box and check it again to have it take effect. Will the bug affecting pseudoframes (rolling back to top on any keyboard or mouse event) be corrected? Is there a better way to dynamically change the selected option in a SELECT node than to rewrite it completely with innerHTML on SELECT’s parent, since .selected on OPTIONs and .selectedIndex are read-only properties (or just don’t exist) in IE? Basicamente, el proyecto Opensearch es de proveer de una manera simple y sencilla busquedas en los sitios web, de manera que lo puedan acceder desde el mismo sitio web. Quizas esta descripcion te confunda, pero lo que hace Opensearch es de proveer de @Chris: You might have to kill all instances of iexplore.exe. This is a known bug. @Brez (1): Which animations are you talking about? I see them working identically between the search box and the smart address bar in IE8 Beta 2. @Brez (2): No, this is not possible. Copy/Pasting is available from the right-click contextual menu. We wanted to keep the Accelerators button focused on its purpose. @Mitch 74 Thanks for the bug report on the tools! We have this in our beta bug database. Thanks! Bug: When i click to Favourites Button it doesn’t work. Tells me Don’t Send or Send Error Report. When i click Don’t Send it opens second time and then closes. Suggestion: What you think about making thumbnails of Web Pages when you mouse over it (Like in Windows Vista Aero Glass Style Taskbar Thumbnails)? How do we test this? I’m trying to use paths without the domain name so that the file will work on development, staging and production machines. Am I really going to be required to include a domain name in the URLs? Why can’t it just assume the domain name in use? <?xml version="1.0" encoding="UTF-8"?> <OpenSearchDescription xmlns=""> <ShortName>GF Search</ShortName> <Url type="text/html" template="/content/search.aspx?searchtext={searchTerms}" /> <Url type="application/x-suggestions+xml" template="/Content/OpenSearchSuggestions.xml"/> <Image height="16" width="16" type="image/icon"></Image> </OpenSearchDescription> @Kasya: Crashes when you click on the Favorites button are almost exclusively due to a particular buggy add-on. Please see for information on running without addons. In your Manage Addons list, do you have a "DriveLetterAccess" add-on listed? Also, how are you going to prevent unscrupulous vendors from naming their search names "Google" or "Live"? Ok. I just implemented this. For reference, you need to htmlencode the URL. Otherwise, ampersands in a URL will break the XML. Also, I had to give a width and a height on the Image element, or it wouldn’t show. Question: Is there a way for there to be a wait time before the search page is queried? Right now, it seems like the search attempt is instantaneous when a key is entered. So, if someone typed "balloon" really fast, 7 searches are made. This could potentially be a lot of concurrent hits on the server. There needs to be something like a 100-300ms pause after the last keystroke is made before a search is attempted. Also, are you caching any results? Thanks, John @John: Results are cached depending on the HTTP response headers from the server. To learn more about setting proper HTTP response headers, please see @john: The user is shown information about the origins of a search engine when they’re given the opportunity to install it. As noted in the dialog box: "Search provider names can be misleading. Only add search providers from websites you trust." Of course, trademark law does apply in this case as well. Cool. You might have missed the question about the opensearch.xml file. Is there a way to use /content/abc instead of in the opensearch.xml file? This way the file could stay the same regardless of whether it was development, staging or production? @john: Sorry, no, the OpenSearch spec doesn’t support relative paths, and neither does IE. You can easily write a PHP, ASP, CGI, etc which dynamically generates the XML file based on the current hostname. I’ve been trying to implement this, mostly successfully, and I’ve got two questions: (note I’m using the XML suggestions file method, rather than JSON) 1) If I want to use the common example of providing suggestions for an "xbox" query, how would I have it show the results when only "xb" was entered (that is, show the suggestions when only part of the query had been typed)? Do I have to set up a suggestion for "x", "xb", "xbo" as well as "xbox"? 2) I seem to be getting errors when I try to provide suggestions for multiple possible queries. For instance, if I wanted to provide suggestions when the user typed "xbox", and provide different suggestions when the user typed "microsoft", what is the structure for providing multiple query suggestions? Is it: <SearchSuggestion> <Query>xbox</Query> <Section> … </Section> <Query>Microsoft</Query> <Section> … </Section> </SearchSuggestion> Or: <SearchSuggestion> <Query>xbox</Query> <Section> … </Section> </SearchSuggestion> <SearchSuggestion> <Query>Microsoft</Query> <Section> … </Section> </SearchSuggestion> Both tell me "An error has occured" in the suggestions box when I type the second query…I can’t seem to find clear (in my mind) documentation on this, nor can I find any examples that show more than one query! Could you shed some light on this for me? Thanks so much for this article! @james3mg– The search suggestion format is documented here: You should only have one SearchSuggestion node as the root of your document. If you want to match the text as the user types, you would typically use a PHP/ASP/ASPNET page on the server that accepts the {searchTerms} parameter from the url and dynamically generates a QUERY tag with the matching term. I want to go back to caching/performance for a minute. I understand that Visual Search will honor caching. However, the first time someone types in "Bariatric", there will be 9 calls to the search page, since the browser doesn’t have cache info for any of the character combinations yet. Sure, after that it will be pulled from cache, but 9 calls without a keystroke pause parameter is too much. The person wasn’t trying to get search results for B, BA, BAR, etc., and IE should wait a predetermined time to see if the user is truly done typing and ready to see results. Does that make sense? I’m really not trying to be dense, but that’s the document I’ve been trying to follow. You’ll notice that they only show ONE query: xbox Furthermore, SearchSuggestion, Query and Section ALL say they should appear only once in the document. So I still don’t know what the xml file would look like if I wanted to provide different results for xbox AND microsoft (ignoring my previous question about results for and incompletely-typed query). I’ve tried everything I can think of, and I keep getting the result "an error has occured". Sorry I keep jumping in, John. James, I might be redundant, but you can’t do anything other than a single query, and the query value in SearchSuggestion> <Query>xbox</Query> <Section> has to match what’s in the search box directly. If you want to do separate queries in a situation like this: SearchSuggestion> <Query>microsoft xbox</Query> <Section> where you return separate results for each word, then you can use the <Separator> element, as shown in this example: <Separator title="Microsoft" /> <Item>…</Item> <Item>…</Item> <Separator title="Xbox" /> <Item>…</Item> <Item>…</Item> At least then it will be in separate sections in the dropdown. Of course, your xml/json results generation code will have to do two queries, one for Microsoft and one for xbox, before merging the results into the single resulting xml/json response. Does this help? If not, hopefully the MS people will give the right answer :). Thanks for the help. So, if I understand you corrently, it’s NOT POSSIBLE to provide search suggestions like so: with a static XML file? The server HAS to create the file at search-time dynamically? I just have a small site with very few common terms I wanted to provide ‘single-click’ links to, with a hand-created, static XML file. So I guess I’ll be off learning some new skills and quit bugging you all. Thanks for your patience. =) @james3mg, No, it’s not possible. The browser does no filtering of the results whatsoever, since it expects you to have done the processing yourself based on the passed querystring. So if you return xml that has search results for microsoft and xbox, the browser will display them as-is. Thanks for the final answer 🙂 I’d assumed when I read this article that you’d be able to have a static xml file with MULTIPLE <query> nodes, and the browser would request the sub-nodes of the query that exactly matched what was typed, and just display those. That way, there’s no overhead of the browser actually trying to filter it. Of course, the potential for quite large XML files is probably why they didn’t go that way. But, it doesn’t work that way. I’ll learn to work quite happily within the system the way it does work, I’m sure =) Hi Sébastien! Can you <em>please</em> make sure to define and use right XML namespaces instead of just adding new XML elements or using an ad-hoc XML syntax without namespace? The this message at the OpenSearch mailing list: Thanks and greetings, Jakob Beta 2 of Internet Explorer has been out for a while now and as you already know one of the new functionalities I have been running IE8 as my default browser since Beta 2 was released a few months ago and I have been What is Windows 7 doing at a web conference like MIX09 ? Last week I went along to the above titled session, 緣起承繼上篇【[IE8]搜尋功能介紹】,IE8新增了視覺式搜尋的功能。當小喵看到這個功能之後,身為WebAppDeveloper的小喵不禁開始想,如果小喵的系統,也能夠提供這樣的功能給使用者,該…
https://blogs.msdn.microsoft.com/ie/2008/09/18/hello-world-getting-started-with-ie8-visual-search/
CC-MAIN-2017-47
refinedweb
2,995
71.14
The Build Job feature allows you to deploy and execute a Job on any server, independent of Talend Studio. By executing build scripts generated from the templates defined in Project Settings, the Build Job feature adds all of the files required to execute the Job to an archive, including the .bat and .sh along with any context-parameter files or other related files. Note Your Talend Studio provides a set of default build script templates. You can customize those templates to meet your actual needs. For more information, see Customizing Maven build script templates. Window > Preferences, select Talend > Import/Export, and then select the Add classpath jar in exported jobs check box to wrap the Jars in a classpath.jar file added to the built Job. Warning. To build Jobs, complete the following: In the Repository tree view, right-click the Job you want to build, and select Build Job to open the [Build Job] dialog box. Note You can show/hide a tree view of all created Jobs in Talend Studio earlier selected in the Studio tree view display with selected check boxes. Job. From the Select the Job version area, select the version number of the Job you want to build if you have created more than one version of the Job. Select the Build Type from the list between Standalone Job, Axis Webservice (WAR), Axis Webservice (Zip) and OSGI Bundle For ESB. Note that data service Jobs that include the tRESTRequest component can only be built a build type between Binaries and Sources (Maven) and. A zipped file for the Jobs is created in the defined place. Note If the Job to be built calls a user routine that contains one or more extra Java classes in parallel with the public class named the same as the user routine, the extra class or classes will not be included in the exported file. To export such classes, you need to include them within the class with the routine name as inner classes. For more information about user routines, see Managing user routines. For more information about classes and inner classes, see relevant Java manuals.. Note After being exported, the context selection information is stored in the .bat or .sh file and the context settings are stored in the context .properties file. In the [Build Job] dialog box, you can change the build type in order to build the Job selection as Webservice archive. Select the type of archive you want to use in your Web application. Once the archive is produced, place the WAR or the relevant Class from the ZIP (or unzipped files) into the relevant location, of your Web application server. The URL to be used to deploy the Job, typically reads as follow: where the parameters stand as follow: The call return from the Web application is 0 when there is no error and different from 0 in case of error. For a real-life example of creating and building a Job as a Webservice and calling the built Job from a browser, see An example of building a Job as a Web service. The tBufferOutput component was especially designed for this type of deployment. For more information regarding this component, see Talend Components Reference Guide. This scenario describes first a simple Job that creates a .txt file and writes in it the current date along with first and last names. Secondly, it shows how to build this Job as a Webservice. And finally, it calls the Job built as a Webservice from a browser. The built Job as a Webservice will simply return the "return code" given by the operating system. Creating the Job: Drop the following components from the Palette onto the design workspace: tFixedFlowInput and tFileOutputDelimited. Connect tFixedFlowInput to tFileOutputDelimited using a Row > Main link. In the design workspace, select tFixedFlowInput, and click the Component tab to define the basic settings for tFixedFlowInput. Set the Schema to Built-In and click the [...] button next to Edit Schema to describe the data structure you want to create from internal variables. In this scenario, the schema is made of three columns, now, firstname, and lastname. Click the [+] button to add the three parameter lines and define your variables, and then click OK to close the dialog box and accept propagating the changes when prompted by the system. The three defined columns display in the Values table of the Basic settings view of tFixedFlowInput. In the Value cell of each of the three defined columns, press Ctrl+Space to access the global variable list, and select TalendDate.getCurrentDate(), talendDatagenerator.getFirstName, and talendDataGenerator.getLastName for the now, firstname, and lastname columns respectively. In the Number of rows field, enter the number of lines to be generated. In the design workspace, select tFileOutputDelimited, click the Component tab for tFileOutputDelimited, and browse to the output file to set its path in the File name field. Define other properties as needed. If you press F6 to execute the Job, three rows holding the current date and first and last names will be written to the set output file. Building the Job as a Webservice: In the Repository tree view, right-click the above created Job and select Build Job. The [Build Job] dialog box appears. Click the Browse... button to select a directory to archive your Job in. In the Job Version area, select the version of the Job you want to build as a web service. In the Build type area, select the build type you want to use in your Web application (WAR in this example) and click Finish. The [Build Job] dialog box disappears. Copy the War folder and paste it in the Tomcat webapp directory. Calling the Job from a browser: Type the following URL into your browser: "export_job" is the name of the webapp directory deployed in Tomcat and "export_job2" is the name of the Job. Click Enter to execute the Job from your browser. The return code from the Web application is 0 when there is no error and 1 if an error occurs. For a real-life example of creating and building a Job as a Webservices using the tBufferOutput component, see the tBufferOutput component in Talend Components Reference Guide. In the [Build Job] dialog box, you can change the build type in order to build the Job selection as an OSGI Bundle in order to deploy your Job in Talend ESB Container. In the Job Version area, select the version number of the Job you want to build if you have created more than one version of the Job. In the Build type area, select OSGI Bundle For ESB to build your Job as an OSGI Bundle. The extension of your build automatically change to .jar as it is what Talend ESB Container is expecting..
https://help.talend.com/reader/eGbThEsTyqkXi6bEn7Tw8g/jkU~1VVypQak7HOqbMmLwg
CC-MAIN-2019-39
refinedweb
1,134
62.17
Hi, I'm having trouble with using this module, more specifically the 'search' function. How do i get the atom list from a pdb file (is this using PDBParser)? How do I use it to find the nearest residues from the co-crystallized ligand? Cheers Hi, I'm having trouble with using this module, more specifically the 'search' function. How do i get the atom list from a pdb file (is this using PDBParser)? How do I use it to find the nearest residues from the co-crystallized ligand? Cheers The basic syntax of NeighborSearch is the following. If you tell me what you want to do exactly I can give you a more precise hint. Also, see the Bio.PDB FAQ (pdf). The script basically loads the local pdb structure file molecule.pdb, creates a list of all atoms, gets the coordinates of the first atom, and prints a list of all resides within 5 angstrom of it (source). from Bio.PDB import * import sys structure = PDBParser().get_structure('X', 'molecule.pdb') # load your molecule atom_list = Selection.unfold_entities(structure, 'A') # A for atoms ns = NeighborSearch(atom_list) center = atom_list[0].get_coord() neighbors = ns.search(center, 5.0) # 5.0 for distance in angstrom residue_list = Selection.unfold_entities(neighbor_list, 'R') # R for residues print residue_list If you are interested in the neighbors of just a few atoms/residues, you might want to consider PyMOL since it has a nice set of GUI features that are more easily accessible. Thanks very much for the reply. Basically, what I want to do is find the nearest neighbors to the ligand bound in a protein. I need to do this for quite a few (over 200 pdbs) hence I wanted to use a script. So in this test case, the ligand has a resseq=600 and the chain starts at resseq=307. Does this mean in center = atom_list[0].get_coord() the 0 would be 293? Thanks again No, because in one case you have a list of atoms and in the other residue IDs. In the above example, you could get all residues (instead of atoms) first, search for your ligand there, and then get the corresponding ligand atoms and perform NeighborSearch. Hi I have large number of hetero-dimeric proteins. I need to check for all atoms of chain A for which atoms of chain B are within 10Å and need to obtain the list of residue numbers of those atoms as output. I am new to programming and I do not know how to write the code for this using the NeighborSearch module. Would you be able to help me out? here is a script that calculates the fnat using modeller - and it is a bit more straightforward to tease apart the ligand and receptor (it should all work automatically, no messy renumbering stages). it is however quite slow, but maybe you will find it useful ... ! from modeller import * from modeller.scripts import complete_pdb import sys,os import math import numpy as np def calculate_fnat(model,cut_off): fnat = {} for a in model.chains: for b in model.chains: if a != b: sel_a = selection(a) sel_b = selection(b) for ca_a in sel_a: ax = ca_a.x ay = ca_a.y az = ca_a.z for ca_b in sel_b: bx = ca_b.x by = ca_b.y bz = ca_b.z dist = math.sqrt(((ax-bx)**2)+((ay-by)**2)+((az-bz)**2)) if dist <= cut_off: a_res = ca_a.residue b_res = ca_b.residue if (int(b_res._num)+1,b.name,int(a_res._num)+1, a.name) not in fnat: if (int(a_res._num)+1,a.name,int(b_res._num)+1, b.name) not in fnat: fnat[int(a_res._num)+1,a.name,int(b_res._num)+1, b.name] = dist return fnat # initialise the environment . env = environ() env.libs.topology.read('${LIB}/top_heav.lib') env.libs.parameters.read('${LIB}/par.lib') #calculate native contacts m1 = complete_pdb(env, 'native.pdb') rmsd_sel = selection(m1).only_atom_types('CA') fnat = calculate_fnat(m1,5) filenames = [] for pdb_file in [l for l in os.listdir("./") if l.endswith(".pdb")]: print pdb_file filenames.append(pdb_file) results = [] # calculate model contacts for fname in filenames: m2 = complete_pdb(env, fname) fnat_mod = calculate_fnat(m2,5) count = float(0) #compare native contacts to model contacts for f in fnat: if f in fnat_mod: count += 1 score = count/len(fnat) results.append([fname,score]) #output results f1 = open('data_4.txt','w') f1.write('fname\tfnat\n') for line in results: f1.write('%s\t%s\n' % (line[0],line[1])) f1.close Can you show us the code you already have got?
https://www.biostars.org/p/1816/
CC-MAIN-2018-39
refinedweb
754
77.74
To Do this Create an XML-based Web service using ASP.NET In Visual Studio .NET, create a new ASP.NET Web Service project. Visual Studio .NET will create the application in IIS and create a default Web service file with an .asmx extension. Test a Web service Browse the .asmx file containing the Web service from a Web browser. ASP.NET will generate documentation pages for the Web service, as well as allowing you to invoke the Web service. Change the default namespace of a Web service Add or modify the WebService attribute, adding a Namespace?“<new URL>” parameter. Advertise a Web service Either create a discovery document for your Web services and publish the document to a public location, such as your organization’s Web site, or register your Web services in one of the publicly available UDDI business registries, making sure to provide the URL to the WSDL contract for each Web service you register. Consume a Web service from ASP.NET Use the wsdl.exe utility to create a proxy class based on the WSDL contract for the Web service, compile the proxy class, and instantiate the proxy class from your client application as you would any other .NET class. Alternatively, simply right-click on the project root in Visual Studio .NET, click Add Web Reference on the drop-down menu, and then enter a URL. If the URL is recognized and the methods available are displayed, click Add Reference.
http://www.yaldex.com/asp_tutorial_2/LiB0074.html
CC-MAIN-2016-50
refinedweb
244
65.93
Implementing Custom States. The commit() method is optional; it is useful if the bolt manages state on its own. This is currently used only by internal system bolts (such as CheckpointSpout). KeyValueState implementations should also implement the methods defined in the KeyValueState interface. The framework instantiates state through the corresponding StateProvider implementation. A custom state should also provide a StateProvider implementation that can load and return the state based on the namespace. Each state belongs to a unique namespace. The namespace is typically unique to a task, so that each task can have its own state. The StateProvider and corresponding State implementation should be available in the class path of Storm, by placing them in the extlib directory.
https://docs.cloudera.com/HDPDocuments/HDF3/HDF-3.4.1.1/developing-storm-applications/content/implementing_custom_states.html
CC-MAIN-2019-47
refinedweb
118
58.18
NPFL092 Technology for NLP (Natural Language Processing) - Lecturer: Zdeněk Žabokrtský <zabokrtsky@ufal.mff.cuni.cz>, teaching assistant: Rudolf Rosa <rosa@ufal.mff.cuni.cz> - Time and location: Thursday 09:50–12.10 SU2 Course schedule overview - Linux and Bash - survival in Linux - Bash command line and scripting - text-processing commands - Python - introduction to Python - text processing - regular expressions - object-oriented interface for processing linguistic structures in Python - XML - representing linguistic structures in XML - processing XML in Python - Extras (covered fully or partially based on remaining time at the end of the term) - selected good practices in software development (not only in NLP, not only in Python) - NLP tools and frameworks, processing morphologically and syntactically annotated data, visualization, search - data and software licensing More detailed course schedule - Introduction - slides - Motivation - Course requirements: MFF linux lab account - Course plan, overview of required work, assignment requirements - keyboard shortcuts in KDE/GNOME, selected e.g. from here - motivation for scripting, command line features (completion, history...), keyboard shortcuts - bash in a nutshell (ls (-l,-a,-1,-R), cd, pwd, cp (-R), mv, rm (-r, -f), mkdir (-p), rmdir, chmod, ssh (-XY), less, more, cat, ln (-s), .bashrc, wget, head, tail, file, man...) - - on Windows, you can use e.g. the Putty software - on any computer with the Chrome browser, you can use the Secure Shell extension (and there are similar extensions for other browsers as well) which allows you to open a remote terminal in a browser tab -- this is probably the most comfortable way - on an Android device, you can use e.g. JuiceSSH - Supplementary reading - Czech: Libor Forst's Úvod do UNIXu - Advanced Bash-Scripting Guide - Homework: Connect remotely from your home computer to the MS lab, check that you can see there the data from the class (or use wget and unzip to get the UDHR data to the computer -- see link above),You can also try connecting to the MS lab from your smartphone and running a few commands -- this will let you experience the power of being able to work remotely in Bash from anywhere... This homework does not require you to submit anything to us, just practice as much as you feel that you need so that you feel confident in Bash. Do this homework before coming to the next lab. And, as always, if you run into any problems, contact us per e-mail! - Character encoding (very short) - ascii, 8-bits, unicode, conversions, locales (LC_*) - slides - Questions: answer the following questions: - What is ASCII? - What 8-bit encoding do you know for Czech or for your native language? How do they differ from ASCII? - What is Unicode? - What Unicode encodings do you know? - What is the relation between UTF-8 a ASCII? - Take a sample of Czech text (containing some diacritics), store it into a plain text file and convert it (by iconv) to at least two different 8-bit encodings, and to utf-8 and utf-16. Explain the differences in file sizes. - How can you detect file encoding? - Store any Czech web page into a file, change the file encoding and the encoding specified in the file header, and find out how it is displayed in your browser if the two encodings differ. - How do you specify file encoding when storing a plain text or a source code in your favourite text editor? - requirements on a modern source-code editor - modes (progr. languages, xml, html...) - syntax highlighting - completion - indentation - support for encodings (utf-8) - integration with compiler... -) (boring for those who already know or use vi, too long for 45 minutes) - Homework: make sure you know how to invoke all the mentioned features in your favourite text editor - Text-processing commands in bash - sort, uniq, cat, cut, [e]grep, sed, head, tail, rev, diff, patch, set, pipelines, man... - regular expressions - exercises - Homework: read Unix for Poets by Kenneth Ward Church - if, while, for - xargs : Compare sed 's/:/\n/g' <<< "$PATH" | \ grep $USER | \ while read path ; do ls $path donewith sed 's/:/\n/g' <<< "$PATH" | \ grep $USER | \ xargs ls Shell script, patch to show changes we made–just run patch -p0 < script.sh Makefiles - make at Wikipedia - Makefile tutorial - very simple Makefile sample (from the lesson): Makefile - simple Makefile sample: Makefile - Basic Git Intro (slides of Yoad Snapir) - Basics of working with git (using Redmine) - tryGit -- a nice and easy-to-follow interactive introduction to git - Homework 01: - Write your Makefile with targets t2—t18 from the Exercises. Put the HW into 2017-npfl092/hw01/(and commit it and push it to Redmine) - Deadline: Wednesday 1st November 2017 - Please create a fresh git clone of the homework repo in the unix lab (recall that you can access it remotely using ssh) to double-check that everything is in its place. - Bash cont. - warm-up exercises: - Task 1: constract a bash pipeline that extracts words from an English text read from the input, and sorts them in the "rhyming" order (lexicographical ordering, but from the last letter to the first letter; "retrográdní uspořádání" in Czech) (hint: use the command revfor reverting individual lines) - Task 1: construct a bash pipeline that reads an English text from the input and finds 3-letter "suffices" that are most frequent in the words that are contained in the text, irrespectively of the words' frequencies (suffices not in the linguistic sense, simply just the last 3 letters from a word that contains at least 5 letters) (hint: you can use e.g. - Introduction to Python - Study the Python Tutorial as homework - To solve practical tasks, Google is your friend… - By default, we will use Python version 3: python3A editor. (Rudolf uses vim, but heard PyCharm is real good.) - First Python exercises (simple language modelling): we got up to the 4th item only - Simple language modelling in Python - Finishing the Language modelling exercises from last class A sample solution to exercises 1 to 13 can be found in solution_1.py - the string data type in Python - a tutorial - case changing (lower, upper, capitalize, title, swapcase) - is* tests (isupper, isalnum...) - matching substrings (find, startswith, endswith, count, replace) - split, splitlines, join - other useful methods (not necessarily for strings): dir, sorted, set - my string ipython3 session from the lab (unfiltered) - Warmdown: implement a simple wc-like tool in Python, so that running python3 wc.py textfile.txtwill print out three numbers: the number of lines, words, and characters in the file (for words, you can simply use whitespace-delimited strings -- there is a string method that does just that...) - Homework hw02: Implement at least three items from the extensions of the language modelling exercises (extension 1 is obligatory; the simplest to do are then probably 2 and 3, the rest may require more googling). You can get bonus points for implementing more of the extensions. Put your homework into 2017-npfl092/hw02/(and don't forget to add it, commit, and push). Deadline: Wednesday 15th November 2017 19:00 - If you need help, try (preferably in this order): - asking at/after the next lab - asking per e-mail (please send the e-mail to both of us, as this increases your chances of getting an early reply) - Python: text processing, regular expressions - a warm-up exercise: find palindrome words in English - A palindrome word reads the same forward and backward, e.g. "level" - Write a python script that reads text from stdin and prints detected palindromes (one per line) to stdout - print only palindrome words longer than three letters - apply your script on the English translation of Homer's The Odyssey available as an UTF-8 encoded Project Gutenberg ebook here. - a slightly more advanced extension (optional): try to find longer expressions that read same in both directions after spaces are removed (two or more words; a contiguous segment of the input text, possibly crossing line boundaries) - encoding in Python - differences in handling of encoded data between Python 2 and Python 3 -) - more about the topic can be found here - regular expressions in Python - a python regexp tutorial - to be able to use the regexmodule: 1. in bash: pip3 install --user regex 2. in python: import regex as re (Python has built-in regex support in the remodule, but regexseems to be more powerful while using the same API.) - search, findall, sub - raw strings ( r'...'), character classes ( [[:alnum:]], \w, ...), flags ( flags=re.I) - Homework 03: Redo hw01 in Python, implementing the targets t2 to t18 from the Exercises in one Python script called make.py, so that e.g. running python3 make.py t16prints out the frequency list of letters in the skakalpes file; running you script with no parameters should invoke all the targets. Of course, do not take the tasks word-by-word, as they explicitly name Bash commands to use, while you have to use Python commands instead. E.g. for t2, you can use urllib.request.urlopen, which returns an object with many methods, including read()(you must first import urllib.request). In t3, just print the text once (you don't have to implement less). For t4, look for decode()... Put the HW to 2017-npfl092/hw03/ Deadline: Monday 27th November 2017 23:59 - Python: modules, packages, classes - Specification:(l if l<"\r" else l[:-1]+"\tN\n" for l in sys.stdin)'<tagger-eval.tsv|./eval-tagger.sh prec=897/2618=0.342627960275019 - Homework HW04: a simple POS tagger, this time OO solution - turn your warm-up exercise solution into an OO solution: - implement a class Tagger - the tagger class has a method tagger.see(word,pos)which gets a word-pos instance from the training data (and probably stores it into a dictionary or something) - the tagger class has a method tagger.train()that infers a model (if needed) - the tagger class has a method tagger.save(filename)that saves the model to a file (again, it is recommended to use pickle) - the tagger class has a method tagger.load(filename)that loads the model from a file - the tagger class has a method tagger.predict(word)that predicts a POS tag for a word given the trained model - the tagger should be usable as a Python module: - e.g. if your Taggerclass resides in my-tagger-class.py, you should be able to use it in another script (e.g. calling-my-tagger.py) by importing it ( from my-tagger-class import Tagger) - one option of achieving this is by having just the Taggerclass into the test file - eval - prints the accuracy - Put your solution into 2017-npfl092/hw04/ - Deadline: Monday 11th December 2017, 23:59 CET - A gentle introduction to XML - Motivation for XML, basics of XML syntax, examples, well-formedness/validity, dtd, xmllint - Slides - XML exercise: create an XML file representing some data structures (ideally NLP-related) manually in a text editor, or by a Python script. The file should contain at least 7 different elements, some of them should have attributes. Create a DTD file and make sure that the XML file is valid w.r.t. the DTD file. - Create a Makefile that has targets "wellformed" and "valid" and uses xmllintto verify the file's well-formedness and its validity with respect to the DTD file. - Homework 05: - finish the exercise: XML+DTD files - store it into 2017-npfl092/hw05/in your git repository (and don't forget to commit and push it) - Deadline: Monday 18th December 2017, 23:59 CET - XML, cont. - Exercise: For all file in sample.zip, check whether they are well-formed xml files or not (e.g. by xmllint), and if not then fix them (possibly manually in a text editor, or any way you want). - Exercise: write a Python script that recognizes (at least some of) the well-formedness violations present in the above mentioned files, without using any specific library for XML processing - overview of Python modules for XML (DOM approach, SAX approach, ElementTree library); study materials: XML Chapter in the "Dive into Python 3" book, ElementTree module tutorial - Homework 06: - download a simplified file with Universal Dependencies trees dependency_trees_from_ud.tsv (note: simplification = some columns removed from the standard conllu format) - write a Python script that converts this data into a reasonably structured XML file - write a Python script that converts the XML file back into the original (tab-separated) format, check the identity of the output with the original file - write a Python script that converts the XML file into a simply formatted HTML - organize it all in a Makefile with targets download, toxml, totsv, tohtml - put your solution into 2017-npfl092/hw06/ - NLTK and other NLP frameworks - NLP frameworks - NLTK tutorial - Homework 07: - train and evaluate a Czech part-of-speech tagger in NLTK - use any of the trainable taggers available in NLTK (tnt looked quite promising); you can experiment with multiple taggers and multiple settings and improvements to achieve a good accuracy (this is not required and there is no minimum accuracy you must achieve, but you can get bonus points; but still your result should not be something obviously wrong, such as 20% accuracy) - use the data from the previous tagging homework: tagger-devel.tsv as training data, tagger-eval.tsv as evaluation data - note that you have to convert the input data appropriately into a format which is expected by the tagger - put your solution into 2017-npfl092/hw07/ - NLTK and other NLP frameworks, vol 2 - warmup: once again processing genesis, this time in NLTK: - read in the text of the first chapter of Genesis - use NLTK to split the text into sentences, split the sentences into tokens, and tag the tokens for part-of-speech - print out the output as TSV, one token per line, wordform POStagseparated by a tab, with an empty line separating sentences - named entites in NLTK - tree structure and visualization in NLTK - parsing in UDPipe - Selected good practices in software development (not only in NLP, not only in Python) - warm-up exercise: find English word groups in which the words are derived one from the other, such as interest-interesting-interestingly; use the list of 10,000 most frequent English lemmas bnc_freq_10000.txt - good development practices - slides (testing, benchmarking, profiling, code reviewing, bug reporting) - exercise: - exchange solutions of HW05 with one of your colleagues - implement unit tests (using unittest) of his/her solution - if you find some problems, send him/her a bugreport The future is under construction!!! - Homework HW-- (not yet set): word frequency colorizer - write a Python script that reads some big text (e.g. the one from the morning exercise), tokenizes it, performs some trivial stemming (e.g. removing the most frequent inflectional and derivational suffixes like -ed or -ly), collects numbers of occurrences of such stems, and generated an HTML file which contains e.g. first 1000 words colorized according to their stem's frequency (e.g. three bands - green - very frequent words, yello - middle band, red - very rare words) - Commit your solution into npfl092/hw--/ - Data visualization Morning warm-up exercise: (1) make a frequency list of html tag frequencies of this web page, (2) supposing the page is a well formed XML, write a converter that transforms its content into a simply formatted plain text (such as \n, several spaces and * in front of every list item). You can use any standard technique for processing XML files (Twig, Sax, XPath...). - gnuplot - dot/graphviz - figures/tables for latex - Homework: ACL-style article draft containing a learning curve of your tagger (or of any other trainable tool). Create a Makefile that - applies your previously created POS-tagger on gradually increasing training data (or apply any other tool for which a quantity can be measured that is dependent on the input data size) and evaluate it in each iteration (always on the same test data). It is recommended to use exponentially growing size of the training data (e.g. 100 words, 1kW, 10kW, 100kW ...). You can use any other trainable NLP tool (but not tools developed by your colleagues in the course). The simplest acceptable solution is a tool measuring OOV (out-of-vocabulary rate - how many words in the test data have not been seen in the training data). - collects the learning curve statistics from the individual iterations and converts them to a LaTeX table as well as to a graphical form: data size (preferably in log scale) on the horizontal axis, and tool performance on the vertical axis. Use gnuplot for the latter task. - downloads the LaTeX article style for ACL 2011 conference papers and compiles your article into PDF. Create a simple LaTeX article using this style and include the generated table and figure into it and fill the table's and figure's captions (the text in the rest of the article is not important). 2016-npfl092/hw08/. Make sure that the Makefile performs all the steps correctly on a fresh checkout of the directory. Deadline: 16th January 2016, 12:00. - Data and Software licensing - morning exercise: theater of the absurd is a form of drama; one of its characteristics lies in using repetitive dialogues, sometimes with utterances swapped between two or more actors. Task: find occurrences of swapped utterances in Václav Havel's play Zahradní slavnost (The Garden Party), and print out whose replicas were repeated by whom. - Licenses - authors' rights in the Czech Republic, slides authors_rights_intro.pdf - open source movement - GPL, Artistic license - Creative Commons (mainly CC0 and Attribution) and Open Data Commons: - Licenses for PDT, CNK, - data distributors, ELRA/ELDA, LDC, currently emerging networks - Checking all your homework tasks. - Premium task (T.B.A.) - Final written test Required work - solving all homework tasks - solving lab tasks along the semester - written test at the end of the semester Rules for homework - all homeworks must be submitted into the designated repository (details will be announced soon); homeworks sent by email will not be accepted - submit your work into your personal directories into 2017-npfl092/hw01/, 2017-npfl092/hw02/, etc. - there will be an explicit deadline for submitting each homework - if the deadline is not met by a student, an additional homework will be specified for the student. Late and additional homeworks have no deadline, but they must be submitted at least 1 day before the final test. The student must inform us by e-mail that his late or additional homework is ready to be checked. - if a student wishes to increase his/her point average, he/she can solve any additional homework - each student is supposed to create all homework solutions himself/herself; any cheating will be penalized Premium tasks - occasionally, there will be special programming tasks announced in the class - students can get additional points - for each premium task, only the winner (the one who is fastest) will get the points Rules for the final test - final test will be written in the last class in the semestr - all homework tasks (including penalty ones) must be submitted before the final test - examples of test questions Determination of the final grade - excellent: > 90 % - very good: > 70 % - good: > 50 % - (final test 35%, homework tasks 50%, lab activity 15%)
http://ufal.mff.cuni.cz/~zabokrtsky/courses/npfl092/html/
CC-MAIN-2017-51
refinedweb
3,168
52.02
Wikis : Information, communities, and feedback loops. Wikis are cool. People who wiki are part of a community People who are part of a community make a difference If you don’t have much information to process, don’t have a community, or don’t want others to make a difference, you don’t need a wiki. (but they’re still cool) — Wikis enable people to cooperate to build new information and push boundaries. Wikis are changing our world Implementations include Mediawiki, Twiki, Socialtext. Wiki is Hawaiian for fast; keeping up with fast things is what this talk is about — Communication: Recentchanges. Talk pages for every article. Special community pages; other community baggage (Mailing lists, IM, Skype, etc.) — Soft Security and tradeoffs. Robots and vandals. Mechanisms, tradeoffs. Troublemakers and trolls. Community mechanisms, tradeoffs. Advantages to not being too hard : exponential growth varies on the time to make an edit. 95% of users are good. Slowing down edits; imposing delays. — The unexpected Having no structure means having to make all structure; almost all parts of the site/km can be changed. Slower to do known things, but faster to do unknown things. many unknown things are simply impossible with other tools (separate sidebars, page-changing templates and other inclusions, extended namespaces for disambiguation as meta-content grows)
https://blogs.harvard.edu/longestnow/wikis-information-communities-and-feedback-loops/
CC-MAIN-2021-31
refinedweb
215
59.8
One of the most frustrating thing about clicking on blog posts is having to scroll through people's long winded explanations of stuff when you can just put the answer at the top. Here's how you do the thing in the title: <Router> <NavLink to="/homepage">Homepage</NavLink> <Route path="/homepage" render={props => (<Homepage {...props} pieceOfState={this.state.pieceOfState}/>) }/> </Router> If you want details on that, please feel free to read on :) The Router itself you can put in whatever place you want -- but it makes the most sense to pick a pretty top-level part of your app, so usually in the render method of your App.js file. As you can see above, the NavLink we are using points to the homepage of this particular site or app, and the route is the driving force which will actually do the work of rendering the component. If you don't need to pass the component any state, you would usually just see the router like so: <Route path='/homepage' component={Homepage} /> But in React, passing state (or helper methods) is where all the power comes from -- it's what makes React so reactive. So you will want to use the first code snippet to get the functionality you want. The Route path in that code is using the render method to pass an inline function which will render the Homepage -- you may be wondering, why can't we just pass an inline function using the regular component method from snippet #2 and get the same result? The answer is that the component method will actually unmount and remount the entire component every time the state changes if you use an inline function with it. This creates an unnecessarily energy-expensive program when you could just use the neat render method that the friendly React devs intended you to use. Now that that part's out of the way, here are the aforementioned Other Fun Things: 1. Passing the whole dang state Sometimes, when writing in React, it's hard to keep the code DRY. You may find yourself writing this.state a ton of times while passing specific state pieces to the components you want. A fun little tip to help avoid that issue: you can pass the whole dang state over without specifying pieces. It looks like this: <Homepage state={this.state}/> That's pretty straight-forward. That's pretty state-forward? At any rate, you can then access the state pieces inside of that component by using this.props.state.pieceOfState. 2. Active links Stylizing a link so that it responds when a user is on the associated page has never been easier. You can simply give the NavLink a class of activestyle (along with whatever CSS you want to occur) like so: <NavLink to='/homepage' activeStyle={{fontWeight: "bold", color: 'blue'}}>Homepage</NavLink> React will handle listening for which page the user is on. 3. Rendering a 404 Sometimes the users of your site will get wayward and decide that they can probably guess the available paths, so they will just type that path in expecting to see it come up. React is nice, and it won't break your site, but it won't tell the user that the page doesn't exist. To render a 404, it's useful to group your routes with a Switch tag. <Switch> <Route path='/homepage' component={Homepage}/> <Route path='/profile' component={Profile}/> <Route path='/seaturtles' component={Seaturtles}/> <Route component={NoMatch}/> </Switch> In the above, the component "NoMatch" is not given a route, so all routes which are not defined will render the component, which you can build out to render whatever you want your 404 page to look like. You can put anything there. An image of Johnny Bravo. A link to the Wikipedia page on 404's. A never ending scroll loop of the Constitution. The world is your oyster. 4. Redirects Intuitively, if your user is logged in, you won't want them to be able to navigate to the '/signin' page. BUT, you also don't want them to see a 404 page there. It's time to implement a redirect. This is accomplished by specifying another Route to '/signin' and giving it the instructions to render a Redirect. Observe: <Route path="/signin" render={()=> (<Redirect to='/search'/>)}/> This code shows the Route using the same render method as with passing props, but without the props themselves. The anonymous function points to our Redirect, and we get to specify the URL to which we want our user sent. An Important Note You will have to import any and all Router elements into whatever file you are intending to use them. For example, to do everything listed in this post, you would need to import the proper items at the top of your file: import {BrowserRouter as Router, Route, NavLink, Switch, Redirect} from 'react-router-dom'; Thanks for stopping by, and happy routing! Discussion (8) I rarely if ever leave comments, but I am leaving one for you, because you are awesome. You put an incredibly clear answer to the question right at the top of your blog post. Every one should do this. I applaud you. You made my day. I am so glad it helped! I too found it so much helpful and informative. Been trying to figure out an easy way for days for passing states and rendering with router. You are awesome. Hi, now after doing '1. Passing the whole dang state', how do i update the state (of App component in App.js) from Homepage component (launched using BrowserRouter) ? edited: I have to pass the update functions as another prop just like passing state as prop. I signed up for this website just to say this was the most helpful article I have ever read. I love the summary at the top. Had to leave a comment. THANK YOU for not making me skim through 5 paragraphs on what react is and the history of javascript. Answer right at the top, keep up the good work! Whoah!! This.Was.Awesome. And what sort of Saint places the solution at the top of the blog? You are wonderful. That was really awesome, really helped......how about for functional based components though..sorry for asking newbie here!!
https://dev.to/halented/passing-state-to-components-rendered-by-react-router-and-other-fun-things-3pjf
CC-MAIN-2021-43
refinedweb
1,053
71.55
Java is a case sensitive programming language. To be able to write a program in Java, it is necessary to know the exact structure of a Java program. So, let us study the structure of a Java program along with its important syntax and keywords. Basic Structure of a Java Program A Java program involves the following sections some of which are compulsory and others are optional. These are:- - Documentation Section - Package Section - Import Section - Class Definition - Main Method Class Documentation Section It is an optional section. It is used to write any number of comments in the program. These comments help a programmer to understand the code in an easier manner. It is written to define the purpose and the action being taken in different steps of the program. There are two ways to provide comments. These are: - // comment to be written : used for a single line comment. - /* comment to be written */ : used for comments which occupy multiple lines. Package Section Package is a way of grouping related classes and interfaces that are defined by a common name. It is an optional section. Package keyword is used to declare a package which tells the compiler that the class used in the program belongs to a particular package. It is declared as: package package_name; For example: package MyPackage; Import Section This statement instructs the compiler to load particular classes that are defined under a particular package. This is also an optional section. This is done when you want to use a class of another package. To do so you have to make use of the import keyword. For example: import java.lang.*; Class Definition Classes are one of the main and essential parts of any Java program. This is a compulsory statement as this section is used to define a class for the program. A Java program may contain many class definitions depending on the requirement of the programmer. In order to define a class the class keyword is used followed by the class_name, starting with curly braces ‘{‘and ending with closing curly braces ‘}’. For example: class Welcome { …. class body ….} Main Method Class The essential part of Structure of a Java Program is main method since it is the starting point of the program. It is a compulsory section, which tells the compiler about the beginning of the execution of the program. If we are defining multiple classes in our program, only one class can define the main method. A class may have several methods including main(). When Java interpreter starts execution of the program, it starts from main() and it calls all the other methods required to run the application from there. The syntax to define main() is as follows: public static void main(String args[]) { … body of main() …} - The word public means that it can be used by code outside this class. - The word static is used when we want to access a method without creating its objects. - The word void indicates that the method does not return any value. - The word main specifies the starting of the main method. - The words String args[] are used to specify input parameters when the program is run through the console. Example of a simple Java program // Name this file welcome.java public class Welcome { /* This is a simple Java program It will print a simple message on screen */ public static void main(String args[]) { System.out.println(“Welcome to the world of Java”); } } Program output: Welcome to the world of Java
https://csveda.com/java/structure-of-a-java-program/
CC-MAIN-2022-33
refinedweb
581
64.2